
OpenAI has seen an exponential increase in signups over the last few weeks — as many as 1 million per hour — thanks to ChatGPT’s new in-chat image-generation feature and Image Library.
ChatGPT now has over 400 million weekly active users, and CEO Sam Altman calls the increase a “net win for society.”
The image generator built into the GPT-4o model makes it easy to create images with a single prompt in the chat box. What started with Studio Ghibli-inspired portraits has grown into something bigger, with users jumping on trends such as turning themselves into action figures, plushies and bobbleheads.
But while these memes are going viral because they're fun and entertaining, they come with serious privacy and legal concerns that shouldn't be ignored.
As one of the millions of users who uploaded an image of their own face to join these trends, I look back with regret. I didn't think about the consequences of uploading my image to ChatGPT.
"While these trends may seem harmless, creators must be aware that what may appear as a fun experiment could easily cross into legal territory.”
Christoph C. Cemper, AIPRM
Christoph C. Cemper, founder of AI prompt platform AIPRM, explained what users don’t realize when they jump on these trends.
“The rollout of ChatGPT's 4o image generator shows just how powerful AI has become as it replicates iconic artistic styles with just a few clicks," he said.
"But this unprecedented capability comes with a growing risk - the lines between creativity and copyright infringement are increasingly blurred, and the risk of unintentionally violating intellectual property laws continues to grow.
While these trends may seem harmless, creators must be aware that what may appear as a fun experiment could easily cross into legal territory.”
What to know

From copyright traps to facial data misuse, here are seven things you should know before generating your own AI Barbie.
1. My information could be stored

ChatGPT’s new Image Library keeps all my favorite memes organized, but my data could also be stored elsewhere.
While ChatGPT’s privacy policy specifically states that its models are not trained on your data unless you opt in, other image-generating platforms (which produce these trend images just as well) might not be as safe.
In other words, every time we upload an image of our face, we're handing the AI tool biometric data in the form of our facial features and expressions, along with whatever background details appear in the photo.
2. My AI avatar could be used to reinforce algorithmic bias

Wait, what? This is not something I would ever knowingly participate in. Yet some platforms may alter your features (skin tone, facial structure, clothing) based on biased datasets.
This can lead to issues such as gender stereotyping, Eurocentric beauty filters, racial inaccuracies or "digital whitewashing," a phenomenon in which AI tools subtly or overtly lighten skin tones or smooth facial features, even when the original image reflects a different racial or cultural identity.
3. My images could be monetized — without me ever seeing a cut

Whether for advertising or external promotions, my images could be sold or licensed to partners. That means that even if my face becomes part of a training set that helps the company land a billion-dollar deal, I'll never see a cent — or even a notification.
Many AI image generators bury broad licensing terms in their user agreements. Look out for phrases like “royalty-free,” “non-exclusive,” and “irrevocable license.”
These clauses can allow companies to reuse, alter, and even sell your images. Yes, even if you delete the app or your account.
4. I could be participating in a growing deepfake problem

Another thing I hadn't considered is that I could become an unwitting participant in phishing scams. Cemper warns that AI-generated selfies are vulnerable to scraping and leaks, which fuel the skyrocketing deepfake problem.
Bad actors no longer need a real photo of you to commit crimes and steal personal data. In other words, the era of using ‘real images’ for verification is ending.
ChatGPT’s new image generator has even been used to fabricate receipts and documents. With every upload, you’re potentially feeding a system that can be weaponized.
5. Walt Disney could sue me

That's far-fetched, but I'm old enough to remember when Metallica sued a friend for illegally downloading their songs off LimeWire, so anything is possible. The point is, copyrights exist for a reason.
Images made in the creative styles of Barbie, Pixar or Studio Ghibli may look like fan tributes, but they walk a fine line with copyright infringement. Even if producing AI lookalikes is technically legal, the resulting images may in fact qualify as derivative works.
That’s a big deal because the characters and works behind these styles are protected by intellectual property laws. In fact, several artists have already sued major AI companies, accusing them of using copyrighted content to train image models without consent.
6. My account or the entire platform could be hacked

Even if you’ve read the fine print and have opted out of sharing your personal data, these AI platforms can still be hacked. Some chatbots are easier to hack than others.
AI platforms are prime targets for cyberattacks. A breach could expose not only your face and generated images, but also email addresses, phone numbers, and connected accounts. Your information could be stolen without you even knowing the app was breached.
7. Environmental issues

This is a big one that I think many of us turn a blind eye to because AI has become such an integral part of our lives. The fact is, generating high-quality images with AI requires substantial computational power, primarily from energy-intensive GPUs. When the craze first started, Sam Altman tweeted that OpenAI's GPUs were "melting."
Generating images consumes significantly more electricity than generating text, and most of that energy is spent during inference, the phase in which the AI model produces outputs in response to user prompts.
In fact, over a model's lifetime, inference can end up consuming more energy than the initial training.
The bottom line
These viral trends may be fun, fast, and visually impressive, but they come with trade-offs.
By uploading my face to an AI tool, I could have handed over more than I bargained for — and so could you.
As Cemper puts it, “With more users engaging with AI tools, there's a pressing need for clearer, more transparent privacy policies."
As AI tools become more advanced, it’s up to us as users to stay informed and cautious. Before uploading photos or personal data, take a moment to read the fine print.
Understanding what you’re agreeing to can go a long way in protecting your privacy — and it’s a small step that could prevent your information from being stored, shared or used without your consent.