Note: This was written as part of the assignment for ETEC 565 T.
In this article, I will analyze two image generation tools commonly used at my workplace to identify potential biases and blind spots in their outputs:
- ChatGPT-4o Image Generation (Paid and Free)
- Stable Diffusion (Free)
I will use the same prompt for both tools:
“Generate an image of an instructor teaching students about Open Education.”
ChatGPT-4o
I generated images using both the paid and free versions of ChatGPT, which use 4o image generation, released on March 25, 2025 (Introducing 4o Image Generation, n.d.). I chose to compare the free and paid versions to see whether my past interactions would affect the generated image. According to Noble (2018), tech companies collect data from users not only to tailor output based on personal results but also to align with commercial incentives. ChatGPT has a memory function that references past conversations and data from prior chat interactions (Memory FAQ | OpenAI Help Center, n.d.). The paid version has access to my past interactions and data, while the free version was a brand-new account with no prior data or interaction history.
Paid version:

A 2D Digital Illustration Depicting a Classroom on Open Education.
AI-generated using ChatGPT-4o.
Free version:

A 2D Photorealistic Image Depicting a Classroom on Open Education. AI-generated using ChatGPT Free version.
Surprisingly, both the paid and free versions of ChatGPT generated similar compositions, though in different styles. The paid version produced a cartoon-style image, while the free version generated an uncanny photorealistic one. In the paid version, the icons on the board were completely unrelated to Open Education; the book, internet, and video icons appeared to represent general concepts like multimedia or online learning. On the other hand, the free version included bullet points on the board that were directly related to the concept of Open Education.
The paid version of ChatGPT allows me to edit the image by using a prompt. Therefore, I asked ChatGPT:
“The image on the white board doesn’t fit open education. Can you make the content on the white board more relevant to Open Education?”
Then, ChatGPT responded with the exact same image. When I pointed out that the issue hadn’t been fixed, ChatGPT replied with the following image:

The icons on the board had not changed, but the instructor’s gesture was slightly adjusted and her teeth were gone! Feeling a bit frustrated, I decided to give ChatGPT a more specific prompt by asking:
“Can you make the image on the board text based rather than icon based?”
ChatGPT then responded with the image below:

A 2D Digital Illustration Depicting a Classroom on Open Education (Version 3). AI-generated using ChatGPT-4o, OpenAI, 2025.
The content on the board has been converted into text. While some of it partially aligns with the idea of Open Education, terms like “Collaboration” and “Online Learning” are loosely related keywords rather than core concepts of Open Education.
It is interesting that the free version of ChatGPT, with no stored data, generated a more accurate image than the paid version, which has access to my personal data. Therefore, I asked ChatGPT whether the generated image was influenced by my past conversations and data. ChatGPT answered with the following response:
“If memory is turned on, ChatGPT-4o can use your stored preferences to subtly tailor the image prompts it sends. This means generated images may reflect your past conversations or style choices. However, the image model itself is not personalized and remains the same for all users.” (OpenAI, 2025)
From this answer, it can be interpreted that image generation is influenced by past prompts. Based on the response, I drew the diagram below to depict how image generation works:

A 2D Illustration that Shows How ChatGPT Generates Images Based on User Prompts. Illustration created using Adobe Illustrator.
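To make the flow in the diagram concrete, here is a minimal conceptual sketch in Python. It only illustrates the two-stage flow ChatGPT described, where memory shapes the prompt while the image model itself stays the same for everyone; all function and variable names are my own hypothetical stand-ins, not OpenAI’s actual code.

```python
# Conceptual sketch of the two-stage flow in the diagram: memory shapes the
# prompt, while the image model itself stays identical for every user.
# All names here are hypothetical stand-ins, not OpenAI's actual code.

def rewrite_prompt(user_prompt: str, memory: list[str]) -> str:
    """Stand-in for the chat model: folds stored preferences into the prompt."""
    if not memory:
        return user_prompt  # fresh free account: prompt passes through unchanged
    preferences = "; ".join(memory)
    return f"{user_prompt} (style preferences: {preferences})"

def image_model(image_prompt: str) -> str:
    """Stand-in for the shared image model, which is the same for all users."""
    return f"<image rendered from: {image_prompt}>"

# Paid account with stored memory vs. a brand-new free account:
base_prompt = "An instructor teaching students about Open Education"
paid_image = image_model(rewrite_prompt(
    base_prompt, ["clean professional style", "text-based labels over icons"]))
free_image = image_model(rewrite_prompt(base_prompt, []))
print(paid_image)  # prompt carries personal preferences
print(free_image)  # prompt is untouched
```

The key point the sketch captures is that any personalization happens before the image model is ever called, which matches ChatGPT’s claim that the image model itself is not personalized.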
I was also curious how my past interactions with ChatGPT influenced the prompt, so I asked ChatGPT:
“Can you tell me which of my past interactions or preferences has influenced this output?”
The response I got from ChatGPT is below:
“
- Preference for clarity and textual relevance: You previously critiqued visuals with vague iconography, so I emphasized text-based blackboard content (e.g., “Free Access,” “Collaboration”) rather than abstract icons.
- Desire for academic or professional tone: You’ve consistently asked for APA formatting, cited sources, and avoided overly cartoonish or playful outputs. So I leaned toward clean, professional classroom depictions instead of exaggerated or stylized ones.
- Interest in Open Education: You’ve asked detailed questions about Open Education, instructional design, and constructivist learning. So when you said “generate image of an instructor teaching students about Open Education,” I interpreted that to mean realistic, inclusive, and pedagogically relevant imagery.
- Attention to visual logic and communication flow: You’ve demonstrated a focus on communication structure in visuals (e.g., requesting accurate figure captions, diagram logic). That likely shaped the consistent, diagram-friendly layout of the teacher-student scene.
” (OpenAI, 2025)
I was shocked by how much personal data ChatGPT referenced when generating a simple image. I had previously used ChatGPT to create an APA 7th edition citation, and I didn’t expect that past conversation to influence the image generation. This behavior reminded me of the Clever Hans effect discussed by Crawford (2021). Clever Hans was once famous as an intelligent horse that appeared to solve math problems. However, it was later discovered that Hans was not truly solving problems, but rather responding to subtle, unintentional cues from humans. ChatGPT’s response felt as though it was identifying patterns across unrelated past conversations to guess my preferences, rather than focusing on clearly relevant input or directly asking for clarification.
Coleman (2021) and Noble (2018) both discuss the “black box” nature of AI systems, emphasizing how even developers may not fully understand how certain outputs or decisions are generated. ChatGPT-4o is built on a neural network (OpenAI, 2024), a model capable of automatically identifying complex, non-linear patterns in data (Neural Networks | Machine Learning, n.d.). However, as Coleman (2021) states, even though neural networks can execute such complex functions, they do not explain the internal logic behind their decisions, which leads to the black-box nature of AI. Noble (2018) adds that the “black box” nature of algorithmic systems can perpetuate gender and racial biases while obscuring accountability.
Stable Diffusion
I will now use the same prompt in Stable Diffusion. Stable Diffusion is an open-source AI image generation tool (FAQ – Stable Diffusion Online, n.d.) that provides a user interface allowing me to set the style and aspect ratio of the image. Additionally, it offers the option to use a negative prompt, which lets me eliminate unwanted features from the generated images (Stable Diffusion Prompt Guide – Stable Diffusion Online, n.d.); a minimal code sketch of this option follows the screenshot below. Creating prompts does not require a login, so the tool has no access to my prior prompts or data.

Screenshot of the User Interface of Stable Diffusion Online
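Because Stable Diffusion is open source, the negative-prompt option in the web UI can also be reproduced locally. Below is a minimal sketch using Hugging Face’s diffusers library; the model ID, prompts, and settings are my own illustrative choices, not the exact configuration used by Stable Diffusion Online.

```python
# A minimal sketch of local Stable Diffusion XL generation with a negative
# prompt. Model ID, prompts, and settings are illustrative assumptions,
# not the configuration used by the Stable Diffusion Online web UI.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU

result = pipe(
    prompt="An instructor teaching students about Open Education in a university classroom",
    negative_prompt="blurry, distorted hands, extra fingers",  # features to suppress
)
result.images[0].save("open_education_classroom.png")
```

The negative prompt steers generation away from the listed features rather than guaranteeing their absence, which is why the web UI documentation frames it as a way to reduce, not eliminate, unwanted elements.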
I generated the image using the same prompt: “Generate an image of an instructor teaching students about Open Education.”

A Photorealistic Image Depicting a Classroom on Open Education. AI-generated using Stable Diffusion (version XL).
Stable Diffusion generated images similar to those from ChatGPT-4o. The icons on the monitor appeared random but could still be interpreted as acceptable metaphors for Open Education. However, the classroom looked more suited for a K–12 setting, so I decided to regenerate the image with the following prompt:
“Generate an image of an instructor teaching students about Open Education. The setting is in university”

A Photorealistic Image Depicting a Classroom on Open Education. AI-generated using Stable Diffusion (version XL).
Although the classroom looks more like a university setting, the image appeared more clichéd, with the instructor using more authoritative body language. I also noticed that in both of the images I generated via Stable Diffusion, the instructor is male.
Bias Analysis of ChatGPT-4o and Stable Diffusion
Observing the image outputs, both ChatGPT-4o and Stable Diffusion generated fairly diverse images. Notably, in the case of ChatGPT-4o, the instructor was depicted as a woman and an ethnic minority. Surprisingly, I did not observe any problematic bias in terms of ethnicity or inappropriate depictions of ethnic minorities. Noble (2018) discusses algorithmic bias and critiques how women and people of color are often misrepresented, such as being hypersexualized. I recall that when image generation tools were first released, attempts to depict instructors or individuals in positions of power often resulted in images of white men. Now, image generation tools appear to have improved in terms of representation and diversity. That said, both of the images Stable Diffusion generated depicted male instructors. It would be jumping to conclusions to claim that the Stable Diffusion algorithm has a gender bias, but based on this small experiment, ChatGPT-4o appeared to produce more balanced results in terms of gender representation.
In addition to biases in gender and ethnicity, I identified several other ethical concerns:
Use of personal data
When I generated an image using ChatGPT-4o, it appeared that my past interactions and personal data were taken into account during the image generation process. Some of the data referenced seemed irrelevant to the task, yet still influenced the output. This raises concerns about the potential use of personal information that I may not wish to disclose or see reflected in generated content. I found the process personally invasive, and it made me question how OpenAI is using my data.
By default, ChatGPT-4o uses past conversations to improve its model performance. According to OpenAI’s help page on data usage, the company attempts to minimize the amount of personal data used for training (How Your Data Is Used to Improve Model Performance | OpenAI Help Center, n.d.). However, this does not mean that all personal information is removed. Additionally, OpenAI experienced a data breach in 2023, during which internal AI details were reportedly stolen (OpenAI’s Internal AI Details Stolen in 2023 Breach, NYT Reports | Reuters, n.d.).
Ethical concerns about the images created
Another ethical concern is the style of the images being generated. While the graphics look appealing, they raise the question: where do these visual styles originate? Image generation models are trained on large image datasets such as ImageNet and Shutterstock (Finn, n.d.). Because the use of copyrighted materials in AI training is often argued to be fair use, generative AI models like ChatGPT are able to incorporate copyrighted content during training (McLellan, n.d.). This raises concerns about artists’ works being used in training datasets without their consent. Recently, companies like Disney and Universal filed lawsuits against Midjourney, an AI image generation company, for copyright infringement (Chmielewski, 2025). Moreover, although major corporations like Disney can afford to take legal action, independent artists often lack the funding and legal resources to challenge the unauthorized use of their work. In response to this issue, researchers from the University of Chicago have developed tools such as Glaze to protect artists’ work from being used in AI training datasets (Glaze – Protecting Artists from Generative AI, n.d.).
References
Black Technology Ltd. (2025). Stable Diffusion online (version XL) [AI image generator]. https://stablediffusionweb.com/
Chmielewski, D. (2025, June 11). Disney, Universal sue image creator Midjourney for copyright infringement. Reuters. https://www.reuters.com/business/media-telecom/disney-universal-sue-image-creator-midjourney-copyright-infringement-2025-06-11/
Coleman, B. (2021). Technology of the surround. Catalyst: Feminism, Theory, Technoscience, 7(2), Article 2. https://doi.org/10.28968/cftt.v7i2.35973
Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press. https://doi.org/10.12987/9780300252392
FAQ – Stable Diffusion online. (n.d.). Stable Diffusion AI. Retrieved June 15, 2025, from https://stabledifffusion.com
Finn, B. (n.d.). Research Guides: Artificial Intelligence and Images: How AI creates images. Retrieved June 18, 2025, from https://guides.csbsju.edu/c.php?g=1297123&p=10164978
Glaze—Protecting Artists from Generative AI. (n.d.). Retrieved June 18, 2025, from https://glaze.cs.uchicago.edu/faq.html
How your data is used to improve model performance | OpenAI Help Center. (n.d.). Retrieved June 17, 2025, from https://help.openai.com/en/articles/5722486-how-your-data-is-used-to-improve-model-performance
Introducing 4o image generation. (n.d.). Retrieved June 13, 2025, from https://openai.com/index/introducing-4o-image-generation/
Memory FAQ | OpenAI help center. (n.d.). Retrieved June 13, 2025, from https://help.openai.com/en/articles/8590148-memory-faq
McLellan, C. (n.d.). LibGuides: Teaching with Generative AI: Copyright and Liability. Retrieved June 18, 2025, from https://libguides.bcit.ca/AI/copyright-and-liability
Neural networks | Machine Learning. (n.d.). Google for Developers. Retrieved June 14, 2025, from https://developers.google.com/machine-learning/crash-course/neural-networks
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press. https://search.ebscohost.com/login.aspx?direct=true&AuthType=shib&db=nlebk&AN=1497317&site=ehost-live&scope=site&custid=s5672194
OpenAI. (2024, August 8). GPT-4o system card. https://openai.com/index/gpt-4o-system-card/
OpenAI. (2025). ChatGPT (GPT-4o version) [Large language model]. https://chat.openai.com/chat
OpenAI’s internal AI details stolen in 2023 breach, NYT reports | Reuters. (n.d.). Retrieved June 17, 2025, from https://www.reuters.com/technology/cybersecurity/openais-internal-ai-details-stolen-2023-breach-nyt-reports-2024-07-05/
Stable Diffusion prompt guide—Stable Diffusion online. (n.d.). Stable Diffusion AI. Retrieved June 16, 2025, from https://stabledifffusion.com
Disclaimer
Although all ideas are my own and sources are properly cited, this post was reviewed using ChatGPT-4o to improve grammar, sentence structure, and citation formatting. As English is my second language, I also used ChatGPT-4o to help translate some wording from Japanese to English. Finally, I double-checked the revised paragraphs to ensure accuracy in both content and formatting. An example of the interaction between ChatGPT-4o and me can be accessed here.