
Inventing Reality: The Threat AI Image Generation Poses

by Hanna P.





Artificial intelligence has taken the world by storm over the past few years. Almost everything digital now integrates AI in some way, with nearly every industry attempting to take advantage of its advancements. And AI keeps getting better, too. Through training, AI models learn from the data and input they receive, refining themselves to better imitate real intelligence.


Some find these advancements alarming, concerned that AI could replace their work or produce content inferior to what humans create. Others are fascinated by it, seeing ways AI could improve daily life beyond what we can currently imagine. Whether one views artificial intelligence positively or negatively, it's undeniable that AI is growing. As models continue to learn, they become better at imitating reality, and humans become less able to differentiate AI-generated content from real content. When this extends to image generation, the consequences for the future are worrying.



Our Brains Aren’t As Smart As We Think


While AI image generation is not a new subject of research, it is one that has been gaining traction lately in the cognitive psychology and neuroscience fields. Questions such as how our minds perceive AI-generated photos are well worth studying and can offer unique insights into both our visual processing and the intricacies of AI image generation.


Something that comes up quite frequently in this research is our ability (or, as many of these studies find, inability) to differentiate between AI-generated images and real ones. A related concern is distinguishing not just AI-generated photos from real photos, but AI-generated artwork from human-made artwork. In both cases, research shows we have difficulty correctly identifying content as AI-generated.


In a study published in Communications in Computer and Information Science and presented at the 2022 International Conference on Human-Computer Interaction, participants were tasked with distinguishing AI-generated human faces from real ones; the results showed that people consistently had difficulty telling the two apart.


Another paper, published in Lecture Notes in Computer Science and presented at the 2023 Computer Graphics International Conference, described a similar study and survey in which people reported major difficulty differentiating AI-generated images of people from real ones. Both studies reveal how inadequate our visual perception is at identifying AI-generated images of humans and distinguishing them from real photographs.


Research like this involving AI-generated artwork is even more expansive. Researchers at the University of Chicago determined through their study that a major mistake artists make when differentiating AI-generated artwork from human-made artwork is falsely classifying human artwork as AI-generated. The implication is that over-vigilance in one's search for AI-generated artwork can end up hurting real, human artists when viewers misidentify their art as AI-generated.


Another study, from researchers at the University of Colorado Boulder, revealed something unique about our perceptions of AI artwork. Consistent with other studies, it showed that people are relatively poor at differentiating human-made artwork from AI-generated artwork; it also showed that we tend to judge abstract artwork as AI-generated more often and more representational artwork as human-made more often. We may assume abstract styles are easier for an artificial intelligence to emulate than realistic ones; however, given that the study still found our ability to differentiate the two so poor, this is likely untrue, and instead reflects our preconceptions about AI.


These studies highlight one important fact: we may no longer be able to rely on our cognitive perceptions to distinguish AI-generated images from real ones. This opens up new opportunities to obscure reality and mislead the public, and the consequences can already be seen in countless examples across politics, entertainment, science, and more.


Below are 6 images of cats; 5 of these images were created using Photoshop's AI Generative Fill. Can you spot the real one?

[Image grid: six cat photos, five AI-generated and one real]

Answer: The top-right image is real.



The Age Of Misinformation


The spread of AI imagery, and our inability to discern it from real imagery, can lead to a disastrous spread of misinformation in areas like politics and science. AI-generated images can even endanger people when they depict false events or behavior, especially in the case of vulnerable individuals such as minors, or highly influential groups such as American voters.


One of the most widely circulated recent examples came in March 2023, when Eliot Higgins posted several AI-generated images to X, formerly known as Twitter, depicting Donald Trump being arrested. While Higgins does not appear to have tried to pass these images off as real, the nature of social media makes it easy for such images to spread out of context. After his post, an Instagram post reshared the images without any indication that they were fake, gaining over 79,000 likes.


Another example involving the former president consists of AI-generated photos of him posing with Black voters. Creating photos like these spreads false information and can be used to stir emotions in American voters, furthering political agendas grounded in fiction rather than fact. Given the upcoming presidential election, these images become even more potent.


Using AI-generated images to spread propaganda and further political agendas is certainly not unique to America, as can be seen in Israel's escalating attack on Gaza. Shortly after the Hamas attack on Israel in October 2023, AI-generated images depicting gruesome scenes involving infants began to spread on social media. Images like these are extremely emotionally charged and can be key tools of propaganda, contributing significantly to the spread of misinformation in already dire circumstances.


Outside of politics, AI-generated images still have the power to mislead the public. This is especially true in academia, as with an article published in Frontiers in Cell and Developmental Biology that has since been retracted. The article featured an AI-generated illustration of a rat's reproductive organs, presenting entirely fabricated information about rat biology.


While passing off AI-generated images as real can have broad societal consequences, it can also have deeply personal ones when used to violate an individual's privacy or exploit them. There have been several cases of explicit deepfake images being created and distributed, with victims ranging from high-profile celebrities such as Taylor Swift to teenagers at New Jersey and Illinois high schools. These cases raise questions about how assault-adjacent crimes involving AI image generation should be handled moving forward, and how the safety and dignity of potential victims can be ensured at all while these generative tools remain available.


An inability to differentiate AI-generated images from real images only heightens these consequences, and without action being taken, this issue will only get worse as AI gets better.


Looking Beyond Simple Perception


Knowing what we do about AI image generation and human perception, finding solutions for these issues is difficult. As AI continues to advance in complexity, it becomes harder for us to know what is real, making us even more vulnerable to misinformation. 


Some of the biggest predicted negative impacts are in politics: it will become harder to spot fake news and easier to manipulate the public into supporting positions they would not support if given correct information. Education could also take a heavy hit, with children internalizing inaccurate AI-generated images and forming incorrect ideas about concepts early in their development.


This is why media literacy is more important now than ever. We cannot rely solely on our faulty cognitive perceptions; we must instead look at the context of the images we see and keep credibility in mind before jumping to conclusions or sharing potentially AI-generated images. NPR suggests using a model called SIFT (Stop, Investigate the source, Find better coverage, Trace claims to their original context). Through this model, a media consumer is encouraged to pause and consider how the content they view may be used to convey a certain message, evoke strong emotions, or target an individual or group. Artificial intelligence is often used to depict extreme or unrealistic scenarios, and AI-generated images are typically not spread by credible sources without proper disclosure of their nature.





Media literacy isn't a foolproof defense against AI-generated misinformation; however, it can be useful in mitigating its spread. When we can't trust our perceptions alone, taking context into consideration is crucial to determining what is real.

