
Why AI Isn't All It's Made Out To Be.

  • Writer: Andrew Ranonis
  • Apr 25
  • 6 min read

Artificial intelligence, large language models, and chatbots are seemingly inescapable in the current technological landscape. Every company now has some form of AI integration. There are the obvious players like OpenAI, which develops some of the mainstream AI models, and then companies like Google and Apple, which have their own forms of AI with their own unique spins. But now companies like TurboTax and even Walmart are using AI. And that's just companies. Anecdotally, I know several people who use ChatGPT almost daily for assignments, work, or other personal uses.

Despite AI finding its way into every company and most people's lives, I have made a conscious effort to avoid using it. It's easy enough not to use ChatGPT for things; I simply use my brain. It's more difficult to avoid AI when it shows up in everything. Whenever I tell someone this, I am met with one of three reactions: "Same, I hate AI," "Cool, I still use AI," or "You're weird for not using AI, it makes life so much easier." Obviously I agree with the first reaction, but the question is why, and there are several answers to why we shouldn't use AI, especially generative AI.

Let's start with the simple fact that it's just not necessary. Most people I know use AI to answer questions. They'll type a question into ChatGPT and it will spit out a digestible answer, or they'll give it an assignment and it will do it. What's interesting about this use is that it's just a worse Google search. Yes, it gives you an answer, but the accuracy of that answer is in question, because there is no way to gauge its credibility. If I Google "Who was the third president of the United States?" I will be given several websites that not only answer the question but may also provide auxiliary information or sources to learn more. Furthermore, each of those websites has an author: a person who purposefully shared that information and, in most cases, publicly displayed the qualifications that make them a trustworthy source. If I type the same question into ChatGPT, it may give me extra information, but I can't guarantee it will give me the same facts each time, and most importantly, I can't know for certain whether they're true or false.

AI programs, specifically ChatGPT, draw from an absurd number of sources. In theory, that means they can double, triple, and quadruple check their facts to make sure the information they provide is correct. However, because it's the internet and people can write and post whatever they want, it's just as likely that the information was checked against similarly false information, and there would be no way of knowing, because you don't know where it gets its information from. You also can't reliably ask it to cite a source, because it either can't or it will provide a fake, made-up source. According to Microsoft's own website, "ChatGPT will make attempts to provide sources for its content, but its primary function is to reproduce patterns in text, not to actively consult sources to provide accurate information."

This problem of not being able to cite a source brings me to the other problem: AI can't actually think for itself. It's not like the artificial intelligence we see in movies that can think and even feel on its own. The AI we have isn't a chef that purposefully crafts a meal with the tools and ingredients it's given; it's more like a smart blender. It's fed loads of information and just shoots something back out, occasionally sorting out the bad ingredients. If I ask ChatGPT to write me a Disney movie, whatever it spits out is not an original idea. It's a random amalgamation of what it understands a Disney movie to be, whereas an actual person can come up with an original, creative idea that an AI will functionally never be able to compete with.

The other problem is that "AI art" just looks bad. Some of the more realistic-looking pictures are decent enough at the bigger, more important details, but they usually end up with 38 fingers on one hand or shadows that don't make sense. AI videos are just as bad: compared to an actual video or film someone has made, they're just boring pans around an object or environment. And again, if it creates something based on someone else's art style, it's entirely derivative and soulless.

In fact, AI itself is very soulless and sometimes depressing to see. I've seen many people online talk about how they treat ChatGPT like a friend. They'll vent to it about their day and their life, and I find this very dystopian and very sad. It seems people view it as an easier social interaction than actually talking to people, but the problem is that it's not a person. It's the same as talking to a wall: you can never get a satisfying answer if you ask about its day, because it's a machine that can't have a good or bad day. It shocks me that some people genuinely prefer talking to a machine rather than an actual person. Yet some people seem to want AI to be and do everything.

The problem with letting AI replace people is, first of all, that there are hundreds of jobs it just can't do. For example, right now ChatGPT can't be a chef because it physically can't cook anything. The other problem is that so many people are content with AI replacing the fun activities. When I was a kid, I would sit through an entire day at school and then come home excited to draw a picture. I didn't know what it would look like, because I was excited to draw, not just to see what I drew.



When an AI spits out an image, it removes the entire process that you're supposed to enjoy. If I draw a house, no matter how bad it looks, I'm going to feel satisfied and fulfilled because I made something with intention. It looks the way it does because I made the conscious choice to make it that way. If you tell ChatGPT to make a house, every decision is based on an algorithm. It can't say, "I saw this cool house while I was on a road trip and added some of that into this," because it's incapable of something like that.

I haven't even mentioned one of the biggest and most tangible reasons people should avoid using AI: the enormous and terrible effect it has on the environment. An article from MIT gives many examples of the environmental impact of AI and its data centers. One portion I find incredibly striking: "By 2026, the electricity consumption of data centers is expected to approach 1,050 terawatts (which would bump data centers up to fifth place on the global list, between Japan and Russia)." It goes on to explain that though not all of that consumption comes from AI, a large majority does. There is absolutely no reason for data centers to be competing with literal countries in terms of power consumption. That's not even getting into the noise pollution that comes from these data centers, or the insane amount of water they use daily to cool their systems. The MIT article even mentions that the training process for some of these AIs generates 552 tons of carbon dioxide, and that's just the training process; it doesn't include daily usage.

The last thing is something someone told me about AI: "It really helps my writing, because it's like an editor and it saves so much time." Maybe it's just the way I was raised, but I have always believed that if something is worth doing, it's worth doing correctly. That means no shortcuts. It means not using an AI "editor" because it's easier, but actually using your brain, or someone else's, to do the work. Ultimately, that's my second-biggest problem with AI: besides the environmental cost, it just makes people lazy. I know people who use ChatGPT to generate a story, and when I ask why, they say something like, "It saves me time so I don't have to write it myself," but then they never use that story for anything. How are you saving time if you were never interested in the process itself?

In conclusion, AI is starting to have some tangible effects on our environment, and anecdotally, it's creating some very lazy people. So next time you think about using AI, consider the other options you have. In my experience, you tend to get better results, and you feel more satisfied doing it yourself.


5 Comments


Alexis Vogt
May 03

As someone who hates AI, I appreciate seeing that others share the sentiment. Seeing people treat AI as a person really makes me uncomfortable. I particularly hate AI art and the discourse around it.


daroh6
May 03

I definitely agree that the environmental effects of AI are horrible, and we should not be using it for the sake of our environment. However, if it were more sustainable, I personally think AI has some good qualities as well. I have used it to spark inspiration and ideas for topics when I feel stuck. I also know some people who use AI for personal problems like you talked about, and they typically have a really difficult time opening up to other people in person. It has in some instances helped me be more creative and has helped friends of mine who struggle to calm down.


Bianka Trezza
May 03

It annoys me that as soon as there's a new thing, all companies think they need it; right now, that thing is AI. Like, I don't want Google to show me AI results when I google something. I agree with what you said about AI feeling soulless and depressing. I also agree that AI is replacing the fun activities (I feel this way about most technology, too). AI and other things are taking away the process of creating that makes life rewarding. Everything being instantaneous makes life lose its sparkle.


bnauta10
May 02

I like that you talk about several important issues regarding AI. I agree that AI is not good enough to provide accurate results and information, and that we shouldn't be relying on it for everything. I also agree that the environmental impacts are a major concern and something that needs to be resolved if we want to continue improving current AI models.


Guest (Jackson Gould)
May 01

Totally agree that AI is not advanced enough at this point to provide good results in society. Especially with the communication aspect, it really struggles with replicating the way people talk and interact. AI is a very powerful tool, in that you must try to avoid it speaking for you instead of you speaking with it. Artificial intelligence has been a very touchy subject for debate, and I feel like you presented your argument very well. Great job.


Digital Rhetoric

a blog collective by ENGL397 at the University of Delaware
