Graduate students from our Theories of Literacy course are sharing insights from our weekly sessions in weekly blog posts. They’ll rotate the responsibility throughout the fall 2025 semester, sharing how we’re making sense of the ideas that emerge in our time together.
Throughout the semester our class has been discussing artificial intelligence and its burgeoning role in society, the economy, and the classroom. We’re all hesitant, concerned, and a little freaked out by the future AI portends, but we’ve eased some of our paranoia knowing that people have been terrified by pretty much every information distribution technology ever created. In past weeks, we researched the public reaction to the invention of the pencil, the printing press, the typewriter, and the computer and found that people worried things would change drastically for the worse, just like we do now. Well, except that you probably won’t stab yourself in the eye with an AI chatbot. Granted, you might want to if you’re stuck using one for customer service. Anyways, we kept all this in mind last week when we went home and jumped down the ole AI rabbit hole, reading whatever books, articles, and social media posts piqued our interest before bringing our research to class to solve this AI problem once and for all.
During my research, I pored over Chunpeng Zhai’s scholarly review, “The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: a systematic review.” Zhai and his co-authors surveyed the existing studies on how AI usage affects students who lean on it heavily in lieu of more traditional research methods. As the title suggests, the studies show that students can become overly reliant on AI chatbots, forgoing critical thinking and rationality and accepting output without question. According to Zhai, this leads not only to plagiarism and misinformation but to decreased cognitive ability and retention as students start relying on quick answers over slower, more thorough research methods. So, while AI can help with “surmounting challenges like writer’s block or navigating complex parts of manuscripts,” heavy usage can also hurt students in the long run by holding them back mentally and educationally, or by getting them in trouble for plagiarism or the distribution of sensitive information.
This is what was buzzing through my head when I came into class. We began our discussion with the analogue technologies I mention above, asking which themes applied to technologies past and present. We quickly realized that several themes were always relevant, including access, standardization, sustainability, and the fear that AI will begin to mine its own output, like a snake eating its tail or some sort of self-contained toilet that ingests and expels its shit in a never-ending cycle of… uh, well, of shit… or content or something. I guess that fear is new, unless you count all those old stories about self-writing typewriters or the way that the internet, social media, and algorithms push people towards uniformity in what they read and write.
From there, Hailey moved on to an article by Mark Watkins arguing that AI is “unavoidable, not inevitable.” Seemingly identical definitions aside, Hailey explained that the article took a middle-ground approach instead of casting AI as either savior or villain. Perhaps the academic impact can be lessened with ethical teaching, and perhaps the environmental impact isn’t that bad compared to streaming your favorite show on Netflix. “Actually,” said Sel, “AI is worse.” That’s strike two, Mark. Michelle brought up an article claiming that Microsoft’s greenhouse gas emissions increased by 23% after it invested in AI and data centers. This aligned with Sel’s research on the often-forgotten environmental costs of the initial investments in data centers, training, manufacturing, and transportation required to create an AI system.
The conversation then shifted towards the race and gender issues caused by AI’s embedded prejudices. Lourdes talked about Joy Buolamwini, who has spent several years following AI as a researcher, activist, and artist. When Buolamwini was a grad student, her class had an AI program that worked as a mirror, reflecting your image and adding something onto your face based on how it read you. “For instance, if you’re not feeling brave it might give you a digital lion mask,” said Lourdes. Well, when Buolamwini used the program, it failed to recognize her face until she put on a Phantom of the Opera mask. Buolamwini is a Black woman, and it seemed likely that the program’s underlying data largely did not come from people who looked like her, which is why it failed to recognize her. She ended up writing her dissertation on these concepts and has spent her career pushing for more inclusive AI.
The conversation continued for some time after this. I know this because I still have lots of notes that I don’t think I can get to without either writing too much or falling asleep. We probably could have talked all night, or… actually, that gets pretty depressing after a while. Anyways, the conversation eventually ended with Sel showing everyone Spot the Troll (spotthetroll.org), an online quiz that asks visitors to guess whether the ridiculous social media account snapshots it shows are real or not. Turns out you can’t really tell, but real or fake, they’re all assholes. I can take some comfort in that, I suppose. So, here I am, full circle, back to the research I did about critical thinking and the overreliance that comes from believing everything your AI chatbot pumps out to you. I didn’t do very well on that quiz, so whether I’m overly reliant on AI or not, I guess it’s still a struggle to figure out what’s true. Our research in class, however, shows that it’s not that hard to identify who’s an asshole. So, when you’re worried about all the synthetic things online right now and whether you or anyone else can sort the real from the fake, just remember you can still tell whether it’s shitty or not, and honestly, I think that’s more important a lot of the time.
Mitchell St. John is an English master’s student at Chico State. He has a daughter and a dog, and he writes sometimes when the situation warrants it.
Further Reading:
- long list of curated AI resources here
- Zhai, C., Wibowo, S. & Li, L.D. The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: a systematic review. Smart Learn. Environ. 11, 28 (2024). https://doi.org/10.1186/s40561-024-00316-7