Is AI dependency a choice or a trap?

OpenAI’s largest study on AI use reflects U of M students’ understanding of AI

ChatGPT has become the leading artificial intelligence (AI) large language model (LLM) since its launch in 2022. It now draws nearly six billion site visits per month and 700 million weekly active users, the equivalent of roughly 10 per cent of the world’s adult population.
In September, OpenAI released the largest study to date of how people use AI. Drawing on an analysis of 1.1 million conversations sent between May 2024 and June 2025, the study found that 27 per cent of messages were classified as “work-related.” Of those, about 56 per cent of prompts were classified as “doing,” or performing job tasks, with nearly three-quarters of those relating to writing.
Although the study attributes writing’s dominance to the LLM’s unique ability to generate long passages of text, these statistics raise concerns. One may argue that ChatGPT’s success is the product of computer-engineered innovation, leading the human race into a new age of productivity defined by efficiency, a new “Age of Enlightenment.” Critics, however, may attribute ChatGPT’s success to a lack of innovation on users’ own part, the result of a failure to moderate LLM use that leads to AI dependency.
University students, though not explicitly highlighted in the report, make up a sizable demographic of LLM users. An earlier report from OpenAI revealed that more than one-third of college-aged young adults in the U.S. use ChatGPT. At the U of M, students’ views on AI vary widely, with voices on both sides of the debate.
On the skeptical side is a fourth-year computer science student in the faculty of science who has observed how AI use at universities has evolved. She recalled that when ChatGPT first launched, “no one was really talking about the morals or ethics of making it do your assignment, so […] people would just jump the gun and just were using it for the entire thing rather than [as] a tool.”
Three years into the post-ChatGPT world, she noted AI use among students is taking on a new form. “I have talked amongst my peers and a lot of people from other majors, and I’ve definitely seen it now more as a tool, something that you can use to help you study, create flash cards, summarize assignment instructions, clarify things, ask questions,” she said. “It’s more used as a tool now, but when it first came out, just because there [weren’t] really a lot of conversations about how to use it […] it was probably abused in some [way].”
Although this shift might be expected to bring greater efficiency in learning, the student argued it has only deepened dependency on AI tools. “I definitely think [AI dependency] has increased. Instead of doing the assigned readings, people just jump straight to getting AI to summarize it, getting it to do your flashcards for you, and this is all work that we were doing before.” She agreed that the menial work of academia, which builds crucial organizational and time-management skills, has been traded away entirely for the convenience of AI.
In explaining how LLMs operate, she shared her belief that the greatest threat AI poses is not to skill development, but to human creativity. “The way AI generates text is by predicting what’s the most probable outcome of the next word based off of all the data that’s been generated, so it’s limited to what already exists in this world,” she said. “AI is so limiting in a lot of the things that it outputs, so people limit themselves to the ideas that AI generates and […] that reduces creativity.”
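Her description matches the standard picture of autoregressive text generation: at each step the model assigns a probability to every possible next word given the text so far, then picks from that distribution, one word at a time. The short Python sketch below illustrates the idea with a toy, hand-written word-level probability table; the vocabulary, the probabilities and the generate function are invented for illustration and are not ChatGPT’s actual implementation, which learns its probabilities from vast text corpora and operates on subword tokens.

import random

# Hypothetical bigram table: probability of each next word given the current word.
# (Invented numbers, for illustration only.)
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "student": 0.2},
    "cat": {"sat": 0.6, "slept": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
    "student": {"studied": 0.8, "slept": 0.2},
    "sat": {".": 1.0},
    "slept": {".": 1.0},
    "barked": {".": 1.0},
    "studied": {".": 1.0},
}

def generate(start, max_words=10):
    """Generate text by repeatedly sampling a probable next word."""
    words = [start]
    for _ in range(max_words):
        probs = NEXT_WORD_PROBS.get(words[-1])
        if probs is None:
            break
        # Sample the next word in proportion to its probability. The model can
        # only produce continuations already present in its table, which is the
        # limitation the student describes.
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights, k=1)[0])
        if words[-1] == ".":
            break
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat ."

Because the table contains only continuations already seen in its “data,” the toy model can never produce a sentence outside that space, no matter how many times it runs.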
While some students remain cautious about AI, others view it with optimism. One second-year JD candidate in the faculty of law explained that she sees AI as a kind of teacher. “Honestly, I’m a big advocate for the use of AI,” she said. “If you’re ever confused by anything and you have the facts and the right information, I think AI can really help explain anything that you might be confused about. And I think that it also helps because it can be kind of scary going to profs and asking them questions.”
Common Sense Media’s 2025 AI Risk Assessment found that AI teaching assistants, platforms designed specifically to aid teachers and students in learning, carry a moderate risk of acting as “invisible influencers” by providing inaccurate and biased information to students. One can only imagine the risk posed by AI platforms that were never designed to provide an educational service. Using AI as a supplementary teacher therefore demands a great deal of discretion on the students’ part.
As generative AI becomes more integrated into everyday life, its influence continues to grow beyond the workplace. A recent study by Collis and Brynjolfsson (2025) suggested that ChatGPT is having a wide-reaching impact on the global economy, with non-work use growing even faster than professional use. While its long-term effects on education and work remain to be seen, it is already reshaping how people think, study and produce.