Artificial intelligence (AI) is quickly becoming a regular topic of conversation in the classroom. Each professor and student is tasked with managing the AI conversation and deciding what is allowed and what is not. Alongside the conversation has come debate, both spoken and unspoken. The reality is that the technology is here. Exactly how we will manage it remains to be seen.
AI is simply technology that aims to mimic human intelligence. And it comes in many shapes and forms.
AI tools have been around for some time. Grammarly, for example, is a commonly used resource. Released in 2009, it predates ChatGPT, which came out in 2022. Even before that, we had the predictive text technology of T9, invented in 1995. T9 was designed for cell phones, making texting easier with only nine keys.
All of these tools count as AI. So why are they viewed so differently?
For one, they are all different types of AI.
T9 used machine learning to match key presses to words in its dictionary, incorporating users’ frequency of use into its predictions. This technology developed into what we see now in Microsoft Word, Gmail and even search engines’ predictive text. While the U of M’s guidelines on academic misconduct using AI are somewhat vague, predictive text counts as AI, and using it runs the risk of academic misconduct, despite it being automatically integrated into most software.
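To make the mechanism concrete, here is a minimal sketch of how T9-style prediction works, assuming the simplest design: each word is reduced to the digit sequence that types it, and candidates sharing that sequence are ranked by how often the user has chosen them. The word list and frequency counts below are made up for illustration; this is not Tegic’s actual implementation.

```python
# T9-style predictive text sketch: keys 2-9 each cover a few letters,
# so many words collapse onto the same digit sequence.
KEYS = {
    '2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
    '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz',
}
LETTER_TO_KEY = {ch: digit for digit, letters in KEYS.items() for ch in letters}

def to_digits(word):
    """Convert a word to the key sequence that types it, e.g. 'home' -> '4663'."""
    return ''.join(LETTER_TO_KEY[ch] for ch in word.lower())

def build_index(word_freq):
    """Group dictionary words by digit sequence, most frequently used first."""
    index = {}
    for word, freq in word_freq.items():
        index.setdefault(to_digits(word), []).append((freq, word))
    return {seq: [w for _, w in sorted(pairs, reverse=True)]
            for seq, pairs in index.items()}

# Hypothetical frequency counts; a real phone updated these as you typed.
index = build_index({'home': 120, 'good': 90, 'gone': 75, 'hoof': 5})

print(index['4663'])  # 'home', 'good', 'gone' and 'hoof' all share keys 4-6-6-3
```

Typing 4-6-6-3 surfaces "home" first only because its count is highest; a phone that learned you texted "gone" constantly would reorder the list, which is the frequency-of-use behaviour described above.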
Grammarly could get you in trouble too, as it is also on the list of software the U of M counts as AI and therefore prohibits. Grammarly uses natural language processing (NLP), a type of artificial intelligence that applies machine learning to interpret human language fluently. Not unlike T9, it uses machine learning driven by user data to check grammar and spelling, indicate tone, and edit your writing for clarity. The difference is the complexity of the language it can understand and of the suggestions it can output.
In 2023, Grammarly integrated generative AI into its technology. Generative AI is the same AI technology used by the notorious ChatGPT. AI is littered throughout all our technology. Even search engines use weak forms of AI. So, what makes these technologies different? Why did ChatGPT spark a significant conversation?
Generative AI, unlike the other forms of AI discussed above, can create novel content without repeating the data that trained it. It takes AI a step further than NLP. ChatGPT is the face of it. Released in 2022, entirely free to access online and easy to use as a chatbot, it was groundbreaking in both its generative technology and its accessibility and usability.
However, this is not the first time educators have worried about students’ reliance on technology. Googling was considered especially problematic when it gained popularity. Teachers were concerned about students’ expectations of finding information easily and about whether those sources would be reliable or credible. If you grew up in the internet age, you remember hearing “don’t believe everything you read online” from a teacher or parent.
Now, we have clearly stated guidelines around the usefulness of Google and where it is lacking. The U of M Libraries state that Google is helpful for finding general information but is not practical for academic research.
As students, we know. We have been taught to identify reputable sources and to use others for comprehension, not citations. We have been taught, and have learned, how to use Google to our advantage, not our detriment. If Google had been banned, and you feared being reprimanded for using it, would you have gained this crucial knowledge and experience?
New technology invites technological panic. The bad press and overemphasized debate surrounding new tech can lead to ill-advised bans and restrictions. I believe this is what we are seeing with generative AI. Currently, the U of M bans all forms of AI, from predictive text to NLP to generative AI. Could this negate an opportunity for us as students, educators and administration to be transparent about the benefits and cautious about the harms? This is the conversation we should be having.
Even critics of AI list both the benefits and pitfalls. Pitfalls that might first come to mind are academic integrity and plagiarism. Misusing AI can mean cheating, taking a robot’s work and passing it off as your own.
Generative AI can also hallucinate information, inventing citations to academic articles, court cases and more. Users need to be wary of these inaccuracies.
It also comes with bias. The data fed to AI is biased, so the output it produces may be biased as well. On top of that, AI takes an immense amount of computing power, making energy use an issue of sustainability.
On the other hand, Marc Watkins, a writing and rhetoric lecturer at the University of Mississippi, experimented with an AI working group to explore AI’s possibilities. The group used AI as a counterargument generator and brainstorming tool. The result was that AI as a writing tool helped boost creativity and aid the students’ writing process. Watkins’ account was hopeful and practical while stressing the importance of understanding the limitations of AI.
The biggest issue is that students remain unclear about what usage is acceptable and what is prohibited. It is essential that educators and students alike can make clear and informed decisions about AI use while honouring the pursuit of knowledge and academic integrity.
My call to action: use AI. Use it just once or use it daily or weekly — experiment with how it can be used in academia and how it can be used personally. It has been found that those who actually use AI, even moderately, quickly begin to understand its benefits. Treat it as a tool like we do any other technology.
I am not saying you should use it to complete an assignment. If for no other reason, it produces crappy writing. But it is an advanced technology with many possibilities that could work to our advantage. How will we learn what those are if we are too stubborn or too scared to try them, or even worse, to talk about them?