A UM Knowledge Exchange panel discussion last Wednesday brought together experts from a variety of backgrounds to explore the challenges generative artificial intelligence (AI) poses to artists, legal professionals and others.
Titled “Next Steps with Generative AI,” the presentation featured U of M professors with expertise on the topic. The event took place at Degrees Diner in University Centre.
The panel was moderated by David Gerhard, head of the U of M computer science department. Gerhard said while the “core technologies” behind AI have been in development for decades, it caught the public’s attention in the past few years with the rise of models that could “generate content.”
“The fact that these models could create images based on any description or attempt to answer any question, the public became very excited about these models,” he said.
Katie Szilagyi, an assistant professor at U of M’s faculty of law, said she is “pretty critical” of the potential use of AI in legal practice and argued that it was only capable of mimicking what writing is supposed to look like based on statistical likelihood.
At the panel, Szilagyi said while some argue lawyers should make the most of the latest technology to provide good service to clients, some judges question how the use of AI impacts the duties of a lawyer.
She highlighted a recent story concerning a British Columbia lawyer who used ChatGPT to prepare legal briefs for a B.C. Supreme Court civil case. The AI was found to have “hallucinated” legal cases that had never happened, which were subsequently included in the brief.
“When you submit that as a lawyer, there’s duties of ethics that are implicated there,” she said to the panel.
Neil McArthur, associate professor of philosophy and director of the Centre for Professional and Applied Ethics at the U of M, researches AI’s impact on human interactions and the development of relationships between humans and AI. During Wednesday’s panel, he noted that people often “freak out when it comes to our relationships” with technology — drawing particular attention to robots and AI. People also worry that technology will “invade our society and destroy our relationships with each other,” he said.
He said that some people have begun to form relationships with chatbots, including intimate relationships.
“They’re looking to them as therapists, they’re looking to them as friends, they’re looking to them as grief counsellors and they’re looking to them as religious leaders or religious gods,” he said.
He urged people not to stigmatize those who form attachments to AI, but instead to embrace the technology while remaining realistic about its limitations, pointing to its potential for companionship and its “therapeutic uses.”
McArthur told the Manitoban that among his biggest misgivings about the technology’s possible negative effects are privacy concerns. He recommends “strong regulation” of private AI companies.
“When you’re divulging these kinds of personal details to an AI, I think that the security of your information and how it’s being used becomes very important,” he said.
Örjan Sandred, a professor of music composition at the faculty of music, uses generative AI to explore “new ways of structuring music for [himself].”
“I strongly believe that the tools you use will affect your output,” he said.
In his portion of the panel discussion, Sandred explored whether generative AI could compose music without the help of a human. He argued that simply showing an AI a particular type of music, such as compositions by Mozart, would merely produce music that sounded similar to Mozart rather than anything original, unless the computer made a mistake.
By teaching an AI the rules of music theory, however, Sandred argued a composer could then “start to tweak” the rules in order to “create music that no one ever heard before.”
The panel was followed by questions from Gerhard and the audience.