Inside the mind

Understanding how meaning emerges from memory

Understanding how people think might seem abstract, but for Randy Jamieson, it is a question that sits at the centre of human intelligence and, increasingly, the future of artificial intelligence (AI).

Jamieson, a U of M professor of brain and cognitive sciences in the department of psychology, studies how people learn, remember and understand language. Using laboratory experiments and computational modelling, his research examines how meaning forms in the human mind.

While his work intersects with AI, Jamieson said his focus is not on building technologies. “I’m really focused on understanding how people work,” he said.

Jamieson’s interest in cognitive science began in the 1990s while studying music perception during his master’s degree in auditory cognition. During that time, he encountered computational psychology — a field that uses computer models to simulate human thinking. The discovery reshaped his academic path.

“I switched over for my PhD to study how people learn artificial languages,” he said.

That work later expanded into natural language processing, a field that underpins large language models (LLMs) and modern AI systems. Although the recent surge in AI development has drawn attention to machine language systems, Jamieson believes insights from psychology remain crucial.

“I’m really excited about integrating what we know about how the human mind works to build cognitive machines that are more psychologically like us,” he said. “It’s a very interesting moment in cognitive history.”

One of Jamieson’s major research projects explores how humans construct meaning from language experiences.

In 2018, his team developed what he calls an “instance-based model of semantic cognition.” Unlike many AI systems that assign fixed representations to words, the model assumes people remember individual language experiences and construct meaning dynamically.

“The modern LLM approach is to develop methods for machine representations of words, one for each word,” Jamieson said. However, his model proposes a different process. “In my approach, I assume that people remember each of their individual language experiences and that the brain constructs a momentary meaning by ad hoc parallel retrieval,” he said.

The model has already shown promising results. Jamieson believes that it can successfully predict how people interpret words and judge meaning in experimental settings. “Humans are more like a meaning-making machine than a machine that stores and retrieves words,” he explained. Recent discussions in the field suggest that this instance-based perspective might overlap with the architecture of modern AI systems, he noted.
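The retrieval process Jamieson describes can be illustrated with a toy sketch. The code below is a hypothetical simplification in the spirit of instance-based memory models, not Jamieson’s actual model or code: each row of `memory` stands for one stored language experience, and a “momentary meaning” is built by activating every trace in parallel in proportion to its similarity to a probe, then blending the traces by those activations.

```python
import numpy as np

# Toy illustration of instance-based semantic retrieval.
# All names and parameters here (DIM, power=3, etc.) are assumptions
# for demonstration, not details of Jamieson's 2018 model.
rng = np.random.default_rng(0)

DIM = 50  # dimensionality of each stored experience vector

# Memory holds every individual "language experience" as its own trace.
memory = rng.standard_normal((200, DIM))

def construct_meaning(probe, memory, power=3):
    """Construct a momentary meaning for `probe` by parallel retrieval:
    every trace is activated in proportion to its similarity to the
    probe, and the activation-weighted sum of traces is returned."""
    # Cosine similarity between the probe and each stored trace.
    sims = memory @ probe / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(probe)
    )
    # A sign-preserving nonlinearity sharpens retrieval so that the
    # most similar traces dominate the blend.
    activations = sims ** power
    echo = activations @ memory  # similarity-weighted blend of traces
    return echo / np.linalg.norm(echo)

# Probe with a noisy version of one stored experience: the constructed
# meaning is dominated by the traces most similar to the probe.
probe = memory[0] + 0.1 * rng.standard_normal(DIM)
echo = construct_meaning(probe, memory)
```

The key contrast with a fixed-representation approach is that no word has a single stored meaning here; the meaning is recomputed from all stored experiences each time a probe arrives.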

Beyond theoretical modelling, Jamieson’s lab is also investigating everyday cognitive experiences, including the frustrating moment when a word seems just out of reach.

One of his graduate students is currently studying the “tip-of-the-tongue” phenomenon, where people feel certain they know a word but cannot fully recall it. These moments reveal surprising details about how memory works. Even when people cannot recall a word, they often remember pieces of information about it. “How can people claim they don’t know the word and yet tell you the word starts with an ‘s’ sound and has three syllables[?]” Jamieson asked.

By modelling these states computationally, researchers hope to better understand how knowledge moves between awareness and memory.

“We’re excited about getting that down in a computational model, to figure out how words come to mind, recede and hang around in intermediate knowledge states at the threshold of consciousness,” he said.

For Jamieson, the main goal of his work is not technological innovation but intellectual contribution.

“All I ever wanted was to be a part of the history of ideas,” he said. “I hope the things we’ve been thinking about are interesting to others and might even help them to step forward into even more interesting ideas.”