Making AI more transparent

Measuring what people know — and think they know — about AI

Ariful Islam Anik, a PhD candidate in the department of computer science at the U of M, has spent the past seven years focused on the question, “How can artificial intelligence (AI) systems be made more understandable for everyday users?”

At a broad level, Anik’s work addresses the lack of transparency in AI systems by exploring how people make sense of these systems. “Nowadays we use AI systems in different aspects of our life,” he said. “But if you think about it, despite all the use, maybe very few users […] might actually know how these systems work, what sorts of data [are used], and the decisions that might have been taken in the makings of these systems that have different impacts on the system’s performance.”

Instead of focusing on the algorithms behind AI, Anik looks at the data used to train these systems. He designs methods to share information about data in ways that non-experts can understand. “My focus is on creating or providing information that a user can understand without having [a] technical background,” he explained. “So general-purpose users […] can have an informed judgment about the system they are using, to make their interactions better.”

A recent project led Anik to create an instrument that measures users’ AI knowledge and confidence. Drawing on survey responses from 500 people, he developed and validated a scale that assesses how well people understand AI systems. The results revealed a surprising pattern.

“We found that people who have low self-reported AI knowledge, they underestimate their knowledge […] Their performance shows that they actually know more than […] they think,” Anik said.

On the other hand, the study showed the opposite effect for more experienced users. “People who are highly confident about their knowledge, they’re overestimating [their knowledge] based on their performance,” he said. The finding reverses a well-known pattern in psychology called the Dunning-Kruger effect, which suggests that people with low ability often overestimate their skills, while highly skilled individuals tend to underestimate theirs.

This study is still under review, but Anik sees it as the culmination of his PhD research. Earlier in his program, he experimented with different ways of presenting information about data, such as narrative storytelling versus structured question-and-answer formats. Both approaches not only improved user understanding and trust but also gave people the tools to judge a system critically rather than simply accept it.

“I love doing user-centred research because it allows me to understand how people think, what they need, and how design can either empower them or hold them back,” Anik noted. “A big part of what excites [me] about my research is […] the direct connection with users during both formative and evaluation phases, and the deeper reflection that comes with analyzing results to generate insights.”

Looking at the broader picture, he hopes his research contributes to a shift in how AI is designed and used. His vision is to keep technology human-centred by providing users with tools and information that help them decide how to use AI systems.

“In our research area, different people are involved in developing a system — developers, system designers, administration — but users are not really part of the development process,” Anik said. “My research […] tries to be the voice for the users, so they can have a good experience,” he added.