AAU Research: How the chatbot earns your trust
: 08.08.2025


Do you trust the chatbot? Language matters
By Peter Witten, AAU Communication and Public Affairs
Artificial intelligence (AI) is becoming an increasingly integral part of our lives - but not without challenges.
Can we trust the information we get from ChatGPT and other chatbots? Or do we risk following incorrect advice because we blindly believe in AI?
New research from Aalborg University shows that language plays a significant role in how much we trust chatbots.
Graduate students Cecilie Ellegaard Jacobsen, Emma Holtegaard Hansen and Tania Argot from the Department of Computer Science explored in their master’s thesis how people perceive and respond to answers from chatbots.
The three AAU students conducted experiments using a custom-designed chatbot based on ChatGPT. 24 participants were asked 20 yes/no questions on topics such as music, health, geography, and physics.
The key wasn’t the questions, but the answers: the chatbot responded in four different styles, varying its level of certainty and its self-presentation (“I” vs. “the system”). The goal was to examine how these two factors affected participants’ trust - both in how they perceived the answers and in how they reacted.
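The four styles described above amount to a 2×2 design crossing certainty with self-presentation. A minimal sketch of how such response conditions could be generated - the wordings and function names here are illustrative assumptions, not the students’ actual materials:

```python
from itertools import product

# Hypothetical reconstruction of the study's 2x2 design:
# certainty (confident vs. uncertain) x self-presentation ("I" vs. "the system").
CERTAINTY = ["confident", "uncertain"]
SELF_PRESENTATION = ["I", "the system"]

def phrase(answer: str, certainty: str, presenter: str) -> str:
    """Compose a chatbot reply in one of the four styles (wording is invented)."""
    subject = "I" if presenter == "I" else "The system"
    if certainty == "confident":
        return f"{subject} can say with confidence: the answer is {answer}."
    verb = "am" if subject == "I" else "is"
    return f"{subject} {verb} not certain, but the answer may be {answer}."

# Enumerate all four conditions for one sample yes/no question.
styles = [(c, p, phrase("yes", c, p)) for c, p in product(CERTAINTY, SELF_PRESENTATION)]
for c, p, text in styles:
    print(f"[{c} / {p}] {text}")
```

Crossing the two factors yields exactly four distinct reply styles, which is what lets the researchers separate the effect of expressed certainty from the effect of self-reference.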
When the chatbot responded confidently, users’ perceived trust increased, especially regarding the chatbot’s competence. Participants rated confident answers as more credible and more often chose the chatbot as their primary source of information.
However, some of the 24 participants became skeptical when the chatbot seemed overly confident - especially if it couldn’t substantiate its answers. “If it was too assertive, I lost trust immediately,” said one participant.
Some felt the chatbot seemed more honest, human, and humble when it responded with uncertainty. “It felt honest when it said ‘I’m not sure,’” noted another participant.
Others preferred a more neutral language style.
As part of the experiment, participants could also view Google’s top search result as an alternative source. Many used Google as a kind of “truth check” and often trusted it more - even when the chatbot and Google gave the same answer.
Trust in AI is therefore not just about language, but also about preconceived attitudes and habits.
Based on the study, the three students offer recommendations for designers of AI systems such as chatbots.
“The thesis shows that our trust in AI is complex and situational. It’s not just about whether we trust AI, but how and how much. Too much trust can lead to AI dependency, while lack of trust can cause us to reject helpful and useful assistance. The fine line varies from person to person and depends not only on AI’s language but also on individual attitudes and habits,” say Cecilie Ellegaard Jacobsen, Emma Holtegaard Hansen and Tania Argot.
Professor Niels van Berkel from the Department of Computer Science at AAU supervised the master’s project. He emphasizes the importance of understanding how people assess and choose to trust AI.
“The students demonstrated that both perceived and actual trust can be influenced by how AI presents its own certainty and how it refers to itself. This insight can be used to better calibrate users’ trust in future AI systems, reducing the risk of people blindly following incorrect AI advice,” he says.