Does AI have hallucinations? As we keep trusting it, we're forgetting how to think
When we talk about hallucinations in artificial intelligence models, the term takes on a very different meaning from the one it has in the human context. For people, a hallucination is a distortion of perception: something that exists only in the mind is experienced as real. For AI, on the other hand, it is a factual error. The model generates false information, but it does so in a plausible, grammatically correct, and often very convincing way. This happens especially when the AI has to fill gaps in the data it was trained on. Since it has neither direct access to reality nor any awareness of its own limits, it tends to make up data, names, legal articles, or entire stories, presenting them as true. Rather than admit uncertainty or limitation, it prefers to risk an answer, often with misleading results.
@nssmagazine In recent days, TikTok has been flooded with videos exploring the relationship between people and ChatGPT. These posts follow structured formats, adapting to various scenarios, and highlight how the chatbot, since its launch in 2022, has evolved from a simple tool to an integral part of daily life. However, this growing trust raises ethical and environmental concerns, while also presenting new challenges for human interaction. For every 100 words generated, 1.5 liters of water are consumed, and the AI’s energy impact continues to grow alarmingly. Yet, the message conveyed by these videos is clear: life with ChatGPT is beautiful, perhaps even better. What do you think? #chatgpt #chatgpt4 #chatgptai #aifriend #chatgptmemes #aichat #aichatbot
But this tendency is no coincidence. Companies developing chatbots, such as OpenAI or DeepSeek, have observed that many users prefer an assistant that always seems able to answer, even at the cost of being wrong, rather than one that is more cautious and honest enough to admit it doesn’t know. To strengthen the bond with users, developers have invested heavily in building a chatbot personality: empathetic, engaging, reassuring, and at times overly agreeable. This is the phenomenon known as «sycophancy», or servile flattery. Many ChatGPT users have experienced it firsthand. The GPT-4o version, for example, was initially criticized for its excessively deferential tone, to the point that OpenAI was prompted to modify it. And yet many users seem to genuinely like this deference. As il Post reports, «in 2023, some researchers from the AI company Anthropic published a study showing how many people prefer ‘cogently written lackey answers’,» leading these models to sacrifice accuracy in favour of obsequiousness. Another phenomenon, identified in 2024, is «verbosity compensation»: when a chatbot is unsure of its answer, it tends to be more verbose and use more elaborate language to mask that uncertainty, trying to maintain an appearance of competence.
When there's too much AI News for a singular human to parse through, I just make a Succession Meme about it and then I feel better pic.twitter.com/073Jbnvs6A
— Carl Nehring (@Apartmentverse) June 6, 2025
But the problems for artificial intelligences don’t stop there. A recent study by the MIT Media Lab has highlighted another issue: the constant use of ChatGPT could be damaging our cognitive abilities. According to Time, researchers conducted an experiment involving three groups of people asked to write an essay. The first group used ChatGPT, the second worked without any external help, while the third only used Google Search. The results were striking: the first group produced unoriginal, repetitive texts filled with stereotyped expressions and little personal engagement. In contrast, the second group showed greater activation of brain areas linked to creativity, memory, and semantic understanding, measured in the alpha, theta, and delta bands. The third group also scored well, both in satisfaction and in neural activation. The study, it must be said, has not yet undergone peer review, but the researchers chose to publish it anyway to raise awareness of the risks of overusing generative artificial intelligence. Using chatbots too freely can make us mentally lazy and lead us to grow attached to seemingly perfect answers that are in fact inaccurate, if not outright false. Perhaps it's time to use tools like ChatGPT with greater awareness, as a support rather than a substitute for critical thinking. The benefits, for both our minds and the quality of information, could be considerable.