Generative artificial intelligence (AI) has rapidly gained popularity. In just a few years, tools like ChatGPT and Gemini have shifted from the edges of Silicon Valley to the heart of our digital lives.
Thanks to their remarkable ability to generate smooth, conversational and seemingly knowledgeable responses, many people now regard generative AI as the next step in search engines: a faster, friendlier and more intuitive way to look things up.
While the interface may seem like a search, the engine behind generative AI operates on a completely different basis. Generative AI is not meant to retrieve facts; it is intended to produce plausible text.
Confusing the two is not just a technical mistake; it is a category error with significant implications for how we access, trust and act on information.
Retrieval vs. generation
The best way to understand the difference is to imagine a librarian and a storyteller. A search engine acts as a librarian for the world’s most extensive library: the internet. Its role is to retrieve information. It indexes billions of documents, and when you ask a question, it searches for and ranks the most relevant sources.
It does not generate knowledge; it directs you to it. The responsibility for judgement rests with the user who can click on links, assess credibility and consider evidence.
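To make the librarian model concrete, here is a toy sketch in Python. The documents and URLs are invented, and real engines use vastly more sophisticated signals, but the shape is the same: index documents, match terms, rank results and hand back sources for the reader to judge.

```python
from collections import defaultdict
import math

# Toy corpus standing in for the web: invented URLs mapped to text.
documents = {
    "https://example.org/a": "venda language history and culture in limpopo",
    "https://example.org/b": "how search engines crawl and index the web",
    "https://example.org/c": "limpopo geography rivers and settlements",
}

# Inverted index: term -> set of documents containing that term.
index = defaultdict(set)
for url, text in documents.items():
    for term in text.split():
        index[term].add(url)

def search(query: str) -> list[tuple[str, float]]:
    """Rank documents with a simple TF-IDF-style score and return sources."""
    scores = defaultdict(float)
    n_docs = len(documents)
    for term in query.lower().split():
        matching = index.get(term, set())
        if not matching:
            continue
        idf = math.log(n_docs / len(matching))  # rarer terms count for more
        for url in matching:
            tf = documents[url].split().count(term)
            scores[url] += tf * idf
    # The engine returns ranked links; judging them is left to the reader.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(search("limpopo history"))
```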
A generative AI, by contrast, acts as a storyteller. It does not search; it predicts. Trained on vast amounts of text, it calculates the most statistically likely word to follow, constructing sentences one token at a time. Its goal is coherence, not accuracy. It is designed to sound right, not necessarily to be right.
It achieves this using deep neural networks trained to maximize the plausibility of the continuation, a process that is an approximation rather than an exact science.
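To see how different this is from retrieval, consider a deliberately simplified sketch of next-token prediction. The tiny probability table below is a hypothetical stand-in for what a neural network's output layer computes from billions of learned parameters; the loop itself, however, is faithful: sample a likely next token, append it, repeat.

```python
import random

# A drastically simplified picture of autoregressive generation: at each
# step the "model" assigns a probability to every candidate next token
# and one is sampled, conditioned only on the text so far. The hand-made
# table below is a hypothetical stand-in for a neural network's output.
def next_token_distribution(context: list[str]) -> dict[str, float]:
    if context[-1] == "is":
        # Invented probabilities for illustration; a real model derives
        # these from patterns in its training data, not from any source.
        return {"Venda": 0.4, "Tsonga": 0.35, "unknown": 0.25}
    return {"is": 1.0}

def generate(prompt: list[str], steps: int) -> list[str]:
    tokens = list(prompt)
    for _ in range(steps):
        dist = next_token_distribution(tokens)
        words, probs = zip(*dist.items())
        tokens.append(random.choices(words, weights=probs)[0])
    return tokens

# May print "Ramaphosa is Venda" or "Ramaphosa is Tsonga": plausible
# either way, verified neither way.
print(" ".join(generate(["Ramaphosa"], steps=2)))
```

Nothing in this loop consults a source; whichever continuation the model deems most plausible is the one you get, right or wrong.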
This distinction may seem abstract, but the consequences are very real. When we confuse retrieval with generation, we risk accepting plausibility as truth.
The Venda identity test
To illustrate this flaw, I examined a domain with which I am familiar, the Venda ethnic identity. As a pan-Africanist, I despise tribalism and all its expressions, but I am simply using Venda as a test case that clearly reveals the weaknesses of generative AI.
I asked a well-known generative AI to identify famous Venda people. The results were eye-opening and flawed. Beyond identifying me correctly, the answers revealed some severe limitations.
For example, the AI claimed that South African President Cyril Ramaphosa is not Venda, but can merely speak the language. This is incorrect. The mistake probably stems from a simple association: Ramaphosa was born in Soweto, and the AI appears to have mistaken his birthplace for his heritage, overlooking his family lineage.
Regarding another prominent South African, the AI confidently identified Professor Tinyiko Maluleke, a respected theologian and academic, as Venda instead of Tsonga.
The confusion here seems geographic. Maluleke has strong ties to Valdezia in Limpopo, a historic Tsonga settlement near Venda territory. The AI confused regional proximity with ethnic identity, leading to a plausible yet incorrect connection. This misunderstanding is not unique to AI; many South Africans might make the same mistake.
It described Reverend Frank Chikane, a prominent church leader and anti-apartheid activist, as ethnically Venda, which he is not. It seems to be a pure “hallucination”, possibly caused by the AI noting Chikane’s political activism at the then University of the North, which is geographically close to the Venda region.
These mistakes are serious. They reveal how generative AI simplifies complexity into false confidence. It replaces the subtleties of heritage, geography and history with statistical estimates, creating a story that sounds certain but is often inaccurate.
Why plausibility is dangerous
These examples show why treating generative AI like a search engine can be dangerous.
First, there is the black-box character of AI, a fundamental flaw stemming from the lack of transparency in deep neural networks. Unlike a Google search, which offers verifiable links, a generative AI’s response is delivered as a finished product, without its origins. You cannot easily trace the answer back to credible references. Truth needs a chain of evidence; generative AI often breaks that chain.
Second, there is the AI confidence trap. Generative models produce language that is fluent, confident and authoritative. This creates an “illusion of authority”, exploiting our natural tendency to trust well-presented information. The polished prose makes falsehoods sound true, reducing the scepticism we might bring to a poorly written source.
Third, there is the AI time capsule gap. Search engines constantly crawl and update their web index, providing near real-time information.

In contrast, generative AI models are static, with knowledge fixed at the date of their last training cut-off. They cannot tell you about yesterday’s news or today’s scientific breakthroughs, yet they often try, filling the gaps with estimated details. Narrowing this time capsule gap is possible, but it is far more computationally expensive than keeping a search index fresh.
Fourth, generative AI has an accountability gap. When a search engine presents a faulty source, you can see who published it. But when an AI hallucinates, who is responsible? The developer? The company that deployed it? The user who trusted it? The lack of accountability allows misinformation to spread unchecked.
The way forward: hybrid systems and digital literacy
Generative AI is an impressive tool for summarization, tutoring, coding and creative exploration. However, like any tool, it is adequate only for the task for which it was designed. A hammer isn’t a screwdriver; generative AI isn’t a search engine.
The solution is not to discard this technology, but to use it responsibly and to push for improvements. The future probably lies in hybrid systems that merge the strengths of retrieval and generation.
A model that starts by searching a verified database of information and then crafts an answer, complete with transparent citations, would be a significant advancement. This method, called retrieval-augmented generation, grounds plausible storytelling in verifiable facts.
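In outline, such a system might look like the sketch below. The verified store, the single fact in it and the source URL are all hypothetical stand-ins, but the pipeline’s shape is the point: retrieve from trusted material first, generate only from what was retrieved, and always attach the citation.

```python
# Minimal sketch of retrieval-augmented generation (RAG). The verified
# store, its single fact and the source URL are all hypothetical; the
# point is the pipeline's shape: retrieve first, generate second,
# cite always.
VERIFIED_FACTS = {
    "cyril ramaphosa": (
        "Cyril Ramaphosa is of Venda heritage.",
        "https://example.org/verified-biography",
    ),
}

def retrieve(question: str) -> list[tuple[str, str]]:
    """Return (passage, source) pairs from the store matching the question."""
    return [
        (passage, source)
        for key, (passage, source) in VERIFIED_FACTS.items()
        if key in question.lower()
    ]

def answer(question: str) -> str:
    hits = retrieve(question)
    if not hits:
        # Declining to answer beats hallucinating when retrieval is empty.
        return "No verified source found; cannot answer."
    # A production system would hand these passages to a language model
    # as grounding context; here the grounded passage itself is returned,
    # always with its citation attached.
    passage, source = hits[0]
    return f"{passage} [source: {source}]"

print(answer("Is Cyril Ramaphosa Venda?"))
```

The crucial design choice is the refusal path: when retrieval finds nothing, the system says so rather than inventing an answer.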
Until such systems become commonplace, the responsibility lies with us as users. We need to develop healthy scepticism and practise digital literacy. We should treat every AI-generated answer not as a final truth, but as a starting point for further investigation.
Grave dangers
Generative AI holds great potential, but misuse poses grave dangers. If we view it as a search engine, we risk replacing knowledge with confusion, facts with guesses, and truth with fiction.
The Venda identity test is just one example. Today, it confuses ethnicity; tomorrow, it could mislead medical advice, legal decisions or diplomatic history. The risk isn’t in the technology itself, but in our failure to understand what it can and cannot do.
Let’s be clear: generative AI isn’t Google. It’s a powerful tool for creativity and synthesis, but it’s not a fact repository. Confusing the two risks building our future on illusion, instead of truth.
Suggested citation: Marwala, Tshilidzi. "Is Generative AI a New Frontier in Digital Interaction, or Just a Mirage of Truth?" United Nations University, UNU Centre, 2025-10-14, https://unu.edu/article/generative-ai-new-frontier-digital-interaction-or-just-mirage-truth.