In January 2025, the World Economic Forum identified the spread of misinformation (unintentional) and disinformation (deliberate) as the top short-term risk to global stability and prosperity.
This threat is amplified by the decline of traditional media, the rise of information warfare, and the rapid advancement of AI.
We may have a world of information at our fingertips, but distinguishing 𝘲𝘶𝘢𝘭𝘪𝘵𝘺 from 𝘲𝘶𝘢𝘯𝘵𝘪𝘵𝘺 is increasingly difficult.
For example, a study by the Tow Center for Digital Journalism, analysing eight common AI search engines, revealed significant issues with citation accuracy. Collectively, ChatGPT, Perplexity, Perplexity Pro, DeepSeek Search, Copilot, Grok-2 Search, Grok-3 Search, and Gemini produced incorrect answers to 𝘰𝘷𝘦𝘳 60 𝘱𝘦𝘳𝘤𝘦𝘯𝘵 𝘰𝘧 𝘲𝘶𝘦𝘳𝘪𝘦𝘴. Worryingly, several tools confidently provided wrong information and fabricated links, often citing syndicated and plagiarised articles.
Further studies show how AI search engines can be manipulated by the proliferation of false news, effectively poisoning their knowledge base. The rather sinister term ‘LLM grooming’ is rapidly entering the digital lexicon.
Given that generative AI is, by nature, generative, with its output feeding back into the very systems that produce it, these issues are escalating quickly.
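To build intuition for how this kind of poisoning works, consider the toy sketch below. It is entirely hypothetical: the documents are made up and the naive majority-vote answering rule bears no resemblance to a production search engine, but it shows how sheer repetition of a false claim can flip the answer a retrieval-style system returns.

```python
# A deliberately naive sketch of knowledge-base poisoning, for intuition only.
# The documents and the majority-vote answering rule are hypothetical and
# nothing like a real AI search engine.
from collections import Counter

def answer_by_majority(corpus: list[str], query_term: str) -> str:
    """Answer with the claim that appears most often among matching documents."""
    relevant = [doc for doc in corpus if query_term in doc.lower()]
    return Counter(relevant).most_common(1)[0][0] if relevant else "no answer"

genuine_reporting = ["the bridge remains open"] * 3
fabricated_story = "the bridge has collapsed"

# Before the flood, genuine reporting dominates.
print(answer_by_majority(genuine_reporting, "bridge"))
# -> the bridge remains open

# A coordinated campaign republishes the fabricated story at scale;
# sheer repetition now wins the vote and the 'knowledge base' is poisoned.
poisoned_corpus = genuine_reporting + [fabricated_story] * 50
print(answer_by_majority(poisoned_corpus, "bridge"))
# -> the bridge has collapsed
```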
What can we do?
While the adage 'a single truth is worth a thousand lies' holds, it doesn't offer a very practical solution.
Instead, the next phase of the Information Age will be defined by our demand for transparency, commitment to honesty, and access to reliable verification tools.
As AI tools continue to wow us with their speed and proficiency, it's easy to forget the role humans play in guiding insight.
Digital systems make calculations based on the data they are fed. AI may be able to construct its own logic based on that data, but it will never know what we haven't shared. Thus, critical thinking remains a core competency in leadership and decision-making.
When humans make decisions, we instinctively evaluate more data points than we could ever identify, let alone codify.* When we make decisions based on computerised data, we've already put our human fingerprint on the output by determining the input.
This is not necessarily a problem, but it is worth remembering nonetheless.
“You are what you read” is a well-known saying, reflecting the sentiment that we are shaped by what we consume.
In knowledge management theory, this concept is mirrored in the “DIKW Pyramid”. This model, which has its roots in the mid-20th century, shows how data (D) can become information (I), knowledge (K), and ultimately wisdom (W)—and then what?
The DIKW Pyramid, though useful in depicting the transformation of data into wisdom, presents an incomplete picture. It implies a static endpoint, neglecting the dynamic nature of intellectual development. In reality, wisdom empowers us to refine our perspective, creating a new foundation for subsequent learning.
This also implies that even small differences in starting points can compound into vastly different results over time. We call this phenomenon the DIKW butterfly effect (see illustration).
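To make that divergence concrete, here is a minimal, entirely hypothetical sketch. It reduces a 'worldview' to a single number and borrows the well-known logistic map as a stand-in for a learning loop in which each conclusion seeds the next cycle; it is not the model described above, merely an illustration of how a gap of one part in a thousand can grow into a markedly different position.

```python
# A toy illustration of the "DIKW butterfly effect" described above.
# Assumptions (illustrative only): a "worldview" is compressed to a single
# number in [0, 1], and the classic logistic map stands in for a feedback
# loop in which each conclusion becomes the seed of the next learning cycle.

def next_worldview(x: float, feedback: float = 3.9) -> float:
    """One learning cycle: the previous conclusion seeds the next one."""
    return feedback * x * (1.0 - x)

reader_a = 0.400   # two readers begin almost identically...
reader_b = 0.401   # ...differing by one part in a thousand

for cycle in range(1, 21):
    reader_a = next_worldview(reader_a)
    reader_b = next_worldview(reader_b)
    if cycle % 5 == 0:
        # Watch the initially negligible gap widen as the cycles compound.
        print(f"cycle {cycle:2d}: A = {reader_a:.3f}, "
              f"B = {reader_b:.3f}, gap = {abs(reader_a - reader_b):.3f}")
```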
Therefore, when exploring data, it’s crucial to understand not only 𝘸𝘩𝘢𝘵 we know, but also 𝘸𝘩𝘺 we know it. To develop a comprehensive worldview, we must consistently challenge our assumptions, actively seek out new perspectives, and interrogate our blind spots.
The rise of AI, particularly Large Language Models (LLMs), underscores the urgency of this approach.
LLMs are increasingly used to process vast datasets, generating information and knowledge that shape our thoughts and actions. However, no system is without its limitations. LLMs are inherently constrained by their (digital) data horizons, and the opacity of their reasoning can mask underlying assumptions or flawed logic.
The onus, therefore, remains on us. Recognising that both human and artificial intelligence are works in progress, we must maintain our curiosity and continue to expand our understanding beyond the confines of 'the known'.