Part 1: Understanding Data Corruption in AI and Its Real-World Consequences
Artificial Intelligence (AI) has become a powerful tool in modern organisations, driving automation, insights, and even customer engagement.
However, as Michael Nieman of NTT Data highlighted during the recent BCM Institute Meet the Experts webinar, the effectiveness of AI depends on the integrity of the data it consumes.
If that data is corrupted—whether through technical glitches, human mistakes, or malicious intent—the results can be damaging just for systems, but for entire organisations.
Michael explained that data corruption comes in different forms. At the technical level, something as small as a “bit flip”—a change in a single binary digit—can completely alter stored values, leading to errors in calculation and decision-making.
Hardware failures and software bugs also contribute to data loss, while inconsistent data entries often stem from human error or poorly coded applications.
Though these may seem like isolated problems, their impact is amplified when AI models use that corrupted information to make predictions or recommendations.
The consequences are far-reaching. Faulty data leads to incorrect outputs, which undermine confidence in AI systems. Once users lose trust, the damage spreads to brand reputation, customer loyalty, and financial stability.
Michael emphasised that organisations cannot treat these as “technical errors” alone—when AI is part of decision-making, corrupted data can quickly escalate into organisational crises.
To illustrate, Michael shared several real-world cases. OpenAI’s ChatGPT was once manipulated through a prompt injection attack, where hidden instructions in a web page caused the system to leak user information.
Similarly, Microsoft’s Copilot experienced the “EchoLeak” zero-click vulnerability, where attackers were able to extract sensitive data from email-based prompts until the issue was patched.
Perhaps most relatable to the public was Air Canada’s chatbot incident, in which a passenger received incorrect fare information from the AI system. The misinformation led to legal action, financial loss, and reputational damage for the airline.
These cases underscore a simple truth: AI is only as trustworthy as the data on which it is trained and the safeguards built around it.
Organisations adopting AI must understand that corrupted data is not just an IT challenge—it is a business continuity risk, one that can compromise compliance, reputation, and customer relationships in ways that are difficult to repair.
Summing Up Part 1 – Key Points
Topic | Key Insights |
---|---|
Forms of Data Corruption | Bit flips, data loss, and inconsistent entries can disrupt AI accuracy. |
Main Causes | Human error, software bugs, hardware failures, and malicious attacks. |
Impact on AI | Leads to incorrect predictions, poor decisions, and loss of trust. |
Case Studies | OpenAI was exploited via prompt injection, a Microsoft Copilot zero-click flaw, and misinformation in an Air Canada chatbot. |
Overall Message | AI is only as reliable as its data; corrupted inputs can cause financial, legal, and reputational risks. |
Dr Goh Moh Heng, President of BCM Institute, summarises this webinar. If you have any questions, please speak to the author.
Summing Up for Parts 1 & 2 ...
Click the icon on the left to continue reading Parts 1 & 2 of Michael Nieman's presentation.