Failures in generative AI can have safety-critical consequences, so why isn’t the technology monitored in the same way as aviation, medicine and other safety-critical fields?
The Importance of Incident Reporting
In safety-critical industries like aviation and medicine, incidents are meticulously tracked and investigated. However, similar mechanisms are notably absent in the rapidly advancing field of AI, warns the Centre for Long-Term Resilience (CLTR), a UK think tank. As AI systems become more integrated into critical real-world applications, the need for robust incident reporting frameworks becomes increasingly urgent.
AI systems sometimes malfunction or produce erroneous outputs when faced with scenarios beyond their training data. Problems can also arise when AI’s objectives are improperly defined or when the system’s behavior cannot be adequately verified or controlled. Notable AI safety incidents include:
- Trading algorithms causing market “flash crashes”
- Facial recognition systems leading to wrongful arrests
- Autonomous vehicle accidents
- AI models spreading harmful or misleading information via social media
Incident reporting can help AI researchers and developers learn from past failures. By documenting cases where automated systems misbehave, fail, or put users at risk, we can better identify problematic patterns and mitigate those risks.
The Current State and Recommendations
Without an adequate incident reporting framework, systemic problems could emerge undetected. AI systems could directly harm the public, for example by improperly revoking access to social security payments. The CLTR’s investigation found that the UK lacks a central, up-to-date picture of AI incidents as they emerge.
The UK’s Department for Science, Innovation & Technology (DSIT) does not currently have a comprehensive system for collecting and managing AI incident reports. CLTR recommends that DSIT establish a framework for reporting public sector AI incidents and identify gaps in existing incident-handling procedures based on expert advice. They also suggest enhancing the capacity to monitor, investigate, and respond to incidents through measures like establishing a pilot AI incident database.
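To make the recommendation concrete, a pilot incident database need not be elaborate. The sketch below is illustrative only; the schema, field names, and SQLite backing store are assumptions made for the example, not part of CLTR’s or DSIT’s actual proposals.

```python
# Illustrative sketch only: the schema and field names are assumptions,
# not a reference to any real CLTR or DSIT specification.
import sqlite3
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIIncident:
    system_name: str   # e.g. "benefits-eligibility-model" (hypothetical)
    severity: str      # e.g. "low" | "medium" | "high"
    description: str   # what went wrong and who was affected
    reported_by: str   # team or individual filing the report
    occurred_at: str   # ISO 8601 timestamp of the incident itself

def init_db(path: str = "ai_incidents.db") -> sqlite3.Connection:
    """Create the incident table if it does not already exist."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS incidents (
               id INTEGER PRIMARY KEY AUTOINCREMENT,
               system_name TEXT, severity TEXT, description TEXT,
               reported_by TEXT, occurred_at TEXT, logged_at TEXT)"""
    )
    return conn

def log_incident(conn: sqlite3.Connection, incident: AIIncident) -> None:
    """Append an incident record, stamped with the time it was logged."""
    row = asdict(incident)
    row["logged_at"] = datetime.now(timezone.utc).isoformat()
    conn.execute(
        """INSERT INTO incidents
           (system_name, severity, description, reported_by, occurred_at, logged_at)
           VALUES (:system_name, :severity, :description, :reported_by, :occurred_at, :logged_at)""",
        row,
    )
    conn.commit()
```

Even a minimal record like this, kept consistently, would give regulators and developers the central, up-to-date picture the CLTR says is missing.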
Industry Reactions and Calls for Regulation
Industry experts have had mixed reactions to CLTR’s report. Ivana Bartoletti, chief privacy and AI governance officer at Wipro and co-founder of the think tank Women Leading in AI, supports improving incident response. “Incident reporting is a key part of AI governance at both government and business levels,” stated Bartoletti. “Incident analysis can inform regulatory responses, tailor policies, and drive governance initiatives.”
Crystal Morin, cybersecurity strategist at Sysdig, believes existing regulatory frameworks are sufficient. “AI-specific reporting regulations seem unnecessary when comprehensive guidelines like NIS2 exist,” stated Morin.
Veera Siivonen, CCO and partner at Saidot, advocates a balance between regulation and innovation. According to Siivonen, the industry needs guardrails that do not stifle its potential for experimentation.

Industry-Specific Needs and Internal Measures
Nayan Jain, executive head of AI at digital studio ustwo, argues that AI governance should be industry-specific, with each sector requiring a tailored approach. “AI itself can be used to monitor live systems, report incidents, and manage risks with automated solutions or fixes,” said Jain.
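One way to read Jain’s point in code: a deployed model can be wrapped in a lightweight monitor that files an incident report whenever a call fails outright or returns a suspiciously low-confidence result. This is a minimal sketch under assumed interfaces; the `report` hook, the confidence threshold, and the model signature are all hypothetical.

```python
# Hypothetical monitoring wrapper: the model interface, threshold, and
# reporting hook are illustrative assumptions, not any specific product.
from typing import Any, Callable

def monitored_predict(
    model: Callable[[Any], tuple[Any, float]],  # returns (prediction, confidence)
    report: Callable[[str], None],              # incident-reporting hook
    confidence_floor: float = 0.6,
) -> Callable[[Any], Any]:
    """Wrap a model so failures and low-confidence calls leave an incident trail."""
    def predict(x):
        try:
            prediction, confidence = model(x)
        except Exception as exc:                # hard failure: report and re-raise
            report(f"model raised {type(exc).__name__}: {exc}")
            raise
        if confidence < confidence_floor:       # soft failure: flag for review
            report(f"low-confidence output ({confidence:.2f}) for input {x!r}")
        return prediction
    return predict
```

A wrapper like this does not replace human review; it simply ensures that questionable outputs leave a trail that an incident-handling process can pick up.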
To effectively record and manage AI incidents, enterprises should implement a comprehensive incident logging system, and real-time monitoring tools are essential, according to Luke Dash, CEO of risk management platform ISMS.online. “Implementing robust version control for AI models and datasets is crucial to track changes and allow for rollbacks if necessary,” stated Dash. Regular testing and validation of AI systems can also help identify potential issues proactively.
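Dash’s points about version control and rollback translate into practice as release metadata that ties each deployed model to the exact dataset it was trained on. The sketch below assumes a simple in-house JSON registry purely for illustration; in practice, established tools such as MLflow or DVC provide this kind of tracking.

```python
# Minimal sketch of version metadata for models and datasets, assuming a
# hypothetical in-house JSON registry rather than any particular tool.
import hashlib
import json
from pathlib import Path

def file_digest(path: str) -> str:
    """Content hash so a model or dataset version can be verified later."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def register_release(registry: str, name: str, model_path: str, dataset_path: str) -> dict:
    """Record which model and dataset shipped together, enabling rollback."""
    entry = {
        "name": name,
        "model_sha256": file_digest(model_path),
        "dataset_sha256": file_digest(dataset_path),
    }
    releases = json.loads(Path(registry).read_text()) if Path(registry).exists() else []
    releases.append(entry)
    Path(registry).write_text(json.dumps(releases, indent=2))
    return entry

def previous_release(registry: str) -> dict | None:
    """Return the release before the current one, i.e. the rollback target."""
    releases = json.loads(Path(registry).read_text())
    return releases[-2] if len(releases) >= 2 else None
```

Recording content hashes alongside each release means that, after an incident, the previous model-and-dataset pair can be identified and restored with confidence that nothing has silently changed.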
Adopting standards like ISO 42001—a framework for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS)—can help organizations manage AI incidents and develop governance strategies. Dash also recommends setting up an AI ethics committee to oversee governance and incident management, with input from development teams, legal departments, and risk management.
Whistleblowing and Legal Considerations
Flagging problems with AI systems also raises questions of employment law. If a company uses AI in a way that breaks the law or endangers health and safety, whistleblowers who report this to their employer are protected against retaliation under the current regime, according to Will Burrows, partner at Bloomsbury Square Employment Law. However, if a broader incident-reporting regime is introduced, whistleblowing laws would need to be extended to protect those who report AI incidents to DSIT.
Burrows warns of potential group litigation claims for harm caused by AI, emphasizing the importance of encouraging internal staff to report problems. “Whistleblowers often spot issues at an early stage and therefore ought to be listened to and not silenced,” he said. Organizations should establish formal internal procedures for reporting AI incidents.
Moving Forward
As AI continues to evolve and integrate into critical aspects of society, the need for robust incident reporting frameworks becomes imperative. By learning from other safety-critical industries and implementing comprehensive reporting systems, we can mitigate risks and ensure AI’s safe and ethical deployment. Balancing regulation with innovation, industry-specific governance, and protective measures for whistleblowers will be key to navigating the complex landscape of AI safety.