On February 11, 2026, techUK convened a timely discussion exploring the intersection of AI insurance, assurance, and risk mitigation. The panel brought together Philip Dawson (Armilla AI), Professor Lukasz Szpruch (University of Edinburgh/Alan Turing Institute), Sue Turner (AI Governance Limited/Bristol University), and Matthew McDermott (Access Partnership) to unpack this emerging but critical space. Together, we spoke around a core question: what is AI insurance, and how is it related to incentivising AI assurance and risk mitigation?
The following insight is a summary of the webinar session which you can watch below:
If you find this insight interesting and would like to learn more, you can read this deep dive insight here. For further conversations on this topic, please email Tess Buckley, the programme manager who moderated the session, at [email protected].
The State of AI Insurance: A Market in Transition
Philip Dawson opened by describing Armilla’s journey from a SaaS AI evaluation platform founded in 2021 to launching AI liability insurance products in April 2025. His key provocation: AI insurance incentivises AI assurance when underwriting embeds the safety and risk mitigation tools that government and industry already rely upon. The insurance market has shifted dramatically even in recent months. Just two to three years ago, AI-specific insurance was barely discussed. Early 2025 saw debate about whether traditional products (such as cyber insurance, professional liability, and errors and omissions cover) could adequately cover AI risks. By late 2025, major insurers began exploring specific AI exclusions in liability products, signalling recognition that existing forms may not suffice.
This evolution reflects growing awareness that AI presents distinct risks beyond conventional technology. As systems become more autonomous in critical decision-making contexts (HR, insurance underwriting, credit decisions), questions about coverage gaps have intensified. Whether AI insurance becomes a niche product for high-risk sectors or a baseline necessity for all businesses deploying AI remains an open question.
Learning from Cyber Insurance: Cautionary Tales
The panel drew extensive parallels with cyber insurance, both its successes and failures. Matthew McDermott noted that cyber insurance helped normalise security controls and incident response planning, with insurers requiring evidence of good practices. Risk pricing through premiums and exclusions creates commercial incentives for better governance.
However, Lukasz Szpruch offered a sobering assessment based on research into cyber insurance outcomes. Despite initial optimism that insurance would incentivise better security posture, the reality proved more complex. Assessing effectiveness of cybersecurity comprehensively is difficult, leading insurers to rely on superficial surveys rather than rigorous analysis. Market competition meant organisations often chose less intrusive insurers who demanded minimal security scrutiny.
Perversely, the threat of lawsuits following data breaches made companies reluctant to share breach information, the opposite of the transparency that would improve risk modeling. Insurers often sent lawyers first rather than technical staff to investigate incidents. Even after 15-20 years of data, historical cyber incidents have limited predictive value as the threat landscape evolves so rapidly. Leading advanced cyber insurers have essentially become cybersecurity companies themselves, actively scanning for vulnerabilities rather than relying solely on historical data.
For AI insurance, these lessons are instructive. Without mechanisms to ensure accurate risk pricing, the AI insurance market may not realise its promise of incentivising safe AI deployment. In particular, Lukasz noted that certain approaches to underwriting, for instance by companies relying on macro trends in litigation data sets, are unlikely to act as an incentive for safer systems.
Lukasz also shared his provocation that, without standards for rigorous AI system auditing and accountability—potentially established via the insurance market—third-party AI assurance is likely to become a box-ticking exercise in which vendors compete on price rather than quality of service.
The Underwriting Challenge: Quantifying the Unquantifiable
Sue Turner, drawing on her experience as a non-executive director for an insurance company, outlined why AI poses unique underwriting challenges. Traditional insurance relies on quantifying two factors: probability of loss and severity of loss. Insurers are capital-constrained and cannot confidently underwrite risks they cannot reliably estimate.
AI systems differ fundamentally from traditional software. Deterministic software fails consistently under the same conditions, bugs can be found, fixed, and eliminated. AI systems are probabilistic and context-dependent. A system may work perfectly for months then suddenly fail because underlying data distributions have shifted. The Zillow house-pricing algorithm example illustrated this perfectly: it worked in testing but lost $300 million and wiped $9 billion off the company’s share price when it encountered real-world conditions outside its training data.
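The distribution-shift failure mode described above can be made concrete. The sketch below computes the Population Stability Index (PSI), a metric commonly used in model validation to flag when live data has drifted from the data a model was trained on; the thresholds, data, and function names are illustrative assumptions, not anything discussed by the panel:

```python
import math
import random

def psi(reference, current, bins=10):
    """Population Stability Index between a reference sample and a
    current sample, using quantile bins derived from the reference.
    Rule of thumb: < 0.1 stable, > 0.25 significant shift."""
    ref = sorted(reference)
    # Quantile cut points taken from the reference distribution.
    edges = [ref[int(len(ref) * i / bins)] for i in range(1, bins)]

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            idx = sum(1 for e in edges if x >= e)  # bin that x falls into
            counts[idx] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]

    p, q = proportions(reference), proportions(current)
    return sum((qi - pi) * math.log(qi / pi) for pi, qi in zip(p, q))

random.seed(0)
training = [random.gauss(0.0, 1.0) for _ in range(5000)]  # data the model saw
stable   = [random.gauss(0.0, 1.0) for _ in range(5000)]  # same conditions
shifted  = [random.gauss(1.5, 1.3) for _ in range(5000)]  # market has moved

print(f"PSI, no shift: {psi(training, stable):.3f}")   # well below 0.1
print(f"PSI, shifted:  {psi(training, shifted):.3f}")  # far above 0.25
```

A monitor like this would not have prevented the Zillow-style failure on its own, but it is the kind of continuous signal an underwriter could require as evidence of live risk management.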
Three specific challenges complicate AI underwriting:
- Risk accumulation: Insurers may unknowingly insure multiple companies all using the same third-party AI system, creating correlated losses across their portfolio without visibility into this concentration risk.
- Novel failure modes: Unlike fires or floods with centuries of data, AI creates entirely new types of failures (algorithmic discrimination, systematic errors in critical decisions) with no historical precedent for pricing.
- Causation complexity: When an AI system in healthcare causes harm, who is liable? The hospital deploying it? The software vendor? The data provider? The clinician who followed its recommendation? Which version of a continuously learning model caused the harm?
Systemic Risk: The Hidden Danger
Sue Turner raised the spectre of systemic risk: failures in one part of the system cascading across entire sectors, analogous to the 2008 financial crisis. AI presents three systemic vulnerabilities:
- Infrastructure concentration: A small number of companies (OpenAI, Anthropic, Google, Meta) provide the foundation models underlying countless applications. While developers add “guardrails” around these models for specific uses, the fundamental correlation remains poorly understood.
- Decision-making correlation: When multiple organisations use similar AI models, herding behaviour can emerge, similar to algorithmic trading flash crashes. This could manifest in hiring, lending, supply chains, or other domains.
- Opacity of interconnections: Organisations are already using AI embedded in third-party tools without full awareness. They don’t ask the right questions to understand dependencies and correlations in their AI supply chain.
The House of Commons Treasury Committee recently criticised UK financial regulators for being too passive on AI, warning they risk repeating mistakes from the dot-com bubble and 2008 crisis by failing to proactively manage risks in this rapidly evolving sector.
The Path Forward: Standards, Testing, and Reporting
The panel converged on several priorities for making AI insurance both viable and genuinely beneficial:
Standardise AI system evaluations: Philip Dawson emphasised the need for common assessment frameworks applicable across both assurance and insurance underwriting. While organisational-level governance standards exist, methodologies for evaluating AI systems themselves (what testing to conduct and how) remain fragmented across competing initiatives.
AI incident reporting: Lukasz Szpruch advocated strongly for standardised incident reporting regimes, similar to aviation. AI systems adapt and operate in complex environments; eliminating all failures is impossible. But learning collectively from incidents (understanding what controls were in place, how systems were tested, and what failure modes were observed) would create feedback loops to improve testing and controls over time.
Parametric insurance approaches: Rather than trying to insure everything, parametric insurance identifies specific performance metrics relevant to an application and provides payouts when systems deviate from expected behavior. This requires trustworthy mechanisms to evaluate triggers using tamper-proof telemetry data.
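A minimal sketch of how such a parametric trigger might be expressed in code, using a hypothetical policy object and invented metric values; real products would define triggers contractually and verify the telemetry cryptographically:

```python
from dataclasses import dataclass

@dataclass
class ParametricPolicy:
    """Hypothetical parametric cover: pays out when a monitored metric
    breaches its contractual floor for a sustained window."""
    metric: str
    floor: float   # agreed minimum acceptable value of the metric
    window: int    # consecutive breaching periods before payout
    payout: float  # fixed payout once triggered

    def evaluate(self, telemetry):
        """telemetry: chronological metric readings from tamper-evident logs."""
        run = 0
        for reading in telemetry:
            run = run + 1 if reading < self.floor else 0
            if run >= self.window:
                return self.payout
        return 0.0

# Illustrative figures only: a scoring model insured against its
# weekly accuracy dropping below 0.90 for three weeks running.
policy = ParametricPolicy(metric="weekly_accuracy",
                          floor=0.90, window=3, payout=250_000)
healthy = [0.94, 0.93, 0.89, 0.95, 0.92]  # one bad week: no trigger
drifted = [0.93, 0.88, 0.87, 0.86, 0.91]  # three consecutive breaches: pays out
print(policy.evaluate(healthy))
print(policy.evaluate(drifted))
```

The hard part is not the trigger logic but the "tamper-proof telemetry" the panel highlighted: both parties must trust the readings that feed it.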
Automated risk assessment: Current manual AI risk assessment doesn’t scale. As Lukasz noted, “human in the loop” becomes “human overwhelmed.” AI systems themselves will need to monitor and evaluate other AI systems, a challenging prospect that introduces new risks but may be unavoidable.
Regulation and Market Discipline
Matthew McDermott positioned AI insurance as a “third pillar” alongside regulation and voluntary standards, providing financial accountability tied directly to risk management behavior. When insurers price AI risk and require evidence of testing, monitoring, and governance, insurance becomes a commercial prerequisite rather than a theoretical exercise.
However, he warned that without proper engagement now, AI insurance could follow cyber insurance and slide towards mandatory requirements that do not sufficiently protect end users and customers from the underlying risks. Regulators like the FCA are beginning to engage (such as the Mills review on AI in financial services), but the dialogue between policymakers, insurers, and technology companies needs to intensify.
Conclusion: An Ecosystem in Formation
AI insurance is emerging from an immature market where many risks remain unquantifiable. The power imbalance currently favors insurers, who can exclude coverage broadly, leaving organisations under-protected. Yet the need is clear: as AI systems take on more autonomous roles in high-stakes decisions, financial accountability mechanisms become essential.
The path forward requires collaboration across stakeholders, developing rigorous but scalable assessment standards, creating transparency through incident reporting, ensuring assurance practices have genuine rigor rather than becoming rubber-stamping exercises, and building the data infrastructure to price risks accurately. Whether AI insurance becomes foundational to the AI ecosystem or remains a niche tool will depend on choices made in the next few years. The panel’s insights suggest cautious optimism is warranted, if the lessons from cyber insurance are heeded and stakeholders work together to build the necessary foundations.
Audience Questions – Answered
We were unfortunately unable to create space for audience questions at the end; however, it was great to receive five very thoughtful questions, to which the panel has provided written responses below. Thank you to those who attended and engaged with the conversation:
1. Question for insurers:
[1] How do you quantify AI-related risks and price AI (and related cyber since the two are interrelated) insurance premiums for liabilities, and [2] at what point do you draw the line on whether a company deploying and adopting/using AI is uninsurable?
Panel Response: Quantifying AI-related risks requires rigorous technical evaluations of the systems themselves, either performed by the insurer or provided by the insured. At Armilla, we’ve adopted an approach based on detailed system assessments that examine performance across various risk dimensions, going beyond organisational-level governance to evaluate the actual AI models and their deployment contexts. These technical evaluations serve as core inputs for underwriting decisions, which today places us in a more sophisticated category focused on mid-to-larger, complex AI developers or deployers.
Regarding insurability, we have declined coverage in cases where governance controls were insufficiently mature and the sector and use case presented risks we couldn’t confidently assess. However, declination isn’t necessarily permanent; we often refer prospective insureds to partners who can provide advisory work, GRC platforms, or legal support to improve their insurability. The line between insurable and uninsurable isn’t fixed; it evolves as risk assessment methodologies mature and as organisations strengthen their AI governance practices. The interrelation with cyber insurance is significant, as AI can both create new cybersecurity attack vectors and introduce distinct liability pathways beyond traditional cyber coverage.
2. Has the panel considered the Risk Prevention element of AI and its ultimate removal of the need for insurance?
Panel Response: This is a fascinating provocation that touches on a fundamental tension in insurance philosophy. While AI certainly has potential to improve risk prevention through better monitoring, prediction, and control mechanisms, we’re skeptical that it will eliminate the need for insurance entirely. AI systems are probabilistic by nature and operate in complex, dynamic environments, which means failure modes cannot be completely eliminated, only managed and mitigated.
Moreover, AI itself introduces new categories of risk that didn’t previously exist: algorithmic bias, performance degradation over time, systematic errors in critical decisions, and cascading failures across interconnected systems. Even as AI helps prevent some traditional risks, it creates novel ones. The more relevant question may be how AI transforms the insurance landscape, shifting from purely reactive loss coverage toward more active risk monitoring and prevention partnerships between insurers and insureds. We may see parametric insurance models that provide payouts when systems deviate from expected performance metrics, creating incentives for continuous monitoring and improvement rather than simply paying out after catastrophic failures. Insurance will likely remain necessary, but its form and function may evolve significantly alongside AI capabilities.
3. What is the AI insurers’ wishlist to the AI assurance industry to help reduce (as far as is possible) uncertainty in risk quantification?
Panel Response: The AI insurance industry’s primary wishlist centers on standardisation and rigor in AI assurance practices. First, we need common, credible frameworks for evaluating AI systems, not just organisational governance policies, but actual testing methodologies that can assess model performance, safety, robustness, and specific risk dimensions across different architectures and use cases. Currently, assurance approaches are fragmented across competing industry initiatives and standards bodies.
Second, we need mechanisms to prevent AI assurance from becoming a “race to the bottom” box-ticking exercise. Without strong standards and market incentives for genuine rigor, there’s a risk that assurance providers will compete primarily on price and ease rather than thoroughness. Third, we need trustworthy ways to verify that assured systems remain compliant over time; AI systems can drift, be modified, or encounter new data distributions that change their risk profile. Continuous monitoring and attestation mechanisms are essential.
Fourth, standardised incident reporting would be invaluable—learning collectively from AI failures would help both assurance providers and insurers better understand actual failure modes rather than just hypothetical ones. Finally, we need assurance frameworks that can scale through appropriate automation while maintaining integrity, as manual assessment of every AI system simply won’t be feasible given the proliferation of AI applications. The assurance industry that can deliver credible, scalable, verifiable evaluation frameworks will be essential partners in making AI insurance viable and effective.
4. Whether the panel have any views/insights on how the insurance industry is preparing for the EU AI Act’s requirements, particularly around high-risk AI systems?
Panel Response: The EU AI Act represents a significant regulatory development that insurance markets are actively monitoring and preparing for, though implementation is still evolving. The Act’s risk-based categorisation of AI systems (particularly the designation of “high-risk” systems in areas like employment, critical infrastructure, law enforcement, and essential services) aligns closely with how insurers naturally think about exposure and liability.
For high-risk AI systems subject to stringent requirements (conformity assessments, risk management systems, data governance, transparency, human oversight, etc.), insurers will likely require evidence of compliance as a baseline for coverage. The Act essentially creates a regulatory floor that insurers can build upon, potentially offering more favorable terms to organisations that exceed minimum requirements. The challenge is that many technical standards referenced by the Act are still being developed or harmonised, creating uncertainty around what “compliance” concretely means in practice.
We anticipate the EU AI Act will drive convergence between regulatory compliance, assurance practices, and insurance requirements: organisations will increasingly seek integrated solutions that address all three. This creates opportunities for alignment but also risks of complexity and conflicting requirements. Insurers operating across jurisdictions will need to navigate the EU Act alongside other frameworks like the UK’s proportionate, sector-specific approach or evolving US state-level regulations. The Act’s extraterritorial reach means even non-EU insurers need to understand its implications for global clients. Overall, while preparation is underway, the insurance industry’s response will mature significantly over the next 2-3 years as implementation guidance clarifies and case law develops.
5. Do we think that everything that is being done with the help of AI needs even more human oversight than before?
Panel Response: This question touches on one of the fundamental paradoxes of AI governance. In the near term and for high-stakes applications, yes, AI-assisted processes often require more rather than less human oversight because the systems introduce new types of errors and failure modes that humans need to monitor for. The notion of “human in the loop” as a safety mechanism is widespread in current AI governance frameworks, particularly for high-risk decisions.
However, we also recognise that “human in the loop” increasingly becomes “human overwhelmed.” There’s a profound cognitive mismatch between what humans can realistically oversee and the scale and speed at which AI systems operate. A single human cannot meaningfully review thousands of algorithmic decisions per day, and attempting to do so creates the illusion of oversight without its substance. This means we’re inevitably moving toward AI systems monitoring other AI systems, a prospect that introduces its own risks but may be unavoidable.
The more nuanced answer is that we need different kinds of oversight rather than simply more oversight. This includes: automated monitoring for statistical anomalies and performance drift; periodic deep audits by qualified evaluators; clear escalation protocols when systems encounter edge cases; and robust governance frameworks that define accountability even when humans aren’t reviewing individual decisions. The goal shouldn’t be human review of everything AI touches, but rather appropriate oversight mechanisms matched to the risk level and context, ranging from intensive human involvement for novel high-stakes decisions to automated monitoring with human escalation for routine, well-understood applications. The art is in designing governance that’s both rigorous and scalable.
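The idea of oversight matched to risk level can be sketched as a simple triage function. The thresholds and category names below are invented for illustration and are not drawn from any published governance framework:

```python
def route_decision(risk_score, novelty, thresholds=(0.3, 0.7)):
    """Illustrative triage: match oversight intensity to risk rather
    than sending every AI decision to a human. risk_score is assumed
    to be a calibrated 0-1 estimate from upstream risk assessment."""
    low, high = thresholds
    if risk_score >= high or novelty:
        # Novel or high-stakes cases get intensive human involvement.
        return "human_review"
    if risk_score >= low:
        # Mid-band: automated checks with a human escalation path.
        return "automated_monitoring_with_escalation"
    # Routine, well-understood cases: log everything, audit samples.
    return "auto_approve_with_audit_log"

print(route_decision(0.9, novelty=False))  # human_review
print(route_decision(0.5, novelty=False))  # automated_monitoring_with_escalation
print(route_decision(0.1, novelty=True))   # human_review: edge cases escalate regardless
print(route_decision(0.1, novelty=False))  # auto_approve_with_audit_log
```

The point of the sketch is the structure, not the numbers: humans review where their judgment adds most, and the audit log preserves accountability for everything else.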
For more information, or to get involved, please get in touch with the team below:
Tess Buckley
Tess is a digital ethicist and musician. After completing an MA in AI and Philosophy, with a focus on ableism in biotechnologies, she worked as an AI Ethics Analyst with a dataset on corporate digital responsibility (paid for by investors that wanted to understand their portfolio risks). Tess then supported the development of a specialised model for sustainability disclosure requests. Currently, at techUK, her north star as programme manager in digital ethics and AI safety is demystifying and operationalising ethics through assurance mechanisms and standards. Outside of Tess’s work, her primary research interests are in AI music systems, AI fluency and tech by/for differently abled folks.
Email:
[email protected]
Website:
tessbuckley.me
LinkedIn:
https://www.linkedin.com/in/tesssbuckley/
Contact the team

Kir Nuthi
Kir Nuthi is the Head of AI and Data at techUK.
She has over seven years of Government Affairs and Tech Policy experience in the US and UK. Kir previously headed up the regulatory portfolio at a UK advocacy group for tech startups and held various public affairs roles in US tech policy, all involving policy research and campaigns on competition, artificial intelligence, access to data, and pro-innovation regulation.
Kir has an MSc in International Public Policy from University College London and a BA in both Political Science (International Relations) and Economics from the University of California San Diego.
Outside of techUK, you are likely to find her attempting studies at art galleries, attempting an elusive headstand at yoga, mending and binding books, or chasing her dog Maya around South London’s many parks.
Email:
[email protected]

Usman Ikhlaq
Usman joined techUK in January 2024 as Programme Manager for Artificial Intelligence.
He leads techUK’s AI Adoption programme, supporting members of all sizes and sectors in adopting AI at scale. His work involves identifying barriers to adoption, exploring solutions, and helping to unlock AI’s transformative potential, particularly its benefits for people, the economy, society, and the planet. He is also committed to advancing the UK’s AI sector and ensuring the UK remains a global leader in AI by working closely with techUK members, the UK Government, regulators, and devolved and local authorities.
Since joining techUK, Usman has delivered a regular drumbeat of activity to engage members and advance techUK’s AI programme. This has included two campaign weeks, the creation of the AI Adoption Hub (now the AI Hub), the AI Leader’s Event Series, the Putting AI into Action webinar series and the Industrial AI sprint campaign.
Before joining techUK, Usman worked as a policy, regulatory and government/public affairs professional in the advertising sector. He has also worked in sales, marketing, and FinTech.
Usman holds an MSc from the London School of Economics and Political Science (LSE), a GDL and LLB from BPP Law School, and a BA from Queen Mary University of London.
When he isn’t working, Usman enjoys spending time with his family and friends. He also has a keen interest in running, reading and travelling.
Email:
[email protected]
LinkedIn:
https://uk.linkedin.com/in/usman-ikhlaq

Sue Daley OBE
Sue leads techUK’s Technology and Innovation work. This includes work programmes on AI, Cloud, Data, Quantum, Semiconductors, Digital ID and Digital ethics as well as emerging and transformative technologies and innovation policy. In 2025, Sue was honoured with an Order of the British Empire (OBE) for services to the Technology Industry in the New Year Honours List. She has also been recognised as one of the most influential people in UK tech by Computer Weekly’s UKtech50 Longlist and was inducted into the Computer Weekly Most Influential Women in UK Tech Hall of Fame.
A key influencer in driving forward the tech agenda in the UK, in December 2025 Sue was appointed to the UK Government’s Women in Tech Taskforce by the Technology Secretary of State. She also sits on the UK Government’s Smart Data Council, Satellite Applications Catapult Advisory Group, Bank of England’s AI Consortium and BSI’s Digital Strategic Advisory Group. Previously, Sue was a member of the Independent Future of Compute Review and co-chaired the National Data Strategy Forum. As well as being recognised in the UK’s Big Data 100 and the Global Top 100 Data Visionaries in 2020, Sue has been shortlisted for the Milton Keynes Women Leaders Awards and has been a judge for the Loebner Prize in AI, the UK Tech 50 and annual UK Cloud Awards. She is a regular industry speaker on issues including AI ethics, data protection and cyber security.
Prior to joining techUK in January 2015, Sue was responsible for Symantec’s Government Relations in the UK and Ireland. Before that, Sue was senior policy advisor at the Confederation of British Industry (CBI). Sue has a BA degree in History and American Studies from Leeds University and a Master’s Degree in International Relations and Diplomacy from the University of Birmingham. Sue is a keen sportswoman and in 2016 achieved a lifelong ambition to swim the English Channel.
Email:
[email protected]
Phone:
020 7331 2055
Twitter:
@ChannelSwimSue
Authors

Tess Buckley
Tess is the Programme Manager for Digital Ethics and AI Safety at techUK.
Prior to techUK, Tess worked as an AI Ethics Analyst, which revolved around the first dataset on Corporate Digital Responsibility (CDR), and later the development of a large language model focused on answering ESG questions for Chief Sustainability Officers. Alongside other responsibilities, she distributed the dataset on CDR to investors who wanted to further understand the digital risks of their portfolios, drew narratives and patterns from the data, and collaborated with leading institutes to support academics in AI ethics. She has authored articles for outlets such as ESG Investor, Montreal AI Ethics Institute, The FinTech Times, and Finance Digest, covering topics like CDR, AI ethics, and tech governance, leveraging company insights to contribute valuable industry perspectives. Tess is Vice Chair of the YNG Technology Group at YPO, an AI Literacy Advisor at Humans for AI, a Trustworthy AI Researcher at Z-Inspection Trustworthy AI Labs and an Ambassador for AboutFace.
Tess holds an MA in Philosophy and AI from Northeastern University London, where she specialised in biotechnologies and ableism, following a BA from McGill University where she joint-majored in International Development and Philosophy, minoring in communications. Tess’s primary research interests include AI literacy, AI music systems, the impact of AI on disability rights and the portrayal of AI in media (narratives). In particular, Tess seeks to operationalise AI ethics and use philosophical principles to make emerging technologies explainable and ethical.
Outside of work Tess enjoys kickboxing, ballet, crochet and jazz music.
Email:
[email protected]
