The eighth annual Digital Ethics Summit, hosted by techUK on 4 December 2024, marked a significant moment in the evolution of AI ethics and governance. Following 2023’s “AI Epiphany” year of breakthrough developments in generative AI, 2024 emerged as a period of practical implementation and organisational introspection. The summit brought together leaders and experts from across the digital ethics landscape to reflect on and assess the progress made in moving digital ethics forward, in a year when organisations, policymakers, and regulators focused on finding answers to the many questions raised by the events of the year before.
Key developments highlighted at the summit included the UK government’s balanced approach to AI regulation, focusing on safety and ethical deployment while fostering innovation, particularly through initiatives like the Responsible AI Toolkit and AI Management Essentials (AIME) for SMEs. The discussions emphasised the critical importance of international collaboration in AI governance, showcasing varied regional approaches, such as those in the US, and the significance of frameworks like the G7 Hiroshima Code of Conduct. Throughout the event, speakers stressed the need for substantial investment in digital infrastructure and workforce development, alongside the task of building public trust through transparent development processes and meaningful stakeholder engagement.
2024 marked a pivotal year where “the rubber hit the road” for AI implementation, while 2025 is being positioned as the year of AI Assurance, return on investment (ROI) and practical scaling. Core themes that emerged include the growing importance of geopolitical considerations in international AI collaboration, with AI increasingly viewed as a strategic national asset; the evolution toward process-based evaluations rather than model-based assessments; and a shift in focus toward smaller, more efficient AI models and multimodal AI agents. The UK is emphasising a context-based regulatory approach through sector-specific regulators, with particular attention to building up an AI assurance ecosystem that will enable widespread adoption.
Looking ahead to 2025, the summit identified pressing challenges including workforce displacement, regulatory harmonisation, and the importance of expanding digital ethics discussions beyond specialist circles to engage broader public understanding and participation. Key priorities for 2025 include developing AI infrastructure and skills, moving from pilots to production scaling, ensuring energy infrastructure can support AI development, and establishing robust frameworks for transparency, interoperability, and third-party audits. The summit highlighted that the central question has shifted from what AI technology can do to what it should do, emphasising the critical importance of maintaining trust while balancing AI maturity, risk, and impact.
The event was made possible through collaboration with headline sponsor Microsoft, alongside Clifford Chance, Kainos, Workday, KPMG, and Infosys. Our distinguished academic and institutional partners included the Ada Lovelace Institute, The British Academy, The Royal Academy of Engineering, Open Data Institute, Oxford Internet Institute, Socitm, The Alan Turing Institute, and The Royal Society.
Thank you to everyone who was able to join us. Please note that we will share recordings of all the sessions shortly.
Reflections and Progress: The 2024 Digital Ethics Summit Agenda
Antony Walker, Feryal Clark MP and Neil Ross
The UK government is focused on building an inclusive, globally competitive AI sector by working with businesses to ensure AI is safe, ethical, and fosters economic growth. DSIT is leading efforts to create the necessary infrastructure and talent to maximise AI’s potential, while addressing risks like unequal access and harmful practices. Upcoming AI legislation, as announced in the King’s Speech, will promote responsible development of powerful AI models.
To build trust in AI, tools like DSIT’s Responsible AI Toolkit and AI Management Essentials (AIME) help SMEs and startups deploy AI ethically. Government insights will support better decision-making in public sector AI investments. Efforts are underway to modernise public services, with initiatives such as Caddy (an AI-powered co-pilot for citizen support), gov.uk’s streamlined platforms, and fraud-prevention measures like One Login simplifying access to services and saving time.
DSIT is committed to responsible innovation and digital ethics, with frameworks ensuring AI is used efficiently and safely. Linked data programmes, like the Ministry of Justice’s Data First, demonstrate AI’s potential to drive better outcomes. This vision reflects the UK’s determination to harness AI’s opportunities responsibly while fostering a modern, efficient government and a thriving, equitable AI ecosystem.
Jeff Bullwinkel and Sue Daley
Microsoft is advancing the generative AI conversation with innovations like Microsoft Copilot, showcasing AI’s potential to transform industries and economies. In healthcare, AI is already simplifying services and contributing to advances like improved diagnoses and cancer treatments. However, challenges such as ethical concerns, transparency, and trust need to be addressed to encourage widespread adoption. It’s not just about what AI can do but about ensuring it is implemented safely, reliably, and responsibly.
Governments play a crucial role in building trust and setting ethical frameworks. The UK has emerged as a leader in ethical tech adoption, engaging in global forums like the G7 and OECD to shape policies that promote responsible AI use. Other nations, such as Belgium, Singapore, and Japan, are also progressing on AI regulation, with varying approaches that balance risk and innovation. The UK has an opportunity to set a global precedent in AI ethics and governance.
To ensure the effective adoption of AI, investment in infrastructure and digital skills is essential. Expanding education and training will empower more Britons to leverage AI effectively, driving innovation and ensuring that technology benefits society while adhering to high ethical standards.
Sue Daley, Melissa Heikkilä, Leanne Allen, Alice Schoenauer Sebag, John Lazar
The AI Landscape in 2024: What are the latest technical breakthroughs and what does this mean for digital ethics?
The opening session explored the evolution of AI throughout 2024 and anticipated developments for 2025. The discussion highlighted the shift from generative AI to agentic AI applications, with AI increasingly handling autonomous tasks across business workflows. Key developments included specialised models for multilingual processing, advances in robotics and embodiment technologies, agentic AI and growing focus on AI assurance frameworks.
The panel emphasised several critical challenges facing the industry. These include the need to translate abstract ethical principles into practical frameworks, improve transparency in AI development, and ensure cultural inclusivity in AI systems. Real-world applications showcased successful AI integration across sectors, from healthcare documentation to security workflows and agricultural monitoring.
Looking ahead, the discussion identified urgent priorities: operationalising ethical guidelines, fostering cross-disciplinary collaboration, and addressing sustainability concerns. The panel stressed the importance of expanding AI education and literacy programmes to prepare for future developments. The session concluded that while AI advancement continues rapidly, balanced attention to ethical implementation and inclusive development remains crucial for responsible progress.
What lies ahead for UK AI regulation, governance and safety?
The panel discussion revealed the delicate balance the UK faces in AI development: nurturing innovation while ensuring responsible deployment. DSIT noted plans for targeted legislation focusing on the most powerful AI models, while maintaining existing regulatory frameworks for broader applications. This approach aims to support public service transformation while providing appropriate oversight.
Industry perspectives, including from Microsoft, demonstrated how AI is already transforming both public and private sectors. The conversation highlighted that successful implementation demands comprehensive infrastructure—modernised planning systems, expanded data centres, and robust skills programmes. Andrew Pakes MP underscored that public trust must be rebuilt and maintained, pointing to how recent high-profile IT failures have damaged confidence. He stressed the importance of ensuring AI adoption happens with people, not imposed upon them, so that innovation supports every community rather than widening existing disparities.
The global context highlighted the value of diverse approaches to AI development. Mozilla emphasised the benefits of open-source innovation in driving technological advancement, noting that while the UK leads Europe in AI venture capital funding, the country may find additional opportunities through greater engagement with open-source development. Successful initiatives in Spain and Greece, where large language models are publicly developed, illustrate the potential benefits of transparent, collaborative innovation. Chatham House reinforced that global AI governance is becoming more complex, suggesting governments must also serve as builders of “public AI,” not merely regulators. In this evolving landscape, collaboration between government, industry, parliament, and civil society will be central as the line between the digital and traditional economies continues to blur.
Return of ‘Meet the Regulators’
The panel examined UK AI governance and its future direction following the February 2024 AI White Paper response, with regulators sharing their key initiatives from the past year.
Kate Jones from the Digital Regulation Cooperation Forum (DRCF) set the scene by noting the central role of regulation for ethical AI implementation, and highlighting how coordinated regulatory approaches are essential given AI’s cross-cutting nature. The DRCF supports this coordination through several mechanisms: their AI Digital Hub helps businesses and innovators navigate guidance across ICO, CMA, FCA and Ofcom jurisdictions, while regular roundtables and working groups facilitate engagement with both DRCF and non-DRCF regulators, recognising that AI impacts extend beyond traditional digital sectors.
Regulators spoke about their distinct priorities and achievements in the AI space. The ICO has focused on bringing clarity to generative AI’s intersection with data protection through consultations and effective “light touch interventions” with industry. The CMA developed a competition-focused framework built around six key principles for foundation models, while the FCA leveraged their regulatory experience to establish a new AI Lab. Ofcom published strategic papers on AI benefits and risks, while Ofgem brought crucial perspective on the infrastructure modernisation needed to support AI development.
A cross-cutting theme emerged across all regulators: the ongoing priority of attracting and retaining specialised technical expertise. As implementation becomes the focus, regulators are working to enhance industry guidance and establish appropriate enforcement mechanisms, while continuing to develop their capabilities to support these activities.
Responsible Diffusion of AI in the Public Sector
This breakout panel explored the integration of AI in public services, focusing on responsible adoption and real-world impact. The discussion centred on healthcare applications, where AI shows promise in tasks like x-ray analysis and clinical documentation while maintaining human oversight.
The panel emphasised building public trust through transparency and stakeholder consultation. They highlighted challenges in data infrastructure, noting that siloed systems currently limit AI’s effectiveness. Success requires modernising data systems and co-designing solutions with workers and citizens.
Key themes included identifying high-value use cases through sandboxing, ensuring global inclusivity in AI development, and managing expectations around implementation timelines. The panel advocated reframing AI’s role beyond efficiency, focusing instead on innovation and empowerment. They stressed that meaningful adoption requires clear strategies, realistic goals, and robust governance frameworks to protect citizen interests.
The session concluded that while AI offers significant potential for public services, successful implementation demands careful balance between innovation and ethical considerations, with continual feedback from stakeholders.
Facilitated by Thoughtworks
This hands-on session provided an overview of the Responsible Tech Playbook, a practical guide to nurturing a responsible tech mindset across large, complex organisations. The playbook includes an open collection of workshops and tools that teams can use to incorporate responsible tech thinking into their day-to-day work. Session attendees also got a chance to try out one of the recommended tools – the Tarot Cards of Tech (created by Artefact). This short, facilitated activity provoked conversations about various ethical questions to help teams think more deeply about their designs and ultimately deliver better solutions that minimise the risks of unintended consequences.
Bridging Borders
This key breakout session examined the complex dynamics of international AI governance and policy alignment. The discussion highlighted diverse regional approaches: the EU’s comprehensive AI Act, the UK’s sector-specific framework, APAC’s lighter regulatory touch, and the US’s executive order focusing on competition with China.
Notable progress in global coordination includes the G7 Hiroshima Code of Conduct, though US-China tensions continue to impede unified frameworks. Microsoft’s initiatives in the Global South demonstrate efforts to address digital divides, even as China leads in open-source and local language AI development.
The session drew parallels between AI governance and nuclear-era challenges, emphasising the critical need for robust international frameworks. The proposed AI Safety Institute represents a step toward coordinated oversight. Looking ahead to 2025, participants expressed cautious optimism about achieving policy alignment through organisations like the OECD, despite ongoing geopolitical challenges. Success will depend on prioritising inclusivity and trust-building while maintaining practical implementation strategies.
Martin Tisne
During this fireside chat, Martin Tisne shared insights on the UK’s current AI regulatory landscape and potential areas for improvement. The conversation explored the AI Collaborative’s mission and its role in shaping responsible AI development.
A significant portion focused on the upcoming AI Action Summit in France, examining its objectives and the crucial role of civil society and the digital ethics community in its proceedings. The speaker outlined desired outcomes and potential deliverables from the Summit, particularly regarding each track’s goals:
- International Governance (Henri Verdier) – Strengthen coordination between AI stakeholders, with emphasis on bridging the gap between technical experts and regulatory bodies. Focus on creating effective communication channels and collaborative frameworks.
- Future of Work (Sana de Courcelles) – Development of proactive strategies and actionable recommendations to address emerging workforce transformations driven by AI. Emphasis on anticipating and preparing for future labour market changes.
- Security & Safety (Guillaume Poupard) – Building upon foundations established at the Bletchley Park and Seoul Summits, with focus on developing shared security best practices and working toward standardised safety protocols. The initiative aims to establish consistent security frameworks across jurisdictions and stakeholders.
- AI for General Interest (Martin Tisne) – Beyond showcasing applications, this initiative focuses on identifying the resources and infrastructure required for successful AI implementation. The work includes developing comprehensive strategies for deployment and ensuring equitable access and distribution of benefits.
- Innovation and Culture (Roxanne Varza) – This workstream addresses the dual priorities of enhancing competitive advantage through AI innovation while managing intellectual property considerations in the AI landscape. The focus encompasses both fostering technological advancement and establishing appropriate frameworks for IP protection.
Looking ahead, the discussion addressed pressing challenges in AI development and deployment, highlighting key ethical and governance issues requiring attention from the digital ethics community, policymakers, and industry stakeholders. The conversation concluded by identifying emerging concerns and potentially overlooked topics that warrant greater focus in AI ethics discussions, emphasising the need for comprehensive dialogue across all sectors as AI technology continues to evolve rapidly.
Truth in the Age of AI
This panel explored the critical challenges of verifying synthetic media and managing disinformation in our increasingly AI-driven world. The session included Ofcom sharing recent research that revealed a striking divide in public confidence regarding misinformation detection: respondents split into roughly equal thirds who were confident, unsure, or not confident in their ability to spot false information. This data set the stage for a broader discussion about the World Economic Forum’s identification of disinformation as a primary concern for the coming years.
The panel emphasised the importance of both prevention and enforcement in tackling disinformation. Ofcom highlighted their ongoing “Untold Stories” project, which operates in socially deprived areas to educate communities about news production processes. The Royal Society contributed insights from their research on content warning tools and their effectiveness in detecting AI-generated content, while also announcing upcoming work on adult information literacy.
The discussion evolved to address the limitations of current approaches, with participants acknowledging that media literacy alone cannot solve the challenge of synthetic content becoming indistinguishable from genuine material. There was strong consensus that educational reform must incorporate media literacy earlier and more comprehensively, extending beyond children to include adult education.
The panel emphasised the need to empower and fund trusted organisations to deliver media and information literacy programmes effectively. The session concluded by highlighting the Department for Education’s recent curriculum review and the critical importance of developing a multi-stakeholder approach to address these challenges, recognising that the solution requires collaboration across technological, educational, and social sectors.
Meet the AI Ethicist
This breakout session explored how AI ethicists navigate the complex intersection of ethics, technology, and organisational objectives. These professionals serve as crucial mediators, particularly in smaller organisations without dedicated ethics teams, balancing governance requirements with innovation goals.
Organisations increasingly adopt hybrid models where internal teams apply flexible ethical frameworks across diverse AI applications. Ethicists employ practical tools, including questionnaires and one-on-one consulting, to guide stakeholders through ethical challenges while maintaining operational efficiency.
Key challenges include the multidisciplinary nature of AI ethics spanning legal, technical, and psychological domains, limited accountability in the tech sector compared to other industries, and growing demands for public ethical disclosures. The session highlighted the importance of independence for ethicists to provide unbiased guidance while establishing clear accountability structures beyond individual practitioners.
Looking ahead, the field may move toward certification systems for high-risk AI applications to enhance trust and accountability, though such formal career pathways must be balanced with respect for the multidisciplinary nature of the role and the varied backgrounds of current practitioners. Success will require maintaining ethicists’ independence while ensuring broader organisational responsibility for ethical outcomes.
Facilitated by The Alan Turing Institute
Using a world café approach, this interactive session brought together attendees with ethics experts from The Alan Turing Institute’s Public Policy Programme to explore putting ethics and governance principles into practice across the AI project lifecycle. The session explored components of the official guidance on AI Ethics and Safety, developed with the UK’s Office for Artificial Intelligence and the Government Digital Service. Attendees engaged in discussions around the challenges and opportunities for AI innovation in the public sector, and how to ensure that AI is produced and used ethically, safely, and responsibly.
Ethics beyond AI
The UK holds a competitive advantage in technology but must prioritise addressing the ethical consequences of innovation. As one technology ethics expert highlights, transparency in AI systems often lacks contextual understanding, which is essential for shaping effective technology. Historical AI failures demonstrate the need for collaboration across sectors to prevent harm. The ethos of “just because you can, doesn’t mean you should” should guide AI development.
An innovation policy specialist stresses the need for accountability in fast-evolving sectors like immersive tech, where cameras capture biometric data. A quantum computing expert emphasises the importance of proactive frameworks for quantum technologies, ensuring responsible development before these technologies mature.
A data ethics specialist points to power asymmetries in data use, as companies often prioritise output over transparency, leaving the public uninformed. The UK excels in regulatory frameworks and innovation but must better support its key research and advisory institutions.
Next steps involve leveraging the UK’s leadership in AI, fostering collaboration across disciplines, and maintaining ethical inquiry. Transparency, responsible innovation, and cross-sector partnerships will ensure technologies are developed and deployed to benefit society, protecting humanity and trust in the process.
Charting a course for the future
The summit’s final panel assessed the rapid evolution of AI technology and governance, highlighting the shift from predictive to generative and now agentic AI. The discussion emphasised the urgent need to reunify fragmented governance approaches across the EU, US, and UK while expanding digital ethics discourse beyond specialist circles.
Key challenges include the potential displacement of 300 million jobs by 2030, requiring extensive reskilling initiatives. The panel noted how varying regulatory frameworks, particularly the EU AI Act and US federal-state divisions, complicate global harmonisation efforts. They stressed the importance of enhanced public engagement through relatable storytelling about AI’s societal impacts.
Looking ahead to 2025, priorities include rebuilding international alignment, increasing public awareness of AI risks, and strengthening corporate accountability through improved transparency and data privacy measures. The panel concluded that successful AI integration depends on balancing innovation with workforce preparation and maintaining public trust through demonstrable control and accuracy measures.