AI ethics still lost in translation as Europe struggles to implement principles

Europe has spent the past decade building a distinct approach to “trustworthy artificial intelligence”, turning ethical ambitions such as transparency, accountability and non-discrimination into guidance and, now, binding rules under the AI Act.

Yet translating those principles into concrete design choices, organisational processes and day-to-day decisions remains a problem for the engineers, researchers and public authorities who must apply them. Does ‘transparency’ mean the same thing for a strawberry-picking robot as for an online disinformation-detection system?

That ‘operationalisation gap’ is the starting point for AIOLIA, an EU-funded project that aims to move AI ethics from abstract values into working practice.

Instead of drafting another set of principles, AIOLIA examines how ethics is implemented in real systems – and then converts those lessons into training for the people who build, review and govern AI.

Why the gap persists

According to Alexei Grinbaum, project coordinator of AIOLIA, senior scientist and chair of the Operational Digital Ethics Committee at CEA-Saclay in France, the gap has existed since the earliest debates on AI ethics.

“Regulation and ethical thinking have been formulated in terms of principles,” Grinbaum tells Euractiv. “We have seven principles of AI ethics, and any engineer is supposed to be working according to these principles. But do they understand what these principles imply for their specific AI system in their specific design context?”

The challenge is not a lack of awareness. Policymakers, researchers and industry actors have long acknowledged the problem. Instead, the difficulty lies in translation: turning abstract values into measures that can be embedded in code, workflows or organisational governance.

“Everyone has been aware of this gap for years,” Grinbaum says. “In AIOLIA, we’re suggesting concrete solutions.”

A bottom-up response

AIOLIA’s response is deliberately different from previous top-down efforts. Instead of prescribing what ethical principles should mean in theory, the project starts by examining how they are already being applied in practice.

Across 10 use cases in different domains, involving both professionals and citizens, the project observes how organisations operationalise ethical principles within their own constraints – and what technical and organisational measures they rely on.

“We look at how one or two principles are implemented in a given context,” Grinbaum explains, “and we learn from both the similarities and the differences across these cases. This diversity is essential.”

The intent is to capture repeatable technical and organisational measures that can travel across sectors, without pretending that one model will fit every context. The output is meant to be practical: examples and methods that make ethical requirements more actionable for the people expected to implement them.

Training at the core

Training sits at the core of AIOLIA’s mission, particularly in relation to the European Commission’s Ethics Appraisal Scheme. Every EU-funded research and innovation project is subject to ethics review, including a specific assessment of AI-related risks.

“The experts evaluating these proposals do not necessarily have deep expertise in AI or AI ethics,” Grinbaum notes. “There are hundreds of evaluators, which makes training both necessary and challenging.”

One priority is helping evaluators identify what the Commission calls “serious and complex ethical issues”.

“The challenge is not to list them all,” Grinbaum says, “but to distinguish which ones are genuinely critical and require deeper attention. This discernment is something that must be learned through training.”

Beyond Commission evaluators, AIOLIA also targets research ethics committee members at national and institutional levels, as well as researchers themselves – including early-career scientists.

The emphasis here is on ethics-by-design: learning how to respond to issues such as data bias, manipulation risks or governance gaps during the design process itself, rather than treating ethics as a box-ticking exercise at the end.

Behaviour, cognition and Europe’s role

The project’s focus reflects how quickly AI systems are changing. “We now interact with non-human agents that communicate in highly convincing human language,” Grinbaum says, pointing to generative AI and large language models.

“Even when users know they are interacting with a machine, they spontaneously project human qualities onto it,” he adds, “and influence and manipulation can still occur, even with full disclosure.”

Grinbaum argues that Europe’s comparatively reflective pace, underpinned by a stronger culture of risk control and public debate, is a strength rather than a handicap. “This may slow deployment, but it is not a weakness – it is an asset,” he says.

He flags areas where caution is essential, from democratic processes to education. “The influence of generative AI on young people and learning is enormous,” Grinbaum warns. “We are only beginning to understand both the risks and the opportunities.”

For AIOLIA, the promise is not a new set of slogans but practical pathways: narrowing the distance between ethical language and operational decisions, and building training that helps Europe’s AI ecosystem act on what it already claims to value.

As Grinbaum stresses, “there is no easy answer to that, but we need a process of reflecting on these questions, of testing certain approaches, and of including safeguards where they are needed”.

(BM)
