
AI ethics – why Globant is calling for AI bias to be as measurable as uptime

Founded in Buenos Aires but now headquartered in Luxembourg, software engineering and digital transformation consultancy Globant recently held its annual conference, Globant NXT, with a focus on data and AI. A large part of the conference addressed ethics and observability in AI, exploring how to build trustworthy AI so that organizations can innovate while mitigating the risks associated with the technology.

Speakers considered what needs to be in place to prevent valuable data from being leaked, and acknowledged that Large Language Models (LLMs) are unpredictable and can produce bizarre results. The overall call was to act responsibly as the use of AI grows.

Are ethics becoming an optional extra for AI deployment?

Avijeet Dutta, Senior Technical Director at Globant, began by setting out the case for ethical AI:

Observability is the foundation of responsible systems; it is the basis, for example, of a self-driving car giving priority to avoiding a pedestrian, even if that means knocking over a lamppost. We need this transparency into how agents are making decisions. We must explicitly demand socially ethical AI or we will end up with sociopathic bots. With AI we have highly performant systems, but we do not have the level of transparency we need. Agents are like teenagers in that they can become accidentally evil by taking the path of least resistance in their choices.

Working with clients, it is clear that they are more worried about hallucination than bias, because efficiency and performance are the primary principles driving deployment. Ethics and transparency are seen as optional extras, as tick boxes on compliance forms. Ethical AI requires telemetry where every decision is tracked for ethical observability. This is not a technical first principle, it is a moral first principle: the system should flag to us the moment AI starts treating people unfairly.

Dutta put forward three recommendations: 

Firstly, ethical monitoring should be non-negotiable, like security; secondly, organizations should embrace observability rather than treating it as a tick-box exercise; and thirdly, we should all be working on observability standards for enterprises that reflect societal values. We need to build systems that are provably fair, where bias is as measurable as uptime.
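To make the "bias as measurable as uptime" idea concrete, here is a minimal sketch in Python (not Globant's implementation) of how decision telemetry could feed a fairness check: logged decisions carry a protected attribute, the gap in approval rates between groups is computed, and the result is compared against an assumed tolerance in the same way an availability SLO would be. The group names, sample data and threshold are illustrative assumptions.

# Minimal sketch: bias checked against a threshold like an uptime SLO.
# Group labels, data and the 0.10 tolerance are illustrative assumptions.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Decision:
    group: str        # protected attribute, e.g. a demographic segment
    approved: bool    # outcome of the automated decision

BIAS_SLO = 0.10  # assumed maximum allowed gap in approval rates

def approval_rates(decisions: list[Decision]) -> dict[str, float]:
    totals, approvals = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d.group] += 1
        approvals[d.group] += int(d.approved)
    return {g: approvals[g] / totals[g] for g in totals}

def bias_gap(decisions: list[Decision]) -> float:
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    log = ([Decision("group_a", True)] * 80 + [Decision("group_a", False)] * 20
           + [Decision("group_b", True)] * 55 + [Decision("group_b", False)] * 45)
    gap = bias_gap(log)
    status = "OK" if gap <= BIAS_SLO else "ALERT: bias SLO breached"
    print(f"approval-rate gap = {gap:.2f} (SLO {BIAS_SLO:.2f}) -> {status}")

The point of the sketch is not the particular metric (a simple approval-rate gap) but that, once decisions are logged, fairness becomes a number that can be alerted on continuously rather than a box ticked at deployment time.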

Governance in the age of AI agents

Roberto Contreras, Head of AI MX at Globant, explained how organizations need to fine-tune the level of autonomy that agentic solutions have. Agents talk to workflows, but their behaviour is shaped by data, and we need to ensure that their behaviour aligns with organizational values. Contreras began by talking about governance for AI agents:

We need to track inputs to be able to explain why an agent chose a particular action in order to build trust with users. We also need to have clear human accountability based on role assignment – who is responsible for each stage of the agent’s lifecycle, from design to ongoing monitoring. Another requirement is continuous auditing and for this we need dashboards so that we have continuous visibility into agent decisions and KPIs to ensure they remain aligned with corporate goals. And then we need to consider the human in the loop – what is the escalation path for high-risk decisions?
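As an illustration of the pattern Contreras describes, the hypothetical sketch below records every agent decision with its inputs, rationale and a named accountable owner so it can be explained and audited later, and routes anything above an assumed risk threshold to that human owner instead of executing it autonomously. The field names, threshold and example agent are invented for illustration, not taken from Globant.

# Hypothetical sketch: decision telemetry plus a human-in-the-loop escalation
# path for high-risk actions. Names and the 0.7 threshold are assumptions.
import json
import time
from dataclasses import dataclass, asdict

HIGH_RISK_THRESHOLD = 0.7  # assumed cut-off for human escalation

@dataclass
class AgentDecision:
    agent_id: str
    inputs: dict            # what the agent saw
    proposed_action: str    # what it wants to do
    rationale: str          # why, in the agent's own terms
    risk_score: float       # assumed 0-1 risk estimate
    accountable_owner: str  # human responsible for this lifecycle stage

def handle(decision: AgentDecision, audit_log: list[dict]) -> str:
    record = {**asdict(decision), "timestamp": time.time()}
    if decision.risk_score >= HIGH_RISK_THRESHOLD:
        record["status"] = "escalated_to_human"
        outcome = f"escalated to {decision.accountable_owner} for approval"
    else:
        record["status"] = "executed"
        outcome = "executed autonomously"
    audit_log.append(record)  # continuous audit trail feeding the dashboard
    return outcome

if __name__ == "__main__":
    log: list[dict] = []
    d = AgentDecision(
        agent_id="refund-agent-01",
        inputs={"order_id": "A123", "amount": 4800},
        proposed_action="issue_full_refund",
        rationale="customer reported item never delivered",
        risk_score=0.82,
        accountable_owner="ops.lead@example.com",
    )
    print(handle(d, log))
    print(json.dumps(log[-1], indent=2))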

He continued by pondering internal organizational governance requirements. He suggested that:

We need to create an AI Governance Board, composed of a cross-functional team that is formally structured to reflect different points of view and should include technical, business and legal professionals. This should not be a one-man band effort. The Board should define policies for deployment, authorise updates (to deal with unintended consequences) and manage the decommissioning of AI agents in the software development lifecycle. Agents should be classified and catalogued by risk level rather than by functionality, and Key Risk Indicators (KRIs) could be based on the level of trust in agent outputs, the degree of autonomy granted and incident rates. The on-boarding and off-boarding of AI agents should be treated with the same rigour as human employee processes – agents should be tested and documentation should be requested. Retired agents need to be fully deactivated and their data needs to be managed.
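A short, hypothetical sketch of what such a risk-based agent catalogue might look like follows: each registered agent carries a risk classification rather than a functional label and is checked against assumed KRI limits for output trust, degree of autonomy and incident rate. The thresholds and field names are illustrative assumptions, not a Globant specification.

# Hypothetical agent catalogue: risk classification plus KRI checks.
# KRI_LIMITS values are assumed for illustration only.
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class AgentRecord:
    agent_id: str
    risk_level: RiskLevel
    output_trust: float        # share of spot-checked outputs judged correct
    autonomy: float            # 0 = fully supervised, 1 = fully autonomous
    incidents_per_month: int
    active: bool = True        # retired agents are deactivated, not deleted

# Assumed KRI limits per risk class: (min trust, max autonomy, max incidents)
KRI_LIMITS = {
    RiskLevel.LOW: (0.80, 1.0, 5),
    RiskLevel.MEDIUM: (0.90, 0.7, 2),
    RiskLevel.HIGH: (0.95, 0.3, 0),
}

def kri_breaches(agent: AgentRecord) -> list[str]:
    min_trust, max_autonomy, max_incidents = KRI_LIMITS[agent.risk_level]
    breaches = []
    if agent.output_trust < min_trust:
        breaches.append("output trust below floor")
    if agent.autonomy > max_autonomy:
        breaches.append("autonomy above ceiling")
    if agent.incidents_per_month > max_incidents:
        breaches.append("incident rate too high")
    return breaches

if __name__ == "__main__":
    loan_agent = AgentRecord("loan-triage-02", RiskLevel.HIGH, 0.93, 0.5, 1)
    for issue in kri_breaches(loan_agent) or ["all KRIs within limits"]:
        print(f"{loan_agent.agent_id}: {issue}")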

Contreras concluded:

Governing AI is simply governing the business because AI-generated results without human oversight can lead to wrong decisions, no matter what industry you operate in. Begin with governance pilots in areas such as customer service and refine your policies before rolling out from the centre with documentation. An agent without governance is not intelligent – it is uncontrollable.

My take

AI agents reason in order to solve problems, and they can trigger actions without human intervention. The fact that they can do so does not mean they necessarily should. As dynamic systems of autonomous agents proliferate in enterprises, understanding and governing their behaviour becomes a strategic necessity.

Without taking governance seriously, organizations may find themselves in challenging situations if, say, an agent takes sustainability goals more seriously than customer service targets; or makes opaque, seemingly biased decisions on hiring human staff or granting loans; or simply misunderstands cultural nuances in different regions, creating embarrassing PR incidents.

As the Bible, Voltaire and Spider-Man's Uncle Ben have all explained, with great power comes great responsibility, and so it behoves us to work on agentic AI governance now, so that agentic decisions are as unbiased and explainable as it is practical for them to be.
