OXFORD — On a recent afternoon inside the Bodleian Library, in a city that has stewarded human knowledge for the better part of a millennium, the University of Oxford awarded its Bodleian Medal to a technologist who arrived in the United States decades ago with thirty-four dollars in his pocket.
Shekhar Natarajan, the founder and chief executive of Orchestro.AI, accepted the honour for what the university described as contributions to artificial intelligence in the public interest. “To stand in Oxford, in a city that has been a beacon of human knowledge for nearly a thousand years and receive the Bodleian Medal is a moment I could not have dared to imagine,” he told those gathered for the ceremony.
The recognition caps a remarkable few months for Natarajan, an Indian-origin engineer and inventor who has emerged as one of the more closely watched figures in the global debate over how artificial intelligence should be built. In February, he addressed the AI Summit on Trust, Safety, and the Future of AI Governance at New Delhi’s Bharat Mandapam, where, according to organisers, his remarks drew a sustained standing ovation from a hall of policymakers, technology executives and journalists.
His argument that morning was unusually direct. “The entire world is debating how to govern AI after the fact. We are putting fences around a horse that has already left the barn,” he told the audience. The session, by several accounts, reframed the day’s conversation.
A technical critique, not a philosophical one
What distinguishes Natarajan from the broader field of AI commentators is the specificity of his claim. He is not arguing for a new code of conduct or a stricter compliance regime. He is arguing that the dominant approach to AI safety — train a large model on whatever the internet provides, then layer guardrails on top — is structurally flawed at the level of engineering.
In his telling, today’s large language models are trained on a corpus that mixes expert analysis with misinformation, satire and noise, all treated by the system as roughly equivalent signal. The failures that follow — confident but fabricated medical advice, contradictory answers to identical questions, the much-publicised episode of an AI system suggesting glue as a pizza ingredient — are not, in his view, anomalies to be patched. They are the predictable output of an architecture designed to predict and please rather than to reason and deliberate.
“Ethics cannot be a patch. It cannot be a compliance checklist,” Natarajan said in Delhi, according to a press release distributed by the news agency PNN and republished by Business Standard and other outlets. “If you have to teach a machine not to be harmful, you have already built the wrong machine.”
His proposed alternative, which he calls Angelic Intelligence, is built around a deliberative system of 27 specialised agents — what he terms Digital Angels — each representing a cross-cultural virtue drawn from philosophical traditions spanning Sanskrit, Abrahamic, Confucian and Indigenous frameworks. No single agent determines an output; all 27 deliberate to consensus.
Stripped of the language of virtue, the design is a multi-agent deliberation architecture in which ethical reasoning sits inside the inference pipeline rather than downstream of it. It is slower than a single-model call. It is more expensive. It also produces, as a byproduct, an auditable trace of the reasoning behind any given output — a property that builders in regulated industries are increasingly going to need.
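In engineering terms, that pattern can be sketched in a few dozen lines: several evaluator agents each pass judgment on a candidate output, the output ships only on consensus, and every verdict is retained as the audit trail. The agent names, virtues and unanimity rule below are illustrative assumptions for the sketch, not Orchestro.AI's actual design:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Verdict:
    agent: str      # which virtue-agent produced this judgment
    approve: bool   # does the agent accept the candidate output?
    reason: str     # human-readable rationale, kept for the audit trail

@dataclass
class DeliberationResult:
    output: Optional[str]                        # the answer, or None if withheld
    trace: List[Verdict] = field(default_factory=list)

def deliberate(candidate: str,
               agents: List[Callable[[str], Verdict]]) -> DeliberationResult:
    """Run every agent over the candidate; release it only on consensus.

    The trace (one Verdict per agent) is the byproduct described above:
    a record of the reasoning a regulator could inspect after the fact.
    """
    trace = [agent(candidate) for agent in agents]
    consensus = all(v.approve for v in trace)    # unanimity, in this sketch
    return DeliberationResult(candidate if consensus else None, trace)

# Two toy virtue-agents (hypothetical stand-ins for the 27 described).
def honesty_agent(candidate: str) -> Verdict:
    ok = "guaranteed cure" not in candidate.lower()
    return Verdict("honesty", ok, "no unfounded certainty" if ok else "overclaims")

def dignity_agent(candidate: str) -> Verdict:
    ok = "worthless" not in candidate.lower()
    return Verdict("dignity", ok, "respectful" if ok else "demeaning language")

result = deliberate("This treatment may help some patients.",
                    [honesty_agent, dignity_agent])
```

The point of the sketch is the shape, not the toy policies: the evaluators sit inside the pipeline, and the trace exists whether or not the output ships.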
The regulatory clock
Natarajan’s framing has landed at a moment when the legal and commercial environment around AI is shifting quickly. According to coverage of his Delhi remarks, the European Union’s AI Act enters full enforcement in August of this year, with penalties that can reach 35 million euros or seven percent of global revenue, and Gartner has projected that half of governments worldwide will impose responsible-AI regulations by the end of 2026.
For enterprises building AI into healthcare, finance, education and the public sector, this changes the calculus. A system whose reasoning cannot be explained to a regulator is, increasingly, a liability. A system that can produce a defensible chain of deliberation is an asset.
Natarajan has positioned Orchestro.AI directly in this gap. His company identifies a total addressable market of $4.5 trillion, with a serviceable addressable market of $520 billion and an initial market position projected at $12–18 billion. Whether those projections hold is another question. The thesis behind them — that every serious enterprise will eventually need a trustworthy AI deliberation layer, and that someone is going to build it — is harder to argue with.
He is not the only person making this case. He is, however, among the more credentialed. Holder of more than two hundred patents across logistics, supply chain and AI architecture, he spent twenty-five years inside the technology and operations functions of Walmart, The Walt Disney Company, Coca-Cola, PepsiCo, Target and American Eagle Outfitters before founding Orchestro.AI. He was educated at Georgia Tech, MIT, Harvard Business School and IESE. The combination of industrial credibility and engineering specificity is what colleagues say has given his argument unusual traction.
A personal arc that informs the engineering
Coverage of Natarajan rarely fails to mention the biographical details, and they are difficult to leave out. He grew up in southern India, studying under streetlights because his family had no electricity. His mother, by his account, once pawned her wedding ring for thirty rupees to pay his school fees and stood outside a headmaster’s office for an entire year to secure his admission. He arrived in the United States with thirty-four dollars and, in lean periods, lived in his car.
In a different kind of profile, this would be colour. In his case, it is closer to specification. The argument he makes — that the priors of those who design AI systems determine what the systems optimise for, and that the current priors are too narrow — is harder to dismiss when it comes from someone who has been on the wrong end of institutional indifference.
“My mother stood outside a headmaster’s office for 365 days so I could get an education,” Natarajan said in Delhi, according to the press release. “That kind of love — that sacrifice — is what I want to encode into the machines we build. If AI cannot understand dignity, it has no business making decisions about human lives.”
What builders are watching
For practitioners shipping AI products today, the interest in Natarajan’s framework is less about its branding than about the architectural pattern it points to. Multi-agent deliberation, separation of prediction from evaluation, auditable reasoning traces — these are becoming table-stakes design choices in any domain where an AI decision can be appealed, audited or sued over.
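One of those patterns, the separation of prediction from evaluation, reduces to a simple discipline: the component that drafts an answer never decides whether it ships, and every decision is logged where it can later be appealed or audited. The function names and the toy policy below are illustrative assumptions, not any vendor's API:

```python
import time
from typing import List, Tuple

def predict(prompt: str) -> str:
    """Stand-in for a generative model call (assumption: any LLM client)."""
    return f"Draft answer to: {prompt}"

def evaluate(prompt: str, draft: str) -> Tuple[bool, str]:
    """Independent evaluator: never edits the draft, only passes judgment."""
    if "medical" in prompt.lower():
        return False, "medical queries require human review"   # toy policy
    return True, "within policy"

def answer(prompt: str, audit_log: List[dict]) -> str:
    draft = predict(prompt)
    allowed, reason = evaluate(prompt, draft)
    # The appealable record: what was asked, what was drafted, and why
    # the result shipped or was withheld.
    audit_log.append({"ts": time.time(), "prompt": prompt,
                      "allowed": allowed, "reason": reason})
    return draft if allowed else f"[withheld: {reason}]"

audit: List[dict] = []
reply = answer("Summarise this contract", audit)   # ships: within policy
```

Because evaluation is a separate function with its own log entry, the policy can be tightened, versioned or audited without retraining or even touching the predictor, which is precisely the property regulated deployments are after.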
The era of the foundation model with a thin wrapper is, by many accounts, ending. What replaces it is still being designed. Natarajan’s wager is that the replacement will look less like a single intelligent system and more like a small society of cooperating components, some of which exist specifically to reason about whether the others can be trusted in a given context.
It is a wager that has now been backed by a standing ovation from one of the world’s largest AI policy gatherings and a medal from a library that has outlasted plagues, wars and revolutions. Whether his particular company captures the resulting market is, in some sense, a secondary question. The framing — that the AI industry’s safety problem is an engineering problem, not a governance one, and that the engineering needs to move up the stack — is the more durable contribution.
The fences, as Natarajan put it in Delhi, are being built around a horse that has already left the barn. The question for the next phase of the industry is whether anyone is willing to go back and rebuild the barn.
