AI Panel at Davos: AI+Future
Davos has always been a place where big tech sells the future. This year, it feels more like a place where governments, investors and civil society are pricing the conditions under which that future will be allowed to ship.
The World Economic Forum’s Annual Meeting runs January 19–23, 2026 in Davos-Klosters, and the AI conversation has matured into something sharper than model demos and productivity promises. The question floating through the week isn’t just “what can AI do next?” It’s “who controls it, who benefits—and what rights people keep when AI systems reach deeper into daily life?”
That shift is why a Davos-week side gathering framed around competition, ethics and the “AI-native” generation matters. The subtext is blunt: trust is no longer a soft virtue. It’s becoming a competitive advantage—and a competition problem. In 2026, AI leadership isn’t only about performance. It’s about legitimacy: the ability to prove restraint, accountability and user rights at scale.
What changed? The industry has outgrown the “move fast and patch later” era. As AI moves from chatbots into agents that can act—buy, book, approve, deploy code, initiate transactions—governance stops being a policy footnote and becomes a product requirement. You can hear the shift in the World Economic Forum’s own framing: trust and verification are being cast as prerequisites for unlocking AI’s economic upside, not obstacles to it.
And when trust becomes an engineering requirement, it inevitably becomes a market filter. Some companies will be able to meet that bar quickly and credibly. Others won't. That is where AI's "trust problem" turns into a competition problem.
Trust is now the moat—and the tax
For years, AI competition was discussed in familiar inputs: talent, compute, data and venture funding. Those still matter. But Davos is increasingly focused on a different axis: who owns the foundations and who sets the rules of use.
This is where “open vs. closed” stops being a technical preference and becomes an economic power question. Open ecosystems can accelerate adoption and experimentation. Closed systems can centralize control, security and monetization. Both create different dependencies—and therefore different geopolitical risks.
That geopolitical lens is no longer theoretical. Competition authorities are already treating platform control as a lever of market power, and AI is pushing them to extend that logic into new terrain. The European Commission’s finding that Apple breached the Digital Markets Act’s anti-steering obligation is one clean example of where this is heading: regulators are increasingly suspicious of gatekeepers that can shape downstream markets through default rules and friction.
AI adds a new twist. The “gatekeeper” may not just control distribution; it may control the model layer—what systems are available, what they can access, and how easily users can leave. If your AI assistant becomes your interface to services, commerce and information, the winner isn’t simply the best model. It’s the model people will trust to operate inside their lives, and the platform that can credibly demonstrate it deserves that trust.
This is why governance is becoming a strategic asset. Frameworks that used to look like compliance paperwork are turning into market infrastructure. NIST's AI Risk Management Framework, for instance, is essentially a template for operationalizing concepts like accountability, transparency and safety across an organization. The OECD AI Principles draw the link even more explicitly: the stated policy goal is trustworthy AI that still supports innovation and competition.
The commercial takeaway is uncomfortable for parts of the AI industry: trust will raise the cost of doing business, at least for companies that want to operate at scale in regulated markets. But it will also create a moat for those who can build auditable systems quickly.
Put differently, we’re moving from an era where trust was marketing to an era where trust is a line item. If you can’t show your model is safe, governed and accountable, you won’t just face reputational risk—you’ll face distribution limits, procurement bans, higher insurance costs, and slower enterprise sales cycles. Trust becomes a tax on the unprepared.
The ethics debate is becoming a design debate
In the last decade, tech companies often treated trust and ethics as messaging problems: publish principles, add transparency language, create a review board. Davos is increasingly treating ethics as an architectural question: what is the system allowed to access, and on what terms?
The most immediate risk is no longer only what AI can generate. It’s what AI can touch: personal data, financial accounts, health records, workplace systems, identity credentials. As AI agents become more capable, the consent model that dominated the platform era—an occasional checkbox and a long privacy policy—starts to look unfit for purpose.
That’s why “permissioning” is showing up as a serious design theme in the agent era. In security, the philosophy is familiar: least privilege, scoped access, auditable actions. The same logic is now being pulled into consumer AI. There’s a growing body of work exploring authenticated delegation—how a user can authorize an agent to act on their behalf while keeping clear accountability and limitations. And practitioners are increasingly arguing that consent for AI agents must be granular, time-limited and easy to revoke, because agents act, not just respond.
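To make that concrete, here is a minimal sketch in Python of what "granular, time-limited and easy to revoke" could look like in practice. It is not any standard or vendor API; the names (AgentGrant, ConsentLedger) and the scope strings are invented for illustration, but the mechanics mirror the security principles above: least privilege, scoped access, expiry, revocation and an audit trail.

```python
# Illustrative sketch only: a minimal "least privilege" grant model for an AI agent.
# None of these names come from a real standard or SDK; they are assumptions made
# for this example (granular scopes, expiry, revocation, audit trail).
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone


@dataclass
class AgentGrant:
    """A scoped, time-limited permission a user gives to an agent."""
    agent_id: str
    scope: str                      # e.g. "calendar:read", "payments:initiate"
    expires_at: datetime
    revoked: bool = False

    def is_valid(self, now: datetime | None = None) -> bool:
        now = now or datetime.now(timezone.utc)
        return not self.revoked and now < self.expires_at


@dataclass
class ConsentLedger:
    """Holds grants plus an append-only log of everything the agent tried to do."""
    grants: list[AgentGrant] = field(default_factory=list)
    audit_log: list[dict] = field(default_factory=list)

    def grant(self, agent_id: str, scope: str, ttl: timedelta) -> AgentGrant:
        g = AgentGrant(agent_id, scope, datetime.now(timezone.utc) + ttl)
        self.grants.append(g)
        return g

    def revoke(self, agent_id: str, scope: str) -> None:
        for g in self.grants:
            if g.agent_id == agent_id and g.scope == scope:
                g.revoked = True

    def authorize(self, agent_id: str, scope: str) -> bool:
        allowed = any(
            g.agent_id == agent_id and g.scope == scope and g.is_valid()
            for g in self.grants
        )
        # Every attempt is logged, whether or not it succeeds.
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": agent_id,
            "scope": scope,
            "allowed": allowed,
        })
        return allowed


# Example: a one-hour, read-only calendar grant the user can revoke at any time.
ledger = ConsentLedger()
ledger.grant("travel-agent", "calendar:read", ttl=timedelta(hours=1))
print(ledger.authorize("travel-agent", "calendar:read"))      # True
print(ledger.authorize("travel-agent", "payments:initiate"))  # False: never granted
ledger.revoke("travel-agent", "calendar:read")
print(ledger.authorize("travel-agent", "calendar:read"))      # False: revoked
```

The design choice worth noticing is that every authorization attempt is recorded, allowed or not: auditability is part of the permission layer itself, not a report bolted on afterwards.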
The "double authorisation" framing fits into that broader direction: treat consent not as a one-time acceptance but as a layered, explicit pair of grants, permission to access and permission to act. The point isn't that this specific label has already become a standard. It's that the market is moving toward the idea that AI needs built-in permission layers, and that those layers will be fought over like any other competitive surface.
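A second sketch, again purely hypothetical and in the same spirit as the one above, separates the two layers explicitly: a standing grant to access a resource, and a per-action approval before the agent acts on it. Every name here is invented; the point is only that "access" and "act" are distinct checks that can be granted, withheld and revoked independently.

```python
# Illustrative sketch of the "double authorisation" idea: an agent needs
# (1) a standing grant to access a resource and (2) explicit, per-action
# approval before it acts on that resource. All names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class DoubleAuthorization:
    access_grants: set[str] = field(default_factory=set)             # layer 1
    approved_actions: set[tuple[str, str]] = field(default_factory=set)  # layer 2

    def grant_access(self, resource: str) -> None:
        """Layer 1: the user lets the agent see or read a resource."""
        self.access_grants.add(resource)

    def approve_action(self, resource: str, action: str) -> None:
        """Layer 2: the user explicitly approves one specific action on it."""
        self.approved_actions.add((resource, action))

    def may_act(self, resource: str, action: str) -> bool:
        """Both layers must be satisfied before the agent can act."""
        return (resource in self.access_grants
                and (resource, action) in self.approved_actions)


auth = DoubleAuthorization()
auth.grant_access("bank_account")
print(auth.may_act("bank_account", "transfer_funds"))  # False: access, but no approval to act
auth.approve_action("bank_account", "transfer_funds")
print(auth.may_act("bank_account", "transfer_funds"))  # True: both layers satisfied
```

In this shape, an agent can read an account without ever being able to move money out of it unless the user has separately said yes to that specific action.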
Once you see consent and permissioning as design, the next step is obvious: design becomes strategy. Companies that bake consent, auditing and user control into the product will be better positioned in markets where regulators are skeptical and consumers are jumpy. Companies that treat permissioning as an afterthought will find themselves stuck—either blocked by regulation or forced into expensive retrofits.
This is also where the “AI-native generation” becomes more than a slogan. The next generation of users won’t be impressed by novelty. They will assume intelligence is embedded. What they will scrutinize is agency: Can I control what this system knows about me? Can I shape how it behaves? Can I leave without losing my digital life?
That expectation pushes companies toward a different posture: from offering “access” to offering a sense of ownership and participation. In business terms, it means the next wave of platforms may compete not only on capability and cost, but on user rights as a feature set—permission controls, portability, transparency into what data is used and how decisions are made.
Davos is often accused of producing narratives rather than outcomes. But the AI discussion is converging on practical deliverables: policy guidance that can travel across sectors, governance structures that can be implemented, and product principles that can be audited rather than proclaimed. That’s what happens when the market realizes that the “trust problem” is not a reputational risk at the margin—it’s the operating environment.
The companies that win the next decade of AI won’t necessarily be the ones with the flashiest model. They’ll be the ones that can scale capability and restraint: models that perform, systems that can be proven safe, and platforms that can earn consent at scale.
If that sounds like a slower world, it isn’t. It’s a more competitive one. Because once trust becomes measurable, it becomes differentiating. And once it becomes differentiating, it becomes the battleground.
