
Navigating the future of AI governance

Even as organizations continue to navigate the landscape of AI governance, the debate is intensifying over whether traditional regulatory frameworks, and traditional notions of ownership, still fit.

Understanding the Complexity

The fusion of Artificial Intelligence (AI) and computing power has ushered in a new era of complexity, especially with the emergence of Generative AI (GenAI). In 2024, a survey of 12,000 people globally revealed that only 6% trust social media companies with their personal data, a significant trust deficit in these platforms. Moreover, the global AI market is on a trajectory to expand by over 35% annually over the next half-decade, driven by escalating computing capabilities and underscoring the transformative potential of GenAI.


The stakes are sky-high: the risks include misuse such as AI-driven fake news and bias in decision-making, while ethical concerns over AI weapons underscore the need for robust regulation.

Evolving the Conversation on Regulation

The conversation around how to regulate this tech, and who gets to own it, is heating up. While it's crucial for governments to step in and protect us from harm, the truth is that AI is a whole new ballgame that might need some fresh thinking on regulation.


And this isn’t just some high-level policy discussion; it’s the talk of the town, extending even to family dinners. Take my father and me, for example. Having spent over 25 years in an Indian nationalized bank, my father’s all about weighing the societal costs of racing ahead with tech. He often reminds me of the days when banking was more about serving the community than chasing profits, drawing parallels with today’s tech advances. He worries about what happens to jobs: drivers, if cars start driving themselves, or bank employees, if AI takes over customer service. I, on the other hand, talk about the productivity gains and the complex problems we can solve using AI and GenAI.

So the pertinent question now is: how do we apply AI/GenAI in such a way that the benefits are equitably distributed? And there lies the ownership dilemma.

The AI Ownership Dilemma


The question of ownership looms large in the context of AI: should these transformative technologies be controlled by private entities driven by profit motives, or owned and operated by the government to serve the broader public interest? Striking a delicate balance between innovation and oversight, between corporate autonomy and societal responsibility, is key.

Informed by the Past, Building for the Future


Reflecting on historical precedents can offer valuable insights. Banking regulations and public-private partnerships (PPPs), such as the Delhi Metro, have managed to blend the best of both worlds. By combining private-sector expertise with public oversight, these models have brought some incredible projects to life, offering a hint at how we might handle GenAI.

We could apply the same principles: the private sector provides the expertise for building infrastructure, while the training data for the model is drawn from the larger internet but regulated to control bias. However, it’s crucial to acknowledge the challenges and limitations of PPPs, including issues of transparency, risk allocation, and regulatory oversight.

For GenAI, it is the training data that actually shapes how the LLMs behave. Perhaps regulations for training data could be drafted by a committee representing all the pillars of democracy (the legislature, the judiciary, the media, and the executive). Even with strong training data regulations, checks should be put in place to ensure that the LLM learns from the regulated data alone and is not programmed to behave in a certain way.


India’s success stories in Aadhar (technologies in public governance) and UPI (public governance in technologies) showcase the power of collaboration among the government, industry, and academia, driven by strong political will.

As governments navigate the complexities of regulating or owning such massive technologies, stronger collaboration with industry stakeholders, academia, and civil society to develop agile regulatory frameworks is imperative. Involving eminent and successful industry figures will add gravitas to the framework. For example, remember how the idea of Aadhar turned into reality with Nandan Nilekani at the helm of affairs?

Overcoming Challenges in AI Regulation


The journey ahead requires addressing technological literacy among governance bodies and tailoring regulatory frameworks to reflect each country’s unique context. The ultimate goal is to ensure that AI adoption aligns with societal benefits and ethical considerations.

The vision of a country and its geopolitical, social, and economic situation will influence the regulatory framework on GenAI and AI adoption. Questions that bear thinking through are: Does the adoption of AI benefit people at large? What does the transitory framework look like? What could be its potential impact while transitioning? 

Wrapping up

Balancing innovation with responsibility, protecting individual rights, and ensuring equitable benefits are foundational to shaping the widespread adoption and impact of AI. This journey demands the participation of all stakeholders, calling for an informed and collaborative approach to navigating the complexities of AI regulation.

Sudhir Shenoy

Sudhir Shenoy is Senior Director — Product Delivery at Publicis Sapient.

