
Can AI balance growth with ethics?

In a rapidly advancing digital age, artificial intelligence (AI) stands at a crossroads between technological progress and unresolved ethical dilemmas. Ed Watal, founder of Intellibus, shared his insights into AI’s future and its ethical implications in a recent interview on the Lexicon podcast.

Watal’s journey into AI began during his tenure teaching cloud computing at New York University (NYU): “I helped NYU launch their cloud curriculum back when cloud was not a thing,” he recounts. This experience exposed him to the significant data challenges foundational to AI development. “Everybody wanted to do AI for a while except that it was hard. The hard problem was organizing data,” he explained.

This challenge led Watal to create BigParser, a platform designed to source, organize, and structure data to support AI applications. “The idea was to build something like a ChatGPT for the enterprise. The hard problem is how do you get AI to understand all that data?” Watal adds, “How do you source and organize and structure data in a meaningful way to feed AI so that it won’t hallucinate?”

Watal sees the solution in non-duplicative data organization. “The moment you’re duplicating data, it’s not going to necessarily be as effective and efficient. Bringing ownership into question and defining clear principles, policies, and guardrails around data ownership is essential,” he noted.

AI and data ownership

Watal is concerned by how AI companies gather data and what that means for ownership: “One approach companies like OpenAI and other organizations have shown us is to take the data off the internet. And we know the ethical concerns there, and it’s no surprise to anyone. That data is owned by individuals who put in the hard work and labor to put that information out there.”

“We saw very recently the case with Scarlett Johansson, where she had explicitly told OpenAI not to use her voice and then they used her likeness. So organizations for their own progress and trying to keep the attention of their audience are going to continue to do those things,” he adds.

Could an open ethical data model, similar to how Wikipedia transformed information, be a potential solution? “Imagine doing that same (Wikipedia) thing at scale for all data on the Internet,” says Watal. “Finding a mechanism where the data is organized and structured, so you don’t have to use enormous computing power, and it is then easily available in a mechanism where your copyrights are not being violated.”

“The more of those kind of solutions emerge and those problems get solved, the easier it becomes. So I think it’s a problem of ethics… and cuts across the entire fabric of AI that’s being implemented worldwide.”

Can AI be regulated?

Setting these policies and guardrails may be beyond the capability of government and require self-regulation by AI companies. “Today, the governments of the world may not even be capable enough in terms of their know-how to put adequate guardrails against these organizations. So, self-regulation becomes the only option,” says Watal. 

In self-regulating, how do AI companies balance rapid growth with responsible development? This question has been highlighted recently by the departures from, and rumors circulating around, OpenAI’s ethics team. “We should progress and we should accelerate… but you need guardrails,” Watal states. “It’s pretty telltale when people leave those organizations because they feel that they can’t make the change.”

“So I would call out and urge all those people in those organizations, this is your time to not leave that organization and quit. This is your time to fight, be there, stay there, stay your ground, be who you are and fight the good fight, because that’s the only way these organizations will change from within. And it’s your responsibility to do that.” 

Ethical considerations in AI development

Watal is now spearheading the World Digital Governance (WDG) project, which is focused on the importance of building ethical AI. “We wanted to solve the problem of building an AI that is for the greater good and widely available, democratized and possibly humanized,” he said.

“This is one of those times and moments in history where it will not stop. This will go down and this is a very fast train. And so trying to stop it is kind of foolhardy. It’s like someone invented a gun. And you’re trying to say, ‘Oh, stop guns’. We can see what happens many centuries later. We have better and smarter and faster guns. So, this technology is here to stay. It’s not going away. The question is, what can you do to make the world more responsible around it?”

“The journey to solve the problem lies in an open, ethical approach like Wikipedia. It’s about bringing together policymakers, regulators, ontologists, standards bodies, and ethicists into a common forum to build a common framework,” says Watal.

He calls for global cooperation to establish ethical standards that protect individual rights while promoting technological advancement: “This is not a matter of let America define its standards, let China define its standards, let UAE define its standards. It’s a matter of let all 8 billion people for once and for all come to an agreement. And you would say that’s an insane idea. But guess what? That’s what we need.”

The path to an ethical AI future

Looking ahead, Watal envisions an ideal path for AI development that simplifies data ownership and promotes ethical practices. “The journey lies in organizing data and creating a model that works for everyone. It’s about building a common framework with input from all stakeholders globally,” he said.

The WDG initiative is central to this vision. “WDG brings together policymakers, regulators, ontologists, standards bodies, and ethicists to create a cohesive framework for ethical AI,” Watal explained. By involving a diverse range of voices, WDG aims to ensure that AI development benefits everyone and respects individual rights. 

“So my hope here is that WDG becomes a community of all 8 billion people where everyone has a voice. And then, guess what, AI gives us this power to now process 8 billion requests a second and actually make sense of that. So, the good thing (AI) that we have developed unethically, is a means to solve the ethical problems… I use the term ethical uses of unethical AI, and this could be one of those.”
