AI Made Friendly HERE

How Open-Source AI Drives Responsible Innovation – Sponsor Content

Few understand the challenges and opportunities involved with building collaborative ecosystems better than Rebecca Finlay, CEO of Partnership on AI, a nonprofit organization that brings together academics, civil society, and industry leaders to collaborate on pressing AI issues. Finlay and her colleagues at PAI have been at the forefront of AI safety and responsibility since 2016, but even she is surprised at how fast the AI ecosystem has evolved in the past few years. It has only reinforced her belief that the future of AI must be open.

“The reason this wave of AI feels different is the speed with which the technology is being developed,” Finlay says. “AI is having just such a large impact on so many different sectors of the economy and society that many organizations have realized that they can’t figure this out alone. We need to be working with others to really identify what the right questions are, and that group needs to be diverse because the most important questions that we need to solve are the ones that sit at the boundaries between disciplines.”

The good news is that open collaboration has deep roots in AI research and development. In fact, it was the unique spirit of open collaboration among AI researchers that drove Joe Spisak, the director of product management for generative AI at Meta, to pursue AI research and product development in the first place. Spisak started his career in the semiconductor industry, a notoriously closed ecosystem where intellectual property and patents are closely guarded. When Spisak pivoted into AI about a decade ago, he was surprised to see how readily AI researchers in industry and academia exchanged knowledge and resources, and the profound impact this had on accelerating AI innovation.

“Open-source AI is where the roots are in academic AI research, and it’s translated over into industry because it’s really hard to do this work in closed environments,” says Spisak. “We’ve been talking about democratization for such a long time in the AI space, and I think people are starting to feel the real power of the work we’ve been doing. The results speak for themselves.”

We need an open approach to AI because if you want to build safe and trustworthy complex systems, you need to be able to inquire and understand what’s happening behind the curtain.

Consider, for example, a recent collaboration between IBM and NASA to develop open-source scientific foundation models. These models, trained on vast troves of scientific literature, make it easier for researchers to access and build upon existing scientific knowledge. The models were designed with built-in safeguards that allow researchers to verify outputs and customize the tools for their specific research needs. This open, iterative approach not only democratizes access to advanced AI capabilities but also helps to mitigate risks by enabling continuous stress testing and refinement. By creating an open, shared resource that anyone can use and contribute to, the collaboration accelerates the pace of scientific discovery without compromising on safety or user needs.

“Open systems are beneficial because they allow you to progress through the crawl, walk, run phases of development to understand what’s going on, adapt, and build something new on top of it that will address the problem,” says Anthony Annunziata, IBM’s director of AI open innovation.

IBM took a similar approach in the development of the company’s Granite models, a family of open-source code-generation platforms that help developers write and deploy better software faster. The Granite models, which were released under a permissive license, are designed to allow anyone to use, modify, and build upon them for software development tasks that range from fixing bugs to generating entire programs based on natural language descriptions. This open approach enables the global developer community to continuously enhance and expand the capabilities of the models, leading to compounding gains in productivity and code quality over time. As software powers an ever-growing share of our economy and daily lives, these efficiency gains yield widespread benefits because businesses can bring new applications to market faster while mitigating risks from faulty code.

“We need an open approach to AI because if you want to build safe and trustworthy complex systems, you need to be able to inquire and understand what’s happening behind the curtain,” says Annunziata. “The community can red-team the system by getting a lot of people to figure out its problems and then go in and fix them. It’s a low-friction process that works great in other areas of open-source software, and it’s already working in many areas of AI.”

Originally Appeared Here

You May Also Like

About the Author:

Early Bird