
Three Organizations Are Shaping The Future Of Responsible AI


As artificial intelligence (AI) continues to transform industries and societies, the conversation around responsible AI development is gaining urgency. Ensuring that AI systems are ethical, trustworthy, and inclusive is not just a technical challenge but a moral imperative. Emily Reid, CEO of AI4ALL; Kabir Barday, CEO of OneTrust; and Rebecca Finlay, CEO of Partnership on AI, are at the forefront, working to ensure AI development prioritizes human interests, promotes diversity, and adheres to ethical standards.

The Need For Responsible AI Development

The rapid advance of AI technologies has brought unparalleled opportunities for innovation and progress. However, it has also raised significant concerns about privacy, bias, accountability, and the unintended consequences that continue to surface. As AI systems proliferate and become more sophisticated, the risks associated with unchecked development grow dramatically.

OneTrust has pioneered a new category of software for the responsible use of data and AI, spanning transparent data collection, compliance automation, and the enforcement of data policies. The platform has seen significant customer adoption, fueled by privacy regulations and growing AI use. Kabir Barday, CEO, argues that privacy is good for the world and good for business: “We’re approaching $500M in annual revenue and have over 14,000 customers globally.” Where responsible technology once stifled the speed of innovation, Barday now sees a different environment unfolding: “We’re seeing a shift where marketing and data teams are leading the privacy charge, realizing that trust is directly proportional to the success of their AI initiatives.” He adds that companies face an internal tug-of-war between innovation teams pushing to maximize data use and multiple risk management units tasked with protecting the business. This creates tension, as the drive to innovate often clashes with the need to minimize potential legal, ethical, and security risks.

AI4ALL tackles another vital aspect of responsible AI development: diversity and inclusion. By providing educational programs and opportunities for underrepresented groups in AI, AI4ALL aims to reshape the demographics of the AI workforce and ensure that diverse perspectives inform AI development. Emily Reid, CEO of AI4ALL, articulates the organization’s vision: “Our mission is to create the next generation of AI changemakers that the world needs. The fact is the train has left the station; however, we have a choice about what the future of AI looks like. It’s not something that is already set in stone but is going to be written by this next generation of AI technologists.”

Reid’s statement comes on the heels of the recent closing of Women Who Code (WWC) due to a lack of funding. During its decade of operations, Women Who Code organized over 20,000 community-led events and granted $3.5 million in scholarships. Fearless Fund, a venture fund that had invested “nearly $27MM into 40 women of color-led startups,” also faced legal setbacks in June when a federal appeals court ruled against its grant program, stating it likely violates the Civil Rights Act. Reid responded to these events: “There was a lot of progress made for a number of years and the kind of layoff period I think has rolled back some of that progress… I believe Ginni Rometty from IBM said that AI is going to change 100% of jobs, 100% of industries and 100% of professions, and I don’t think that that’s hyperbole.”

Partnership on AI (PAI) emerged in 2016, when AI researchers across industry recognized the need for collaboration across sectors and areas of expertise to advance responsible AI. PAI brings together companies, researchers, civil society, and the public to establish industry standards for transparency in AI. Rebecca Finlay, CEO, explains their efforts: “PAI’s work goes beyond setting voluntary standards. We prepare organizations for emerging regulations by anticipating and understanding technological change and its impact on society.”

Building Trust: A Core Principle

Trust is a fundamental principle that underpins the work of the three organizations.

For OneTrust, building trust starts with transparency and compliance. Barday explains, “Trust is activated through privacy and choice.” He describes how first-party data can achieve this: “… marketing teams see reliance on third party data is not their future. It’s got to be first party data. Your ability to capture first party data is directly proportionate to the trust you have with your customers, and that trust is activated through privacy, choice, control and transparency. Companies are deploying these technologies around consent and preference management.” This shift towards first-party data represents a significant change in how companies approach data collection and customer relationships. Recent privacy infringement fines imposed on major tech companies support Barday’s argument: Meta faced a record €1.2 billion fine for data transfer violations, while TikTok and Instagram were penalized €345 million and €405 million respectively for mishandling children’s data. Regulators issued 438 GDPR fines in 2023, totaling €2.054 billion.
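In concrete terms, consent and preference management of the kind Barday describes comes down to recording, per user and per processing purpose, what was agreed to, and checking that record at every point of use. Below is a minimal sketch of the pattern in Python; the class and field names are hypothetical illustrations, not OneTrust’s actual product or API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """One user's consent decision for one processing purpose."""
    user_id: str
    purpose: str          # e.g. "marketing_email", "analytics"
    granted: bool
    timestamp: datetime
    source: str           # where consent was captured, e.g. "signup_form"

class ConsentLedger:
    """Keeps the latest consent decision per (user, purpose) pair."""
    def __init__(self) -> None:
        self._records: dict[tuple[str, str], ConsentRecord] = {}

    def record(self, user_id: str, purpose: str, granted: bool, source: str) -> None:
        self._records[(user_id, purpose)] = ConsentRecord(
            user_id, purpose, granted, datetime.now(timezone.utc), source
        )

    def is_permitted(self, user_id: str, purpose: str) -> bool:
        # Default-deny: no record on file means no consent.
        rec = self._records.get((user_id, purpose))
        return rec is not None and rec.granted

# Consent is captured once, then checked before every use of the data.
ledger = ConsentLedger()
ledger.record("user-42", "marketing_email", granted=True, source="signup_form")
assert ledger.is_permitted("user-42", "marketing_email")
assert not ledger.is_permitted("user-42", "analytics")  # never granted
```

The default-deny check is the essential design choice: absent an affirmative record, no purpose is permitted, which is what makes first-party consent auditable.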

Transparency from private tech companies remains elusive, and opaque algorithms persist as generative AI evolves. Finlay of Partnership on AI argues that it is important for humans to be able to interpret and interact with AI systems, explaining, “that’s why it’s important for organizations to be transparent about how their systems were built and their performance against current benchmarks related to both capabilities and harms.” Finlay points to public reporting to emerging entities such as national AI Safety Institutes: “In our Safe Foundation Model Deployment Guidance, we worked with experts in industry, academia and civil society to articulate transparency reporting across the development and deployment lifecycle. This is a good start for developing an auditing and monitoring regime for the most advanced AI model developers and deployers.”

Finlay also calls on organizations to actively elicit the perspectives of those who develop, deploy, and monitor AI systems to ensure solutions are reliable and sustainable. Transparency, she emphasizes, will likewise be key to building trust with external stakeholders: “That includes informed consent, clear disclosure of synthetically generated content and information about when and how an individual is interacting with an AI system.”

Reid, CEO of AI4ALL, observes a significant shift in industry focus regarding AI. “In 2023, many of my conversations with partners and advisors focused heavily on generative AI. There was a widespread anxiety about not wanting to get left behind and a desire to understand how to leverage this technology.” In recent months, however, she has seen a pivot towards AI governance, with companies grappling with risk management in the absence of comprehensive legal frameworks.

Reid highlights a critical gap identified by AI4ALL’s founder: “Dr. Fei-Fei Li has emphasized the lack of overlap between expertise in policy and legal spaces and the AI field. This gap is particularly concerning because technology is advancing much faster than legislation and policy. We need more people with computer science backgrounds working in or advising on policy and legal matters related to AI.”

Tackling the Human Challenges of Large Language Models (LLMs)

LLMs have emerged that scale harms: discriminatory treatment, manipulation, amplification of existing societal prejudices, and unfair outcomes for certain groups, among other infringements that erode human agency. Organizations are now quick to call out the perpetrators of these harms. AI ethics is now an established practice, and more than 40 regulations in privacy and ethics have emerged globally.

As LLMs and advanced algorithms become increasingly prevalent, Partnership on AI (PAI), OneTrust, and AI4ALL are each addressing the associated risks and implications in their own way.

PAI recognizes the profound impact of LLMs on society and is working to establish guidelines for their responsible development and use. Finlay describes their proactive approach: “We issued some of the earliest guidance that focused on the release of open foundation models and believe that an open innovation ecosystem supports more competition and external oversight, if also some marginal risk.” PAI’s guidance spans the creation and use of synthetic media, the deployment of foundation models, the use of demographic data, the ethical treatment of workers in the data supply chain, AI’s impact on labor and the economy, and documentation in machine learning systems.

PAI’s efforts include developing frameworks for ethical AI deployment and promoting transparency in AI systems. They are also addressing the potential misuse of synthetic media generated by LLMs, with Finlay noting, “Our Synthetic Media Framework has institutional support from 18 organizations, including OpenAI, who recently shared a case study on how they considered the framework when building disclosure mechanisms into DALL-E.”

OneTrust focuses on the data governance and compliance challenges posed by LLMs. Barday illustrates the critical nature of these issues and notes that data governance frameworks are evolving to address new challenges in data management and usage. Companies are recognizing that all data, regardless of the source, requires a heightened level of governance that includes purpose-specific attributes. This shift goes beyond traditional data access governance, which focused primarily on data sensitivity and access control. Barday affirmed: “What companies are starting to realize is any data they have, whether it’s scraped, whether it’s collected first party, or whether it’s from a third party, there’s a new level of governance they need on that data and that new level of governance is a purpose specific attribute on the data.” This approach raises critical questions for companies, especially those collecting data from various sources. Barday explains, “…if a company goes and scrapes a bunch of data online, the question for that company is what data have they collected? How were the consumers of that data able to give you the purpose specification and what documentation do you have that proves you have the right legal basis to have that?”
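To make the idea concrete: purpose-specific governance means each dataset carries attributes recording where it came from, which purposes its subjects agreed to, and what documentation backs the legal basis, and every use of the data is checked against those attributes. Here is a minimal illustrative sketch in Python; the field and function names are hypothetical, not OneTrust’s product.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class DataAsset:
    """A dataset tagged with purpose-specific governance attributes."""
    name: str
    source: str                     # "first_party", "third_party", or "scraped"
    permitted_purposes: frozenset   # purposes the data subjects agreed to
    legal_basis_doc: Optional[str]  # pointer to consent records or license, if any

def check_use(asset: DataAsset, intended_purpose: str) -> None:
    """Refuse any use not covered by the asset's purpose attributes."""
    if intended_purpose not in asset.permitted_purposes:
        raise PermissionError(
            f"{asset.name}: purpose '{intended_purpose}' was never authorized"
        )
    if asset.legal_basis_doc is None:
        # Barday's question: what documentation proves the legal basis?
        raise PermissionError(f"{asset.name}: no documented legal basis")

# A scraped corpus with no recorded consent fails the check for model training.
scraped = DataAsset(
    name="web_scrape_2024",
    source="scraped",
    permitted_purposes=frozenset(),  # nothing was agreed to by anyone
    legal_basis_doc=None,
)
try:
    check_use(scraped, "llm_training")
except PermissionError as err:
    print(err)  # web_scrape_2024: purpose 'llm_training' was never authorized
```

The point of the sketch is that the purpose attribute travels with the data itself, rather than living in a separate access-control layer keyed only to sensitivity.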

He further cautions that companies must now assess the risk of their current data collection methods: “All companies take risks… if a company gets that wrong, the repercussions are dramatic and are going to be seen in the next few years because of the consequences of data deletion orders. Imagine if OpenAI has an enforcement action on one small thing they scraped that violates a regulation. A tiny mistake that they make allows a regulator to set in motion a data deletion order on their entire algorithm. That pretty much shuts down the entire general purpose LLM.”

A recent article from Wired addresses this very issue. Lucie-Aimée Kaffee, an applied policy researcher at Hugging Face, pointed out that OpenAI’s system card for GPT-4o does not include extensive details on the model’s training data or who owns that data: “The question of consent in creating such a large dataset spanning multiple modalities, including text, image, and speech, needs to be addressed.”

AI4ALL is addressing the challenges of algorithms and LLMs by focusing on education and diversity in AI development. Reid stresses the importance of AI literacy: “I’m a huge believer in the need for AI literacy, and by that I don’t just mean students being competent in using AI tools… That’s not necessarily the same as being able to understand what the machine really is underneath and what is really inside of these algorithms.”

By educating a diverse group of future AI practitioners, AI4ALL aims to ensure that the development of LLMs and other AI technologies considers a wide range of perspectives and potential impacts. Reid notes: “If we don’t make some changes now around what the diversity of that cohort of technologists looks like and if we don’t make some changes around the industry standards around trustworthiness, human-centered AI, responsibility, ethics… then I think we’re on a really troubling trajectory.”

Gendered AI: The Ethical Implications of Anthropomorphized Voice Assistants

One area of concern, Reid warns, is increasing AI anthropomorphization, especially in relation to gender and voice assistants. She observes the complexity of human interactions with AI that can pass the Turing test, and the preference many users have for human-like AI interfaces. Reid raises concerns about the prevalence of female-coded voice assistants, stating, “The vast, vast majority of voice assistants tend to be female coded in some way, whether it be the name and the voice, or both. That, to me, is a really big concern, in part because it continues to put women in the stereotyped position of being a helper, or assistant.”

She suggests that while developers may choose female voices based on user preferences, this approach raises ethical questions: “There are chatbots which can provide advice on personal situations, and that raises questions about potential dependence on technology for emotional support. Who is legally liable if someone acts on this advice?”

The pace at which artificial intelligence is progressing, Reid notes, now surfaces debates about whether outsourcing emotional labor to chatbots is appropriate. Anthropomorphization leads humans to overestimate AI capabilities, endowing the system with human reasoning and trusting it with recommendations or tasks the technology is unequipped to handle. Users may accept AI-generated outputs with little scrutiny, increasing dependency and, over time, making it harder to distinguish human from computer-generated interactions.

Layering a female voice onto these interactions makes trust easier to establish, along with an expectation that the technology can somehow reason morally. As a Wired article reports, OpenAI introduced its voice mode and faced public criticism from Scarlett Johansson. A section of the system card, called “Anthropomorphization and Emotional Reliance,” highlights issues that emerge when users attribute human qualities to AI systems, a tendency that appears to be intensified by the AI’s human-like voice capabilities.

“Joaquin Quiñonero Candela, head of preparedness at OpenAI, says that voice mode could evolve into a uniquely powerful interface. He also notes that the kind of emotional effects seen with GPT-4o can be positive—say, by helping those who are lonely or who need to practice social interactions.”

Reid cautions, “I think we’re going to have much more nuanced conversations around what our default assistant voice should be, what kind of options are available to change that and being able to properly evaluate the risks and harms when it comes to privacy, ethical misconceptions, and the social and psychological impacts.”

Navigating the Challenges: Privacy, Ethics, And Governance

Despite their progress, OneTrust, AI4ALL, and PAI face ongoing obstacles in advancing their missions. The internal tension between profit maximization and human-centered outcomes persists, and it shapes investment and resource allocation in AI development.

Barday acknowledges the challenge of staying ahead of evolving privacy regulations: “The pace of innovation in AI is staggering, and it’s crucial that we continue to develop solutions that help companies comply with these regulations without stifling innovation.”

Reid highlights the barriers to inclusivity in AI, noting, “There is still much work to be done to ensure that AI is truly representative of our diverse society. This requires not just education but also systemic changes in how AI is developed and deployed.”

Reid draws a parallel between the current AI revolution and the early days of the Internet, suggesting that AI has the potential to be a significant equalizer in society. She acknowledges the transformative potential of AI while cautioning about its integration into existing societal structures: “While AI technologies offer opportunities for positive change, they don’t exist in a vacuum. They will inevitably be shaped by and integrated into our current political, social, and economic systems.”

Reid stresses the importance of intentional and proactive engagement with AI: “There is a lot of hope for where we can have AI go, but so much of it is going to be dependent on what we choose to do. We need to be thoughtful and strategic and active and hopeful about what we can change and how we can use the opportunity with these technologies to change some of those systems.”

Changing systems that may have worked for decades but have failed some members of society is an undertaking that previously might have seemed daunting. Finlay, CEO of Partnership on AI, agrees with Barday that accountability and innovation can coexist, but says it also requires a shift in mindset: “AI is not just a tool; it’s a paradigm shift. Business leaders need to recognize this and align AI with broader business goals. By fostering a culture of responsible AI development, even startups with limited resources can make significant strides. This includes encouraging continuous learning, staying updated on best practices, and prioritizing fairness and transparency in AI development.”

The Future of Responsible AI: The Path Forward

The convergence of ethics, trust, and inclusivity is essential for the responsible development of AI. Among stakeholders in the AI ecosystem, a mainstream understanding of AI’s critical impacts is beginning to take shape. Laws are catching up quickly, and big tech will no longer have the luxury of operating under the radar. Ethical advocates like OneTrust, AI4ALL, and Partnership on AI are building coalitions to bring the promise of AI to everyone.

Until now, Big Tech has dictated the future of AI. We still have time to undo what AI has done. Emily Reid poses the philosophical question: “AI will change the world. Who will change AI?”
