The ethics of innovation in generative AI and the future of humanity

AI has the potential to change the social, cultural and economic fabric of the world. Just as television, the cell phone and the internet sparked mass transformation, generative AI developments like ChatGPT will create new opportunities that humanity has yet to envision.

However, with great power comes great risk. It’s no secret that generative AI has raised new questions about ethics and privacy, and one of the greatest risks is that society will use this technology irresponsibly. To avoid that outcome, it’s critical that innovation does not outpace accountability: new regulatory guidance must be developed at the same pace that tech’s major players are launching new AI applications.

To fully understand the moral conundrums around generative AI — and their potential impact on the future of the global population — we must take a step back to understand these large language models, how they can create positive change, and where they may fall short.  

The challenges of generative AI

Humans answer questions based on our genetic makeup (nature), education, self-learning and observation (nurture). A machine like ChatGPT, on the other hand, has the world’s data at its fingertips. Just as human biases influence our responses, AI’s output is biased by the data used to train it. Because data is often comprehensive and contains many perspectives, the answer that generative AI delivers depends on how you ask the question. 

Generative AI is trained on vast stores of data, and users can “focus” its output through prompt engineering or programming to make responses more precise. That is harmless when the technology is used to suggest actions, but the reality is that generative AI can also be used to make decisions that affect human lives.
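To make this concrete, here is a minimal sketch of prompt engineering using the pre-1.0 OpenAI Python client. The model choice, prompt wording and placeholder API key are illustrative assumptions, not a reference implementation; the point is only that the framing of a request steers which perspectives the model surfaces.

```python
# A minimal sketch of prompt engineering: the same question, "focused" two ways.
# Assumes the pre-1.0 OpenAI Python client; model, prompts and key are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

QUESTION = "Should our hospital adopt AI-generated treatment plans?"

def ask(system_prompt: str) -> str:
    """Send the same user question under a different framing (system prompt)."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
    )
    return response.choices[0].message.content

# The framing biases the output, much as the article describes.
print(ask("You are an enthusiastic technology advocate."))
print(ask("You are a cautious medical ethicist. List the risks first."))
```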

For example, when using a navigation system, a human specifies the destination and the machine calculates the fastest route based on factors like road traffic data. But if the navigation system were asked to determine the destination, would its choice match the human’s desired outcome? And what if the human could not intervene and drive a different route than the one the system suggests? Generative AI is designed to simulate thought in human language based on patterns it has seen before, not to create new knowledge or make decisions. Using the technology for that kind of use case is what raises legal and ethical concerns.
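The navigation example can be read as a design pattern: the machine ranks options, but the destination and the final choice belong to a human. The short Python sketch below illustrates that assistive pattern; the routes, travel times and prompt text are invented for illustration.

```python
# A sketch of the assistive, human-in-the-loop pattern from the navigation example:
# the machine only ranks options; the human chooses and remains accountable.
# Routes and travel times are invented for illustration.

routes = {"highway": 32, "city streets": 41, "scenic road": 55}  # minutes

def suggest_routes(options: dict[str, int]) -> list[tuple[str, int]]:
    """Assistive step: rank routes by estimated time; decide nothing."""
    return sorted(options.items(), key=lambda item: item[1])

def human_chooses(ranked: list[tuple[str, int]]) -> str:
    """Accountability stays with the person, who may override the top suggestion."""
    for name, minutes in ranked:
        print(f"{name}: ~{minutes} min")
    return input("Pick a route: ")  # the human can still drive a different way

chosen = human_chooses(suggest_routes(routes))
print(f"Navigating via {chosen} (human-selected).")
```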

Use cases in action

Low-risk applications

Low-risk, ethically warranted applications will almost always focus on an assistive approach with a human in the loop, where the human has accountability.

For instance, if ChatGPT is used in a university literature class, a professor could draw on the technology to help students discuss the topics at hand and pressure-test their understanding of the material. Here, AI successfully supports creative thinking and expands students’ perspectives as a supplemental educational tool, provided students have read the material and can measure the AI’s simulated ideas against their own.

Medium-risk applications

Some applications present medium risk and warrant additional scrutiny from regulators, but the rewards can outweigh the risks when the technology is used correctly. For example, AI can recommend medical treatments and procedures based on a patient’s history and on patterns it identifies in similar patients. However, a patient acting on that recommendation without consulting a human medical expert could have disastrous consequences. Ultimately, the decision, and how the patient’s medical data is used, rests with the patient, but generative AI should not be used to create a care plan without proper checks and balances.

High-risk applications

High-risk applications are characterized by a lack of human accountability and by autonomous AI-driven decisions. For example, an “AI judge” presiding over a courtroom is unthinkable under our laws. Judges and lawyers can use AI to do their research and to suggest a course of action for the defense or prosecution, but when the technology moves into performing the role of the judge itself, it poses a different threat. Judges are trustees of the rule of law, bound by law and by their conscience, which AI does not have. There may be ways in the future for AI to treat people fairly and without bias, but for now, only humans can answer for their actions.

Immediate steps toward accountability 

We have entered a crucial phase in the regulatory process for generative AI, where applications like these must be considered in practice. There is no easy answer as we continue to research AI behavior and develop guidelines, but there are four steps we can take now to minimize immediate risk:

  1. Self-governance: Every organization should adopt a framework for the ethical and responsible use of AI within its company. Before regulation is drafted and enacted, self-governance can show what works and what doesn’t.
  2. Testing: A comprehensive testing framework is critical, one that follows fundamental rules of data consistency: detecting bias in data, requiring sufficient data for all demographics and groups, and verifying the veracity of the data. Testing for these biases and inconsistencies ensures that disclaimers and warnings are applied to the final output, just as a prescription medicine lists all potential side effects. Testing must be ongoing, not a one-time gate before a feature is released; a minimal sketch of one such check appears after this list.
  3. Responsible action: Human oversight remains important no matter how “intelligent” generative AI becomes. Routing AI-driven actions through a human filter keeps the use of AI responsible and keeps practices human-controlled and properly governed from the beginning.
  4. Continuous risk assessment: Determining whether a use case falls into the low-, medium- or high-risk category, which can be complex, helps identify the guidelines needed to ensure the right level of governance. A “one-size-fits-all” approach will not lead to effective governance.
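As a concrete illustration of step 2, here is a minimal sketch of one ongoing data-consistency check: flagging demographic groups that are underrepresented in training data so that disclaimers can be attached. The field names, sample records and 20% threshold are assumptions for illustration, not a prescribed standard.

```python
# A sketch of one ongoing test from step 2: flag demographic groups that fall
# below a minimum representation threshold in the training data.
# Field names, records and the 20% threshold are illustrative assumptions.
from collections import Counter

MIN_SHARE = 0.20  # hypothetical policy: every group supplies at least 20% of records

def representation_report(records: list[dict], field: str) -> dict[str, float]:
    """Return each group's share of the dataset for the given demographic field."""
    counts = Counter(record[field] for record in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def underrepresented(shares: dict[str, float]) -> list[str]:
    """Groups whose share falls below the policy threshold get flagged."""
    return [group for group, share in shares.items() if share < MIN_SHARE]

patients = [
    {"age_band": "18-34"}, {"age_band": "18-34"}, {"age_band": "35-64"},
    {"age_band": "35-64"}, {"age_band": "35-64"}, {"age_band": "65+"},
]  # rerun this check on every data refresh, not just once before release

flagged = underrepresented(representation_report(patients, "age_band"))
if flagged:
    print(f"Warning: attach disclaimers; underrepresented groups: {flagged}")
```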

ChatGPT is just the tip of the iceberg for generative AI. The technology is advancing at breakneck speed, and taking responsibility now will shape how AI innovations affect the global economy, among many other outcomes. We are at an interesting place in human history where our “humanness” is being questioned by the technology trying to replicate us.

A bold new world awaits, and we must collectively be prepared to face it.

Rolf Schwartzmann, Ph.D., sits on the Information Security Advisory Board for Icertis.

Monish Darda is the cofounder and chief technology officer at Icertis. 
