AI Ethics and Why It Matters

Editor’s note: ISACA is introducing new training courses on artificial intelligence, including a course on AI ethics. Geetha Murugesan, an AI expert who contributed to the course, recently visited with the ISACA Now blog to share her perspective on AI ethics and its implications for digital trust professionals and society more broadly. See the interview with Murugesan below, and find out more about ISACA’s additional new AI courses here.

ISACA Now: What interests you most about AI ethics?

AI is a game-changer. By using AI-based solutions, businesses can streamline processes, increase efficiency and reduce costs. AI technology is meant to augment or replace human intelligence, but when technology is designed to replicate human life, the same issues that can cloud human judgment can seep into the technology. AI ethics is the field of determining how to use the technology responsibly. To prevent AI from going rogue and slipping out of our control, we need to build ethics into AI.

ISACA Now: What are some of the topic’s most important implications?

AI ethics encompasses many areas at the intersection of technology and human values, including privacy, security, trust and bias, to name just a few.

  • How can humans trust AI? AI trust (frequently mentioned in the same breath as AI bias) is the area of ethics focused on the need for AI systems that are fair. Humans need to be convinced that decisions made by an AI are “fair” and do not inappropriately favor or disfavor particular groups. The problem is that fairness itself is a subjective concept that humans do not agree on, and where humans do not agree, a computer program cannot make them agree.
  • AI, privacy and human data. AI has demonstrated that personal data can be used for everything from recommending books to detecting diseases. But who has the right to say what is allowed and what is not? Laws like the GDPR are introducing clauses that empower individuals to control how organizations use their private data. AI algorithms should be designed to minimize the collection and processing of personal data and to ensure that the data is kept secure and confidential.
  • AI’s impact on our environment. As AI models grow larger and larger, the resources they consume grow as well. Various studies estimate that a single training run of a large AI model can emit as much carbon as five cars emit over their lifetimes, and these models require frequent retraining. On the other hand, AI shows promise in helping the environment and addressing climate change. AI techniques, particularly machine learning and deep learning, are employed to build predictive models that forecast weather conditions with improved accuracy. These models learn from historical weather data, including atmospheric pressure, temperature, humidity and wind patterns, to predict future conditions (see the sketch after this list).
  • The AI technology race. Every nation (and many organizations) realizes that its future competitive advantage lies in its citizens becoming AI literate. To gain an edge in AI technology, nations are investing in everything from data to compute infrastructure. What does this mean for the future of small economies vs. large ones? Will the larger economies amass so much AI knowledge and so many dataset resources that yet another gap is created?
  • AI and weapons. AI’s creative abilities are not limited to art and poetry; it can also propose hundreds of potential chemical weapons. The advent of generative AI has led to booming interest in the development of its applications, with countries investing heavily in AI research and development (R&D), especially in the military domain. One particularly disturbing consequence has been recent advances in the development of autonomous weapons systems (AWS). While fully autonomous weapons have yet to materialize, continued advancements in the military applications of AI may make them a reality sooner rather than later. Technological advances in the military domain often end up enhancing non-state actor capabilities, particularly when they offer a low barrier to entry. AWS can reduce, or altogether eliminate, the physical dangers of terrorism for the perpetrators, while providing increased anonymity and accessibility. Terrorists will no longer need to be physically present to conduct an attack, and it will be extremely difficult to identify the operator of an AWS. That said, some of these capabilities are already available to terrorists via manually operated drones; for instance, Yemen’s Houthi rebels have employed this tactic to carry out attacks in the Red Sea. What sets AWS apart is that they are potentially invulnerable to traditional countermeasures such as jamming. Additionally, they offer the possibility of force multiplication, since they do not necessarily require continuous human intervention; swarm drones are a case in point. And while the engineering required for such endeavors is not yet mature, even rudimentary autonomous drones working in tandem could have disastrous consequences.
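To make the weather-forecasting point above concrete, here is a minimal sketch of the kind of predictive model described in that bullet: a regressor trained on historical weather features to estimate the next day's temperature. It assumes pandas and scikit-learn are available; the file name, column names and model choice are illustrative placeholders, not a reference implementation.

    # Minimal sketch: learn from historical weather data (pressure, temperature,
    # humidity, wind) to predict the next day's temperature.
    # "historical_weather.csv" and its column names are hypothetical.
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    data = pd.read_csv("historical_weather.csv")
    features = data[["pressure", "temperature", "humidity", "wind_speed"]]
    target = data["next_day_temperature"]

    # Hold out a test split to estimate how well the model generalizes.
    X_train, X_test, y_train, y_test = train_test_split(
        features, target, test_size=0.2, random_state=42
    )

    model = RandomForestRegressor(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)

    predictions = model.predict(X_test)
    print(f"Mean absolute error: {mean_absolute_error(y_test, predictions):.2f} degrees")

A real forecasting pipeline would involve far more data, feature engineering and validation; the point is simply that such models are standard supervised learning applied to historical observations.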

ISACA Now: How do you see AI ethics intersecting most with professionals in ISACA’s fields of interest: audit, risk, governance, etc.?

While AI brings unprecedented opportunities to businesses, it also brings incredible responsibility. Its direct impact on people’s lives has raised considerable questions around AI ethics, data governance, trust, security and privacy.

IT auditors today are not fully equipped to handle the complexities of AI. There are no agreed-upon regulations or standards that govern how to audit AI systems, which particularly hinders the production and use of ethical AI. It therefore falls to AI auditors to perform ethics-based AI auditing, which entails analyzing the rationale behind an AI system, the code that implements it and the effects it brings about.
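As one illustration of what an effects-focused audit step can look like, the sketch below computes per-group selection rates and a disparate impact ratio from a system's decision log. The decision records, column names and the four-fifths threshold are illustrative assumptions for this example, not prescribed audit criteria.

    # Minimal sketch: auditing an AI system's outcomes for disparate impact
    # across a protected attribute. The records below are hypothetical.
    import pandas as pd

    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   1,   0,   0,   0],
    })

    # Selection rate per group: the share of favorable outcomes.
    selection_rates = decisions.groupby("group")["approved"].mean()

    # Disparate impact ratio: lowest selection rate divided by highest.
    # A common rule of thumb (the four-fifths rule) flags ratios below 0.8.
    ratio = selection_rates.min() / selection_rates.max()
    print(selection_rates)
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:
        print("Flag for review: outcomes differ substantially across groups.")

In a real engagement, the same check would run on the system's actual decision records and be complemented by qualitative review of the model's purpose, data sources and documentation.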

AI risk management is a key component of responsible development and use of AI systems. Responsible AI practices can help align the decisions about AI system design, development and uses with intended AI values. Core concepts in responsible AI emphasize human-centricity, social responsibility and sustainability. AI risk management can drive responsible uses and practices by prompting organizations and their internal teams who design, develop and deploy AI to think more critically about context and potential unexpected negative and positive impacts.

Given the above challenges, IT audit, risk management and governance professionals must adopt a more focused approach as they perform their respective roles.

ISACA Now: What is an aspect of ISACA’s new course on this topic that you think learners will find especially valuable?

“Ethics in AI,” a new course offered by ISACA to the global community, is designed to provide guiding principles that stakeholders, from engineers and business stakeholders to regulatory, legal and government officials, can use to ensure artificial intelligence technology is developed and used responsibly. The course aims to help participants understand AI ethics considerations and guiding principles so they can:

  • Understand the ethical issues raised by AI technologies.
  • Apply ethical principles to real-world AI scenarios.
  • Analyze the social and political implications of AI.
  • Communicate effectively about AI ethics with a variety of audiences.

ISACA Now: How will AI ethics knowledge help individuals stand out in their enterprises or with a future employer?

AI is revolutionizing the workplace by automating repetitive tasks, reducing costs and increasing efficiency. Challenges arise, however, as AI poses potential threats to privacy, the hiring process, job security and the never-ending fight against misinformation.

There is increasing demand for AI skills and a need for more AI talent. But at the heart of the AI revolution are foundational tech skills, such as:

  • Data science skills
  • Data analytics skills
  • Python programming skills
  • Machine learning skills
  • Software engineering
  • Computer science skills, like front-end web development

Think of these skills as the base on which individuals can build more advanced technical skills, and as what enables them to adapt to, navigate and contribute to today’s tech-driven work environment.
