Planck’s Leandro DalleMule Discusses the Ethics of AI in Insurance

One of the most difficult areas for regulators trying to determine whether AI is being used ethically is the issue of transparency.

Dan Reynolds, the editor-in-chief of Risk & Insurance, recently had a conversation with Leandro DalleMule, the global head of insurance at Planck. The two discussed the ethical concerns of using artificial intelligence in commercial insurance, the role of a code of conduct, and the potential risks associated with AI technology.

What follows is a transcript of that conversation, edited for length and clarity.

Risk & Insurance: What are the chief ethical concerns presented by the use of artificial intelligence in commercial insurance?

Leandro DalleMule: The top concern we’re seeing with our customers, in the media, and especially in insurance and financial services is bias and discrimination. All AI models, from generative AI to standard linear regression, are trained with data.

This data can be raw, tagged by humans, or even created artificially by another AI model to meet those vast data requirements. The use of synthetic data concerns me; I learned during my time in banking risk management that anything labeled “synthetic” in those markets is typically not good.

The main issue is how these models are being trained and the creation of data to train them. It becomes increasingly difficult to assess whether the AI creating the data is unbiased, and there’s a risk of intensifying or magnifying any discrimination embedded in those models.

R&I: What other ethical concerns do you see with AI beyond bias?

LD: Transparency is the second major concern that comes to mind. In a linear regression model, you have a clear understanding of the variables being used. However, with complex AI models, it becomes incredibly difficult to open the black box and understand the variables at play.

Take GPT-4, for example. Although the exact number isn’t published, it’s estimated to have 1.7 trillion parameters. That’s an astronomical number of adjustable “knobs” within the model. Even smaller models like Falcon 180B, named for its 180 billion parameters, are still incredibly complex.

From a regulatory perspective, understanding the variables used in these models is crucial, regardless of how they were trained. The sheer complexity of modern AI makes transparency a significant challenge that needs to be addressed.
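To make the contrast DalleMule draws concrete, here is a minimal Python sketch (using NumPy and scikit-learn, with entirely hypothetical underwriting features and synthetic data, not anything from Planck) of why a linear regression is easy to audit: every input maps to one coefficient a reviewer can read directly, whereas a model with billions of parameters offers no comparable line of sight.

    # Minimal sketch: a linear model's entire "reasoning" is a handful of
    # named coefficients. Feature names and data here are hypothetical.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(seed=0)
    feature_names = ["annual_revenue", "employee_count", "prior_claims", "years_in_business"]

    # Synthetic book of 500 hypothetical small-business risks.
    X = rng.normal(size=(500, len(feature_names)))
    true_weights = np.array([0.8, 0.3, 1.5, -0.4])
    y = X @ true_weights + rng.normal(scale=0.1, size=500)  # hypothetical loss-cost signal

    model = LinearRegression().fit(X, y)

    # A regulator can see exactly how much each variable moves the prediction.
    for name, coef in zip(feature_names, model.coef_):
        print(f"{name:>20}: {coef:+.3f}")

A reviewer can audit those four numbers in seconds; there is no equivalent exercise for the 1.7 trillion parameters attributed to GPT-4.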

R&I: What are the key concerns for regulators when it comes to AI and machine learning in the insurance industry?

LD: Transparency is a critical concern for regulators. With the rapid advancement of AI models, it becomes increasingly difficult to discern their inner workings, especially for the larger models with billions of parameters.

Data security is another significant issue. When uploading data to these AI systems, there are risks of sensitive information being exposed or misused. In the insurance context, this could include personal details of policyholders or proprietary business information.

Job displacement is also a growing concern. Reports estimate that over 20% of insurance processes will be impacted by automation, potentially leading to reduced job opportunities. However, there is also the potential for job augmentation, where roles evolve and adapt to work alongside AI.

This transformation is expected to happen much faster than the industrial revolution, where machines replaced muscle power. Now, we are witnessing the replacement of brains, with AI capable of handling complex cognitive tasks. The full implications of this rapid change in how we live and work are yet to be determined.

R&I: What role can a code of conduct play in preventing bias against small and medium-sized businesses in the insurance industry?

LD: Addressing bias against small and medium-sized businesses in a code of conduct is complex, particularly because of the overlap with personal lines and data. For instance, a one-person contracting shop with a pickup truck blurs the line between personal and small business, raising concerns about confidentiality and personally identifiable information.

Recent developments in the code of conduct go beyond model creation and focus on model usage. The incident with Google’s Gemini image generation highlighted another layer of concern: intentional bias introduced by developers in the output, regardless of the training data.

In this case, the bias or discrimination was manually forced into the model, reflecting the developer’s own ideas rather than inherent data bias. Addressing such intentional bias in a code of conduct poses significant challenges that we’ll need to tackle.

R&I: What are the potential risks associated with the development and use of large language models, particularly in the context of small business insurance?

LD: The development of new AI models, especially generative AI, requires vast amounts of data and computational power, which can cost $5 billion to $7 billion. As a result, the creation of these models is largely in the hands of big players like Google, Meta, Microsoft and OpenAI.

The concern lies in who checks these companies and their models. If we all rely on a version of their model, there’s a risk that someone could force changes into it, such as altering historical data in subtle ways that are not immediately obvious.

In the context of small business insurance, if changes are made to the foundational model with the intention of forcing a particular outcome, it could have serious downstream consequences that are difficult to detect and fix. This is why having a code of conduct for the companies developing these models is crucial: It sets a standard for the responsible development and use of AI technology.

R&I: How do the capabilities of massive technology companies entering the insurance industry impact the potential for bias, considering the specialized nature of the field?

LD: The potential for bias is very clear when massive technology companies bring their tools into the insurance industry. While many industries and professions are specialized, insurance is particularly so, and the introduction of these tools by tech giants raises concerns about the possibility of extensive bias.

To address this, human oversight is crucial. A code of conduct must be put in place to ensure that the specialized nature of insurance is respected and that the potential for bias is mitigated. The stakes are now very clear, and it’s essential that we take steps to maintain the integrity of the insurance industry in the face of these technological advancements.

R&I: What is the most effective way for the insurance industry to implement ethical oversight of AI systems, given the challenges of scale and complexity?

LD: It’s a good question, and I’ve heard some proposed solutions that simply don’t make sense, such as reviewing everything these models are doing in a coordinated fashion worldwide. There are not enough human beings on the planet to do that, considering a single query can involve trillions of data points.

A practical approach that seems to be emerging — although it’s still early days — is a “human-in-the-loop” process. My best analogy for supervising these models is to think of AI as a very smart intern — a genius intern who has memorized every book they’ve read and knows everything about the business you’re trying to underwrite.

However, like an intern, if you do not ask the right questions and give explicit directions, they’re not going to give you what you want and will make mistakes. The best way to apply human oversight is to think of AI as a copilot or assistant that requires human guidance.

It’s crucial to keep the human element there for oversight. The AI may have impressive capabilities, but it still needs to operate under human supervision and direction to ensure it’s being used ethically and effectively.
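As an illustration of the “copilot” pattern DalleMule describes, the short Python sketch below gates every AI recommendation behind an underwriter’s approval. The names used here (Recommendation, request_human_review, decide) are hypothetical, assumed only for this example; this is a sketch of the general idea, not Planck’s workflow.

    # Illustrative human-in-the-loop gate: the model drafts a recommendation,
    # but nothing is bound until a human underwriter signs off.
    from dataclasses import dataclass

    @dataclass
    class Recommendation:
        account_id: str
        suggested_premium: float
        confidence: float
        rationale: str

    def request_human_review(rec: Recommendation) -> bool:
        """Stand-in for a real review queue; here we simply prompt on the console."""
        print(f"[REVIEW] {rec.account_id}: ${rec.suggested_premium:,.0f} "
              f"(confidence {rec.confidence:.0%})")
        print(f"  rationale: {rec.rationale}")
        return input("Approve? [y/N] ").strip().lower() == "y"

    def decide(rec: Recommendation) -> str:
        # Every recommendation goes to a human: the AI acts as the "genius
        # intern," never the final decision-maker.
        return "bound" if request_human_review(rec) else "referred back to underwriting"

    if __name__ == "__main__":
        rec = Recommendation("ACME-001", 12_500.0, 0.87, "low prior claims, stable revenue")
        print(decide(rec))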

R&I: What are the potential risks if insurance professionals rely too heavily on AI and abandon their own experience and discernment in the underwriting process?

LD: There are two main problems that can arise when relying too heavily on AI. First, while some individuals, like my son, may be skilled at crafting effective prompts for AI, there is a concern that they may not develop the necessary cognitive abilities to critically analyze and interpret the AI’s output.

This leads to the second issue, which is a chicken-and-egg problem. If future underwriters do not gain the experience needed to discern the quality and accuracy of the AI’s recommendations, they will struggle to judge whether the AI’s output is good or bad.

The challenge lies in effectively training these professionals if they become increasingly reliant on AI without developing their own skills and expertise.

R&I: What are the risks to businesses associated with AI-generated imagery?

LD: There are two significant risks when it comes to AI-generated imagery. The first risk is the potential for defects in the generated images. AI systems can sometimes produce images with flaws or inaccuracies that may not be immediately apparent.

The second risk is the potential for misuse or abuse of the technology. AI-generated imagery could be used to create misleading or fraudulent content, such as deepfakes or manipulated images. This can have serious consequences for businesses, particularly in terms of reputation and trust.

It’s crucial for businesses to be aware of these risks and to implement appropriate safeguards and controls when using AI-generated imagery. This may include rigorous testing and validation of the AI systems, as well as clear policies and guidelines around the use and dissemination of the generated images.

R&I: What can you tell me about the recent video generation tool launched by OpenAI?

LD: OpenAI launched its video generation tool, Sora, a few weeks ago. This tool represents a significant advancement in AI-generated content creation, specifically in the realm of video.

By leveraging cutting-edge machine learning algorithms, the tool can generate realistic and coherent video sequences based on user-provided prompts or descriptions. This opens up a wide range of possibilities for content creators, marketers and artists looking to streamline their video production processes.

However, as with any powerful technology, it also raises important questions about the potential for misuse and the need for responsible deployment. As the technology continues to evolve, it will be crucial to address these concerns and establish guidelines to ensure its ethical and beneficial use.

R&I: What are your thoughts on the potential implications of advanced AI technologies like deepfakes on the insurance industry and society as a whole?

LD: The rapid evolution of AI technology, particularly in the realm of deepfakes, is a growing concern that extends beyond the insurance industry. With tools like Sora, users can generate realistic movies and videos from simple text prompts, which has obvious implications for industries like Hollywood.

In the near future, it may become nearly impossible to distinguish between real and AI-generated videos. This could have serious consequences, such as fabricating evidence of crimes or manipulating business-related videos. While some suggest watermarking real data, I’m skeptical about the effectiveness of this approach, as skilled programmers or hackers could likely remove these watermarks with ease.

Deepfakes have already been used in humorous or misleading ways, such as cloning the voices of public figures like presidents. However, as the technology continues to advance at an unprecedented pace, it’s becoming increasingly difficult to control and poses a significant challenge for society. Despite the efforts made thus far, I have yet to see a feasible solution to address this growing problem.

R&I: What is your perspective on the need for a universal code of ethics to govern AI in the insurance industry?

LD: I believe that standardizing a universal code of ethics across all industries is impossible due to the varying requirements and standards of each sector. For instance, the ethical considerations in insurance and financial services differ significantly from those in entertainment or travel.

However, I strongly advocate for the insurance industry to collaborate with the AI industry and regulators to establish a code of conduct specific to our sector. This partnership is crucial to prevent actors from imprinting their own biases into models without our knowledge or consent when utilizing them.

By implementing an industry-specific code of conduct, insurance companies can have greater assurance that they are not subject to flawed decision-making processes resulting from unethical AI practices.

R&I: What impact do you foresee state-by-state insurance regulations, such as the recent New York circular letter, having on the use of AI in insurance underwriting processes?

LD: The New York State insurance circular letter issued on January 17th addresses several important aspects of AI usage in insurance, including biases, ethical concerns and transparency. While it’s a step in the right direction, providing guidance for insurance companies using AI in their underwriting processes, the circular is still quite high-level.

The intentions behind the circular are commendable, but the practical implementation remains a challenge. Insurance companies are now tasked with figuring out how to adhere to these guidelines while leveraging AI technology effectively.

As more states follow suit with similar regulations, it will be crucial for insurers to navigate this evolving landscape. They must strike a balance between compliance and innovation to harness the benefits of AI while mitigating potential risks and ethical concerns.

R&I: What challenges do you foresee in crafting and enforcing AI regulations, particularly when it comes to measuring compliance and identifying red flags?

LD: Enforcing AI regulations is a complex issue, even if we have a clear goal and approach it on a state-by-state basis, as we do in the U.S. The question remains: How do you measure and enforce compliance?

For instance, if there’s a requirement for transparency in AI models, how do you actually enforce that? What are the metrics for measuring transparency, and what are the red flags that indicate non-compliance? These are questions that, to my knowledge, have not been adequately addressed anywhere yet.

Developing effective methods for measuring and enforcing AI regulations will be a significant challenge moving forward. It will require collaboration between policymakers, industry experts and researchers to establish clear guidelines and robust enforcement mechanisms.

R&I: What challenges arise when using AI to check AI?

LD: Using AI to check AI can lead to complicated situations, particularly when both systems belong to clients. If the evaluating AI detects a problem with the client’s AI, it puts us in a difficult position.

On one hand, we have a responsibility to report any issues found to maintain the integrity and reliability of the AI systems. However, doing so may strain the relationship with the client whose AI is under scrutiny.

It’s a delicate balance between upholding technical standards and maintaining client trust. In such cases, clear communication and a collaborative approach to problem-solving are crucial to navigating these challenges while preserving the client relationship.

Dan Reynolds is editor-in-chief of Risk & Insurance. He can be reached at [email protected].
