
Innovating with AI in Regulated Industries

Artificial intelligence (AI) is particularly effective at improving efficiency and user experience in scaled, repetitive, data-rich processes—common in regulated industries like banking and insurance. Although these industries could benefit from AI, they have been slow to adopt it due to regulatory compliance and data privacy concerns.

I was the CIO at a leading insurance company several years ago. We used AI to improve internal efficiency, customer experience, and data protection while building trust with customers and regulators. Here are the lessons I learned and my suggestions for adopting AI in a regulated industry.

Why Now?

While the pace of change has been increasing across industries, insurance feels like it’s just catching up. Generative AI is raising customer expectations, and regulators are preparing for the industry to adopt it. The time for “waiting and watching” ends now.

Consider life and health underwriting. This core insurance process had stayed largely the same for decades, demanding hundreds of customer data points and days or weeks of lead time before an applicant got coverage. My former company and its reinsurer developed a new AI-supported customer purchase process. It assessed risk in real time by asking questions dynamically and could underwrite two-thirds of cases within 30 minutes. The customer walked away, covered and happy. We booked revenues faster while our underwriting colleagues expedited the remaining third of complex, human-required cases.

We made an even bigger impact on claims. By using AI to reimagine the process, we achieved 60 percent straight-through processing. A customer could upload a hospital bill and receive payment within five minutes—a process that historically took weeks. We also reduced claims leakage of nonpayable items and improved fraud and misclaim detection.

Getting AI Regulation-Ready

AI can benefit the insurance industry, but its application needs to meet customer and regulatory expectations of fairness, ethics, transparency and accountability.1 

  • Fairness: AI models are trained on debiased data so that decisions do not systematically disadvantage individuals or groups.
  • Ethics: AI solutions reflect organizational values and codes of conduct.
  • Transparency: Decisions made by AI models are explainable.
  • Accountability: Leadership takes ownership of AI-generated decisions, ensuring fairness, ethics, and transparency. 

Meeting these expectations can be challenging. For example, “black box” models, like deep-learning algorithms that ingest many diverse inputs to generate answers, lack transparency. How can you harness AI’s power while meeting regulatory needs?

Empowered Teams

Implementing AI in insurance requires reimagining processes, so you need a team with diverse expertise that is empowered to transform the customer experience. Include insurance experts, data scientists, AI specialists, legal and compliance professionals, finance and risk specialists, user experience designers, salespeople, and agents. This team needs to question the status quo, work across silos, and test new approaches to create technically sound, compliant solutions and raise the bar for customer experience.

Explainable Chain of Decisions

To address black box concerns, break down decision-making into explainable steps that can be automated using rules or simple AI/ML models (e.g., decision trees or random forests). In our claims automation, a chain of models extracted data from invoice scans and then classified diagnoses, treatment paths, policy eligibility, and coverage exceptions. We could explain each step in the chain and turn it on or off depending on prediction confidence levels.
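
As a rough illustration of this pattern, here is a minimal Python sketch of a chained, confidence-gated claims pipeline. The step names, fields, and thresholds are hypothetical assumptions for illustration, not the production system described above.

```python
# Minimal sketch of a chained, explainable claims pipeline.
# Step names, fields, and thresholds are illustrative assumptions,
# not the production system described in the article.
from dataclasses import dataclass, field

@dataclass
class StepResult:
    label: str          # the step's decision (e.g., a diagnosis code)
    confidence: float   # the model's confidence in that decision
    explanation: str    # a human-readable reason for the decision

@dataclass
class ClaimDecision:
    automated: bool = True
    steps: list = field(default_factory=list)

# Each simple model in the chain must clear its own confidence bar.
CONFIDENCE_THRESHOLDS = {
    "extract_invoice": 0.95,
    "classify_diagnosis": 0.90,
    "check_eligibility": 0.98,
}

def run_pipeline(claim, models):
    """Run each explainable step in order; hand the claim to a human
    assessor as soon as any step falls below its confidence threshold."""
    decision = ClaimDecision()
    for step_name, model in models:
        result = model(claim)  # each model returns a StepResult
        decision.steps.append((step_name, result))
        if result.confidence < CONFIDENCE_THRESHOLDS.get(step_name, 0.90):
            decision.automated = False  # route to a claims assessor
            break
    return decision
```

Because every step records its own label, confidence, and explanation, reviewers can see exactly what each model decided and where a claim left the automated path.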

Strong Governance and Controls

AI requires a comprehensive governance framework. This includes setting up an AI risk committee, adding AI experts to compliance teams, and creating systems to constantly monitor and validate AI models. You must regularly check for potential biases—AI models are only as good as the data they are trained on.2
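
One concrete form such monitoring can take is a periodic fairness report. The sketch below, which assumes hypothetical column names and a 10-percentage-point review threshold not taken from the article, compares automated approval rates across groups and flags large gaps for the risk committee.

```python
# Hypothetical bias check: compare automated approval rates across groups
# and flag large gaps for the AI risk committee to review.
# Column names and the 10-point threshold are illustrative assumptions.
import pandas as pd

def approval_rate_gap(decisions: pd.DataFrame,
                      group_col: str = "age_band",
                      approved_col: str = "approved") -> pd.Series:
    """Return each group's approval rate and warn if the rates diverge widely."""
    rates = decisions.groupby(group_col)[approved_col].mean()
    if rates.max() - rates.min() > 0.10:  # more than 10 percentage points apart
        print(f"Review needed: approval rates vary widely by {group_col}:\n{rates}")
    return rates
```

A check like this is no substitute for a full fairness review, but it gives the governance team a recurring, auditable signal rather than a one-off assessment.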

Manage complexity and high stakes by taking a measured approach to AI adoption. This allows for careful testing, improvement, and buy-in from stakeholders at each stage. I recommend a simple process:

  1. Start small: Automate basic, repetitive tasks like extracting data from documents or initially sorting claims. These activities can be easily validated and yield quick results, building confidence in AI capabilities.
  2. Use human-in-the-loop review: Instead of fully automating complex processes, use AI to support human decision-making. AI can provide recommendations and insights to claims assessors, who make the final decisions.
  3. Create feedback loops: Include a dashboard that tracks performance for each step and confidence limits on every decision. Team leaders can compare AI recommendations against human decisions and identify areas for improvement (a minimal sketch of such a feedback log follows this list).
  4. Expand gradually: As confidence and capabilities grow, expand the straight-through processing criteria while maintaining oversight, so human experts focus on the cases where the models still need more training.
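
As a minimal sketch of steps 2 and 3, the code below logs each AI recommendation next to the assessor's final decision and reports per-step agreement. The field names and the idea of a 90 percent review threshold are assumptions for illustration.

```python
# Hypothetical human-in-the-loop feedback log: store the AI recommendation
# beside the assessor's final decision so team leads can track agreement
# per step and decide where to expand or restrict automation.
from collections import defaultdict

feedback_log = defaultdict(list)  # step name -> list of (ai, human) pairs

def record_review(step: str, ai_recommendation: str, human_decision: str) -> None:
    feedback_log[step].append((ai_recommendation, human_decision))

def agreement_rate(step: str) -> float:
    """Share of cases where the assessor confirmed the AI recommendation."""
    pairs = feedback_log[step]
    if not pairs:
        return 1.0
    return sum(ai == human for ai, human in pairs) / len(pairs)

# A dashboard could surface any step whose agreement drops below, say, 90 percent,
# signalling where models need retraining before straight-through processing expands.
```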

Moving Forward

As the insurance industry adopts AI, success will come to organizations that innovate while staying compliant. You can unlock AI’s potential while maintaining customer and regulator trust by building the right teams, choosing appropriate AI methods, implementing strong governance systems, and using innovative and secure cloud technologies.

In this AI-driven era, success won’t come to those with the most advanced algorithms but to those who integrate AI into their operations while retaining the human touch and ethical standards central to the insurance industry.

Links

  1. Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore’s Financial Sector
  2. Your AI is Only as Good as Your Data by Tom Godden
