
AI is a data-breach time bomb, reveals new report

AI is everywhere. Copilots help employees boost productivity, and agents provide front-line customer support. LLMs enable businesses to extract deep insights from their data.

Once unleashed, however, AI acts like a hungry Pac-Man, scanning and analyzing all the data it can grab. If AI surfaces critical data where it doesn’t belong, it’s game over. Data can’t be unbreached.

And AI isn’t alone — sprawling cloud complexities, unsanctioned apps, missing MFA, and more risks are creating a ticking time bomb for enterprise data. Organizations that lack proper data security measures risk a catastrophic breach of their sensitive information.

To quantify AI’s impact on data risk, Varonis produced the State of Data Security Report: Quantifying AI’s Impact on Data Risk. Download the full report and continue reading to learn about the latest risks to data.

Researchers set out to capture the human-to-machine risk factors: how much sensitive data (employee salaries, R&D information, source code, and more) copilots and agents can access or expose with one prompt.

Researchers also looked at machine-to-machine risks: the integrity of the data used to feed LLMs. Corrupted or manipulated data can have disastrous downstream effects. Manipulated clinical data could undermine the development of a breakthrough medicine, and bad actors can silently embed malicious code within LLMs.

Our team analyzed data from 1,000 real-world IT environments and found that no organization was breach-proof.

In fact, 99% of organizations have exposed sensitive data that can easily be surfaced by AI.


A data risk deep dive

Varonis analyzed data from 1,000 data risk assessments to provide empirical evidence of risk, not conclusions based on AI readiness surveys and polls.

The State of Data Security Report deep dives into the data risks associated with AI, cloud environments, and some of the most popular SaaS apps and services, such as Microsoft 365, AWS, Snowflake, Box, Salesforce, and many others. 

Varonis found:

  • 99% of organizations have sensitive data unnecessarily exposed to AI tools
  • 90% of sensitive cloud data, including AI training data, is open and accessible to AI tools
  • 98% have unverified apps, including shadow AI, within their environments
  • 1 in 7 do not enforce MFA across SaaS and multi-cloud environments
  • 88% have ghost users lurking in their environments

Taken together, researchers found that no organization was fully prepared for AI. All 1,000 organizations examined were at risk of a breach in the AI era.

“AI is shining a light on data risk,” said Matt Radolec, Varonis VP of Incident Response and Cloud Operations. “Organizations are adopting AI without fully understanding the permissions models, and that means data can be exposed unintentionally to employees, other users, and even externally.”

The report outlines three ways organizations can take proactive steps to secure their data for AI:

  1. Reduce your blast radius by proactively decreasing the damage attackers can do with a stolen identity.
  2. Continuously monitor your data, automate access governance and posture management, and employ proactive threat detection (a rough sketch of one such automated check follows this list).
  3. Use AI and automation: IT and security teams can harness them to remediate issues and vulnerabilities.
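
The first two steps lend themselves to automation. As a minimal illustration, the Python sketch below flags sensitive files whose sharing scope is broader than policy allows, the kind of check an access governance workflow would run continuously. It uses a hypothetical inventory of permission records rather than any particular vendor's API; the field names, scopes, and threshold are assumptions for the example.

```python
from dataclasses import dataclass

# Hypothetical record of a file and who can reach it. In practice this data
# would come from a permissions scan of your storage or SaaS platform.
@dataclass
class FileRecord:
    path: str
    is_sensitive: bool        # e.g., flagged by a data classification scan
    accessible_to: set[str]   # identities or scopes with access

# Example scopes that make data reachable by copilots, agents, or outsiders.
BROAD_SCOPES = {"org-wide", "anyone-with-link", "external-guests"}

def find_overexposed(files: list[FileRecord], max_identities: int = 25) -> list[FileRecord]:
    """Return sensitive files whose access list exceeds the allowed blast radius."""
    flagged = []
    for f in files:
        if not f.is_sensitive:
            continue
        shared_too_broadly = bool(f.accessible_to & BROAD_SCOPES)
        too_many_identities = len(f.accessible_to) > max_identities
        if shared_too_broadly or too_many_identities:
            flagged.append(f)
    return flagged

if __name__ == "__main__":
    inventory = [
        FileRecord("finance/salaries-2025.xlsx", True, {"org-wide", "hr-team"}),
        FileRecord("eng/roadmap.md", True, {"eng-leads"}),
        FileRecord("marketing/logo.png", False, {"anyone-with-link"}),
    ]
    for record in find_overexposed(inventory):
        print(f"Over-exposed sensitive file: {record.path} -> {sorted(record.accessible_to)}")
```

In a real environment, the inventory would be populated from classification and permission scans, and flagged items would feed remediation workflows that remove broad sharing links or tighten group membership before an AI tool can surface the data.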

Varonis’ analysis revealed that the vast majority of organizations, regardless of size or sector, struggle to maintain robust data security practices. As AI continues to evolve, it is crucial for organizations to prioritize data protection and implement effective security measures.

Download the State of Data Security Report: Quantifying AI’s Impact on Data Risk.

Sponsored and written by Varonis.

