âWe can no longer talk about high-level principles,â says Microsoftâs Ram Shankar Siva Kumar. âShow me tools. Show me frameworks.â
Generative artificial intelligence systems carry threats new and old to MSSPs, but practices and tools are emerging for them to meet customersâ concerns.
Ram Shankar Siva Kumar, head of Microsoftâs AI Red Team and co-author of a paper published Monday presenting case studies, lessons and questions on the practice of simulating cyberattacks on AI systems, told CRN in an interview that 2025 is the year customers will demand specifics from MSSPs and other professionals around protection in the AI age.
âWe can no longer talk about high-level principles,â Kumar said. âShow me tools. Show me frameworks. Ground them in crunchy lessons so that, if Iâm an MSSP and Iâve been contracted to red team an AI system ⦠I have a tool, I have examples, I have seed prompts, and Iâm getting the job done.â
[RELATED: The AI Danger Zone: âData Poisoningâ Targets LLMs]
Microsoft AI Red Teaming
Wayne Roye, CEO of Staten Island, N.Y.-based MSP Troinet, told CRN in an interview that Microsoftâs security tools present a big opportunity for his company in 2025, especially tools for data governance to take advantage of the growing popularity of AI.
âPeople are a lot more conscious of what they need to do to make sure, A, not only a breach ⦠but I also have internal people that may be able to access things theyâre not supposed to. And itâs not only a security issue. Itâs an operational issue.â
The paper by Kumar and his team, titled âLessons From Red Teaming 100 Generative AI Products,â presents eight lessons and five case studies acquired from simulated attacks involving copilots, plugins, models, applications and features.
Microsoft isnât new to sharing its AI safety expertise with the larger community. In 2021, it released the Counterfit open-source automation tool for testing AI systems and algorithms. Last year, Microsoft released the Pyrit (pronounced âpirate,â short for Python Risk Identification Toolkit) open-source automation framework for finding risks in GenAI systems in another example of the vendorâs community-minded work improving security in AI.
Among the lessons Microsoft provides professionals in this latest paper is understanding what AI systems can do and where they are applied.
Threat actors donât need to compute gradients to break AI systems, with prompt engineering having the potential to cause damage, according to the paper. AI red teams canât rely on safety benchmarks for novel and future AI harm categories. And teams should look to automation to cover more of the risk landscape.
Red teams should look to subject matter experts for assessing content risk and account for modelsâ riskiness in one language compared with another, according to the paper. Responsible AI harms are subjective and difficult to measure. And securing AI systems is never a completed process, with system rules potentially changing over time.
The case studies Kumarâs team presents in the paper include:
- Jailbreaking a vision language model to generate hazardous content
- Using LLMs to automate scams
- Gender bias in a text-to-image generator
- Server-side request forgery (SSRF) in a video-processing GenAI application
Security professionals will see that AI security and red teaming come with new tactics and techniques, but familiar methods and practices donât go away in the AI era, Kumar said.
âIf you are not patching or updating the library of an outdated video processing library in a multi-modal AI system, an adversary is not going to break in. Sheâs going to log in,â he said. âWe wanted to highlight that traditional security vulnerabilities just donât disappear.â