
Vytautas Kaziukonis is Surfshark’s Founder and CEO.
AI is transforming industries, workflows and customer expectations. For business owners, the rise of generative AI brings both exciting opportunities and serious risks.
In this article, I’ll examine how generative AI is reshaping the cybersecurity landscape and ask a key question: Which side of this double-edged sword will we land on?
Generative AI: Advantages Versus Cybersecurity Risks
Most businesses already use platforms like ChatGPT, Midjourney or DALL-E. All of these platforms are powered by generative AI—the type of AI that can create content such as text, images, audio, video or even code based on patterns it has learned from existing data.
Unlike traditional AI, which typically analyzes data and makes predictions, generative AI goes a step further: It produces outputs that mimic human creativity. Generative AI is valuable for businesses because it can streamline operations and accelerate content creation, all while reducing time and resource costs.
However, it also poses risks. Generative AI is already changing the game for cybercriminals—not by inventing brand-new hacking techniques, but by supercharging the speed and scale of existing ones. Threat actors are now using it to automate phishing campaigns, rapidly write malware and even make fake tools that quietly sneak harmful code into real business software.
Security researchers have already observed AI being used to craft remote access Trojans and flood code repositories with malicious packages faster than defenders can react. While the core methods of attack may not be novel, the efficiency and volume made possible by AI mark a significant shift.
Here are two types of cyberattacks that have become increasingly sophisticated and dangerous because of generative AI:
Social Engineering
AI has significantly increased the risk of social engineering attacks. The FBI has issued a warning that cybercriminals are using AI tools to craft convincing phishing emails, as well as voice and video messages, designed to deceive both individuals and businesses.
AI-driven attacks are faster, more automated and more believable than ever, often featuring flawless grammar and personalized content that exploits the trust people place in familiar contacts. Beyond emails, criminals are using AI to clone voices and faces, impersonating co-workers or business partners to steal sensitive information or authorize fraudulent transactions.
More Advanced Malware
AI tools like GPT-4 can write code and content, which opens the door to more advanced forms of malware. Unlike traditional threats, AI-generated malware can quickly adapt to different systems and environments, making it much harder to detect and block.
On top of that, attackers are using AI to develop sophisticated evasion techniques, such as polymorphic and metamorphic malware that constantly rewrites its own code, allowing it to slip past company security defenses.
A Risk From Within Your Business?
The risk of generative AI doesn’t come only from external threats. When implemented without proper oversight, these handy tools can become a Trojan horse, introducing vulnerabilities from the inside out.
As generative AI chatbots become increasingly integrated into business operations, their data collection practices raise serious privacy and cybersecurity concerns. An analysis from my company, Surfshark, of top AI chatbots—including Google Gemini, Meta AI and ChatGPT—revealed that they all collect user data, with some gathering up to 90% of possible data types. That includes sensitive categories like health, financial and biometric information. Nearly half collect precise location data, and several track users for targeted advertising or share data with third parties.
For businesses, this poses significant risks. The most obvious ones are data exposure and regulatory compliance. But there’s also the risk of breaches, which could result in serious reputational harm and even legal action. And such breaches are already happening. For example, DeepSeek, a Chinese AI startup, experienced a leak of over a million records, including chat histories and API keys.
As companies increasingly rely on AI tools, it’s critical that they assess these privacy implications and implement safeguards to protect both corporate and customer information. If your business uses ChatGPT and you’re looking to better understand its safety and best practices for secure usage, my company has published an article that explores ChatGPT risks and how to manage them effectively.
Landing On The Good ‘Edge’: Tips On Using GenAI Safely
We’ve established that generative AI is a double-edged sword, bringing valuable benefits but also serious risks. So, is it possible to minimize those risks while maximizing the advantages?
While eliminating risk may be unrealistic, businesses can take thoughtful precautions to use generative AI safely and responsibly. Here are some ways to start:
Implement strict access controls.
Limit who can use AI tools internally, and ensure only authorized personnel can input or review sensitive business data. Make sure that employees who handle this data are well-trained in data security best practices. This controlled access reduces the risk of accidental data leaks or misuse and helps maintain tighter oversight of how generative AI interacts with your company’s critical information.
Avoid sharing confidential information.
Never enter proprietary, personal or confidential information into public AI platforms unless you have a clear and enforceable data protection agreement in place.
Always verify AI-generated content for accuracy and bias.
This helps ensure that decisions based on AI-generated content are reliable and fair, protecting your business from potential errors or unintended consequences.
Use trusted, compliant platforms.
Choose AI providers with strong data privacy and enterprise-level security. Make sure your business partners follow these standards, too.
Train employees on responsible use.
Educate your team on AI best practices, including ethical use, fact-checking and recognizing AI limitations.
Bottom Line: Staying Informed Is Key To Using AI Safely
Generative AI offers businesses powerful tools, but it also introduces new cybersecurity challenges that cannot be ignored. Understanding how these technologies can both help and harm your organization is not optional—it’s essential.
By raising awareness within your teams, implementing strong security practices and staying informed about the latest threats, your business can leverage generative AI’s benefits while protecting itself from potential risks. Taking a proactive approach today is crucial to secure your company’s future in an increasingly AI-driven world.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.