
From Phishing Scams to Social Engineering: How Scammers Are Using a New AI Chatbot

AI-powered scams are becoming ever more sophisticated and are causing serious damage. From social engineering and malware distribution to phishing scams, FraudGPT, an AI chatbot built for scammers, is growing in both popularity and sophistication.

During the annual World Economic Forum at Davos, cybersecurity leaders convened to discuss the escalating challenges faced by law enforcement agencies globally. Jurgen Stock, the Secretary General of Interpol, emphasized the persistent obstacles arising from advanced technologies such as artificial intelligence (AI) and deepfakes.

Stock underscored that law enforcement agencies are grappling with a crisis due to the surging volume of cybercrime. Despite efforts to raise awareness about fraud, Stock noted that an increasing number of cases continue to emerge.

He stated, “Global law enforcement is struggling with the sheer volume of cyber-related crime. Fraud is entering a new dimension with all the devices the internet provides. Crime only knows one direction, up. The more we are raising awareness, the more cases you discover. Most cases have an international dimension.”


During the discussions, the panel delved into the realm of technology, including FraudGPT — a nefarious iteration of the popular AI chatbot ChatGPT. Stock revealed that cybercriminals are organizing themselves based on expertise within an underground network, complete with a rating system that enhances the reliability of their services.

FraudGPT is an AI chatbot that exploits generative language models, trained on vast amounts of text data, to produce convincing, coherent, human-like responses to user prompts.

Cybercriminals leverage FraudGPT for various malicious purposes, including phishing scams, social engineering, malware distribution, and fraudulent activities.

Phishing Scams: FraudGPT has the capability to produce convincing phishing emails, text messages, or websites that appear genuine, duping users into disclosing sensitive details such as login credentials, financial information, or personal data.

Social Engineering: Utilizing human-like conversation, FraudGPT can mimic genuine interactions to establish trust with unsuspecting individuals, ultimately persuading them to inadvertently share sensitive information or engage in harmful activities.

Malware Distribution: By generating deceptive messages, FraudGPT entices users to click on malicious links or download harmful attachments, resulting in the compromise of their devices through malware infections.

Fraudulent Activities: Leveraging its AI capabilities, FraudGPT assists hackers in crafting fraudulent documents, invoices, or payment requests, ensnaring both individuals and businesses in financial scams.

The risks associated with AI in cybersecurity were also highlighted during the discussions. While AI has played a significant role in enhancing cybersecurity tools, it has concurrently introduced new risks.

Stock pointed out that even individuals with limited technological knowledge can now carry out distributed denial of service (DDoS) attacks, expanding the scope of cyber threats. The increasing affordability and accessibility of AI tools are expected to amplify the risks to cybersecurity.

About the Author:

Early Bird