AI-generated content is being used in LinkedIn phishing scams, but can users spot it?

Creative AI tools such as ChatGPT aren't just helping students cheat on their coursework; they might also be helping cybercriminals cheat you out of your money and personal information.

Cybersecurity researchers recently unearthed a LinkedIn-based phishing campaign, a type of scam that seeks to steal users' personal details, in which the AI art-creation platform Dall-E appears to have been used.

The researchers, from US cybersecurity firm SafeGuard, discovered a fraudulent whitepaper aimed at sales executives looking to convert their leads more effectively. The ad's graphic design reportedly contained a telltale colour pattern in the lower-right corner of the kind usually stamped on images produced by Dall-E.

Produced by OpenAI, the same US start-up behind ChatGPT, Dall-E is an AI model that generates images from text prompts.
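As an illustration of the kind of check the researchers describe, the Python sketch below samples pixels from an image's lower-right corner and compares them against an assumed five-square palette. The RGB values, tolerance and strip dimensions are guesses made for the sake of the example, not OpenAI's documented watermark specification, so treat it as a sketch of the idea rather than a reliable detector.

```python
# A minimal sketch of checking for a Dall-E-style colour strip in an image's
# lower-right corner. The palette below is an approximation for illustration,
# not OpenAI's official watermark values.
from PIL import Image

# Assumed palette of the five squares (illustrative, not documented values).
WATERMARK_COLOURS = [
    (255, 255, 102),  # yellow-ish
    (85, 204, 187),   # turquoise-ish
    (110, 204, 119),  # green-ish
    (255, 102, 102),  # red-ish
    (66, 133, 244),   # blue-ish
]

def colour_close(a, b, tolerance=40):
    """True if two RGB triples are within `tolerance` on every channel."""
    return all(abs(x - y) <= tolerance for x, y in zip(a, b))

def looks_like_dalle_strip(path, strip_width=80, strip_height=16):
    """Sample five evenly spaced pixels across the lower-right corner and
    compare them with the assumed palette. Returns True on a rough match."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    y = h - strip_height // 2          # vertical centre of the assumed strip
    x_start = w - strip_width
    matches = 0
    for i, expected in enumerate(WATERMARK_COLOURS):
        x = x_start + int((i + 0.5) * strip_width / 5)
        if colour_close(img.getpixel((x, y)), expected):
            matches += 1
    return matches >= 4  # allow one square to miss due to compression

if __name__ == "__main__":
    print(looks_like_dalle_strip("suspect_ad.png"))  # hypothetical file
```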

After accessing the whitepaper, whose text also appeared to have been generated using AI, according to the researchers, users were directed to a call-to-action button that opened a form auto-filled with their personal LinkedIn details.

The page of the account promoting the ad, named 'Sales Intelligence', was largely blank, featuring only a link to the webpage of a jewellery store in Arizona; according to SafeGuard, it was likely set up solely to harvest users' details for dubious ends.

Though such details are not as compromising as bank details, the researchers said something as simple as a user's name, email address, LinkedIn profile and date of birth could be the first step towards more serious cybercrimes, such as identity theft.

However, LinkedIn has not confirmed that these scams are AI-based, so the claim remains, technically, an assertion from the researchers, though it is difficult to imagine how the Dall-E watermark could have slipped in any other way.

[Image: The fake ad]

Will AI make it easier to get scammed?

This is an issue that is likely to intensify rapidly over the next decade, according to Dr Ilia Kolochenko, adjunct professor of cybersecurity at Capitol Technology University.

In particular, ChatGPT-style tools may make it far easier for would-be hackers from non-English-speaking countries to craft convincing, legible scam messages, according to Kolochenko, who called the technology a "gift" to this demographic. Many of the scam emails we're all familiar with aren't exactly Shakespearian in quality, so you can see how AI might be a step up for many groups looking to produce official-sounding documents.

“We have a lot of young, talented cyber criminals who simply don’t have great English skills,” explained the professor.

This isn't the first time this kind of thing has been seen. Kolochenko says he has anecdotally come across other phishing emails that looked as though they had been generated using AI, as well as cybercriminals using AI chatbots to generate communications with their victims. According to the professor, these cybercriminals often pretend to be the tech support desk of a large tech firm in order to extort payments.

How can I protect myself against AI fakes?

James Bores, a cybersecurity consultant with two decades of experience who runs his own firm, Bores Consultancy, was pessimistic about whether consumers will actually be able to spot fraudulent AI-based content.

“In terms of what people can do, the depressing answer is not much,” explained Bores. “There are tools to try and detect AI-generated content, but they are unreliable.

“Some of them work, and some of them don’t, and these will only become more unreliable as the technology gets better.”

Kolochenko said you should be suspicious of written communications with perfect grammar and spelling, as humans tend to make typos, type in lowercase or use colloquial English in their emails.

“If you receive a text that looks too good to come from your colleague, who almost always writes very short and practically incomprehensible emails, you should ask yourself why,” he explained.

Kolochenko also says consumers should be extremely wary of context-related cues, such as receiving an email supposedly from a US-based colleague at 9am UK time, when it would still be the early hours of the morning in the US.
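As a rough illustration of that timing cue, the sketch below flags an email whose arrival time falls outside plausible waking hours in the supposed sender's home timezone. The SENDER_TIMEZONES mapping, the email address and the hour thresholds are all hypothetical examples.

```python
# A minimal sketch of the timing cue Kolochenko describes: flag emails that
# arrive at an implausible local hour for the supposed sender. The sender
# timezone mapping and addresses here are hypothetical.
from datetime import datetime
from zoneinfo import ZoneInfo

# Assumed mapping of colleagues to their home timezones (illustrative only).
SENDER_TIMEZONES = {
    "colleague@example.com": "America/New_York",
}

def is_odd_hour(sender, received, earliest=7, latest=22):
    """Return True if `received` (a timezone-aware datetime) falls outside
    the sender's plausible waking hours in their local time."""
    tz = ZoneInfo(SENDER_TIMEZONES.get(sender, "UTC"))
    local = received.astimezone(tz)
    return not (earliest <= local.hour < latest)

if __name__ == "__main__":
    # 9am UK time in June is 4am in New York, which should raise an eyebrow.
    received = datetime(2023, 6, 1, 9, 0, tzinfo=ZoneInfo("Europe/London"))
    print(is_odd_hour("colleague@example.com", received))  # True
```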

[Image: The dubious-sounding company behind the scam]

If you're unsure whether a video you are watching could be an AI-generated "deepfake", Bores recommends looking at the shadows or at strange patterns in the way the subject blinks. If you are looking at a piece of text, he says you should look for odd phrasing.

However, Bores admits that unusual patterns in someone's text could simply be "someone having an off day", and that there is no foolproof way of telling for sure what is real and what is fake.

If you want to assess whether a picture of a real person might be AI-generated, Bores suggests looking at whether the eyes are symmetrical: most real people have some asymmetry in their eyes, so unnaturally perfect symmetry can be a giveaway.
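The sketch below shows one crude way to quantify that heuristic: it finds the two eyes with OpenCV's stock Haar cascade, mirrors one eye patch onto the other and measures the average pixel difference. The patch size and the use of raw pixel difference as a symmetry score are assumptions made for illustration; this is not a working deepfake detector.

```python
# A minimal sketch of the eye-symmetry heuristic Bores mentions: detect the
# two eyes, mirror one patch onto the other, and measure how closely they
# match. Real photos vary enormously, so treat the score as illustrative.
import cv2
import numpy as np

def eye_symmetry_score(path):
    """Return the mean absolute pixel difference between the left eye patch
    and the mirrored right eye patch (lower = more symmetric), or None if
    two eyes are not found."""
    grey = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")
    eyes = cascade.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) < 2:
        return None
    # Keep the two largest detections and order them left-to-right.
    eyes = sorted(eyes, key=lambda e: e[2] * e[3], reverse=True)[:2]
    (lx, ly, lw, lh), (rx, ry, rw, rh) = sorted(eyes, key=lambda e: e[0])
    size = (48, 48)  # normalise both patches to a common size (assumed)
    left = cv2.resize(grey[ly:ly + lh, lx:lx + lw], size)
    right = cv2.resize(grey[ry:ry + rh, rx:rx + rw], size)
    mirrored = cv2.flip(right, 1)  # horizontal flip for like-for-like compare
    return float(np.mean(np.abs(left.astype(int) - mirrored.astype(int))))

if __name__ == "__main__":
    print(eye_symmetry_score("portrait.jpg"))  # hypothetical file
```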

Ultimately, though, Bores recommends a "back-to-basics" approach to AI fraud prevention. Since it is almost impossible to tell which content is produced by AI, he tells consumers simply to be wary of anything that seems "too good to be true".

“If it’s asking for anything out of character, or anything new, that should be a warning sign immediately,” explained Bores. “To be honest, if it’s asking for anything at all.”

According to Bores, if you feel suspicious you should contact the company via a recognised route and ask for a reference number. If they can't provide one, go no further.

Should social platforms step up to prevent AI fraud?

There is a lot that big-tech platforms like LinkedIn and Twitter could do to regulate AI-generated content, according to Bores, but it is unlikely they will take appropriate measures unless forced to by future legislation.

Kolochenko feels that, given the massive investment Microsoft, which owns LinkedIn, has poured into AI technology, it should eventually be able to develop reliable ways of spotting AI-generated content. At the moment, however, it is simply not possible to monitor the platform consistently.

If a high-profile AI-generated fraud were to appear on a platform such as LinkedIn and go viral, a deepfaked video of a politician giving an inspiring speech, for example, that could well prompt the big-tech platforms to take action, according to Kolochenko. The professor also predicts that the big social platforms will soon ban users from posting AI-generated content without a disclaimer as part of their terms and conditions, though how such a ban might be enforced is anyone's guess.

What has LinkedIn said about the scam?

A LinkedIn spokesperson said, “We encourage our members to report suspicious activity or accounts so that we can take action to remove and restrict them, as we did in this case.”
