
Sora’s AI video has people gasping, but have they already forgotten OnlyFake?

OpenAI is marketing Sora, its new text-to-video model, while withholding it from the public until it has received some red-team attention, or a harms evaluation.

But it almost seems that no AI product evaluation can be completed before another major advance in AI capability and quality arrives. That is not a comfortable thought for the people (and AI algorithms?) racing to create deepfake detectors.

OpenAI’s announcement of Sora is dominated by lush one-minute videos of non-reality that would be interesting even if they had been crunched into existence by human game programmers.

That these clips, which are by no means flawless representations of real-life scenes, were created in minutes from fewer than 100 words in a typed prompt is thought-provoking.

Sora hasn’t appeared out of a vacuum, however, and there are those who want to know that the world’s best red team is busy right now. This, the shortest month of the year, has already seen the public debut of a deepfaked driver’s license that looks at least as real as a photo of the genuine article.

Sora smells a little of a stunt, because there are tells in every video that is supposed to look like footage of a real scene.

The clip showing a woman walking in Tokyo, for instance, looks good. But then you notice all the people walking in the background as if they are trying to keep unseen books from being shaken off their heads.

There are other tells, too, but the point is, why did OpenAI release this now? Why not wait for code that truly astounds? Would that be a delay of months? Weeks? Days? Probably only the firm’s funders will ever know.

The deepfake driver’s license, however, is indistinguishable from reality, at least at this moment. It came from the underground site OnlyFake.

People are buying licenses there. While pearls get clutched over Walking Tokyo Woman, the promised challenge of deepfakes is already a commodity.

It makes the $16 million venture deal closed last week by Clarity seem like a silent movie. Clarity’s elevator pitch involved a detection process that mirrors how biometric and other AI attack strategies roll out in myriad ribbons, not waves.

Can anyone’s programmers work fast enough, even in a virus-like fashion, to neuter efforts like the one creating fake ID documents?

Article Topics

biometrics  |  deepfakes  |  generative AI  |  OnlyFake  |  OpenAI  |  Sora
