
AI developments prompt ethics discussions

Artificial intelligence has begun to take over, at least partially, starting with social media platforms that reach 5.4 billion global users, or 63.9% of the world’s population.

At the end of September, Meta announced the “Vibes” feature of the Meta AI app.

Then, on Sept. 30, OpenAI announced its Sora 2 video generation model. Along with this release, the company announced a “new social iOS app just called ‘Sora’.”

By comparison, Meta’s Vibes app is “designed to make it easier to find creative inspiration and experiment with Meta AI’s media tools.”

Vibes is technically still in an early phase, but its functions are already advanced. It works similarly to Meta’s other products, Facebook and Instagram, and the current content on Vibes is simple and AI-generated.

Sora is an influential new tool, and its abilities are shocking for a product released only a year after its first version.

OpenAI is incredibly proud of this product, especially its ability to do “things that are exceptionally difficult, and in some instances outright impossible, for prior video generation models.”

Another function that stands out is the ability to “directly inject elements of the real world into Sora 2.”

Essentially, users can scan their face and record their voice to create deepfakes of themselves. 

Sora’s creations are harder to detect and are clearly meant to start blurring the line between real and AI-generated content.

There are some obvious ethical concerns with Sora’s program; however, OpenAI has been proactive about addressing them: “We are not optimizing for time spent in feed, and we explicitly designed the app to maximize creation, not consumption.”

On the other hand, Meta seemingly has released nothing about preventing overuse or addiction. This may be connected to the various lawsuits Meta has faced.

While both companies have their own contained platforms, the videos have crossed over to other platforms, where they have caused even more issues.

Sora’s tools are incredibly advanced, as OpenAI proudly notes. While that may be great for the company and its investors, it is terrible for average social media users.

Sora content is starting to appear on social media, garnering millions of views and likes without people realizing it is AI. The majority of the visible flaws are in minor parts of the videos that a user would quickly scroll past.

OpenAI has made efforts to help identify AI-generated content from Sora by applying a watermark; however, tutorials online show how to remove it. That means, unfortunately, the obligation to check whether content is AI-generated falls on the consumer.

Content from older AI models was simple and easy to identify, but rapid development has made spotting it progressively harder.

The Instagram account @showtoolsai shows users how to detect and identify AI-generated content, providing helpful tips and tricks for spotting visual flaws.

“First, look at the page that posted it … next, look for obvious inconsistencies … when you see one of these, block the account and try to tell the algorithm that you’re not into this.”

Meta’s apps, Instagram and Facebook, flag content with AI information tags, yet the tags tend to be hidden in small text. Meta also has a fact-checking process, although it relies on only about 100 people for a platform with 3.07 billion monthly users.

Many people are referring to these as “slop” apps. The Verge’s Hayden Field, during an interview on Vox’s “Today, Explained” podcast, described “AI slop” as “any form of AI-generated content that’s designed to keep you scrolling and keep you consuming and coming back for more.”

Both apps claim to discourage continuous scrolling and promote AI as a tool for creation and connection. However, keeping people hooked is far more financially tempting, making it more likely that users will pay for a Sora premium plan.

AI has become a large part of our lives, reaching into many people’s daily routines. Currently, it is fairly easy to spot AI, but that is quickly changing as AI is used for more and more everyday tasks. Staying vigilant and building literacy are incredibly important for staying aware of AI usage.

AI is increasingly fueling misinformation on social media. Internet literacy is becoming ever more important; accounts like @showtoolsai are dedicated to informing the public and teaching social media users to double-check information.

ay490124@ohio.edu

@austinyau_mediadventures
