
Navigating the Impacts of AI-Driven Social Media Algorithms

This article was co-authored with Emma Myer, a student at Washington and Lee University who studies Cognitive/Behavioral Science and Strategic Communication.

In today’s digital age, social media has become deeply ingrained in society, affecting many facets of everyday life. Social media provides a space for people all over the world to connect, receive the latest news updates, and both share and learn online content. Behind the seemingly effortless swiping on your favorite social media applications is a complex technology known as artificial intelligence.

Artificial Intelligence (AI) is the simulation of human intelligence by a system or a machine (Tai, 2020). AI can perform tasks that normally require human intelligence. Social media platforms implement AI to curate a personalized feed based on previous consumption. Let’s say you “like” a video about fitness on Instagram Reels. Following that single “like,” you notice more videos like “how to make the perfect protein shake” and “best arm routine to give you toned arms” pop up on your page. As you continue to “like” those videos, your feed becomes saturated with fitness content. All due to AI.

This curated algorithm can help consumers discover content that interests them and even offer community and connection, especially for some youth who feel isolated in their offline environments. While there are benefits to this algorithm, it also presents some risks with potentially long-lasting harm.

The research on social media’s impact is still trying to catch up to the rapidly evolving technology. While Vuorre and Przybylski (2023) indicated they did not find consistent evidence that widespread internet adoption is linked with broad negative mental health outcomes, they cautioned that they did not have access to data on online behaviors at social media platforms. Because much of the behavioral data (e.g., real usage patterns on social media/gaming platforms) is held by private technology firms, the authors called for collaborative efforts so that these datasets can be studied. Cook (2022) charged that platforms such as Instagram “make 1 in 3 female teenage users feel worse about their bodies; the app is addictive by design; and it algorithmically drives vulnerable users toward pro-eating-disorder content.”

In 2022, a family sued Instagram’s owner, Meta, attributing their daughter’s eating disorder and self-harming tendencies to the platform’s algorithm, arguing that “Instagram’s artificial intelligence engine almost immediately steered the then-fifth grader into an echo chamber of content glorifying anorexia and self-cutting, and systematically fostered her addiction to using the app.” The potentially harmful effects of social media are most concerning in children, as their brains are developing and their prefrontal cortex (responsible for complex behaviors like decision-making and impulse control) is the last brain region to fully mature, completing development in the mid-to-late 20s.

Beyond adverse mental health outcomes, AI applications like ChatGPT can inhibit learning and critical thinking skills. Those who rely heavily on AI platforms may develop shorter attention spans and diminished creativity. This is seen among users of applications like ChatGPT, where AI answers user-driven questions. Need to write a historical fiction short story by 5 pm today? ChatGPT and various other AI-based platforms can perform those tasks for you. While this can assist users by reducing their workload, it also undermines important learning experiences (e.g., developing creativity and writing skills) for children who need to build those abilities.

Another instance of the consequences of AI on mental capacity and critical thinking skills can be seen on Facebook, which implements AI to summarize the comment section under a post into a short paragraph. This inclines readers to be complacent and agree with the general summary, discouraging them from reading the variety of comments and forming their own conclusions and opinions.

A similar effect is evident on TikTok and Instagram Reels, whose algorithms enable seemingly endless swiping, thereby promoting the consumption of short-form content. Short-form content is popular, as it is often attention-grabbing and instantly gratifying, which helps keep viewers engaged. However, it can foster addictive behaviors, leading to shortened attention spans and social isolation (Qin, 2022).

AI tools are increasingly used by children for companionship, when imagination, fantasy play, and social curiosity are developmentally normative. Data show that among 11-year-olds, approximately 44 percent of AI companionship interactions include violent content. Around age 13, sexual or romantic roleplay becomes the most common AI interaction theme. While many children demonstrate insight—with about half reporting concern about screen time if they were in a parent’s role—their prefrontal cortex and self-regulation systems are still developing, limiting their ability to disengage from immersive or emotionally charged content. This mismatch between awareness and regulation underscores the need for adult guidance, age-specific boundaries, and proactive conversations, rather than relying on insight or self-control alone.


There are technological tools that users can employ to mitigate potential harm. TikTok created a feature where users can mark a displayed video as “not interested.” This feature informs the AI software that the consumer does not want to see similar forms of content in their feed. Instagram also implemented a “mute” option that allows users to hide posts in their feed while still following the account, promoting mindful consumption. Most social media platforms also include a “block” feature that allows users to block all content from certain accounts. AI can work positively by reading those interactions and creating a specific algorithm that provides similar, uplifting content. While these tools offer the user more control over their algorithm, they only help if users apply them mindfully, which may be an unreasonable expectation for children and teens, given their developing brains.

Thus, the most effective mitigation strategies begin with parents, who should consider carefully whether to provide their children with access to personal media devices. Parents are advised to delay giving children smartphones, and when they do, the devices should have parental monitoring software to help block harmful content such as pornography. Children benefit from learning how to evaluate media content and understand the role of algorithms and AI. Parents are also encouraged to create family media agreements detailing expectations, limits, and device-free times (e.g., meals), using collaborative decision-making to improve cooperation and revisiting agreements regularly as children mature.

Clearly, social media and AI are here to stay, and they offer real benefits. But as with any new technology, giving children and adolescents access to it carries both risks and rewards. It would behoove society to be mindful of the potential harm to developing minds and to adopt strategies to mitigate those risks.
