Artificial Intelligence and the End of Cyberbullying: The TikTok Bubble Effect

3 min read

Artificial intelligence (AI) permeates almost every aspect of modern life, from voice recognition and language translation to product recommendation engines and the content we see on social media.

With the advancement of AI, social networks have become much more personalized and isolated platforms, where each user’s experience is shaped by their own interests and behaviors. A clear example of this is TikTok, which uses AI to create this individualized “bubble” for each user.

In today’s article, we’ll discuss an interesting side effect of these algorithms — their (accidental) potential to help combat a complex social issue: bullying.

The TikTok Bubble Effect

Although I refer to “TikTok” throughout, virtually all social networks now implement the same kind of content recommendation algorithm, so everything here applies equally to YouTube, Facebook, Instagram, Kwai, and others.

The “bubble effect,” as it is known, refers to the way TikTok’s algorithm analyzes a user’s interests and preferences to present relevant content, such that each user lives in their own “bubble” of personalized content. This extremely targeted personalization can result in the isolation of users with different interests, reducing the likelihood of interactions between these groups.
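As a rough illustration (not TikTok's actual system, whose details are proprietary and based on learned models, not simple tag counts), an interest-based recommender can be sketched as ranking candidate videos by how much their tags overlap with a user's watch history. All names and data here are hypothetical:

```python
from collections import Counter

def recommend(watch_history, candidates, k=2):
    """Toy sketch of interest-based ranking: score each candidate video
    by how often its tags appear in the user's watch history.
    A hypothetical simplification; real recommenders use learned models."""
    # Build an interest profile: tag -> how often it appears in the history.
    interests = Counter(tag for video in watch_history for tag in video["tags"])
    # Rank candidates by the summed interest weight of their tags.
    scored = sorted(
        candidates,
        key=lambda v: sum(interests[t] for t in v["tags"]),
        reverse=True,
    )
    return [v["id"] for v in scored[:k]]

history = [
    {"id": "a", "tags": ["cats", "comedy"]},
    {"id": "b", "tags": ["cats", "music"]},
]
pool = [
    {"id": "c", "tags": ["cats"]},      # strong overlap with the profile
    {"id": "d", "tags": ["politics"]},  # no overlap -> never surfaces
    {"id": "e", "tags": ["music"]},     # partial overlap
]
print(recommend(history, pool))  # → ['c', 'e']
```

The isolating effect falls out of the scoring rule itself: content with zero overlap (like `"d"` above) never ranks high enough to be shown, so users with disjoint interests stop encountering each other's content at all.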

One of the main reasons for bullying is the perceived difference between individuals — whether in terms of appearance, interests, behavior, or beliefs. By highlighting similarities and minimizing differences through content personalization, AI can create a “safer” online environment, where the chances of being a victim of something like bullying are drastically reduced.

The TikTok bubble, in this context, can be seen as a kind of protection against bullying, since online interactions are limited to groups of people with extremely similar interests. If victims and bullies are isolated in their own bubbles, they simply never meet, and the abuse, in theory, never occurs.

But… Is This Good?

It’s important, however, to consider some concerns. The first is that, while the “bubble effect” can protect people from bullying, it can also limit exposure to different viewpoints, creating echo chambers that reinforce existing beliefs and attitudes. This can fuel polarization and extremist ideas (which anyone reading this has surely seen firsthand) and erode empathy.

Additionally, AI algorithms depend on user behavior to improve their predictions and recommendations. This means that if a user is exposed to bullying and the algorithm interprets it as an interest, the user may inadvertently be exposed to more bullying content.
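This feedback loop can be sketched in a few lines. The point of the toy model below (all names are hypothetical) is that the profile update has no notion of *why* the user watched something; distress-driven engagement is recorded exactly like genuine interest:

```python
def update_profile(profile, watched_tags, weight=1.0):
    """Toy sketch of a recommendation feedback loop: every view nudges
    the interest profile toward the viewed tags, regardless of why the
    user watched. Hypothetical; real systems weigh many signals."""
    for tag in watched_tags:
        profile[tag] = profile.get(tag, 0.0) + weight
    return profile

profile = {"cats": 3.0}
# The user lingers on two bullying clips; the system reads
# dwell time as interest and updates the profile accordingly.
update_profile(profile, ["bullying"])
update_profile(profile, ["bullying"])
print(profile)  # → {'cats': 3.0, 'bullying': 2.0}
```

After just two views, the unwanted tag already rivals a genuine interest, so the ranking step will surface more of it, which invites more views, and the loop closes.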

Conclusion

When reflecting on the isolating powers of Artificial Intelligence algorithms, it is impossible not to question the broader ethical and social implications that accompany these technologies. The “bubble effect” on TikTok and other AI-based social platforms may be an effective way to protect people from online bullying. But is it really the best path forward?

Certainly, protecting people from bullying is a laudable goal. Victims of bullying, both online and offline, can face serious emotional, psychological, and physical consequences. The use of AI to help prevent (even if “accidentally,” in the case of social networks) bullying can be seen as a positive step toward creating safer and more inclusive environments.

However, it raises the question: does AI have a responsibility to protect individuals from bullying? And if so, to what extent? Should we trust algorithms to shape our behavior and online experiences, protecting us from potential harm, but possibly isolating us from challenging experiences that could contribute to our personal growth and development?

This question goes beyond bullying and extends to the very essence of our interaction with AI. It is a question that touches the heart of the role of technology in our lives and how it can, and should, be used to protect and benefit society. This is not a question that can be easily answered, but it is certainly one worth exploring as we continue to navigate the digital age.