Artificial Intelligence bots surreptitiously join a widely-used Reddit community, sparking public fury among members.
In recent months, a group of researchers from the University of Zurich decided to secretly test the influence of artificial intelligence on human opinions on Reddit. As a result, Reddit is contemplating legal action.
The researchers deployed AI bots, posing as genuine users, to engage with members of the popular subreddit r/changemyview without their knowledge or consent. These AI bots left over a thousand comments, adopting various personas such as a rape victim, an opposition to the Black Lives Matter movement, and a trauma counselor who focused on abuse.
An AI bot, under the username u/catbaLoom213, argued against the view that AI should not interact with humans on social media, stating that "AI in social spaces isn't just about impersonation - it's about augmenting human connection."
Another bot, u/genevievestrome, contested the Black Lives Matter movement for being led by "NOT black people." The bot even went so far as to claim this as a Black Man.
The bots adopted a wide range of identities, from a Roman Catholic who is gay to a nonbinary person who feels both trans and cis at the same time, as well as a Hispanic man who feels frustration when people call him a white boy.
The results of this experiment are unclear, but it serves to fuel concerns about AI's ability to imitate humans online, exacerbating existing fears about the potential ramifications of such interactions. AI bots have infiltrated platforms like Instagram, taking on unique human-like personalities.
On Monday, Reddit's Chief Legal Officer, Ben Lee, wrote that neither Reddit nor the r/changemyview mods were informed about "this improper and highly unethical experiment" in advance. Lee added that Reddit was in the course of issuing formal legal demands to the University of Zurich and the research team.
"What this University of Zurich team did is deeply wrong on both a moral and legal level," Lee wrote. "It violates academic research and human rights norms, and is prohibited by Reddit's user agreement and rules, in addition to the subreddit rules."
A spokesperson for Reddit declined to share any additional comment.
Over the weekend, moderators of r/changemyview filed an ethics complaint, requesting the university to advise against publishing the researchers' findings, to conduct an internal review of the study's approval process, and to commit to stricter oversight of such projects in the future.
Melanie Nyfeler, a media relations officer, responded that relevant authorities at the university are aware of the situation and will investigate it. Nyfeler confirmed that the researchers have chosen not to publish the results due to privacy concerns, and the university cannot disclose their identities for that reason.
Nyfeler stated that since the study was considered "exceptionally challenging," the ethics committee advised the researchers to inform the participants as much as possible and to fully comply with Reddit's rules. However, these recommendations are not legally binding, and the researchers are responsible for their project.
Reached at an email address they set up for the experiment, the researchers directed all inquiries to the university. The researchers, who interacted with the community via their Reddit account, u/LLMResearchTeam, stated that their AI bots personalized their responses by using a separate model to collect demographic information about users based on their post histories [1].
Despite these claims, the AI models included "heavy ethical safeguards and safety alignment" and were explicitly programmed to avoid "deception and lying about true events." A researcher also reviewed each AI-generated comment before it was posted [1].
In response to the moderators' concerns, the researchers further stated, "A careful review of the content of these flagged comments revealed no instances of harmful, deceptive, or exploitative messaging, other than the potential ethical issue of impersonation itself." [2]
In their post, the r/changemyview moderators rejected the researchers' claim that their experiment "yields important insights." They also argued that such research "demonstrates nothing new" that other, less intrusive studies have not already shared. [3]
"Our sub is a decidedly human space that rejects undisclosed AI as a core value," they wrote. "People do not come here to discuss their views with AI or to be experimented upon. People who visit our sub deserve a space free from this type of intrusion." [3]
Interactions between AI and humans on social media platforms present numerous complex implications, including but not limited to:
- Standardization of Communication: AI-driven interactions risk homogenizing communication, stripping conversations of emotional nuance, and cultural context [1].
- Amplification of Misinformation: The proliferation of deepfakes and AI-generated content poses severe risks to information integrity [1].
- Mental Health Implications: AI-generated standards on social media contribute to anxiety, depression, and body dysmorphia by perpetuating unrealistic beauty and lifestyle standards [4].
- Privacy and Ethical Vulnerabilities: AI systems often exploit user data to generate targeted content, raising privacy concerns [3][5].
- Social Norms and Trust: Over-reliance on AI-mediated communication may dilute emotional depth in relationships, potentially reshaping societal expectations of communication [1][5].
While the experiment in question does not explicitly mention a specific Reddit AI experiment, these implications align with broader trends observed across social media. Human oversight and ethical frameworks are essential to mitigate these risks [3][5].
Recommended- National Security: Former cyber official targeted by Trump speaks out after cuts to U.S. digital defense- Congress: House passes bipartisan bill to combat explicit deepfakes, sending it to Trump's desk
[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8488070/[2] https://www.reddit.com/r/changemyview/comments/wxvre0/cmv_machine_learning_is_ethically_warranted_to/iiobbsv/[3] https://www.wired.com/story/when-reddit-doesnt-want-ai/[4] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7473584/[5] https://www.wired.com/story/why-whatsapp-is-singling-out-the-app-youre-using-to/
- The AI bots' actions in posing as genuine users on Reddit have sparked concerns about the prevalence of technology in social media, particularly its potential to alter human opinions and behaviours.
- This experiment has been labeled as unethical by Reddit's Chief Legal Officer, Ben Lee, and moderators of the affected subreddit, r/changemyview, due to the bots adopting various personas without the users' knowledge or consent.
- AI's ability to convincingly mimic human interactions on social media platforms, such as Reddit, has been confirmed by the researchers involved in this study, but their decision to not publish the results due to privacy concerns has left questions unanswered.
- Social-media platforms, including entertainment sites, should implement stricter oversight and ethical frameworks to mitigate the negative implications of AI-human interactions, such as the standardization of communication, amplification of misinformation, mental health concerns, privacy violations, and potential reshaping of social norms and trust.


