Volunteer moderators of Q&A site Stack Overflow—one of the internet’s most active forums for programmers, data scientists, and IT professionals—have declared they are going on strike in response to mandates they say prohibit them from restricting AI-generated content.
In December 2022, Stack Overflow declared a temporary ban on ChatGPT-generated content. Yet it quickly backtracked, leaving the decision in the hands of the volunteers who run individual sites. Moderators on the site and across the Stack Exchange network, which has hundreds of other Q&A communities, say the company has switched gears again and handed down guidance making it de facto impossible to stem the tide of AI-generated content.
In a May 30 post to the site, a Stack Overflow staff member wrote that the company is asking moderators to apply a “very strict standard of evidence to determining whether a post is AI-authored when deciding to suspend a user,” citing the inadequacy of moderators’ intuition and the demonstrable inaccuracy of “current GPT detectors.” In response, a group of moderators announced a strike, issuing an open letter saying that alongside the public statement, the company had also issued private guidance prohibiting mods from taking action on AI-generated content in the vast majority of cases.
“This change has direct, harmful ramifications for the platform, with many people firmly believing that allowing such AI-generated content masquerading as user-generated content will, over time, drive the value of the sites to zero,” moderators wrote in a collective statement on Meta Stack Exchange.
Mithical, an Israel-based moderator on the literature, AI, and constructed-languages Stack Exchange sites, helped write the open letter. They told IT Brew via Discord chat that the dispute over AI was just one of several points of tension among moderators, and that the decision to override individual sites’ policies on ChatGPT content was a turning point.
“Technically speaking, Stack has not reversed that decision, but have decided that enforcing those policies is not possible, and have forbidden moderators from enforcing it in almost all cases,” Mithical wrote. Striking moderators have insisted that contrary to the company’s claims, AI detectors have never played a large role in their decisions.
Mithical said that as a result of the strike, spam in many cases remains on the site for half an hour or more before Stack staff take action or it accrues enough flags to be automatically deleted.
This adds to troubles for the Stack Exchange network, particularly as the company has admitted many programmers are turning to AI tools for help rather than Stack Overflow, its most popular site. The open letter says the strike is necessary as a “last-resort effort to protect the Stack Exchange platform and users from a total loss in value.”
Any Stack Exchange user gains limited access to moderation tools after hitting the 10,000 reputation level. As of June 9, the open letter had well over 1,100 signatures, including 127 of the Stack Exchange network’s 538 “diamond” moderators—those elected by community members or appointed by staff who carry out official duties. According to Mithical, those 127 comprise most if not all of Stack Exchange’s most active moderation teams.
In an email statement to IT Brew, Stack Overflow vice president of community Philippe Beaudette said the company stood by its policy and characterized the protest as involving a “small number of moderators (22%).” He wrote that the company had analyzed “ChatGPT detection tools” and found an “alarmingly high rate of false positives.”
“Usage of these tools correlated to a dramatic upswing in suspensions of users with little or no prior content contributions; people with original questions and answers were summarily suspended from participating on the platform,” Beaudette wrote.