What is the history of online content moderation?

by OnShoulderOfGiants

Inspired by the recent META threads and seeing the mods out in force, I got to thinking about what the early days of moderation on the internet were like. Have there always been moderators? Did the early forums and chat rooms have any kind of moderation, or was it more of an anarchic free-for-all?

SarahAGilbert

1/2 (Also possibly NSFW/CW for reference to sexual assault)

Pretty much, yes—moderators and moderation have been around almost as long as online communities, but early online communities adopted moderation reluctantly.

In order to understand early moderation practices and why early communities might have been reluctant to moderate, it’s important to know a bit about the early Internet. The first computer network was ARPANET, which was created in 1969 and linked the computers of Department of Defense researchers. The original purpose of ARPANET was to create a network that allowed computers to talk to each other, but it turns out that ARPANET was just as effective at enabling people to talk to each other (through text) too. It didn’t take too long for the first email-as-we-know-it protocols to be developed. That the Internet, and computer-mediated communication, started through a Department of Defense initiative tells us something about it—the people who created it, the people who first started using it to talk, and the people who would soon come to build communities on it were a relatively homogeneous group.

Connecting through networked computers wouldn’t be limited to people at select government and academic agencies for long, however. In 1978, Ward Christensen and Randy Suess developed the first public dial-up BBS (Bulletin Board System), which used a central server to connect local computers through telephone lines. In 1980, Usenet was developed; it worked similarly to BBSs, but without a central server, and Usenet wasn’t geographically bounded the way BBSs were at first. Even though these systems made computer-mediated communication more widely available, the population of users was still relatively homogeneous: like those at the DoD, they were highly educated, tended to work in or study science and technology, and were predominantly white, male, and American. It’s this group whose values would inform the norms that would come to dominate early online communities (Phillips & Milner, 2021).

The values that would shape the early Internet are familiar to us today—notably privacy, security, and most relevant to questions of moderation, freedom of speech. The earliest denizens of online communities were full of hope. The sense of spacelessness afforded by "The Net" was freeing—time was transcended through programs like Usenet that connected people all around the world with a shared interest, and matter through MUDs (Multi-User Dungeons) as people engaged in role playing that allowed them to leave place and body behind. Anonymity was particularly important, as is reflected in a 1985 article from Vanity Fair: “Anonymity is an important part of the process: it eliminates status considerations, keeps factions from forming, submerges individual identity, facilitates the emergence of group mind.” Online communities were utopian, equalizing, egalitarian, global villages.

Until they weren’t.

Freedom of speech, as operationalized in early online communities, followed liberal notions of freedom such as those espoused by John Stuart Mill—who argued that all discussion should remain open, even to established falsehoods and harmful beliefs—and Justice Louis Brandeis—whose statement that “sunlight is the best disinfectant” for corruption was later applied to speech. On this view, open discussion about anything and everything was good. It was believed that discussions should be regulated, when necessary, by the community through mutually agreed-upon and ever-evolving norms, rather than by an administrator (or moderator) enforcing codified rules. This is evident in the operations manual for CommuniTree, an early online community (then called a “computer conference”). The developer of CommuniTree created a role, known as a Fairwitness, to lead the community by providing content, welcoming new users, and setting norms; however, the Fairwitness’s power was limited:

in a computer conference, however, too heavy a hand by the system owner can assure that a particular message never even reaches the audience, so that, in fact, no dialogue ensues. The Fairwitness, we hope, can reduce this possibility in computer conferencing. Moreover, setting policy for what may or may not be brought up in a particular conference can be more-or-less worked out on-line.

The Fairwitness could make rules, but sanctions (if any) would come from the community and would be social rather than technical in nature.

Norms and guidelines work well when you have a homogeneous group of people who share the same values; they work less well for people in the minority and when people are intentionally disruptive. We can see how both of these play out in Julian Dibbell’s article “A Rape in Cyberspace.” Originally published in The Village Voice in 1993, the article describes a formative incident in LambdaMOO, a popular MUD.

MUDs were entirely text-based communities, but, just like a book, they consisted of elaborate and varied worlds. LambdaMOO (a MOO, i.e., a MUD, Object Oriented) was a house with many rooms, the liveliest of which was the living room. Its users (of which there were around 1,000) would enter rooms and interact with others by issuing text-based commands. They could also create characters, build their own rooms, and make new objects, so there was quite a bit of flexibility in what users could do and how they could operate in their world. One night a user named Mr. Bungle entered the living room and used a “voodoo doll” (a subprogram that attributes actions to another user’s character) to force two users in the room to perform a variety of increasingly violent sex acts. One of the characters he attacked was ungendered and non-white; the other was female. And there wasn’t much anyone could do to effectively stop it.
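It’s worth pausing on why the voodoo doll was so hard to counter. In a purely textual world, an “action” is just a line of narration broadcast to everyone in the room, and nothing in that line records who actually typed the command. Below is a minimal toy sketch of that mechanic in Python; LambdaMOO was really scripted in its own MOO language, and the names here (Room, Character, emote, spoof_action) are invented for illustration, not taken from LambdaMOO’s actual code:

```python
class Character:
    def __init__(self, name):
        self.name = name

    def receive(self, line):
        # In a real MUD this would be written to the user's telnet session.
        print(f"[to {self.name}] {line}")


class Room:
    def __init__(self, name):
        self.name = name
        self.occupants = []

    def broadcast(self, line):
        # The room's "reality" is nothing more than a shared stream of text:
        # every occupant sees the same line, with no record of its origin.
        for character in self.occupants:
            character.receive(line)


def emote(room, actor, action):
    # A normal emote: the action is attributed to the character who typed it.
    room.broadcast(f"{actor.name} {action}")


def spoof_action(room, target, action):
    # A voodoo-doll-style command: the action is attributed to *target*, who
    # typed nothing at all. The broadcast line is indistinguishable from a
    # genuine emote, so bystanders cannot tell that the victim isn't acting.
    room.broadcast(f"{target.name} {action}")


living_room = Room("living room")
bungle = Character("Mr. Bungle")
bystander = Character("legba")
living_room.occupants += [bungle, bystander]

emote(living_room, bungle, "grins.")            # shown as: Mr. Bungle grins.
spoof_action(living_room, bystander, "waves.")  # shown as: legba waves.
```

The design point is that attribution is purely textual: the server never verifies that the named character issued the command. And, as we’ll see below, only wizards could modify or destroy objects owned by another user, so ordinary users had no technical way to shut the doll down.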

Dibbell recounts reactions from the community in the aftermath of the event. Using a mailing list called *social-issues, which served as LambdaMOO’s version of a META thread, one of the victims stated:

Mostly voodoo dolls are amusing. And mostly I tend to think that restrictive measures around here cause more trouble than they prevent. But I also think that Mr. Bungle was being a vicious, vile fuckhead, and I...want his sorry ass scattered from #17 to the Cinder Pile. I'm not calling for policies, trials, or better jails. I'm not sure what I'm calling for. Virtual castration, if I could manage it. Mostly, [this type of thing] doesn't happen here. Mostly, perhaps I thought it wouldn't happen to me. Mostly, I trust people to conduct themselves with some veneer of civility. Mostly, I want his ass.

This quote demonstrates the prevailing ethos towards moderation in online communities—despite being the victim of a virtual rape, she is careful to express that she doesn’t want rules and policies put in place. Norms should be enough, even though they weren’t. After a day and a lot of discussion, she eventually requested that Mr. Bungle be toaded (i.e., the Bungle program wiped, replaced with that of a toad, and the account erased—a warty banhammer of sorts). While others agreed with this call, the toad command could not be issued by users. It had to be done by a wizard.

Wizards on LambdaMOO were the MUD’s programmers and administrators, and the only ones who could modify and control objects created by others. LambdaMOO had several wizards, but the main wizard was Haakon. At the time Mr. Bungle raped several users, LambdaMOO was not regulated by a set of rules; however, that hadn’t always been the case. Haakon, with a few others, programmed LambdaMOO and, when it was ready, promoted it on rec.games.mud, the relevant Usenet newsgroup. Pretty much from the outset, LambdaMOO faced challenges. As Haakon writes:

We had, I think, already had some discipline problems, even then. I remember a couple of assholes from PSU who came in, changed their names to things I wouldn't want to say in front of my mother, and started cursing at everyone in sight. I remember going to try to talk to them about it, meeting stiff resistance, and finally recycling them in frustration.

At the request of the community Haakon developed a document of rules, called “help manners”—which worked for a time. However, the community continued to grow and with it, the workload of the wizards who were tasked with enforcing those rules. To offset the stress, Haakon created a group called the Architecture Review Board to distribute the labour. This also worked for a while, but the community continued to grow. The toad command was created, but that didn’t solve the problem either: “We tried to block out a lot of people who we thought were causing problems and then stopped trying because it's too hard to be effective at that game.” Haakon and the other wizards were playing a losing game of whack-a-toad.

So they quit.

I believe that there is no longer a place here for wizard-mothers, guarding the nest and trying to discipline the chicks for their own good. It is time for the wizards to give up on the `mother' role and to begin relating to this society as a group of adults with independent motivations and goals.

So, as the last social decision we make for you, and whether or not you independent adults wish it, the wizards are pulling out of the discipline/manners/arbitration business; we're handing the burden and freedom of that role to the society at large. We will no longer be the right people to run to with complaints about one another's behavior, etc. The wings of this community are still wet (as anyone can tell from reading *social-issues), but I think they're strong enough to fly with.

Michael_Karanicolas

What a great question! I’m normally a lurker here, but I had to chime in to add a bit more context to SarahAGilbert’s wonderful rundown of Dibbell’s account (which I very much agree is required reading for anyone who wants to understand how the early Internet evolved).

In addition to the libertarian/free speech ethos she cites, there was also a legal incentive for early platforms to take a hands-off approach to speech: the common law of defamation holds that the more closely a party manages the content of a publication, the more legal responsibility it bears. This is why, for example, a courier or postal service wouldn’t be responsible for carrying a defamatory message the way a newspaper publisher would.

The reason all this is relevant is that Prodigy, an early online service provider, explicitly billed itself as a more heavily moderated and curated space. This became a problem when Stratton Oakmont (yes, that Stratton Oakmont) sued Prodigy over commentary on its "Money Talk" bulletin board by an anonymous user alleging financial improprieties at the firm. In 1995, the judge found that Prodigy's stronger moderation structure meant it was, indeed, responsible for the content, due to its assumption of editorial control. In other words – early internet platforms had a legal incentive to be as hands-off as possible, and to turn a blind eye to bad content posted by their users, since intervening created legal risk.

It was in response to this problematic incentive that the U.S. Congress passed the Communications Decency Act in 1996, which included the now-famous Section 230, providing platforms with immunity from liability for good-faith moderation efforts. This is why many people point to s. 230 as foundational to the modern internet, and certainly to modern content moderation.

For further reading on how the modern content moderation space evolved, I would highly recommend Kate Klonick’s groundbreaking work in the Harvard Law Review, The New Governors: The People, Rules, and Processes Governing Online Speech – though most of her story takes place in the social media age, from 2010 onwards. Still, it's a fascinating look at how early decision-makers at Facebook and YouTube thought about these challenges.

See also R. Hayes Johnson, Jr., Defamation in Cyberspace: A Court Takes a Wrong Turn on the Information Superhighway in Stratton Oakmont, Inc. v. Prodigy Services Co., 49 ARK. L. REV. 589, 623 (1996).

And, if I can cite my own work: Michael Karanicolas, Squaring the Circle Between Freedom of Expression and Platform Law, 20 PITT. J. TECH. L. & POL'Y 177 (2020).

*edited for typos