On fandom content and policing

Meta
Title: On fandom content and policing
Creator: fozmeadows
Date(s): December 5, 2018
External Links: on fandom and content policing, Archived version

On fandom content and policing is a 2018 essay by fozmeadows.

Within a few days of its posting, it had well over 2000 notes.


From the Essay

While there are legitimate arguments to be made about the unwisdom of tumblr’s soon-to-be-forbidden content choices - the whole “female-presenting nipples” thing and the apparent decision to prioritise banning tits over banning Nazis, for instance - the functional problem isn’t that they’ve decided to monitor specific types of content, but that they’ve got no sensible way of enacting their own policies. Quite clearly, you can’t entrust the process to bots: just today, I’ve seen flagged content that runs the gamut from Star Trek: TOS screenshots to paleo fish art to quilts to the entire chronic pain tag to a text post about a gay family member with AIDS - and at the same time, I’ve still been seeing porn gifs on my dash.

It’s absolute chaos, which is what happens when you try to outsource to programs the type of work that can only reliably be done by people - and even then, there’s still going to be bad or dubious or unpopular decisions made, because invariably, some things will need to be judged on a case by case basis, and people don’t always agree on where the needle should fall.

It’s a point I’ve made again and again, but I’m going to reiterate it here: it’s always easy to conjure up the most obvious, extreme and clear-cut examples of undesirable content when you’re discussing bans in theory, but in practice, you need to have a feasible means of enacting those rules with some degree of accuracy, speed and accountability that’s attainable within both budget and context, or else the whole thing becomes pointless.

On massive sites like AO3 and tumblr, the considerable expense of monitoring so much user-generated content with paid employees is, to a degree, obviated by the concept of tagging and blocking, the idea being that users can curate and control their own experience to avoid unpleasant material. There still needs to be oversight, of course - at absolute minimum, a code of conduct and a means of reporting those who violate it to a human authority in a position to enforce said code - but the thing is, given how much raw content accrues on social media and at what speed, you really need these policies to be in place, and actively enforced, from the get-go: otherwise, when you finally do start trying to moderate, you’ll have to wade through the entire site’s backlog while also trying to keep abreast of new content.

It’s also because, quite frankly, neither Facebook nor Twitter were originally thought of as entities that would one day be ubiquitous and powerful enough to be used to sway elections; and when that capability was first realised by those with enough money and power to take advantage of it, there were no internal safeguards to stop it happening, and not nearly enough external comprehension of or appreciation for the risks among those in positions of authority to impose some in time to make a difference. Because even though time spent scrolling through social media passes like reverse dog years - which is to say, two hours can frequently feel like ten minutes - its impact is such that we fall into the trap of thinking that it’s been around forever, instead of being a really recent phenomenon. Facebook launched in 2004, YouTube in 2005, Twitter in 2006, tumblr in 2007, AO3 in 2009, Instagram in 2010, Snapchat in 2011, tinder in 2012, Discord in 2015. Even Livejournal, that precursor blog-and-fandom space, only began in 1999, with the purge of strikethrough happening in 2007. Long-term, we’re still running a global beta on How To Do Social Media Without Fucking Up, because this whole internet thing is still producing new iterations of old problems that we’ve never had to deal with in this medium before - or if so, then not on this scale, within whatever specific parameters apply to each site, in conjunction with whatever else is happening that’s relevant, with whatever tools or budget we have to hand. It is messy, and I really don’t see that changing anytime soon.

Which is why, compared to what’s happening on other sites, the objections being raised about AO3 are so goddamn frustrating - because, right from the outset, it has had a clear set of rules: it’s just not one that various naysayers like. Content-wise, the whole idea of the tagging system, as stated in the user agreement, is that you enter at your own risk: you are meant to navigate your own experience using the tools the site has provided - tools it has constantly worked to upgrade as the site traffic has boomed exponentially - and there’s a reporting process in place for people who transgress otherwise. AO3 isn’t perfect - of course it isn’t - but it is coherent, which is exactly what tumblr, in enacting this weird nipple-purge, has failed to be.

Plus and also: the content on AO3 is fictional. As passionate as I am about the impact of stories on reality and vice versa, this is nonetheless a salient distinction to point out when discussing how to manage AO3 versus something like Twitter or tumblr. Different types of content require different types of moderation: the more variety in media formats and subject matter and the higher the level of complex, real-time, user-user interaction, the harder it is to manage - and, quite arguably, the more managing it requires in the first place. Whereas tumblr has reblogs, open inboxes and instant messaging, interactions on AO3 are limited to comments and that’s it: users can lock, moderate or throw their own comment threads open as they choose, and that, in turn, cuts down on how much active moderation is necessary.

tl;dr: moderating social media sites is actually a lot harder and more complicated than most people realise, and those lobbying for tighter content control in places like AO3 should look at how broad generalisations about what constitutes a Bad Post are backfiring now before claiming the whole thing is an easy fix.

Fan Comments

[actyourshoesizegirl reblogged this from klaineharmony and added]:

I’ve said similar things on here a few times and I make this argument in my work nigh on constantly. When you ask that a commercial entity accept responsibility for moderating/filtering/blocking content/speech, ask yourself first:

1) what they are technically capable of

2) what their own interests and motivations are.

Contrary to popular belief, private operators are by and large not bound by freedom of speech/expression/opinion rules. Those are fundamental rights that, in almost all cases, only bind the state.

Companies will get this wrong because they don’t have the technical capabilities to do it properly or because their interests conflict with your interests. Be careful what you ask for. You might get it. In all the wrong ways.

[quizzicalqueek]:

The problem here is that people’s complaints about AO3 weren’t about “NSFW content.” If it were just “does it mention a dick?” that would be a much simpler problem.

They were saying AO3 should ban, say, portrayals of rape, or of underage sex - but many of those people will then agree that they’re only talking about certain portrayals of these things.

Can a bot figure out whether a story is portraying a rape in a positive or negative light? PEOPLE can’t even agree on that. Can a bot identify the exact ages of all participants in a sexual scene, even if those ages aren’t mentioned anywhere because anyone familiar with that particular canon would know them? Hell, can a bot even figure out if a sex scene is consensual?? Many humans can’t even fucking figure that out in real life. And then there are fics containing rape play, where people consent to have what looks like non-consensual sex.

Foz has posted before about how difficult-to-impossible it would be for humans to consistently apply the standards people push for, even if everyone could agree on those standards. These definitely aren’t the kinds of judgments current AI is capable of making. [1]

[elizabethminkel reblogged this from porcupine-girl and added]:

I want to reblog this excellent addition, because it’s essentially what I put in my tags as well.

One of the things that’s so complicated about this conversation—and something both porcupine-girl and fozmeadows are hitting really well—is that for a lot of AO3 detractors, it’s not that people are writing stories about rape, incest, etc, but how they’re writing about them. Not “how dare you write an underage scene,” but rather “how dare you fetishize/romanticize/wank off to underage sex in your writing?” Forget about a bot figuring out if sex is consensual; what if it’s definitely not? Trying to find a line between merely depicting and romanticizing…is literally impossible, because all humans have had different experiences, will read and see things different ways. For that matter, I could write something meant to be horrifying and in no way sexy, and someone could still find it sexy. They could find the fact that my character finds it upsetting sexy, too! You can’t control the way someone reads your work, no matter how hard you try; you can’t control thought.

The big social media platforms are currently grappling with moderation at scale. Facebook is utterly incompetent, on both the automated side and the human side. YouTube has long been a shitshow on this front. Things are…clearly not going well at Tumblr! All of these efforts are inherently going to barrel right over context. Tumblr’s “is it nipple art tho” question is a great example, as is the blurring of any distinctions between erotica and porn.

AO3's tagging system—which isn’t flawless, and relies on mutual trust across thousands and thousands of people—actually sort of deliberately removes context, on a platform level. If I tag something “noncon,” of course I can add more tags like, “but sexy tho” or “NOT SEXY THO JUST UPSETTING,” but generally, I’m trying to let people know that my story contains nonconsensual sex. That’s either a big backspace for people who don’t want to read that, regardless of context, or a flag for people who then step in and determine the context for themselves. And that interpretation, of course, will vary from reader to reader. But it shifts that decision from the platform to the reader—something that’s not going to work on a big social media platform, at least not in any way that I can envision. Certainly not on a site with traditional commercial pressures. [2]

References