An encounter with Facebook’s laughable “content moderation”

Bryan Alexander 2021-05-31

Last week I posted about a dozen times to Facebook. One of those posts caught the attention of the site’s content moderation operation. I’d like to share that experience here as an example of Facebook’s fumbling, as well as the chronic difficulties of content moderation. It’s just a small event, but perhaps it is illustrative.

Around last Tuesday a Daily Beast article caught my eye. It described how some Q-Anon activists were urging like-minded folks and true believers to avoid the 2021 Pentagon UFO report story (flying tic-tacs, etc.) – not because they thought UFOs were bogus, but because they deemed it a distraction from what they considered much more important.

I read this in a sleepy and puckish mood, then shared it with sarcasm on Twitter and, alas, Facebook:

My favorite news story of the day so far:

Q-Anon influencers are trying to convince followers *not* to believe UFO stories, because they are distractions from the main event – i.e., showing that Trump won in 2020, that COVID is a hoax, and, of course, that there’s a secret cabal of rich folks who get off on a special compound found in the blood of human children.

Good morning, all.

It only elicited a few responses and I moved on with my working day.  There were plenty of other things to consider.

Later that week I logged into Facebook and was greeted by this dialog box, floating above the rest of the page:

Your post goes against our Community Standards on misinformation that could cause physical harm.

No one else can see your post.

We encourage free expression, but don’t allow false information about COVID-19 that could contribute to physical harm.

I’d heard about such things, but never experienced them as a poster. I screen-capped the box, then hit “Continue.”

(After this point I forgot to screencap. What follows is based on my recollection alone, since I can’t find any record of the exchange on Facebook.)

The next dialog box asked me to understand that I had done A Very Bad Thing, that either Facebook’s human staff or its software had found my trespass, and that I should not do it again. The following box gave me a choice: admit my grievous fault, or contest the ruling. To help guide my choice, the box added that Facebook takes feedback very seriously and would love to hear from me. I hit “Contest.”

The site paused for a few seconds, then popped up a new box. This one thanked me for my response and apologized that they were too busy to actually engage with the feedback they had just solicited.  I clicked “OK,” the box disappeared, and the normal Facebook site remained… minus my sarcastic and apparently dangerous post.

So what happened here?

My friend (and organizer of the 37th Distance Teaching and Learning conference, coming up this year!) Thomas Tobin reckoned that my use of the phrase “COVID hoax” had fit neatly into an algorithm sweeping Facebook for mis- and disinformation. That the context was clearly not pro-hoax, and was, in fact, mocking, didn’t fit so neatly into the sweep.
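To make that hypothesis concrete, here is a minimal, purely hypothetical sketch of a context-blind keyword filter. It is not Facebook’s actual system, and the phrase list and function name are invented for illustration; it simply shows how matching on the phrase alone flags my post no matter how mocking the framing:

```python
# Hypothetical illustration only: a context-blind keyword filter.
# This is not Facebook's moderation code; it just shows why bare
# phrase matching treats mockery and misinformation the same way.

FLAGGED_PHRASES = ["covid is a hoax", "covid hoax"]  # invented example list

def flags_post(text: str) -> bool:
    """Return True if the post contains any flagged phrase, ignoring context."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in FLAGGED_PHRASES)

sarcastic_post = (
    "Q-Anon influencers are trying to convince followers *not* to believe "
    "UFO stories, because they are distractions from the main event - i.e., "
    "showing that Trump won in 2020, that COVID is a hoax, and, of course, "
    "that there's a secret cabal of rich folks..."
)

print(flags_post(sarcastic_post))  # True: the mocking framing never enters the check
```

A check this crude would sweep up a great deal of genuine misinformation, and plenty of sarcasm, satire, and straight reporting along with it.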

I’m not sure whether anything else in that post tripped Facebook’s filters. Trump’s “stolen” election racket and my pointer towards one of Q’s most deranged ideas might have escaped moderation.

I don’t know what happened to the comments, likes, and other reactions that other Facebook users had attached, or whether Facebook notified them of the deletion.

A few quick reflections: this is a good example of an algorithm (or, perhaps, a person) reading badly, which is one classic problem with any kind of content moderation. It shows Facebook acting by fiat, unilaterally imposing a decision without allowing for interaction, pushback, discussion, or feedback. Note, too, the specific language of the warning – this isn’t about offending sensibilities or inflicting psychological damage, but about “caus[ing] physical harm.”

Nothing else has followed along these lines. Facebook hasn’t issued any new warnings or asked for my feedback, and otherwise nothing has changed.

Have any of you experienced the long and shaky arm of Facebook’s content control efforts?

PS: I am overdue for a post about why I still use Facebook.  That’s coming.