Banned for 24 hours from Facebook.

I just got banned for 24 hours from Facebook.

Basically what happened is this: a friend of mine posted an article about how a woman was banned from volunteering at the local school when an anonymous parent informed school officials about her OnlyFans site.

I remarked:

What always fascinates me is how a lot of slut shaming is driven by other women. But that may be because men are pigs.

24 hour suspension.

It was in the context of a joke, but also served as a societal observation about the difference between men and women when confronting someone like the woman in the Florida article. And the punchline of the joke was not “women suck” but “men are pigs.”

Remember: always punch down.


But that doesn’t matter to Facebook.

See, the fundamental problem with Facebook is this: there are just too many posts and too many comments to monitor. If we assume Facebook’s billion or so users each leave just one comment a day and one post every ten days, that’s far, far more content than any team of human moderators could physically review.
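
To put rough numbers on that (a back-of-the-envelope sketch; every figure here is an assumption, not a real Facebook statistic):

    // Back-of-the-envelope estimate: how many human moderators would it
    // take to review everything? All figures below are assumptions.
    const users = 1_000_000_000;        // roughly a billion active users
    const commentsPerUserPerDay = 1;    // one comment a day
    const postsPerUserPerDay = 1 / 10;  // one post every ten days

    const itemsPerDay = users * (commentsPerUserPerDay + postsPerUserPerDay);

    // Suppose a moderator reviews one item every 10 seconds, 8 hours a day.
    const itemsPerModeratorPerDay = (8 * 60 * 60) / 10;  // 2,880 items

    console.log(itemsPerDay);                                      // 1.1 billion items/day
    console.log(Math.ceil(itemsPerDay / itemsPerModeratorPerDay)); // ~382,000 moderators

Even if those guesses are off by a factor of two or three in either direction, the conclusion is the same: humans alone cannot do it.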

So Facebook has to resort to AI to do the monitoring.

Generally, the way AI works in a case like this is that Facebook monitors for certain key phrases and word patterns, and each key phrase and word pattern is assigned a score. Of course, because all of this is built with an artificial neural network, it’s not quite the case that (say) we score the word “slut” at -20 and “pig” at -10, and ban the poster if the sum of the scores crosses a certain threshold.

It’s more complicated than that.
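
For contrast, here is roughly what that naive keyword-scoring approach would look like if someone wrote it out by hand. This is a hypothetical sketch, purely for illustration, not anything Facebook actually runs:

    // Hypothetical keyword scoring, purely for illustration. Real
    // moderation systems are trained classifiers, not lookup tables.
    const wordScores: Record<string, number> = {
      slut: -20,
      pig: -10,
      pigs: -10,
    };

    const BAN_THRESHOLD = -25;  // made-up number

    function scoreComment(comment: string): number {
      return comment
        .toLowerCase()
        .split(/\W+/)
        .reduce((score, word) => score + (wordScores[word] ?? 0), 0);
    }

    // "slut" (-20) plus "pigs" (-10) crosses the made-up -25 threshold.
    const shouldBan = scoreComment("...slut shaming... men are pigs") <= BAN_THRESHOLD;

In the real system there is no such table to inspect; whatever plays the role of those scores is smeared across millions of learned connection weights.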

But it’s also a black box. Meaning that if you were to ask (say) a developer at Facebook how the system works, beyond “well, we trained this network to process sentences,” they couldn’t tell you. You can’t really explain why a trained neural network made a particular decision; you can only observe that it works.

Usually.

So for me to complain about Facebook’s 24-hour suspension would be like complaining about a hurricane or an earthquake: sure, you got affected. But it was a computer program that caused the problem, one that necessarily had to be built because of the size and scale of Facebook versus the relatively small size of Facebook’s workforce.

One that was necessitated in part by pressure on Facebook to “moderate” itself, but balanced against Facebook’s own need to drive “engagement,” which, ironically enough, favors more problematic content. After all, what better way is there to get people to engage with content than to make them angry and drive their anxiety through the roof?


But there’s something more interesting at play here.

Because the AI that Facebook uses to screen content cannot be debugged or explained, only retrained (at tremendous cost to Facebook staff in terms of time and effort), it is the perfect avenue for bias to creep into the system: to bias Facebook against certain forms of speech and in favor of certain ideas or concepts.

And no-one can be the wiser, really.

We have seen this elsewhere, by the way, and in far more problematic settings than a social media network. Like software that provides sentencing guidelines, which, it turns out, is biased against blacks. Not because there is a piece of code that says:

    if (race === 'black') {
        sentence_recommendation += 5;  // years
    }

But because, deep in the bowels of the neural network, associations have been made by training the software on existing case fact patterns, and those patterns tease out race, and tease out the notion that certain races “deserve” more time in jail.

And it cannot be fixed.

It cannot be fixed because the patterns on which we are training the software are themselves biased against blacks.
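
Here is a toy sketch of how that happens, with entirely made-up data: even with no “race” column at all, a model trained on historically biased sentences can rediscover race through a correlated feature (a zip code, say) and hand the bias right back.

    // Toy illustration with made-up data: the training set has no "race"
    // column, but zip code correlates with race, and the historical
    // sentences were biased. A "model" that simply predicts the average
    // historical sentence for each zip code inherits that bias.
    type Case = { zip: string; priors: number; sentenceYears: number };

    const historicalCases: Case[] = [
      { zip: "30310", priors: 1, sentenceYears: 7 },  // historically over-sentenced area
      { zip: "30310", priors: 1, sentenceYears: 8 },
      { zip: "30327", priors: 1, sentenceYears: 3 },  // historically under-sentenced area
      { zip: "30327", priors: 1, sentenceYears: 2 },
    ];

    // "Training": compute the mean historical sentence per zip code.
    const model = new Map<string, { total: number; n: number }>();
    for (const c of historicalCases) {
      const s = model.get(c.zip) ?? { total: 0, n: 0 };
      model.set(c.zip, { total: s.total + c.sentenceYears, n: s.n + 1 });
    }

    function recommend(zip: string): number {
      const s = model.get(zip);
      return s ? s.total / s.n : 0;
    }

    // Two defendants with identical records get different recommendations,
    // purely because of where they live.
    console.log(recommend("30310"));  // 7.5 years
    console.log(recommend("30327"));  // 2.5 years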

And, so the saying goes: “Garbage In, Garbage Out.”

Of course, it would be nice to try to train the software on hypothetical situations, but that will always fall short, because there are real-world circumstances that should mitigate a sentence. Unfortunately, one of the major “mitigations” that has been used in practice, in the past, is race: if you’re black, you supposedly deserve more jail time than if you’re white.

And those “mitigations” get encoded into the black box of connection weights and signal feedbacks that is a neural network.


That Facebook’s algorithms are a black box allows Facebook to say honestly, with a straight face, that they are entirely value-neutral: it’s a computer making the decision to ban people based on their radical ideology.

Or bad taste in humor.

But on the other hand, it allows Facebook management to quietly signal to others in the often liberal-libertarian-leaning social circles that make up the bulk of Silicon Valley that they are leading the charge against “hateful” conservative ideology.

And as we know, Facebook is now pretty much driving the conversation, and defining what is, and is not, acceptable to say.