You may remember the late-18th-century deed I wrote about a few days ago. It seemed innocent enough to me, but Facebook declared it objectionable because it “contained nudity or sexual activity”. I can only assume that these decisions are made by a thoroughly defective software robot. That would be bad enough, but on appeal the decision was confirmed, and that was supposed to be the result of examination by a person. It’s almost beyond belief! It’s the censorship of stupidity. I now have an account warning. If Facebook interprets a hand-written deed as something pornographic, what in the world will they make of a photo of a pen?
It seems the appeals are examined by exactly the same bot that examines the alleged transgressions. And of course, the machine “thinks” it is always right, simply because… it is a machine!…
I think you must be right!
Perhaps the quill used to write the document in question had been stripped of its barbs, so rendering it naked? On the other hand, I have had comments rejected on one site for the phrase, “…the pen is…”, the machine reading this as two words instead of three.
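The false positive described above is the classic behaviour of a filter that matches banned strings after stripping out spaces and punctuation, so that word boundaries disappear. As a purely hypothetical sketch (the site's actual filter is unknown), here is how such a naive check misfires, and how a word-boundary check avoids it:

```python
# Hypothetical sketch of a naive profanity filter that ignores word
# boundaries, versus one that respects them. The banned-word list and
# function names are illustrative assumptions, not any real site's code.
import re

BANNED = ["penis"]

def naive_filter(text: str) -> bool:
    """Flag text if a banned word appears anywhere once spaces and
    punctuation are removed -- so "the pen is" becomes "thepenis"."""
    squashed = "".join(ch for ch in text.lower() if ch.isalpha())
    return any(word in squashed for word in BANNED)

def boundary_filter(text: str) -> bool:
    """Flag only whole-word matches, preserving word boundaries."""
    return any(re.search(rf"\b{re.escape(word)}\b", text.lower())
               for word in BANNED)

comment = "...the pen is mightier than the sword..."
print(naive_filter(comment))     # True: a false positive
print(boundary_filter(comment))  # False: the innocent phrase passes
```

The squashing step is what turns three harmless words into one objectionable one; any moderation bot built this way will reject perfectly innocent prose.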
It must be a filthy nekked quill!
I had a similar experience with FB, except I had changed my background to a Calvin and Hobbes cartoon (Calvin is a young boy and Hobbes is his stuffed tiger). They claimed it contained nudity…. FB is the enemy.
It surely is. I won’t let it go just yet. I think my FB readers would have enjoyed sight of the document.
I work in Electrical Engineering / Computer Science. It is scary knowing that ignorant people in senior positions keep pushing these Artificial Intelligence (AI – a misnomer if there ever was one) / Machine Learning (ML) “Agents” into positions with unchecked life- and society-altering power. The Agents are improperly trained and there are no safety checks (“expensive” humans) in place to apply common sense. As a result the Agents make bad decisions which reinforce more bad decisions; each one teaches itself to act even worse over time. The really big problem is that once these Agents learn bad behavior, the very nature of how AI/ML works today makes it almost impossible for them to unlearn the unwanted behavior. Often the only reasonable recourse is to shut the Agent down (murder it) and start all over again from the beginning, a very time-consuming (hence expensive) process. The thought of allowing this technology, defective as it is today, to conduct wars, run power grids, diagnose and attempt to treat illnesses, etcetera makes me very uncomfortable.
Thank you very much for that, David. It’s very illuminating and very worrying!