Facebook accidentally blocks genuine COVID-19 news
Fake news, bogus miracle cures: Facebook has been dealing with a lot, and COVID-19 isn’t making it any easier.
Like many other companies, Facebook is trying to keep its employees safe by letting them work remotely to avoid infection.
But when humans step out of the content moderation loop, automated systems are left to run the show. Facebook denies that a recent content moderation glitch had anything to do with workforce issues, but it does say that automated systems were to blame, having been overzealous in stamping out misinformation.
On Tuesday, Guy Rosen, Facebook’s VP of Integrity, confirmed user complaints that valid posts about the pandemic (among other topics) had been mistakenly blocked by automated systems:
We’ve restored all the posts that were incorrectly removed, which included posts on all topics – not just those rel… twitter.com/i/web/status/1…
— Guy Rosen (@guyro) March 18, 2020
On Wednesday, a Facebook spokesperson confirmed that all affected posts have now been restored. While users may still see notifications about content having been removed when they log in, they should also see that posts that adhere to community standards are back on the platform, the spokesperson said.
Facebook says it routinely uses automated systems to help enforce its policies against spam. The spokesperson didn’t say what, exactly, caused the automated systems to go haywire, nor how Facebook fixed the problem.
The spokesperson did, however, deny that the issue was related to any changes in Facebook’s content moderation workforce.
Regardless of whether the blame lies with humans or scripts, The Register reports that it took just one day for the industry’s joint COVID-19 moderation effort to flub it. On Monday, Facebook put out an industry statement saying that it was joining Google, LinkedIn, Microsoft, Reddit, Twitter, and YouTube to scrub misinformation from posts about COVID-19. (Speaking of which, just for the record: health authorities say that neither drinking bleach nor gargling with salt water will cure COVID-19.)
We are working closely together on COVID-19 response efforts. We’re helping millions of people stay connected while also jointly combating fraud and misinformation about the virus, elevating authoritative content on our platforms, and sharing critical updates in coordination with government healthcare agencies around the world. We invite other companies to join us as we work to keep our communities healthy and safe.
Within one day, its automated systems were, in fact, squashing authoritative updates. From what The Register can discern, the systems-run-amok situation was first spotted by Mike Godwin, the US-based lawyer and activist who coined Godwin’s Law: “As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1.”
On Tuesday, Godwin said that he’d tried to post a non-junky, highly cited story about a Seattle whiz-kid who built a site that tracks the pandemic’s spread, updated minute by minute.
When Godwin tried to share the story on Facebook, he got face-palmed:
Facebook decided that my posting of this Times of Israel article is spam. (It’s not spam.) pic.twitter.com/3NqUbiwmyi
— Mike Godwin (@sfmnemonic) March 17, 2020
Other users reported similar problems. One of them carries quite a bit of Facebook cred: Alex Stamos, formerly Facebook’s chief security officer and now an information warfare researcher at Stanford University, weighed in:
It looks like an anti-spam rule at FB is going haywire. Facebook sent home content moderators yesterday, who generally can’t WFH due to privacy commitments the company has made. We might be seeing the start of the ML going nuts with less human oversight. https://t.co/XCSz405wtR
— Alex Stamos (@alexstamos) March 17, 2020
In a post about keeping its workers and its platform safe, Facebook said it has asked all employees who can work from home to do so. That’s not an option for every task, however, the company specified:
For both our full-time employees and contract workforce there is some work that cannot be done from home due to safety, privacy and legal reasons.
According to Stamos, content moderation is one of the tasks that can’t be done from home because of Facebook’s privacy commitments. So which is it: were content moderators sent home, as Stamos suggested, leaving the machines in charge? How does that jibe with Facebook’s statement that staffing had nothing to do with the glitch?
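Stamos’s theory is easy to picture. Here’s a minimal, purely hypothetical sketch (in Python; the thresholds, function name, and fail-closed behavior are our illustrative assumptions, not anything Facebook has described) of how a spam filter that normally routes borderline calls to human reviewers could start removing legitimate posts once those reviewers disappear:

```python
# Illustrative sketch only -- NOT Facebook's actual system. A toy spam
# classifier whose borderline calls are normally routed to human reviewers.
# One plausible failure mode when reviewers are unavailable: the system
# "fails closed" and removes anything above a low confidence score.

REMOVE_THRESHOLD = 0.9   # auto-remove only when the model is very confident
REVIEW_THRESHOLD = 0.5   # borderline scores go to a human review queue

def moderate(spam_score: float, humans_available: bool) -> str:
    """Return the action taken for a post with the given spam score."""
    if spam_score >= REMOVE_THRESHOLD:
        return "remove"
    if spam_score >= REVIEW_THRESHOLD:
        # Normally a human decides the borderline cases; without reviewers,
        # failing closed removes legitimate posts (e.g. widely shared news).
        return "human review" if humans_available else "remove"
    return "keep"

# A legitimate news link that merely *looks* spammy (a heavily shared URL):
print(moderate(0.6, humans_available=True))   # -> "human review"
print(moderate(0.6, humans_available=False))  # -> "remove" (false positive)
```

Whether anything like that fail-closed behavior happened inside Facebook is conjecture; as noted above, the company hasn’t said what went wrong.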
Either way, this crisis is exposing kinks that need to be worked out in the blend of human and automated content moderation. Facebook’s workers have a lot on their plate: keeping users connected with the family, friends and colleagues they can no longer see face to face, and keeping us all properly informed, as opposed to drinking bleach or wasting our time on other snake-oil posts.
The last thing we need is to be kept from reading about things that whiz-kids are cooking up. Let’s hope that Facebook gets this figured out.