Facebook is rolling out a range of updates intended to curb the spread of fabricated and harmful information on its site, intensifying its fight against fake news and hate speech as it continues to deal with mounting outside pressure.
Human rights groups and lawmakers have criticized the social media giant for its inadvertent role in spreading extremism and fake news on Facebook and Instagram. The new updates will reduce the visibility of links suspected to be clickbait or misleading, and Facebook is also strengthening its fact-checking program with outside expert sources (e.g., The Associated Press) to assess videos and other material posted on the platform.
Meanwhile, in a separate Senate subcommittee hearing, Facebook representatives faced questions about claims that social media firms are biased against conservatives. The simultaneous hearings illustrate the fine line that Facebook and other social media companies (e.g., YouTube, Twitter, Instagram) must walk as they work to root out problematic and damaging material while avoiding moves that could be viewed as censorship.
Facebook CEO Mark Zuckerberg’s latest vision for the company, which emphasizes private, encrypted messaging, is expected to complicate that effort, particularly when it comes to removing problematic material. In a meeting with reporters at company headquarters in Menlo Park, California, Guy Rosen, vice president for integrity, acknowledged that challenge. He noted that striking a balance between public safety and protecting people’s privacy is “something societies have been grappling with for centuries.”
He insisted that the company is intent on making sure it is doing its best “as Facebook evolves toward private communications,” though he declined to go into specifics. The company has already assigned teams to monitor posts for material that violates its policies against content that incites violence, is sexually explicit, or constitutes hate speech.
According to Karen Courington, who works in product-support operations at Facebook, half of the 30,000 workers on the social media giant’s “safety and security” teams focus on content review. She said the content moderators are a mix of Facebook employees and contractors, but declined to give a percentage breakdown. The company has drawn criticism over the conditions those reviewers work in: they face a constant barrage of posts, videos, and photos, and must decide in minutes, if not seconds, whether each one should be taken down or left up.
According to Courington, these reviewers undergo 80 hours of training before they start and receive “additional support,” including psychological resources. She also said their wages exceed the “industry standard” for this type of work. Beyond material that clearly violates company policies, Facebook staff must also handle content that falls into a grayer area: material that doesn’t break the rules but that most viewers would find offensive, or that is simply false. Facebook and its peers have long tried to avoid acting as content editors or “arbiters of truth,” so in those gray areas they often leave the material up while making it less visible.
Still, if Facebook determines that information is false, why doesn’t the company remove it? That question was raised by Paul Barrett, deputy director of the New York University Stern Center for Business and Human Rights. Filippo Menczer, a professor of informatics and computer science at Indiana University, said Facebook faces a tough challenge, but added that he is glad the company is making an effort to consult journalists, researchers, and other experts on fact-checking. Menczer has recently spoken with the company a couple of times about misinformation.