In light of the misinformation and explosively popular falsehoods of the last few years, Facebook has taken steps to create consequences for spreading objectionable content. This month the US Patent and Trademark Office published Facebook’s application for a detection tool on its platform. As stated in the application, its primary purpose is to improve detection of pornography, hate speech, and bullying. Last month, however, Zuckerberg emphasized the need for “better technical systems to detect what people will flag as false before they do it themselves.”
The system described is largely consistent with Facebook’s current protocols for handling objectionable content, but it adds layers of machine learning to improve efficiency. The move comes at a time when Facebook is under increasing public pressure to reduce the spread of propaganda through its network. Although the company has expressed a commitment to making improvements, it remains cautious about the idea that machine learning can separate fact from fiction; given how receptive some audiences are to the more questionable content, clear standards may be the first necessary step toward implementation.
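The patent application does not disclose how such a system would actually be built, but the framing Zuckerberg describes, predicting what users will flag before they flag it, maps naturally onto supervised classification. The following is a minimal illustrative sketch, not Facebook’s method: it assumes a binary classifier trained on posts labeled by whether users previously reported them, with all data and names hypothetical.

```python
# Illustrative sketch only: the patent does not describe Facebook's actual
# models. This shows the general idea of a supervised classifier trained on
# previously flagged posts to score new posts before users report them.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: posts paired with whether users flagged them.
posts = [
    "Miracle cure doctors don't want you to know about",
    "City council approves new bike lanes downtown",
    "Shocking: celebrity secretly replaced by a clone",
    "Local library extends weekend opening hours",
]
was_flagged = [1, 0, 1, 0]

# TF-IDF features feeding a logistic regression: one plausible baseline
# for "detect what people will flag" framed as binary classification.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, was_flagged)

# Score an unseen post; items above some review threshold could be queued
# for human moderators rather than removed automatically.
score = model.predict_proba(["Scientists hate this one weird trick"])[0][1]
print(f"flag probability: {score:.2f}")
```

In practice, routing high-scoring items to human reviewers rather than acting on them automatically is one way a system like this could sidestep the fact-versus-fiction judgment the company is cautious about.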
The incentives for this move stem not only from a sense of responsibility but possibly also from financial considerations. According to a survey conducted by the Pew Research Center, 62% of U.S. adults get news from social media, yet Facebook presents itself as a neutral platform. Making editorial judgments can easily alienate users across a wide spectrum of ideologies. While bullying, hate speech, and pornography are comparatively easy to identify, fake news poses both technological and ethical barriers to adequate monitoring.