Stung by charges that it allowed fake news stories to proliferate during the 2016 election cycle, Facebook on Thursday began broadly rolling out a feature meant to regain the trust of its members. The tool surrounds questionable stories with related news stories offering different perspectives — a strategy intended to help readers discern where the truth lies.
Facebook this spring began testing the Related Articles feature, offering different takes on stories fact checkers have disputed as false, including coverage of the topics by legitimate news organizations.
Facebook also has updated its machine learning technology to flag potential hoax stories and send them to third-party fact checkers, according to News Feed product manager Sara Su. Once a story has been reviewed, related articles may appear beneath the original post.
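Facebook has not published the internals of that flagging system. As a rough, hypothetical sketch only, a triage step of this kind could pair a simple text classifier with a review threshold; the model, training data and threshold below are invented for illustration and are not Facebook's actual pipeline.

```python
# Simplified, hypothetical illustration of a hoax-flagging triage step (not Facebook's system):
# a text classifier scores incoming posts, and anything above a review threshold
# is routed to a queue for third-party fact checkers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data standing in for a labeled corpus of hoax vs. legitimate headlines.
headlines = [
    "Miracle cure doctors don't want you to know about",
    "Celebrity secretly replaced by body double, insiders claim",
    "City council approves new transit budget",
    "Researchers publish peer-reviewed study on vaccine safety",
]
labels = [1, 1, 0, 0]  # 1 = likely hoax, 0 = legitimate

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

REVIEW_THRESHOLD = 0.6  # hypothetical cutoff for sending a post to fact checkers

def triage(post_text: str) -> str:
    """Return a routing decision for a single post."""
    hoax_probability = model.predict_proba([post_text])[0][1]
    if hoax_probability >= REVIEW_THRESHOLD:
        return "send to third-party fact checkers"
    return "publish normally"

print(triage("Shocking miracle cure doctors don't want you to know"))
```

In Facebook's described approach, flagged stories are not removed automatically; they go to outside fact-checking organizations, and the Related Articles module is what surfaces alternative coverage to readers.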
Election Fallout
Facebook has been under fire since the 2016 election cycle for allowing fake news to proliferate on the network, and CEO Mark Zuckerberg has taken criticism for failing to understand his company’s role as a de facto provider of news and information to readers.
The company in January brought in former CNN and NBC broadcaster Campbell Brown to lead its news partnerships group, which acts as a liaison with more traditional media organizations and helps Facebook better adhere to journalistic standards for the content allowed in users’ News Feeds.
“Ad placements next to fake news are increasingly being recognized as a problem by both the ad and publishing communities,” observed Rick Edmonds, media business analyst at Poynter.
“I think and certainly hope that there is a growing flight to quality, which will reward the providers of serious journalism like The New York Times, The Atlantic and Vox,” he told TechNewsWorld.
Facebook’s relationship with traditional media has been complicated by the fact that many legitimate news sites have found their articles and videos posted on the network for free, with no way of recouping revenue and no way of tracking how many legitimate subscribers actually are viewing the content.
As a result, Facebook last month announced plans to test a paywall that would charge users a fee to view content from certain sources, following a grace period of 10 free articles per month.
“Fake news is a really tough problem to crack,” noted Mark Nunnikhoven, vice president of cloud research at Trend Micro.
“The challenge isn’t actually filtering out posts from a news feed or search index — it’s how to identify what is fake about a story,” he told TechNewsWorld.
Underground services in China, Russia, the Middle East and other countries propagate fake news, using everything from underground crowdsourcing services to voter manipulation platforms and services like BeSoEasy, which sells automation bots, according to a recent Trend Micro white paper.
Facebook’s approach is admirable, Nunnikhoven said, because it avoids pointing fingers at particular stories but gives readers enough information to make their own value judgments.
However, “the deeper challenge is determining whether something has gained the attention of the native algorithms from organic interest or a coordinated push,” he pointed out. “Even then, a coordinated push could be a marketing campaign and not something malicious.”
Search Algorithm Issues
The fake news phenomenon clearly is not limited to Facebook. Google also has been a target of criticism for allowing content farms and other automation services to drive certain stories up the search rankings. The result is that when users search for a particular subject, misleading articles are displayed right alongside legitimate stories.
Google has taken some steps to improve its algorithms, using new search-quality rater guidelines and direct feedback tools, Ben Gomes, vice president of engineering at Google, noted in an online post this spring.
Google also expanded its use of Jigsaw’s fact-check tool into its global search and Google News sites to display legitimate articles and tag them to show that they have been properly vetted, Justin Kosslyn, product manager at Jigsaw, and research scientist Cong Yu wrote in an online post.
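Those fact-check tags rely on ClaimReview structured data that publishers embed in their fact-check articles; Google reads the markup and displays the label alongside the result. The sketch below shows what minimal ClaimReview markup looks like, generated in Python for readability, with all URLs, names and ratings invented for illustration.

```python
# Minimal sketch of ClaimReview structured data, the schema.org markup behind
# fact-check labels in Google Search and News. All values are hypothetical;
# real markup is embedded as JSON-LD in the publisher's page.
import json

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example-factchecker.org/reviews/example-claim",  # hypothetical
    "claimReviewed": "Example claim circulating on social media",
    "datePublished": "2017-08-03",
    "author": {"@type": "Organization", "name": "Example Fact Checker"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",
    },
    "itemReviewed": {
        "@type": "CreativeWork",
        "author": {"@type": "Organization", "name": "Example Source"},
    },
}

# The JSON-LD block a publisher would place in a <script type="application/ld+json"> tag.
print(json.dumps(claim_review, indent=2))
```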
Google also commissioned a study, conducted jointly by Michigan State University, Oxford University and the University of Ottawa, which found that fake news and biased search algorithms did not have a major impact on public opinion, based on a survey of 14,000 Internet users in the U.S. and six European nations.