Facebook last week released six videos to educate people about artificial intelligence.
AI will bring major changes to society and will be the backbone of many of the most innovative apps and services of the future, but it remains mysterious, noted Yann LeCun, Facebook’s director of AI research, and Joaquin Candela, the company’s director of applied machine learning, in an online post.
The videos are “simple and short introductions” that will “help everyone understand how this complex field of computer science works,” they said.
The announcement immediately sparked speculation that Facebook would use its AI technology to tackle the fake news items posted on its pages.
However, tackling fake news is more a question of ethics than technology, LeCun told reporters, as it raises questions about the trade-off between filtering and censorship on one hand, and free expression and democracy on the other.
Mysteries of the Mind
Understanding the videos, which LeCun created, does require a certain amount of grounding in science.
Still, they “set a baseline — help explain what an AI can do in the near term,” said Rob Enderle, principal analyst at the Enderle Group.
Watching the videos “likely makes the topic less frightening,” he told TechNewsWorld.
Facebook “is trying to reset the perception of AI,” and from that standpoint, they’re dead-on, observed Jim McGregor, a principal analyst at Tirias Research.
“This is what the market needs,” he told TechNewsWorld, to counter “unrealistic Hollywood scenarios and sensationalism in the general press.”
Facebook’s long-term goal is to understand intelligence and build intelligent machines, according to LeCun and Candela.
Trying to understand intelligence and how to reproduce it in machines will “help us not just build intelligent machines,” they suggested, “but develop keener insight into how the mysterious human mind and brain work.”
Corralling Fake News
AI might help clamp down on the proliferation of fake news stories, which circulated in unprecedented numbers during the recent presidential election, and may have influenced voters.
Money is one motive for creating fake news stories. One fake-news writer told The Washington Post that he believed his made-up stories helped President-elect Donald Trump win votes.
In separate investigations, both BuzzFeed and The Guardian earlier this year found upwards of 100 pro-Trump sites in Macedonia — many apparently run by teenagers looking to make a quick buck.
Fake news also comes from partisan news sites slanted strongly toward one candidate.
Thirty-eight percent of stories generated by right-wing, pro-Trump sites were false or contained falsehoods blended with facts, according to a study BuzzFeed published this fall.
First Amendment Issues
AI “can help filter abusive or fraudulent content, improve security, analyze user interests, perform image recognition and classification,” Tirias’ McGregor pointed out.
“There are so many facets that Facebook can benefit from,” he said.
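To make the filtering idea concrete, here is a minimal, purely illustrative sketch (not Facebook’s system) of how a simple text classifier could score posts and route suspect ones to human reviewers. The training examples, threshold and flag_for_review helper are hypothetical placeholders; a production system would rely on far larger labeled corpora and richer signals than text alone.

# Illustrative sketch only -- not Facebook's actual system.
# A toy classifier that scores a post's text and flags it for human review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data; real systems train on large labeled corpora.
posts = [
    "Pope endorses candidate in shock announcement",      # fabricated
    "Senate passes budget bill after late-night vote",    # legitimate
    "Miracle cure doctors don't want you to know about",  # fabricated
    "Local council approves new school funding plan",     # legitimate
]
labels = [1, 0, 1, 0]  # 1 = likely fake, 0 = likely genuine

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

def flag_for_review(text, threshold=0.7):
    """Return True if the post should be routed to human reviewers."""
    score = model.predict_proba([text])[0][1]  # probability of "likely fake"
    return score >= threshold

print(flag_for_review("Shock announcement: miracle cure endorsed by doctors"))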
However, AI would need to have “absolute control” over all incoming posts and filters to be effective, suggested Enderle. “It really is the only tool we have that can handle the kind of scale Facebook needs.”
That might raise a storm of protests over the issue of First Amendment rights.
The stakes are “extremely high” for Facebook, Enderle said, as not addressing the fake news issue in a timely fashion, or addressing it badly, “could lead to everything from excessive litigation to company failure.”
The question at the heart of the matter, suggested McGregor, is “how do you get something right that has not been done before? The answer is, you try and learn from your mistakes.”
However, AIs “will never be 100 percent perfect,” he said, “because, just like us, they are always learning.”