Report: YouTube Too Fixated on Engagement to Curb Toxic Content

YouTube executives have been unable or unwilling to rein in toxic content because it could reduce engagement on their platform, Bloomberg reported Tuesday.

In a 3,000-word article, Mark Bergen wrote that the US$16 billion company has spent years chasing one business goal: engagement.

“In recent years, scores of people inside YouTube and Google, its owner, raised concerns about the mass of false, incendiary, and toxic content that the world’s largest video site surfaced and spread,” he noted.

Despite those concerns, YouTube’s corporate leadership is “unable or unwilling to act on these internal alarms for fear of throttling engagement,” Bergen wrote.

The problem with the social internet, in my opinion, is metrics. They’re almost always a false indicator — rewarding shock rather than quality — but because businesses are built on KPIs, they will always manage to whatever numbers they’re given, even bad ones.

Tackling Tough Content Issues

YouTube did not respond to a request for comment for this story, but in a statement provided to Bloomberg, the company maintained that its primary focus has been tackling tough content challenges.

Among the measures it has taken to address the toxic content challenge:

  • Updating its recommendation system to prevent the spread of harmful misinformation by adding a measure of “social responsibility” to its algorithm, which factors in how often people share a video and click its “like” and “dislike” buttons (a rough sketch of how such a blended score might work follows this list);
  • Improving the news experience by adding links to Google News results inside of YouTube search and featuring “authoritative” sources from established media outlets in its news sections;
  • Increasing the number of people focused on content issues across Google to 10,000;
  • Investing in machine learning to be able to more quickly find and remove content that violates the platform’s policies;
  • Continually reviewing and updating its policies (it made more than 30 policy updates in 2018 alone); and
  • Removing over 8.8 million channels for violating its guidelines.
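
YouTube has not published how its “social responsibility” signal is actually computed, so the following is only a minimal, hypothetical sketch of the general idea the first bullet describes: blending raw engagement with a responsibility penalty. All of the weights, field names, and the `recommendation_score` function are invented for illustration; none of them come from YouTube.

```python
# Hypothetical illustration of blending engagement signals with a
# "social responsibility" penalty. Every weight and field name here
# is invented for this sketch; YouTube's real system is not public.

from dataclasses import dataclass


@dataclass
class VideoSignals:
    predicted_watch_minutes: float   # engagement proxy
    shares: int
    likes: int
    dislikes: int
    misinfo_flag_rate: float         # 0.0-1.0, e.g. from classifiers/reviewers


def recommendation_score(v: VideoSignals,
                         w_engagement: float = 1.0,
                         w_responsibility: float = 2.0) -> float:
    """Return an engagement term minus a responsibility penalty."""
    total_votes = v.likes + v.dislikes
    approval = v.likes / total_votes if total_votes else 0.5
    engagement = v.predicted_watch_minutes + 0.1 * v.shares
    # Penalize videos that viewers dislike or that are frequently
    # flagged as misinformation, even if they drive watch time.
    penalty = (1.0 - approval) + v.misinfo_flag_rate
    return w_engagement * engagement - w_responsibility * penalty
```

The design point is the second weight: with a large enough responsibility penalty, a video that racks up watch time through outrage can still rank below a less-watched but better-received one.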

‘Bad Virality’

Corporate culture began to change at YouTube in 2012, Bergen explained, when executives like Robert Kyncl, formerly of Netflix, and Salar Kamangar, a Google veteran, were brought in to make the company profitable.

“In 2012,” Bergen wrote, “YouTube concluded that the more people watched, the more ads it could run — and that recommending videos, alongside a clip or after one was finished, was the best way to keep eyes on the site.”

At that time, too, Kamangar set an ambitious goal for the company: one billion hours of viewing a day. So the company rewrote its recommendation engine with that goal in mind and reached it in 2016.
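
Bergen’s article doesn’t publish the engine’s design, but the optimization target the billion-hour goal implies — rank candidates purely by predicted watch time — can be sketched simply. The function and field names below are hypothetical:

```python
# Hypothetical sketch of watch-time-first ranking, the optimization
# target implied by the billion-hours goal. Not YouTube's actual code.

def rank_by_watch_time(candidates: list[dict]) -> list[dict]:
    """Order candidate videos by predicted watch minutes, descending.

    Each candidate is assumed to carry a 'predicted_watch_minutes'
    value produced by some upstream prediction model.
    """
    return sorted(candidates,
                  key=lambda c: c["predicted_watch_minutes"],
                  reverse=True)
```

An objective this narrow is indifferent to whether a video is accurate or inflammatory; if outrage predicts watch time, outrage rises to the top — the dynamic Bergen describes next.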

Virality — a video’s ability to capture thousands, if not millions, of views — was key to reaching the billion-hour goal.

“YouTube doesn’t give an exact recipe for virality. But in the race to one billion hours, a formula emerged: Outrage equals attention,” Bergen wrote.

“People inside YouTube knew about this dynamic,” he explained. “Over the years, there were many tortured debates about what to do with troublesome videos — those that don’t violate its content policies and so remain on the site. Some software engineers have nicknamed the problem ‘bad virality.’”

Borderline Content

The problem YouTube now faces is how to create an effective mechanism to handle problematic content, observed Cayce Myers, an assistant professor in the communications department at Virginia Tech in Blacksburg, Va.

“Much of this content doesn’t violate YouTube’s social community standards,” he told TechNewsWorld. “This is content that is borderline.”

Any mechanism that removes content from a platform creates risks. “You run the risk of developing a reputation of privileging some content over others as to what’s removed and what’s not,” Myers explained.

On the other hand, if something isn’t done about toxic content, there’s the risk that government regulators will enter the picture — something no industry wants.

“Any time you have government intervention, you’re going to have to have some mechanism for compliance,” Myers said.

“That creates an expense, an added layer of management, an added layer of employees, and it’s going to complicate how your business model runs,” he continued. “It may also affect the ease with which content is populated on a site. Regulatory oversight may take away the kind of ease and quickness that exists today.”

From Lake to Cesspit

It’s doubtful that government regulation of YouTube would be beneficial, observed Charles King, principal analyst at Pund-IT, a technology advisory firm in Hayward, California.

“Though Facebook and YouTube and Google execs have claimed for years to be doing all they can to curb toxic content, the results are pretty dismal,” he told TechNewsWorld.

“The video shared by the suspect in the Christchurch, New Zealand, mosque massacre is just their latest failure,” King remarked. “That said, it’s difficult to envision how government regulation could improve the situation.”

Companies ought to be concerned about toxic content because it can damage their brands and financial performance, he pointed out.

“You can see evidence of that in various consumer boycotts of advertisers that support talk show and other TV programs whose hosts or guests have gone beyond the pale. No company wants to be deeply associated with toxic content,” King added.

“Failing to control or contain toxic content can poison a platform or brand among users and consumers. That can directly impact a company’s bottom line, as we’ve seen happening when advertisers abandon controversial programs,” he explained. “In worst-case circumstances, the platform itself may become toxic. With inattention and pollution, a popular mountain lake can quickly transform into a cesspit that people avoid. Commercial companies are no different.”

Trump Card

Meanwhile, YouTube’s efforts to manage toxic content may get more complicated due to a federal court ruling in New York state. That decision stems from President Donald J. Trump’s blocking of some Twitter followers critical of his job performance.

“We hold that portions of the @realDonaldTrump account — the ‘interactive space’ where Twitter users may directly engage with the content of the President’s tweets — are properly analyzed under the ‘public forum’ doctrines set forth by the Supreme Court, that such space is a designated public forum, and that the blocking of the plaintiffs based on their political speech constitutes viewpoint discrimination that violates the First Amendment,” wrote U.S. District Court Judge Naomi Reice Buchwald.

That “public forum” analysis has social media executives wondering about the legal status of their platforms.

“Everybody is concerned that rather than being a private club where everybody can have their own dress code, they’re more like a public forum or town square where they’re subject to the First Amendment,” said Karen North, director of the Annenberg Online Communities program at the University of Southern California in Los Angeles.

“If there’s a question of freedom of speech, then everyone is wondering where they can draw the line between what should be available and what should be blocked,” she told TechNewsWorld. “Some pretty vile and toxic speech is legal, and in the town square, that speech is protected.”

John P. Mello Jr.

John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News.
