How Google fights disinformation on YouTube

First published in the Sunday Times & IT, November 10, 2019

The words “misinformation,” “disinformation,” and “fake news” mean different things to different people. Google defines disinformation as deliberate efforts to deceive and mislead using the speed, scale, and technologies of the open web. What is disturbing is that several Marcos videos proliferate on YouTube containing fallacies that have already been debunked. One example is Marcos’ supposed order “not to fire” at EDSA protesters, when in fact there was an order to decimate mutineers on EDSA.

I got a copy of How Google Fights Disinformation from Yves Gonzales, Head of Government Affairs and Public Policy at Google Philippines, to understand how disinformation is being handled. Facebook, by comparison, looks at coordinated inauthentic behavior of accounts.

Subscribers or account names are not an issue on the YouTube platform as long as their content does not violate community guidelines. My issue is when video content is misleading or contains half-truths. In a Marcos propaganda video, a text description reads “After 25 years, it is now obvious that Cory administration is more violent with more journalist dead in her 6 years’ term compared to Marcos’ 20 years.” Where are the facts to support this?

Google’s approach to tackling disinformation in its products and services is based on a framework of three strategies: make quality count in their ranking systems, counteract malicious actors, and give users more context. They want to strike a balance “between managing our users’ expectations to express themselves freely on the platform with the need to preserve the health of the broader community of the creator, user, and advertiser ecosystem.” They use three guiding principles:

1. Keep content on the platform unless it violates the Community Guidelines.

2. Set a high bar for recommendations.

3. Treat monetization on the platform as a privilege.

Beyond removal of content that violates its community guidelines, one of three explicit tactics YouTube uses to support responsible content consumption is to reduce recommendations of low-quality content. The Marcos propaganda video I mentioned will not be pulled from YouTube, but it won’t be listed among “recommended for you” videos. The video contains half-truths, so it is not an outright violation of their policies. What YouTube would remove are violations of its policies against hate and harassment: “Hate speech refers to content that promotes violence against, or has the primary purpose of inciting hatred against, individuals or groups based on certain attributes, such as race or ethnic origin, religion, disability, gender, age, veteran status or sexual orientation/gender identity. Harassment may include abusive videos, comments, messages, revealing someone’s personal information, unwanted sexualization, or incitement to harass other users or creators.”

For example, content that claims the Earth is flat or promises a “miracle cure” for a serious disease might not violate their Community Guidelines, but they “don’t want to proactively recommend it to users.” The intention is to get this right for their users. They use human evaluators to provide input on what counts as disinformation or borderline content under their policies, which in turn informs their ranking systems.

As media consumers, flagging half-truths or misleading content on YouTube helps evaluators understand context. YouTube’s artificial intelligence recommendation engine, the algorithm that directs what you see next based on your previous viewing habits and searches, could promote false and useless content in the pursuit of engagement. Google needs to look at its policies on misleading content. The enemy of truth is not the outright lie, because an outright lie is easy to see and expose. The enemy of truth is the half-truth coated in generalizations, which may contain some truth but is meant to deceive. How do they balance freedom of expression on the platform with the need to preserve the health of the broader community? Lies interspersed with part-truths fool people. The antidote: demand specifics or provide context.

Google gives users more context (often text-based information) to make them more informed about the content they consume. It provides information panels that contain additional contextual information and links to authoritative third-party sites, so users could make educated decisions about the content they watch on the platform.

While most Marcos-related videos attempting to revise history are not searchable and circulate among closed groups, let’s not underestimate closed spaces in niche social networks. Many of us already use messenger apps because Facebook or Twitter can be stressful. With the popularity of closed, niche networks and group-chat apps, links to misleading content can be re-posted across various groups. More work needs to be done in fighting disinformation. I commend the recent launch of the #ThinkFirst campaign of Google and its partners to raise awareness and unite diverse stakeholders to promote media and information literacy in the country.

Download How Google Fights Disinformation.