Google’s approach to fighting misinformation online
Information can be a powerful thing. We’re determined to help people find what they’re looking for and make informed decisions about what they see online.
When crises and significant global events occur, from the Coronavirus pandemic to the Russian invasion of Ukraine, technology can act as a lifeline, connecting people around the world and providing access to critical information. We’ve seen this more than ever in recent years, as people increasingly turn to the internet to meet, form communities and better understand the events happening around them.
But while digital acceleration is helpful during these times, it can also allow misinformation to spread in new and more problematic ways - making it harder for people to find reliable guidance.
How does Google approach misinformation?
We have teams of experts around the world working in the fight against misinformation. Their work focuses on providing users with high-quality, trusted information, rewarding the publishers and creators who produce it, surfacing more authoritative sources, reducing the spread of borderline content, and removing content that puts people at risk.
In early 2021, we launched the Google Safety Engineering Center for Content Responsibility in Dublin, a regional hub for Google experts working to combat the spread of illegal and harmful content. It’s a place where we can share this work and improve it - with policymakers, researchers, and regulators.
From Search to YouTube to Google Ads, misinformation manifests in different ways on different platforms. A part of the work we do to tackle misinformation starts with the rules we put in place across Google services to prohibit certain types of harmful content and behaviors. When we find content that violates these policies, we take action - like blocking, removing or restricting content so it is less likely to cause harm.
This approach allows us to tailor how we moderate content across products. Prohibited content includes deceptive online behavior such as the use of deepfakes, fraudulent activity and spam, impersonation, misrepresentation of ownership, and medical misinformation. We take action against this kind of content at scale.
Filtering out misinformation isn’t something we can do alone. We’re collaborating with partners in every country where we operate to create new ways to counteract fake news online, covering everything from 5G to national security to COVID-19. Collaboration is the key to success in the fight for quality information, and for many years we’ve invested in both products and partnerships to fight misinformation. We also signed the EU Code of Practice on Disinformation in 2018 and worked with the other signatories on a stronger version that was adopted in 2022.
COVID-19 and the spread of misinformation
Our approach also guided our response to the Coronavirus pandemic. COVID-19 was an unprecedented event that impacted all of our lives and saw us change the way in which we communicate and find information.
As the pandemic evolved, so did online searches for health-related topics, like home tests, vaccines and masks, which in turn presented an opportunity for bad actors to capitalize on the situation.
We’ve had long-standing policies across our products prohibiting harmful and misleading medical or health-related content. In response to COVID-19, our teams worked to stop a variety of new abuses including phishing attempts, malware, dangerous conspiracy theories, and fraud schemes. We addressed this throughout our processes and services, from policy updates to enforcement.
At YouTube for example, this challenge was met with a dedicated policy on COVID-19 medical misinformation, based on health authority guidance. This targeted policy makes clear that content that poses a serious risk of egregious user harm, such as content that contradicts high-quality health information from local health authorities (LHA) and the World Health Organization (WHO), will not be allowed on the platform. Enacting this policy allowed enforcement teams to take action to help protect the YouTube community. For example, in the first 18 months of the pandemic, YouTube removed over one million videos related to dangerous Coronavirus information, like false cures or claims of a hoax.
Empowering people to find trustworthy information
One of the many reasons online misinformation is difficult to tackle is that a single group alone cannot fight it - collaboration between academics, policymakers, publishers, NGOs and technology companies is key.
With health-specific information, we do this by working with trusted partners, including the WHO and the Centers for Disease Control and Prevention (CDC). During the height of the COVID-19 pandemic, these partnerships were key to our efforts to elevate information from authoritative sources, helping people around the world navigate the pandemic with reliable healthcare information.
In 2021, we contributed €25 million to the launch of the European Media and Information Fund to help academics, publishers and nonprofits launch their own media literacy programs, extend fact-checking initiatives and conduct vital research into all forms of misinformation.
Fact checking has also become an important tool in spotting misinformation online more broadly and debunking myths, particularly for claims that show up in search results and social media. With more than 50,000 new fact checks surfacing on Google Search between March 2020 and March 2021, we’ve invested heavily in initiatives that support media literacy and help users find trustworthy sources.
In November 2022, Google and YouTube announced a $13.2 million grant to the International Fact-Checking Network (IFCN) to launch a new Global Fact Check Fund supporting its network of 135 fact-checking organizations from 65 countries, covering over 80 languages. Building on previous work, this is Google and YouTube’s single largest grant in fact-checking.
Building resilience from the start
Tackling misinformation online is an ongoing challenge that Google continues to invest in, including researching novel approaches to the problem. For the past several years, Jigsaw, a unit within Google focused on threats to open societies, has collaborated with researchers from the universities of Cambridge and Bristol and from American University to study the effectiveness of a tactic called prebunking. While the more commonly known tactic, debunking, seeks to correct false claims already in popular discourse, prebunking helps individuals build psychological resilience to misleading narratives before they ever encounter them.
Prebunking works by alerting individuals to attempts to manipulate them, along with the false claims or tactics likely to be used to do so, coupled with a thorough refutation of those claims or tactics. The approach is highly flexible and can be adapted to a wide range of media, from a simple billboard to long-form narrative content. Academic research has shown it to be effective against a variety of false narratives, from white supremacist hate speech to COVID vaccine misinformation. In the fall of 2022, Jigsaw ran a large-scale trial of prebunking seeking to counter anti-refugee narratives in Central and Eastern Europe following the mass influx of Ukrainians in the wake of Russia’s invasion of the country.
We will continue to fight this pernicious problem by taking a flexible approach to tackle misinformation across different products, taking action at scale, elevating trustworthy information and equipping people with skills to detect misinformation.