Information and content you can trust
At Google, we aim to balance delivering information with protecting users and society. We take this responsibility seriously. Our goal is to provide access to trustworthy information and content by protecting users from harm, delivering reliable information, and partnering with experts and organizations to create a safer internet.
Protecting you from harm
Preventing abuse
To keep people safe from abusive content, we use protections powered by machine learning. Gmail automatically blocks nearly 10 million spam emails from inboxes every minute, and Search has tools to prevent autocomplete from suggesting potentially harmful queries. Automatic detection helps YouTube remove harmful content efficiently, effectively and at scale: in Q2 2023, 93% of policy-violating videos removed from YouTube were first detected automatically. We also build safety guardrails into our generative AI tools to minimize the risk of their being used to create harmful content.
In addition, each of our products is governed by a set of policies that outlines acceptable and unacceptable content and behaviours. Our policies are continuously honed and updated to address emerging risks. When it comes to our work in AI, we also rely on our AI principles to guide product development and help us test and evaluate every AI application before it is launched.
Detecting harmful content
As the tactics of bad actors evolve, we must work even harder to detect harmful content that reaches our products. AI is helping us scale abuse detection across our platforms: AI-powered classifiers quickly flag potentially harmful content for removal or escalation to a human reviewer. In 2022, automated enforcement helped us detect and block over 51.2 million ads containing hate speech, violence and harmful health claims. Large language models, a breakthrough type of AI, also show promise in dramatically reducing the time it takes to detect and evaluate harmful material, especially for new and emerging risks.
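The classifier-plus-reviewer flow described above can be sketched roughly as follows. This is a hypothetical illustration, not Google's actual pipeline: the thresholds, function names and score values are all assumptions.

```python
# Hypothetical sketch of a two-threshold moderation pipeline:
# high-confidence violations are actioned automatically, while
# uncertain cases are escalated to a human reviewer.

REMOVE_THRESHOLD = 0.95   # assumed cut-off for automated enforcement
REVIEW_THRESHOLD = 0.60   # assumed cut-off for human escalation

def route_content(violation_score: float) -> str:
    """Decide what happens to an item given a classifier score in [0, 1]."""
    if violation_score >= REMOVE_THRESHOLD:
        return "remove"      # automated enforcement, no human needed
    if violation_score >= REVIEW_THRESHOLD:
        return "escalate"    # queue for a trained human reviewer
    return "allow"           # no action taken

# Example: route a batch of scored items (scores are illustrative).
decisions = [route_content(s) for s in (0.99, 0.72, 0.10)]
print(decisions)  # ['remove', 'escalate', 'allow']
```

Keeping a band of scores that always goes to a human reviewer is what lets such systems handle context and nuance that a classifier alone would miss.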
We also work with outside organizations that flag content they think may be harmful. Both Google and YouTube take feedback from hundreds of Priority Flaggers: organizations around the world with cultural and subject-matter expertise that escalate content to us for review.
Responding appropriately
We rely on both people and AI-driven technology to evaluate potential policy violations and respond appropriately to content that is flagged. When a piece of content violates our policies, we can restrict, remove, demonetize or take account-level actions to reduce future abuse.
In 2022, Google Maps blocked or removed over 300 million pieces of fake content, 115 million policy-violating reviews and 20 million attempts to create fake business profiles. In Q2 2023, YouTube removed over 14 million channels and 7 million videos for violating our Community Guidelines.
To evaluate context and nuance while reducing the risk of over-removal, we rely on roughly 20,000 expertly trained reviewers working in a variety of roles to enforce our policies, moderate content, and evaluate flagged content across Google's products and services.
If a creator or publisher feels we've made a wrong call, they can appeal our decision.
Delivering reliable information
Intelligent algorithms
Tools to help you evaluate content
Managing content responsibly on YouTube
We remove content that violates our policies—using a combination of people and technology.
We elevate authoritative sources for news and information, and provide context for viewers.
We manage content responsibly by reducing the spread of borderline content and harmful misinformation.
We reward trusted creators via the YouTube Partner Program (YPP) by sharing ad revenue and a variety of other income sources. Over US$50 billion was paid out to creators, artists and media companies in the three years prior to June 2022 through the YPP.
Partnering to create a safer internet
Exchanging knowledge to keep you safe
We partner with experts from civil society, academia and governments to tackle global issues like misinformation, ad safety, election integrity, AI in content moderation and combating online child exploitation. We also publish research findings and release datasets to academics to advance this field.
At YouTube, we regularly consult with our independent Youth and Families Advisory Committee on product and policy updates, including our Youth Principles and a series of product updates centred on teen mental health and wellbeing.
Working with experts to fight illegal content
We also work with partners to uncover and share clear-cut abusive content signals to enable removal from the wider ecosystem. We share millions of CSAM hashes with the US National Center for Missing & Exploited Children every year. We also participate in Project Lantern, a program that enables technology companies to share signals in a secure and responsible way. Additionally, YouTube co-founded the Global Internet Forum to Counter Terrorism (GIFCT), which brings together the technology industry, government, civil society and academia to counter terrorist and violent extremist activity online.
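Hash-sharing of the kind described above generally works by comparing fingerprints of files rather than the files themselves, so partners never need to exchange the underlying content. The sketch below is purely illustrative and is not the actual protocol: it uses a plain SHA-256 digest, whereas real systems rely on perceptual hashes (such as PhotoDNA) so that re-encoded or slightly altered copies still match.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest standing in for a shared content hash."""
    return hashlib.sha256(data).hexdigest()

# A set of digests received from a partner organization (illustrative values,
# not real signals).
shared_hashes = {fingerprint(b"known-bad-sample")}

def is_known_content(data: bytes) -> bool:
    """Check an uploaded file's digest against the shared hash list."""
    return fingerprint(data) in shared_hashes

print(is_known_content(b"known-bad-sample"))  # True: matches a shared hash
print(is_known_content(b"harmless upload"))   # False: no match
```

Because only fixed-length digests are exchanged, a match confirms known content without revealing anything about files that don't match.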
Supporting organizations dedicated to safety
We support organizations around the world dedicated to online safety and media literacy through robust programs offering training and materials, such as Be Internet Awesome, YouTube's Hit Pause and the Google News Lab. Additionally, Google and YouTube have announced a US$13.2 million grant to the International Fact-Checking Network (IFCN) to support its network of 135 fact-checking organizations. Altogether, our collaborations have equipped over 550,000 journalists with digital verification skills, and we've trained another 2.6 million online.