Information and content you can trust.
At Google, we aim to balance delivering information with protecting users and society. We take this responsibility seriously. Our goal is to provide access to trustworthy information and content by protecting users from harm, delivering reliable information, and partnering with experts and organizations to create a safer internet.
We keep you and society at large safe with advanced protections that not only prevent, but also detect and respond to harmful and illegal content.
Prevent
Preventing abuse
To keep people safe from abusive content, we use protections powered by machine learning. Gmail automatically blocks nearly 10 million spam emails from inboxes every minute, and Search has tools to prevent Autocomplete from suggesting potentially harmful queries. Automatic detection helps YouTube remove harmful content efficiently, effectively, and at scale — in Q2 of 2023, 93% of policy-violating videos removed from YouTube were first detected automatically. We also implement safety guardrails in our generative AI tools to minimize the risk of their being used to create harmful content.
In addition, each of our products is governed by a set of policies that outlines acceptable and unacceptable content and behaviors. Our policies are continuously honed and updated to address emerging risks. When it comes to our work in AI, we also rely on our AI Principles to guide product development and help us test and evaluate every AI application before it launches.
Detect
Detecting harmful content
As the tactics of bad actors evolve, we must work even harder to detect harmful content that gets onto our products. AI is helping us scale abuse detection across our platforms. AI-powered classifiers help quickly flag potentially harmful content for removal or escalation to a human reviewer. In 2022, automated enforcement helped us detect and block over 51.2 million ads containing hate speech, violence, or harmful health claims. Additionally, large language models, a breakthrough type of AI, show promise in dramatically reducing the time it takes to detect and evaluate harmful material, especially for new and emerging risks.
We also work with outside organizations that flag content they believe may be harmful. Both Google and YouTube take feedback from hundreds of Priority Flaggers, organizations around the world with cultural and subject matter expertise that escalate content to us for review.
Respond
Responding appropriately
We rely on both people and AI-driven technology to evaluate potential policy violations and respond appropriately to content that is flagged. When a piece of content violates our policies, we can restrict, remove, demonetize, or take account-level actions to reduce future abuse.
In 2022, Google Maps blocked or removed over 300 million pieces of fake content, 115 million policy-violating reviews, and 20 million attempts to create fake Business Profiles. In Q2 of 2023, YouTube removed over 14 million channels and 7 million videos for violating our Community Guidelines.
To evaluate context and nuance while reducing the risk of over-removal, we rely on roughly 20,000 expertly trained reviewers working in a variety of roles to enforce our policies, moderate content, and evaluate flagged content across Google’s products and services.
If a creator or publisher feels we’ve made a wrong call, they have the ability to appeal our decisions.
We enable confidence in the information and content on our platforms by delivering reliable information and best-in-class tools that put you in control of evaluating content.
How we organize information
Intelligent algorithms
Our constantly updated algorithms are at the heart of everything we do, from products like Google Maps to Search results. These algorithms use advanced large language models and signals such as keywords and the freshness of websites and content so that you can find the most relevant, useful results. For example, YouTube prominently surfaces high-quality content from authoritative sources in its search results, recommendations, and info panels to help people find timely, accurate, and helpful news and information.
We created a number of features to help you understand and evaluate the content that our algorithms and generative AI tools have surfaced, ensuring you have more context around what you’re seeing online.
-
My Ad Center
My Ad Center makes it easy to manage your ad experience so that you see more of the brands you like and less of the ones you don’t.
-
Fact check in Search and News
Every day, Google surfaces independent fact checks 6 million times. With Google’s advanced image and result fact-checking tools, you are better equipped to spot misinformation online.
-
About this result
"About this result" provides details about a website to help you learn about it before you visit, including its description, when it was first indexed, and what others say about the source and topic.
-
SynthID Watermarking
SynthID is a tool for watermarking and identifying AI-generated images. It is an early and promising technical solution to the pressing problem of misinformation online.
-
About this image
“About this image” helps you assess the credibility and context of images you see online by providing information like when an image may have first been seen by Google and how other sources use and describe it.
YouTube is committed to fostering a responsible platform that the viewers, creators, and advertisers who make up our community can rely on. Learn more about our approach.
-
Remove
We remove content that violates our policies — using a combination of people and technology.
-
Raise
We elevate authoritative sources for news and information, and provide context for viewers.
-
Reduce
We responsibly manage content by reducing the spread of borderline content and harmful misinformation.
-
Reward
And we reward trusted creators via the YouTube Partner Program (YPP), by sharing ad revenue and a variety of other income sources. Over $50 billion was paid out to creators, artists, and media companies in the three years prior to June 2022 through YPP.
We proactively collaborate, inform, and share our resources and technologies with experts and organizations.
Sharing Knowledge
Exchanging knowledge to keep you safe
We partner with experts from civil society, academia, and governments to tackle global issues like misinformation, ad safety, election integrity, AI in content moderation, and combating online child exploitation. We also publish research findings and release datasets to academics to further progress in this field.
At YouTube, we regularly consult with our independent Youth and Families Advisory Committee on product and policy updates, including our Youth Principles, as well as a series of product updates centered on teen mental health and wellbeing.
Sharing Signals
Working with experts to fight illegal content
We also work with partners to uncover and share clear-cut signals of abusive content so it can be removed from the wider ecosystem. We share millions of CSAM hashes with the US National Center for Missing and Exploited Children every year. We also participate in Project Lantern, a program that enables technology companies to share signals in a secure and responsible way. Additionally, YouTube co-founded the Global Internet Forum to Counter Terrorism (GIFCT), which brings together the technology industry, government, civil society, and academia to counter terrorist and violent extremist activity online.
Sharing Resources
Supporting organizations dedicated to safety
We support organizations around the world dedicated to online safety and media literacy through robust programs offering training and materials, such as Be Internet Awesome, YouTube’s Hit Pause, and the Google News Lab. Additionally, Google and YouTube announced a $13.2 million grant to the International Fact-Checking Network (IFCN) to support its network of 135 fact-checking organizations. Altogether, our collaborations have equipped over 550K journalists with digital verification skills, and we’ve trained another 2.6 million people online.
We share Application Programming Interfaces (APIs) that help other organizations protect their platforms and users from harmful content.
-
Child Safety Toolkit
We give partners like Adobe and Reddit access to our tools, Content Safety API and CSAI Match, which help them prioritize Child Sexual Abuse Material (CSAM) for human review. These tools help our partners process over four billion pieces of content every month, enabling them to better fight child sexual abuse online.
-
Safe Browsing API
Our Safe Browsing API allows client applications to check URLs against our constantly updated lists of unsafe web resources. This tool protects 5 billion devices a day by warning users about sites that host malware or unwanted software.
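As an illustration (not Google's own code), a lookup against the Safe Browsing API v4 can be sketched as follows. The `threatMatches:find` endpoint and JSON field names follow the public v4 documentation; the client identifiers are placeholders, and a real call requires an API key:

```python
import json

# Endpoint for the documented Safe Browsing v4 Lookup method.
# A real request is a POST to .../v4/threatMatches:find?key=YOUR_API_KEY
SAFE_BROWSING_ENDPOINT = "https://safebrowsing.googleapis.com/v4/threatMatches:find"

def build_lookup_request(urls):
    """Build the JSON body for a v4 threatMatches:find lookup."""
    return {
        "client": {
            # Placeholder identifiers for the calling application.
            "clientId": "example-app",
            "clientVersion": "1.0.0",
        },
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": u} for u in urls],
        },
    }

body = build_lookup_request(["http://example.com/suspicious"])
print(json.dumps(body, indent=2))
# An empty JSON object in the response means no list matched the URL;
# a "matches" array identifies which threat list each unsafe URL appears on.
```

A client would typically warn the user before navigation whenever the response contains a match.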
-
Information Quality APIs
Our information quality APIs empower publishers and platforms to combat toxic and harmful content. Perspective API processes 2 billion comments a day, helping more than 850 partners curb toxic comments. The New York Times, Reddit, and the Wall Street Journal are among the organizations using Perspective API to manage abusive comments on their sites.
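To make the workflow concrete, here is a hedged sketch of how a platform might score a comment with Perspective API. The `comments:analyze` endpoint and the `TOXICITY` attribute come from the public documentation; the sample response is trimmed down, and the 0.8 review threshold is an arbitrary illustration, not a recommended value:

```python
# Perspective API "analyze comment" endpoint (a real call needs ?key=YOUR_API_KEY).
PERSPECTIVE_ENDPOINT = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_analyze_request(text):
    """Request body asking Perspective to score a comment for toxicity."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity_score(response):
    """Extract the 0..1 summary toxicity probability from an API response."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Trimmed-down example of the response shape the API returns.
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.92, "type": "PROBABILITY"}}
    }
}

# A platform might hold high-scoring comments for human review
# (the 0.8 cutoff is an assumption for illustration only).
needs_review = toxicity_score(sample_response) > 0.8
print(needs_review)  # True for the sample above
```

This mirrors the human-in-the-loop pattern described earlier: classifiers flag likely violations, and reviewers make the final call.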
-
Vision API
Our Vision API helps developers identify and remove harmful content by allowing them to integrate features like image labeling, optical character recognition (OCR), and explicit content tagging into their applications. With limited training data, Vision API can successfully classify 80% of images.
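As a hedged sketch of the explicit-content tagging flow, the example below builds a SafeSearch request for the Cloud Vision `images:annotate` endpoint and screens the likelihood labels the API returns. The endpoint, feature type, and likelihood values follow the public documentation; the blocking rule and sample annotation are illustrative assumptions:

```python
# Cloud Vision annotate endpoint (a real call requires authentication).
VISION_ENDPOINT = "https://vision.googleapis.com/v1/images:annotate"

def build_safe_search_request(image_uri):
    """Request body asking Vision for SafeSearch (explicit content) signals."""
    return {
        "requests": [
            {
                "image": {"source": {"imageUri": image_uri}},
                "features": [{"type": "SAFE_SEARCH_DETECTION"}],
            }
        ]
    }

# SafeSearch results come back as per-category likelihood labels.
FLAGGED = {"LIKELY", "VERY_LIKELY"}

def should_block(annotation):
    """Illustrative rule: flag an image if any category is likely or very likely."""
    return any(annotation.get(cat) in FLAGGED
               for cat in ("adult", "violence", "racy", "medical", "spoof"))

# Trimmed example of the safeSearchAnnotation object in an API response.
sample = {"adult": "VERY_UNLIKELY", "violence": "LIKELY", "racy": "UNLIKELY"}
print(should_block(sample))  # True, because "violence" is LIKELY
```

In practice, a developer would tune which categories and likelihood levels trigger removal versus escalation to human review.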
GSEC DUBLIN
Taking on content responsibility in Dublin
Our Google Safety Engineering Center for Content Responsibility in Dublin is a regional hub for Google experts working to tackle the spread of illegal and harmful content, and a place where we can share this work with policymakers, researchers, and regulators. Our network of Google Safety Engineering Centers gives our teams the space, inspiration, and support to develop next-generation solutions that help improve safety online.
Never has the impact of our work to provide trustworthy information and content mattered more. To evolve with content moderation challenges, we’ll continue to invest in developing and improving policies, products, and processes that provide you peace of mind and build a safer online experience for all.
Explore more ways we keep everyone safe online.
-
In our products
Learn how your safety is protected in all of Google’s products.
-
Security and privacy
Learn how Google protects your private information and puts you in control.
-
Family safety
Learn how Google helps you manage what's right for your family online.
-
Cybersecurity
Learn how we keep more people safe online than anyone else in the world.