Information and content you can trust
At Google, we aim to balance delivering information with protecting users and society. We take this responsibility seriously. Our goal is to provide access to trustworthy information and content by protecting users from harm, delivering reliable information, and partnering with experts and organizations to create a safer internet.
Protecting you from harm
We keep you and society at large safe with advanced protections that not only prevent, but also detect and respond to harmful and illegal content.
Preventing abuse
To keep people safe from abusive content, we use protections powered by machine learning. Gmail automatically blocks nearly 10 million spam emails from inboxes every minute, and Search has tools to prevent Autocomplete from suggesting potentially harmful queries. Automatic detection helps YouTube remove harmful content efficiently, effectively, and at scale — in Q2 of 2023, 93% of policy-violative videos removed from YouTube were first detected automatically. We also implement safety guardrails in our generative AI tools to minimize the risk of their being used to create harmful content.
In addition, each of our products is governed by a set of policies that outlines acceptable and unacceptable content and behaviors. Our policies are continuously honed and updated to address emerging risks. When it comes to our work in AI, we also rely on our AI Principles to guide product development and help us test and evaluate every AI application before it launches.
Detecting harmful content
As the tactics of bad actors evolve, we must work even harder to detect harmful content that reaches our products. AI helps us scale abuse detection across our platforms: AI-powered classifiers quickly flag potentially harmful content for removal or escalation to a human reviewer. In 2022, automated enforcement helped us detect and block over 51.2 million ads containing hate speech, violence, and harmful health claims. Additionally, large language models, a breakthrough type of AI, show promise in dramatically reducing the time it takes to detect and evaluate harmful material, especially for new and emerging risks.
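The flagging-and-escalation flow described above can be sketched as a simple triage function: a classifier's harm score routes content to automatic removal, human review, or no action. The thresholds and function names here are illustrative assumptions, not Google's actual system.

```python
def triage(score: float,
           remove_threshold: float = 0.95,
           review_threshold: float = 0.6) -> str:
    """Route a piece of content based on a classifier's harm score (0.0 to 1.0).

    Thresholds are illustrative: real moderation systems tune them per policy
    area and pair automated decisions with human review.
    """
    if score >= remove_threshold:
        return "auto_remove"    # high-confidence violation: removed automatically
    if score >= review_threshold:
        return "human_review"   # uncertain: escalated to a trained reviewer
    return "allow"              # low risk: no action taken

# Example routing decisions
print(triage(0.98))  # auto_remove
print(triage(0.70))  # human_review
print(triage(0.10))  # allow
```

Keeping the middle band wide is a deliberate design choice: it trades reviewer workload for a lower risk of wrongly removing borderline content.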
We also work with outside organizations that flag content they think may be harmful. Both Google and YouTube take feedback from hundreds of Priority Flaggers, organizations around the world with cultural and subject-matter expertise that escalate content to us for review.
Responding appropriately
We rely on both people and AI-driven technology to evaluate potential policy violations and respond appropriately to content that is flagged. When a piece of content violates our policies, we can restrict, remove, demonetize, or take account-level actions to reduce future abuse.
In 2022, Google Maps blocked or removed over 300 million pieces of fake content, 115 million policy-violating reviews, and 20 million attempts to create fake business profiles. In Q2 2023, YouTube removed over 14 million channels and 7 million videos for violating our Community Guidelines.
To evaluate context and nuance while reducing the risk of over-removal, we rely on roughly 20,000 expertly trained reviewers who work in a variety of roles to enforce our policies, moderate content, and evaluate flagged content across Google’s products and services.
If a creator or publisher feels we’ve made a wrong call, they can appeal our decisions.
Delivering reliable information
We build confidence in the information and content on our platforms by delivering reliable information and best-in-class tools that put you in control of evaluating content.
Tools to help you evaluate content
We created a number of features to help you understand and evaluate the content that our algorithms and generative AI tools have surfaced, ensuring you have more context around what you’re seeing online.
Managing content responsibly on YouTube
YouTube is committed to fostering a responsible platform that the viewers, creators, and advertisers who make up our community can rely on. Learn more about our approach.
We remove content that violates our policies — using a combination of people and technology.
We elevate authoritative sources for news and information, and provide context for viewers.
We responsibly manage content by reducing the spread of borderline content and harmful misinformation.
We reward trusted creators via the YouTube Partner Program (YPP), by sharing ad revenue and a variety of other income sources. Over $50 billion was paid out to creators, artists, and media companies in the three years prior to June 2022 through YPP.
Partnering to create a safer internet
We proactively collaborate, inform, and share our resources and technologies with experts and organizations.
Exchanging knowledge to keep you safe
We partner with experts from civil society, academia, and governments to tackle global issues like misinformation, ad safety, election integrity, AI in content moderation, and combating online child exploitation. We also publish research findings and release datasets to academics to further progress in this field.
At YouTube, we regularly consult with our independent Youth and Families Advisory Committee on product and policy updates, including our Youth Principles, as well as a series of product updates centered on teen mental health and well-being.
Working with experts to fight illegal content
We also work with partners to uncover and share clear-cut signals of abusive content, enabling its removal from the wider ecosystem. We share millions of CSAM hashes with the US National Center for Missing and Exploited Children every year. We also participate in Project Lantern, a program that enables technology companies to share signals in a secure and responsible way. Additionally, YouTube co-founded the Global Internet Forum to Counter Terrorism (GIFCT), which brings together the technology industry, government, civil society, and academia to counter terrorist and violent extremist activity online.
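The hash-sharing approach above lets partners match known abusive files without ever exchanging the files themselves. The sketch below illustrates the idea with a plain SHA-256 digest; this is a simplification, since production systems typically use purpose-built perceptual hashing so that near-duplicates also match, and the hash values here are made up for illustration.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Cryptographic fingerprint of a file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# A shared hash list received from a partner organization.
# (Illustrative entries only: real lists contain hashes of verified
# abusive material, never the material itself.)
shared_hashes = {sha256_hex(b"known-abusive-file")}

def is_known_abusive(upload: bytes) -> bool:
    """Check an upload against the shared hash list."""
    return sha256_hex(upload) in shared_hashes

print(is_known_abusive(b"known-abusive-file"))  # True
print(is_known_abusive(b"harmless-file"))       # False
```

Because only fingerprints cross organizational boundaries, a match can trigger removal on one platform based on a detection made on another.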
Supporting organizations dedicated to safety
We support organizations around the world dedicated to online safety and media literacy through robust programs offering training and materials, such as Be Internet Awesome, YouTube’s Hit Pause, and the Google News Lab. Additionally, Google and YouTube announced a $13.2 million grant to the International Fact-Checking Network (IFCN) to support its network of 135 fact-checking organizations. Altogether, our collaborations have equipped over 550,000 journalists with digital verification skills, and we’ve trained another 2.6 million people online.
Actively sharing safety technology
We share Application Programming Interfaces (APIs) that help other organizations protect their platforms and users from harmful content.
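One publicly available example of such an API is Perspective, which scores text for attributes like toxicity (the passage above doesn't name specific APIs, so treating Perspective as the example is our assumption). The sketch below builds a minimal request body for its `comments:analyze` method; field names follow the public Perspective API documentation, which should be checked before relying on this.

```python
import json

# Public endpoint for the Perspective API's comments:analyze method
# (verify against the current API reference; an API key is required
# to actually send requests).
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_analyze_request(text: str) -> dict:
    """Build a request body asking for a TOXICITY score for `text`."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

# Serialize the payload as it would be POSTed to API_URL.
payload = json.dumps(build_analyze_request("example comment"))
print(payload)
```

A platform can use the returned score the same way an internal classifier score would be used, for example to queue comments above a threshold for human review.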
A helpful, safer internet experience — by design
Never has our work to provide trustworthy information and content mattered more. To keep pace with evolving content moderation challenges, we’ll continue to invest in developing and improving policies, products, and processes that give you peace of mind and build a safer online experience for all.