Information and content you can trust.
At Google, we aim to balance delivering information with protecting users and society. We take this responsibility seriously. Our goal is to provide access to trustworthy information and content by protecting users from harm, delivering reliable information, and partnering with experts and organizations to create a safer internet.
We keep you and society at large safe with advanced protections that prevent, detect, and respond to harmful and illegal content.
We enable confidence in the information and content on our platforms by delivering reliable information and best-in-class tools that put you in control of evaluating content.
How we organize information
Our constantly updated algorithms are at the heart of everything we do, from products like Google Maps to Search results. These algorithms use advanced large language models and signals like keywords or website and content freshness so that you can find the most relevant, useful results. For example, YouTube prominently surfaces high-quality content from authoritative sources in its search results, recommendations, and info panels to help people find timely, accurate, and helpful news and information.
We created a number of features to help you understand and evaluate the content that our algorithms and generative AI tools have surfaced, ensuring you have more context around what you’re seeing online.
My Ad Center
My Ad Center makes it easy to manage your ad experience so that you see more of the brands you like and less of the ones you don’t.
Fact check in Search and News
Every day, Google surfaces independent fact checks 6 million times. With Google’s advanced image and result fact-checking tools, you are better equipped to spot misinformation online.
About this result
"About this result" provides details about a website to help you learn about it before you visit, including its description, when it was first indexed, and what others say about the source and topic.
SynthID Watermarking
SynthID is a tool for watermarking and identifying AI-generated images. It is an early and promising technical solution to the pressing problem of misinformation online.
About this image
“About this image” helps you assess the credibility and context of images you see online by providing information like when an image may have first been seen by Google and how other sources use and describe it.
We proactively collaborate, inform, and share our resources and technologies with experts and organizations.
We share tools that help other organizations protect their platforms and users from harmful content.
Child Safety Toolkit
We give partners like Adobe and Reddit access to our tools, Content Safety API and CSAI Match, which help them prioritize Child Sexual Abuse Material (CSAM) for human review. These tools help our partners process over four billion pieces of content every month, strengthening their fight against child sexual abuse online.
Safe Browsing API
Our Safe Browsing API allows client applications to check URLs against our constantly updated lists of unsafe web resources. This tool protects 5 billion devices a day by warning users about sites that host malware or unwanted software.
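As a rough sketch of how a client application performs such a check, the Safe Browsing Lookup API (v4) accepts a JSON body listing the URLs to test against the unsafe lists. The helper function below and the API-key placeholder are our own illustrations, not part of Google's client libraries; the endpoint and payload shape follow the public `threatMatches:find` reference.

```python
# Sketch of a Safe Browsing Lookup API (v4) request body.
# build_lookup_request is a hypothetical helper name; a real caller
# would POST the body to the endpoint below with a valid API key.

LOOKUP_ENDPOINT = "https://safebrowsing.googleapis.com/v4/threatMatches:find?key=YOUR_API_KEY"

def build_lookup_request(urls):
    """Build the JSON body for a threatMatches:find lookup."""
    return {
        "client": {"clientId": "example-app", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": u} for u in urls],
        },
    }

body = build_lookup_request(["http://example.com/suspect-page"])
# e.g. requests.post(LOOKUP_ENDPOINT, json=body); an empty "matches"
# field in the response means none of the URLs were on the lists.
```

An empty response body is the common case, which keeps the check cheap enough to run on every navigation.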
Information Quality APIs
Our information quality APIs empower publishers and platforms to combat toxic and harmful content. Perspective API processes 2 billion comments a day, helping more than 850 partners curb toxic comments. The New York Times, Reddit, and the Wall Street Journal are among the organizations using Perspective API to manage abusive comments on their sites.
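As a minimal sketch of how a platform might call Perspective API, the request below asks for only the TOXICITY attribute. The helper function and API-key placeholder are illustrative; the endpoint and body shape follow the public `comments:analyze` reference.

```python
# Sketch of a Perspective API comments:analyze request.
# score_request is a hypothetical helper; a real caller would POST
# this body to the endpoint below with a valid API key.

ANALYZE_ENDPOINT = ("https://commentanalyzer.googleapis.com/"
                    "v1alpha1/comments:analyze?key=YOUR_API_KEY")

def score_request(text, attributes=("TOXICITY",)):
    """Build the JSON body asking Perspective to score a comment."""
    return {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {attr: {} for attr in attributes},
    }

body = score_request("You are a bad person!")
# The response carries a probability-like score per attribute, e.g.
# response["attributeScores"]["TOXICITY"]["summaryScore"]["value"],
# which a moderation queue can compare against its own threshold.
```

Because the score is a probability rather than a verdict, each partner chooses its own threshold for holding, flagging, or hiding a comment.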
Our Vision API helps developers identify and remove harmful content by letting them integrate features like image labeling, optical character recognition (OCR), and explicit content tagging into their applications. Even with limited training data, Vision API successfully classified 80% of images.
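The explicit-content tagging mentioned above corresponds to Vision API's SafeSearch detection feature. The sketch below builds an `images:annotate` request for it; the helper name, image URL, and API-key placeholder are illustrative, while the request shape follows the public REST reference.

```python
# Sketch of a Vision API images:annotate request asking for
# SafeSearch (explicit content) annotations on a hosted image.

ANNOTATE_ENDPOINT = "https://vision.googleapis.com/v1/images:annotate?key=YOUR_API_KEY"

def safe_search_request(image_uri):
    """Build the JSON body for a SAFE_SEARCH_DETECTION annotation."""
    return {
        "requests": [{
            "image": {"source": {"imageUri": image_uri}},
            "features": [{"type": "SAFE_SEARCH_DETECTION"}],
        }]
    }

body = safe_search_request("https://example.com/photo.jpg")
# The response rates categories such as "adult" and "violence" with
# likelihood labels like VERY_UNLIKELY through VERY_LIKELY.
```

The same `requests` list can carry several images, or several feature types per image, in one call.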
Taking on content responsibility in Dublin
Our Google Safety Engineering Center for Content Responsibility in Dublin is a regional hub for Google experts working to tackle the spread of illegal and harmful content, and a place where we can share this work with policymakers, researchers, and regulators. Our network of Google Safety Engineering Centers gives our teams the space, inspiration, and support to develop next-generation solutions that help improve safety online.
Never has the impact of our work to provide trustworthy information and content mattered more. To evolve with content moderation challenges, we’ll continue to invest in developing and improving policies, products, and processes that give you peace of mind and build a safer online experience for all.