Keeping users safe on the ad-supported internet
From helping small businesses connect with new customers to allowing more people to access information, ads help support the internet as we know it today. Google plays an important role in this digital advertising ecosystem, so what is it doing to keep users safe from bad ads? Brian Crowley, Director of Global Ads and Content Investigations at Google, discusses the people, policies and processes dedicated to this work.
The online advertising industry is huge and continues to grow, but what is it that makes something a ‘bad ad’?
Most ads that appear online follow the rules that we have in place for ad content and contribute to a healthy digital advertising ecosystem. But there are also ads designed to abuse the system by misleading or misrepresenting users, or threatening their safety.
We call these bad ads, and they cover a wide spectrum of content issues – anything from inappropriate content to scams to ads promoting the sale of dangerous goods.
Our ads policies fall into two major categories: prohibiting certain kinds of products, services and content from appearing in ads at all, and restricting other kinds.
Prohibited content includes anything designed to scam users, as well as content that we consider inappropriate, such as depictions of cruelty to animals. This type of content is harmful and never allowed on our platforms.
Restricted content means that the ad is allowed to be shown on Google’s services, but with limitations. This could be something like a gambling or alcohol ad that cannot be shown in certain geographical locations due to cultural sensitivities or legal reasons.
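To make the two categories concrete, here is a minimal, purely illustrative sketch of how a prohibited/restricted check might be structured. The category names, region codes and rules below are hypothetical assumptions for the example, not Google's actual policy engine.

```python
# Illustrative sketch only: category names and region rules are hypothetical,
# not Google's real policy engine.

PROHIBITED = {"scam", "animal_cruelty", "dangerous_goods"}

# Restricted categories map to the regions where they may be shown.
RESTRICTED = {
    "gambling": {"GB", "US"},
    "alcohol": {"DE", "FR", "GB"},
}

def evaluate_ad(category: str, region: str) -> str:
    """Return an enforcement decision for an ad category in a given region."""
    if category in PROHIBITED:
        return "reject"    # never allowed, anywhere
    allowed_regions = RESTRICTED.get(category)
    if allowed_regions is not None and region not in allowed_regions:
        return "restrict"  # allowed in principle, but not shown in this region
    return "serve"

print(evaluate_ad("scam", "US"))      # reject
print(evaluate_ad("gambling", "JP"))  # restrict
print(evaluate_ad("gambling", "GB"))  # serve
```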
To give an idea of scale, in 2021 we removed 3.4 billion bad ads and restricted a further 5.7 billion before users ever saw them. This work is core to Google’s mission and helps protect people from harmful and inappropriate material in ads.
Who decides what these policies are?
There are thousands of fantastic and passionate people across Google’s policy, product and trusted experience teams who work to create a safer experience for users, creators, publishers and advertisers. Delivering safety is core to their mission.
And within this wider team, we have people around the world focused on developing and updating our ad policies. They work in different locations, speak different languages and specialise in different topics to make sure that our policies reflect the nuances of an evolving ads landscape. This diversity helps us to assess cultural and regional nuances and to identify gaps in coverage while policies are still in development.
But we don’t just rely on our own internal knowledge; we also seek advice from outside Google, from sources such as child safety organisations and medical experts. These specialists help us to find gaps in our policies based on their knowledge of and exposure to those topics, making sure that our policies are relevant and enforceable at scale.
How does Google stay ahead of emerging issues?
As the internet continues to grow, we’ll keep seeing new types of harmful ads appear – including ads that play on societal issues to mislead users or exploit new trends. It’s something that we will always invest in tackling, and in 2021 we updated more than 30 policies in response to emerging risks in ads.
One example that we saw in the second half of 2021 was an increase in fraudulent ads promoting cryptocurrency. As interest grew in cryptocurrency, more people were searching online to learn about crypto products and companies. In this case, the newness of the sector helped bad actors to mislead users in their ad content by imitating brands that people were already familiar with.
Staying ahead of these types of harmful ads is critical. This violation alone resulted in more than 136,000 accounts being blocked or suspended, and a further 2.1 million accounts were suspended for misrepresentation and phishing.
The last few years have seen some extraordinary world events. What impact has this had on ad safety?
World events such as conflicts or major political movements often trigger waves of bad actors looking for new ways to mislead people online. These shifts happen quickly, and we need to react just as fast.
In the early phase of the COVID-19 pandemic, we saw an increase in ad abuse appearing in different ways, so we needed to remain nimble and address this throughout our processes – from policy updates to enforcement. In the first 12 months of the pandemic, we blocked over 99 million COVID-related ads, including ads for miracle cures, for N95 masks during supply shortages, and even for fake vaccine doses.
Events like these also expose other, less obvious forms of policy violations and issues online. We’ll usually see this across a wider group of bad actors trying to take advantage of the confusion and sense of urgency that something like the pandemic creates. So, as COVID-19 cases rose, we enforced a policy that prevents behaviour such as price-gouging on in-demand products like hand sanitiser, masks and paper goods.
What can I do if I don’t like an ad and don’t want to see it again?
We know that even if an ad doesn’t violate our policies, there may be other reasons why you don’t want to see it.
Google’s My Ad Centre offers an easy way to control the ads that you see across Google Search, YouTube and Discover. The tool lets you manage the information used to show you ads, including the information associated with your Google Account and the interests we estimate based on your online activity. It can also be used to customise your ad experience so that you see more of the brands you like and fewer of the ones you don’t. You can also permanently delete the activity data tied to your account at any time.
We built My Ad Centre in response to feedback from users requesting more control over their ad experience. Investing in transparency and tools like these remains a priority for us, and we’ll continue to develop user controls across our products.
How do advertisers know what the rules are for ad content?
It makes sense that the more advertisers and publishers know about our ads policies and how we enforce them, the better these systems work.
When an advertiser creates a new ad or campaign, we provide information about our policies throughout the process via our help centre. New ad content is then reviewed, and if it doesn’t meet our policy standards we notify the advertiser of what the violation is and what they can do next. These systems help us to keep ads safe for users before they are published and give publishers guidance on appropriate content.
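As a rough illustration of that flow, here is a hedged sketch of a review step that either approves an ad or returns the violation and next steps to the advertiser. The function, phrases and messages are invented for the example; a real system relies on far more sophisticated automated and human review.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of an ad-review flow; names and rules are illustrative,
# not Google's internal API.

@dataclass
class ReviewResult:
    approved: bool
    violation: Optional[str] = None
    next_steps: Optional[str] = None

def review_ad(ad_text: str) -> ReviewResult:
    """Check new ad content against policy before it is allowed to serve."""
    # A simple keyword check stands in for the real combination of
    # automated classifiers and human review.
    banned_phrases = ["miracle cure", "guaranteed returns"]
    for phrase in banned_phrases:
        if phrase in ad_text.lower():
            return ReviewResult(
                approved=False,
                violation=f"Misleading claim: '{phrase}'",
                next_steps="Edit the ad to remove the claim and resubmit.",
            )
    return ReviewResult(approved=True)

result = review_ad("Miracle cure for all ailments!")
if not result.approved:
    print(result.violation)   # notify the advertiser of the violation...
    print(result.next_steps)  # ...and what they can do next
```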
What is being done to enforce these policies?
The number of daily interactions and variety of content online means that there isn't a one-size-fits-all solution to policy enforcement.
Artificial intelligence and machine learning are critical in helping to sort through content and detect violations at scale. This is combined with engineers working to improve products, analysts and experts identifying new trends, and human reviewers who can make judgements on individual cases. It's a feedback loop that sees our teams and technology working together 24/7 to continuously refine and update our processes.
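The pattern described here is often called human-in-the-loop review. Below is a minimal sketch of that idea, assuming hypothetical thresholds and names rather than Google's actual system: high-confidence model scores trigger automatic action, uncertain cases are routed to human reviewers, and their judgements become training data that refines the next model.

```python
# Minimal human-in-the-loop sketch; thresholds and names are assumptions,
# not Google's production system.

AUTO_BLOCK = 0.95  # scores above this are removed automatically
AUTO_ALLOW = 0.05  # scores below this are served without review

human_review_queue = []
training_examples = []  # human decisions feed back into the next model

def triage(ad_id: str, violation_score: float) -> str:
    """Route an ad based on a model's estimated probability of violation."""
    if violation_score >= AUTO_BLOCK:
        return "blocked"
    if violation_score <= AUTO_ALLOW:
        return "served"
    human_review_queue.append(ad_id)  # uncertain cases go to human reviewers
    return "queued"

def record_human_decision(ad_id: str, is_violation: bool) -> None:
    """Reviewer judgements become labels used to retrain the model."""
    training_examples.append((ad_id, is_violation))

print(triage("ad-123", 0.99))  # blocked
print(triage("ad-456", 0.50))  # queued for human review
record_human_decision("ad-456", True)
```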
We've seen huge increases in the bad ads we remove. In our 2021 Ads Safety Report we recorded that between 2019 and 2021 the number of bad ads removed increased from 2.7 billion to 3.4 billion. These numbers reflect a growing advertiser base, but also improvements in machine learning detection and an expansion of our policies to address emerging harms. It's an ongoing process, and we'll continue to invest in these efforts year on year.
Our identity verification policy requires some advertisers across Google platforms to complete a verification programme to confirm their identity, and we will continue to expand this over time. It equips users with more information about who is advertising to them and helps them make more informed decisions when using our advertising controls.
We also regularly release reports that provide insights not only into the results of our policy enforcement efforts but also into the processes behind them. 2021 marked a decade of publishing our Google Ads Safety Report, which highlights the work we do to prevent malicious use of our ads platforms, how we enforce our policies, and how we review content reported to be in violation of local law.
An ad-supported internet means everyone can access essential information. While this ecosystem continues to grow, so does Google's responsibility to provide a safe user experience on its platforms. Our Ads Safety Report is published annually and you can read the full 2021 version here.