Facebook wants to keep ads away from harmful content

Facebook is developing topic exclusion controls to help advertisers keep their ad placements away from harmful content in users' News Feeds.

The company said it will first test the topic exclusion controls with a small group of advertisers.

As an example, the company said a children's toy brand could avoid having its ads appear next to content about crime and tragedy.

Other exclusion categories include news, politics, and social issues, and the company says developing and testing the controls will take much of the year.

Platforms such as Facebook, Google, and Twitter are working with marketers and agencies through an organization called the Global Alliance for Responsible Media (GARM) to develop standards in this area.

The group has examined measures to help keep consumers and advertisers safe, including definitions of harmful content, reporting standards, and independent oversight, with the aim of creating tools that better manage the content that appears alongside ads.

Facebook's News Feed controls will build on tools already in use elsewhere on the platform, such as in-stream video placements and the Audience Network, which app developers use to serve in-app ads targeted with Facebook data.

Brand safety is a major concern for any advertiser that wants to ensure its ads are not associated with certain topics, and the industry is increasingly pressing platforms like Facebook to offer stronger controls.

"It has gone from brand safety to an awareness of societal safety," said the CEO of the World Federation of Advertisers, which founded the Global Alliance for Responsible Media.

Ad revenue funds much of the content on these platforms, and many advertisers feel responsible for what appears on the networks their spending supports.

Last summer, a large number of advertisers staged a temporary boycott of Facebook, calling for tougher measures against the spread of hate speech and misinformation on its platform, a campaign that was widely reported.

Beyond keeping their ads away from offensive or discriminatory content, some of these advertisers want a plan to remove such content from the platform entirely.

Advertisers have complained for years that large social media companies do too little to prevent ads from appearing alongside hate speech, fake news, and other harmful content.

In September, Facebook, Twitter and YouTube signed an agreement with major advertisers to tackle harmful online content.
