Good morning,
It's been a crucial year (summer, really) for social media companies, which have been left navigating how to handle a range of misinformation and hate speech. The pressure came to a head this summer, when hundreds of marketers vowed to avoid advertising on Facebook, sister company Instagram and, in some cases, all social channels until those platforms cleaned up their content.
Amid this pressure, led by civil rights groups, the companies rolled out new initiatives to win back those groups, ad dollars and press. Facebook, for example, agreed to a brand safety audit by the Media Rating Council (MRC).
Now, companies including Facebook, Twitter and YouTube say they can agree on new, consistent policies on hate speech and harmful content. TikTok, Pinterest and Snap are also likely in.
Even though we don't know the new policy exactly, marketers say this is good news. “Having common definitions and reporting will allow media buyers a sense of relief when buying for integrated media plans because everyone is speaking the same language when it comes to ensuring brand safety,” Bridget Jewell, creative director at the Minneapolis-based agency Periscope, told my colleague Scott Nover.
Read his full story here.
What else we're covering:
New TikTok downloads get the OK
IAS to provide brand safety across Microsoft Audience Network
Reddit opens up UK bureau
Fun Fact: 26% of adults in the U.S. get their news from YouTube.
Need a break? Plan out your Prime Day.
Please consider sending any news tips to sara.jerde@adweek.com. Thanks for reading.
Consider supporting our journalism with an Adweek Pro Subscription and gain full access to all of Adweek's essential coverage and resources.