AI Briefing: What transparency could look like for AI-powered brand safety tech

By Marty Swant  •  August 9, 2024

Following Adalytics’ new report questioning the effectiveness of AI-powered brand safety tech, industry insiders have more questions about what works, what doesn’t and what advertisers are paying for.

The 100-page report, released Wednesday, examined whether brand-safety tech from companies like DoubleVerify and Integral Ad Science is able to identify problematic content in real time and block ads from appearing next to hate speech, sexual references or violent content.

After advertisers expressed shock over the findings, DV and IAS defended their offerings with statements attacking the report’s methodology. According to a blog post by IAS, the company is “driven by a singular mission: to be the global benchmark for trust and transparency in digital media quality.”

“We are committed to media measurement and optimization excellence,” IAS wrote. “And are constantly innovating to exceed the high standards that our customers and partners deserve as they maximize ROI and protect brand equity across digital channels.”

In DoubleVerify’s statement, the company said the Adalytics report lacked proper context and emphasized the settings options it gives advertisers. However, sources within the ad-tech, brand and agency spaces said the report accurately identified key issues. Despite those stated commitments, the sources said DV and IAS still haven’t provided enough transparency to alleviate concerns about their AI tools; more disclosure would help the industry better understand and test the tools, as well as address broader concerns.

One expert, citing the Star Wars scene in which Obi-Wan Kenobi uses mind control to redirect stormtroopers in another direction, put it this way: “If ever there was a ‘these aren’t the droids you’re looking for’ moment in brand safety, this is it.”

Earlier this week, Digiday sent DV and IAS questions that advertisers and tech experts wanted answered before the report was released. The questions covered how brand-safety technology is applied, how the AI models analyze and score page safety, and whether pages are crawled occasionally or in real time. Others asked whether the companies perform page-level analysis and whether user-generated content is analyzed differently from news content. Neither DV nor IAS directly answered the questions.

“There are clearly some gaps in the system where it is making obvious errors,” said Laura Edelson, a professor at NYU. “If I were a customer, the very first thing I’d want is more information about how this system works.” 

Without transparency, a report like Adalytics’ “really shatters trust,” because “without trust there is no foundation,” said Edelson.

So what might transparency look like? What kind of information should advertisers get from vendors? And how can AI brand-safety tools better address issues plaguing content and ads across the internet?

Rocky Moss, CEO and co-founder of the startup DeepSee.io — a publisher quality research platform — argued that measurement firms should provide more granular data about the accuracy and reliability of page-level categorization. Advertisers should also ask vendors about other issues: how vendors’ prebid tech responds when a URL is not categorized or is behind a paywall; how they address a potential overreliance on aggregate ratings; and the risk of bid suppression for uncategorized URLs (see the sketch below). He also thinks vendors should share how they avoid false positives and how much time they spend each day reviewing flagged content for highly trafficked and legacy news sites.
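To make Moss’s point concrete, here is a minimal, hypothetical sketch of the decision a prebid filter has to make when a URL is uncategorized or paywalled. This is not DV’s or IAS’s actual logic; the names `PrebidPolicy` and `decide`, the thresholds and the fail-open/fail-closed options are all invented for illustration.

```python
# Hypothetical sketch of a prebid brand-safety decision, illustrating the
# trade-off Moss describes when a URL is uncategorized or behind a paywall.
# All names, thresholds and policy options are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Decision(Enum):
    BID = "bid"            # allow the impression
    SUPPRESS = "suppress"  # block the bid (risk: demonetizing safe pages)


@dataclass
class PrebidPolicy:
    block_uncategorized: bool = True  # fail closed when no page-level data exists
    block_paywalled: bool = True      # crawler cannot read paywalled content
    risk_threshold: int = 70          # block pages scoring above this risk level


def decide(url: str, risk_score: Optional[int], behind_paywall: bool,
           policy: PrebidPolicy) -> tuple[Decision, str]:
    """Return a bid decision plus a human-readable reason.

    risk_score is None when the vendor has never categorized the URL --
    the case Moss says advertisers should ask vendors about explicitly.
    """
    if behind_paywall and policy.block_paywalled:
        return Decision.SUPPRESS, f"{url}: content not crawlable behind paywall"
    if risk_score is None:
        if policy.block_uncategorized:
            # Fail closed: safest for the brand, but fresh or low-traffic URLs
            # on legitimate publishers lose revenue (bid suppression).
            return Decision.SUPPRESS, f"{url}: no page-level category available"
        return Decision.BID, f"{url}: uncategorized, policy allows bidding"
    if risk_score > policy.risk_threshold:
        return Decision.SUPPRESS, f"{url}: risk score {risk_score} exceeds threshold"
    return Decision.BID, f"{url}: risk score {risk_score} within threshold"


if __name__ == "__main__":
    policy = PrebidPolicy()
    print(decide("https://example-news.com/fresh-article", None, False, policy))
    print(decide("https://example-news.com/reviewed-article", 22, False, policy))
```

The point of the sketch is that either policy choice has a cost the buyer should understand: fail closed and safe pages get suppressed; fail open and uncategorized pages slip through.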

“All that said, categorization models will always be probabilistic, with false negatives and false positives being expected in (hopefully) small amounts,” Moss said. “If the product is being sold without disclosing that, it’s dishonest. If someone buys BS protection, believing it’s perfect, I know Twitter bots with some NFTs to sell them.”
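One way to read Moss’s caveat: even a model with a strong headline accuracy number can still block a meaningful share of safe pages. Below is a short worked example, with entirely made-up figures, of the kind of per-category error reporting he argues vendors should disclose.

```python
# Worked example with made-up numbers: the granular error reporting Moss
# argues vendors should provide for page-level categorization. Figures are
# illustrative only; no vendor publishes these counts today.

def error_report(true_pos: int, false_pos: int, true_neg: int, false_neg: int) -> dict:
    """Summarize a confusion matrix for one content category."""
    precision = true_pos / (true_pos + false_pos)        # of pages blocked, share truly unsafe
    recall = true_pos / (true_pos + false_neg)           # of truly unsafe pages, share caught
    false_pos_rate = false_pos / (false_pos + true_neg)  # share of safe pages wrongly blocked
    return {"precision": round(precision, 3),
            "recall": round(recall, 3),
            "false_positive_rate": round(false_pos_rate, 3)}


if __name__ == "__main__":
    # Hypothetical: of 10,000 crawled pages, 400 truly contain hate speech.
    print(error_report(true_pos=360, false_pos=250, true_neg=9_350, false_neg=40))
    # -> precision ~0.59, recall 0.9, false_positive_rate ~0.026:
    #    90% of unsafe pages caught, but 250 safe pages demonetized.
```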

The divide between brand safety and user safety is becoming increasingly blurred, according to Tiffany Xingyu Wang, founder of a stealth startup and co-founder of Oasis Consortium, a nonprofit focused on ethical tech. She thinks companies incentivized to address both issues deserve better tooling for user safety, brand suitability and value-aligned advertising.

“We need to move away from a blocklist focus on filtering,” said Wang, who was previously CMO of the AI content moderation company OpenWeb. “It’s no longer adequate for advertisers, given the increasingly complex environment.”

At Seekr — which helps advertisers and people identify and filter misinformation and other harmful content — every piece of content that enters its AI model is made available for review, including news articles, podcast episodes and other content. Instead of labeling content risk on a “low,” “medium” or “high” scale, Seekr scores content from 1 to 100. It also shows what is scored, how it’s scored, what is flagged and why it’s flagged.
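For illustration, a transparent score report of the kind the article describes could look something like the sketch below: a 1-to-100 score where every deduction is tied to a specific passage and rationale. This is not Seekr’s actual API or data model; every field name here is an assumption made for the example.

```python
# Hypothetical sketch of a transparent, page-level score report in the spirit
# of the approach described above: a 1-100 score plus what was flagged and why.
# Not Seekr's actual schema; all field names and weights are illustrative.
import json
from dataclasses import dataclass, field, asdict


@dataclass
class Flag:
    passage: str    # the exact text span that triggered the flag
    category: str   # e.g. "hate speech", "violence", "sexual content"
    rationale: str  # plain-language reason the model assigned the flag
    weight: int     # points the flag subtracts from the score


@dataclass
class ContentScore:
    url: str
    content_type: str               # "news article", "podcast episode", ...
    score: int                      # 1 (highest risk) to 100 (lowest risk)
    flags: list[Flag] = field(default_factory=list)

    @classmethod
    def from_flags(cls, url: str, content_type: str, flags: list[Flag]) -> "ContentScore":
        # Start from a perfect score and subtract each flag's weight,
        # so every point of deduction is traceable to a specific passage.
        score = max(1, 100 - sum(f.weight for f in flags))
        return cls(url=url, content_type=content_type, score=score, flags=flags)


if __name__ == "__main__":
    report = ContentScore.from_flags(
        url="https://example.com/podcast/episode-42",
        content_type="podcast episode",
        flags=[Flag(passage="an example passage describing graphic violence",
                    category="violence",
                    rationale="graphic description of an assault",
                    weight=35)],
    )
    print(json.dumps(asdict(report), indent=2, ensure_ascii=False))
```

The contrast with an opaque “low/medium/high” label is that the buyer can audit each deduction rather than trusting an aggregate rating.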

Transparency also helps companies make better business decisions, said Pat LaCroix, evp of strategic partnerships at Seekr. He also noted that performance and suitability shouldn’t be mutually exclusive: “This shouldn’t be seen as a hassle or a tax you’re paying, but something that drives key metrics.”

“People need to change the lens, and everyone needs to go a level deeper to know how to evaluate content. It’s too black box, too generic,” said LaCroix, who previously worked at agencies and in-house at brands like Bose. “At the end of the day, CPM is still a real metric that buyers are beholden to and advertisers are still looking for the bottom-of-the-barrel prices and that’s why this keeps happening.”

Prompts and Products — Other AI news and announcements

  • The Irish Data Protection Commission took action against X for allegedly using EU user data without consent to train its Grok AI model.
  • A new report from Check My Ads examines the damage of AI-generated obits.
  • The FCC has opened a new comment period as it looks to create new rules for AI use in political advertising.
  • A new report from Dentsu found that the percentage of CMOs who doubt AI’s ability to create content fell from 67% in 2023 to 49% in 2024.
