By Kimeko McCoy • August 1, 2024
Throughout the generative AI hype cycle, brands and agencies have been eager to invest in AI tools for everything from creating workflow efficiencies to drumming up press coverage. A less exciting part of the AI picture, though, is the prospect of AI-generated or altered content being labeled as such.
Much of the industry hype around generative AI centers on the technology’s potential to make marketers’ jobs easier, faster and more efficient. In what feels like a flash, tools of all kinds, from Google’s search function to Adobe’s Firefly, have embedded some type of AI. But as more images touched by AI, in which a person has a suspicious number of fingers, for example, appear on social media, tech giants are setting up some guardrails before things get too far out of control.
Marketers, however, are hesitant to see “Made with AI” labels slapped across their creative campaigns, mostly because that label paints with too broad a brush, executives say: it treats content generated entirely by AI and content in which AI tools were merely used during the creation process the same way, despite the fact that those are two different things.
“Because at the end of the day, with the brand clients, what they care about is the perception of the brand and the implications to brand perception,” said Cristina Lawrence, evp of consumer and content experience at Razorfish. “It’s a perception worry.”
And, perhaps, rightfully so. Generative AI-based images have a reputation for quantity over quality, thanks to those notable examples of AI-generated images in which people who look human have too many fingers, misplaced limbs or an excessive number of teeth. Ultimately, per Lawrence, marketers are apprehensive about AI labels implying that content is inauthentic or misleading.
It’s not that marketers don’t see the value in labeling AI content as such, but there’s a negative perception around AI-generated images, and brands don’t want to be associated with AI in that way, Lawrence said. AI labels as they currently exist imply that an entire image was generated solely through AI, as opposed to something created with human oversight using AI-powered tools.
Notably, the AI hype cycle has hit a rough patch. The latest wave of marketing for AI isn’t landing with people as some had expected it to. The most recent example is the Olympics-themed ad for Google’s Gemini AI chatbot, in which the chatbot is tasked with generating a letter from a young athlete to American Olympic track star Sydney McLaughlin-Levrone. The ad didn’t go over well, sparking criticism about Big Tech replacing a child’s creativity with AI-generated text.
Across the industry, momentum has been building behind better labeling and identifying AI-generated or altered content and putting safeguards in place to mitigate and prevent AI fraud. In an effort to set industry standards, some companies have teamed up through the Coalition for Content Provenance and Authenticity (C2PA), which was founded in 2021 to certify online media’s provenance (or, to put it in plain terms, how an image came to be and traveled online). Tech giants like Microsoft, Intel, Sony and Adobe are current members of the group, which also includes camera companies, media firms and holding companies like Publicis Groupe.
Last October, Adobe debuted a new “Content Credentials” icon in an effort to improve transparency around videos and images that were either created or edited using AI. In February, Meta made a similar announcement, rolling out “AI Info” content labels so users know AI was used to either generate or alter content.
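To make the distinction at the heart of marketers’ complaint concrete, here is a minimal sketch, in Python, of how a machine-readable provenance record could in principle separate a fully AI-generated asset from an AI-assisted edit. The manifest fields and function names below are hypothetical stand-ins loosely inspired by C2PA-style provenance assertions, not the actual Content Credentials or “AI Info” schema.

```python
# Illustrative sketch only. The field names below ("actions", "source",
# "tool_uses_ai") are hypothetical, loosely modeled on C2PA-style
# provenance assertions rather than the real Content Credentials schema.

def classify_ai_involvement(manifest: dict) -> str:
    """Separate fully AI-generated assets from AI-assisted edits."""
    actions = manifest.get("actions", [])
    if any(a.get("action") == "created" and a.get("source") == "generative-ai"
           for a in actions):
        return "fully-ai-generated"   # e.g. a wholly synthetic image
    if any(a.get("action") == "edited" and a.get("tool_uses_ai")
           for a in actions):
        return "ai-assisted-edit"     # e.g. a lightly retouched photo
    return "no-ai-recorded"


# A photo of a human model: captured by a camera, then retouched with an
# AI-powered tool -- the case execs argue shouldn't carry a blanket label.
retouched_photo = {
    "actions": [
        {"action": "captured", "source": "camera"},
        {"action": "edited", "tool_uses_ai": True},
    ]
}
print(classify_ai_involvement(retouched_photo))  # -> ai-assisted-edit
```

The point of the sketch is only that provenance metadata could, in principle, support a graded distinction rather than the single blanket label marketers object to.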
It’s a valiant effort, but it could cause ripple effects in the industry’s approach to labeling AI content, said Elav Horwitz, evp and global head of applied innovation at McCann Worldgroup.
“Gmail is becoming better with predictive generative AI,” she said. “So what? We would need to start saying on our emails ‘Made with AI’ now that it’s embedded in everything we do?”
To Horwitz, the idea of labeling content as being made or tweaked using AI is good in theory. But in practice, it’s a slippery slope. As it stands now, the practice is to label everything as AI-made, whether the content was genuinely AI-generated or the production process merely involved AI-powered tools. That calls into question what is AI and what is not, she said, which, perhaps, further muddies the water in favor of AI companies.
For example, it may make sense to label a campaign using an AI-generated model as “Made with AI.” However, agency execs argue that the same labeling shouldn’t apply to a campaign featuring a photo of a human model that was lightly retouched using AI-powered tools. (A different agency exec told Digiday about a similar experience in the latest edition of our Confessions series.)
At one point, there were industry grumblings about a framework with rating levels, according to one executive who spoke on background. For example, level five would mean an image was generated entirely by artificial intelligence, while level one would mean no AI was used. Again, this points to a slippery slope in which differentiating minimal AI intervention from significant intervention would be a challenge, the exec added.
What’s clear is that the AI train has already left the station and there’s no sending it back, per the anonymous exec.
“AI is a bit of a Pandora’s box. It’s already out so we can’t put it back in the box,” they said. “All this to say that the landscape is evolving pretty rapidly. I don’t think we know the full guardrails of everything. That’s what we’re living through.”
As it stands, the U.S. doesn’t have any federal regulations around AI. Instead, oversight is managed by a hodgepodge of players, including government entities, corporations and organizations like C2PA. Earlier this month, Digiday reported that senators were set to propose new regulations for privacy, transparency and copyright protections regarding AI.
But generative AI is just the industry’s latest shiny object, meaning that as content labeling continues to be adopted and refined across the industry, clients will eventually get to a point where their apprehension subsides, Razorfish’s Lawrence said.
“As we continue to refine the content labeling and that provenance piece, I think we will get to a point eventually where that apprehension will subside,” she said.
https://digiday.com/?p=551408