Legal risks loom for Firefly users after Adobe’s AI image training practices exposed


Adobe’s Firefly was designed to give enterprise users generative AI image tools they could trust. 

“A lot of our very big enterprise customers are very concerned about using generative AI without understanding how it was trained,” Adobe chief strategy officer Scott Belsky told TechCrunch last year. “They don’t see it as viable for commercial use in a similar way to using a stock image and making sure that if you’re going to use it in a campaign you better have the rights for it — and model releases and everything else. There’s that level of scrutiny and concern around the viability for commercial use.”

The company said Firefly was trained only on licensed content from Adobe’s image banks. To back up that claim, it provided enterprise users with full legal indemnification for any content created using the tool. This is “a proof point that we stand behind the commercial safety and readiness of these features,” Claude Alexandre, the company’s VP of digital media, said at last year’s Adobe Summit. 

Unfortunately, or perhaps inevitably, this wasn’t entirely true. A Bloomberg report found Firefly was trained, in part, on AI-generated images. Some of these came from Midjourney, an AI many believe was trained on images scraped from the web. 

This has created a big marketing problem for Adobe and potentially a big legal problem for Firefly’s users. It also highlights IP issues that hang over all of AI and that can only be solved by law. 

The 5% solution

Adobe said the images from Midjourney made up only 5% of the training material. That’s not a great defense. The company has 248 million images under license, so that “only” could be as many as 12.4 million pictures.

“It doesn’t matter if it’s 5%, 1%, 0.001%,” said Katie Robbert, CEO and co-founder of Trust Insights, and a MarTech contributor. “You are making the declarative statements of what your product does and doesn’t do. And when it’s shown that what you’re saying is not true, then you’re no longer a trustworthy brand.”

In addition to the brand damage, it means Firefly doesn’t solve the problem it was built to solve.

Dig deeper: AI in marketing: Examples to help your team today

“I don’t think there’s a high degree of concern that you’re going to generate something that someone’s going to come back and say, that’s mine,” Robbert said. “But we don’t know that. And that’s the problem. Users are stuck in this limbo of ‘Can I use it safely, or can’t I?’ We don’t know.”

Brands at risk

What that means is a lot of marketers for big brands who rely on Firefly are now looking for something that will ensure they have the legal right to use the images they create. “Because the last thing I need is a lawsuit because I just wanted to put an image on my blog,” said Robbert.

In the event of a lawsuit, it’s the brand and not Adobe that’s at risk.

“I’m not an IP attorney, but I believe the end user is the one who’s liable for the use of the tool,” said Paul Roetzer, CEO of The Marketing AI Institute. “So if there is some massive class action lawsuit and it’s determined that these models were actually illegally trained, the end user is the one that’s going to get caught up in it.”

Roetzer said the best remedy for this situation is a legal one. The EU already has regulations in place to deal with this. The AI Act requires providers to give the public a detailed summary of the content used to train their models. It also has limited exceptions for text and data mining, which balance copyright protection with promoting innovation and research. 

In the U.S., it currently looks like regulation will come about through lawsuits, not legislation. Roetzer said that means the technology will continue to outpace the law.

“These models are going to advance so much more before we ever have any meaningful lawsuits that get to the point where we have regulations that cover this,” he said.

The speed of innovation, fueled by consumer demand, is only going to compound the problem.

“People are going to want the most advanced technology,” Roetzer said. “That technology may or may not have been trained on things they probably shouldn’t have been trained on. That is just where we are and it’s where we’re going and it’s not going to stop anytime soon.”
