AI-generated images – the ethical debate

Vivienne Winborne, Senior Marketing Manager at One Little Seed

AI-generated images and videos are advancing in leaps and bounds. If you haven’t seen this video, comparing AI-generated videos from 12 months ago to now, it’s worth a watch. As the outputs become more sophisticated and less preschool-drawing-of-your-nightmares, you can see it’ll have a whole host of practical applications.

In the IT channel, we sometimes have quite a specific stock image or video requirement. The ROI may not be there to shoot the footage ourselves, and searching for that perfect clip takes time. Imagine if we could type a prompt into an online program and get an ideal and unique video clip that no one else is using! Looking at it like this, many people see AI generators for copy, images and videos as a no-brainer. Plus, when you play around with platforms like Midjourney, it’s clear that the potential for AI-generated images is incredible. It can create complex, customised imagery based on your prompts.

So, what's the problem?

Well, not all AI image generators are created equal. Even if they share a similar technological platform, the key differentiator is the image database used to train the AI algorithm. This database is where most of the ethical considerations come into play.

Most AI generators use copyrighted material to train their AI algorithms, without the copyright owners’ permission. Not only are creators concerned they’ll be out of a job because of AI, but their work is also being exploited by the very software that might make their roles obsolete.

But what about the ethical AI platforms? Adobe Firefly is frequently referenced as the “ethical AI image generator” because it uses its stock image library for training. However, when you dig deeper, it seems that the image creators didn’t give their permission for their images to be used to train Adobe Firefly. In fact, Adobe failed to consult them at all.

Adobe has also floated the idea of offering stock image contributors an “opt-out.” Sounds good in theory, but given that Adobe Firefly has already been built and is in commercial use, this feels like, at best, closing the gate after the horse has bolted and, at worst, a bad PR initiative.

Both Adobe and Shutterstock have publicly said that they plan to reimburse the artists and photographers whose work was used to train their AI models. While there’s no method in place to calculate whose images were used to inspire a particular AI-generated picture of a businessperson working at a desk, the idea is that each creator will receive a payment based on the number of their images used, as a percentage of the total dataset used to train the AI image generator.
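To make that pro-rata idea concrete, here’s a minimal sketch of how such a payout split might be calculated. It’s purely illustrative: the creator names, image counts and pool size are invented, and neither Adobe nor Shutterstock has published a formula like this.

```python
# Hypothetical pro-rata payout: each creator's share of a compensation pool
# is proportional to how many of their images appear in the training dataset.
# All names and numbers below are invented for illustration only.

def pro_rata_payouts(image_counts: dict[str, int], pool: float) -> dict[str, float]:
    """Split `pool` across creators in proportion to their share of the training set."""
    total_images = sum(image_counts.values())
    return {creator: pool * count / total_images for creator, count in image_counts.items()}

# A fictional training set of 1,000,000 images and a $1,000,000 compensation pool.
counts = {"creator_a": 10_000, "creator_b": 1_000, "creator_c": 50, "everyone_else": 988_950}
for creator, payout in pro_rata_payouts(counts, 1_000_000).items():
    print(f"{creator}: ${payout:,.2f}")
# creator_a: $10,000.00   creator_b: $1,000.00   creator_c: $50.00
```

Even in this toy example, a contributor with a modest portfolio receives very little once the dataset runs into the millions, which is part of why the adequacy of any such compensation is being questioned.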

With AI-generated images essentially competing against human-generated stock images, compensation would need to be significant. Unless the AI training algorithms are open source, how can we be sure creatives are being paid appropriately?

And as a user, why would you use a stock image that other people can access rather than generating your unique image that precisely meets your needs? One strong argument to keep the creative industry alive lies in the idea of true creative advancements. After all, AI cannot currently create something entirely new – its creations are based on what has come before.

To use AI-generated imagery or not to use AI-generated imagery – that is the question.

At OLS, we pride ourselves on our values and commitment to quality. Our values include being straight up and having the good, the bad and the ugly conversations, as well as taking a good hard look at what tools we should use and the ethical implications. So, this topic sparked an interesting internal debate where we discussed these sticky questions:

1. Should we take an ethical standpoint?

There’s some discussion around whether the ethical responsibility lies with the image generation tool (i.e. with Midjourney, DALL-E, NightCafe, etc.) or with the image user. For us, it feels like a cop-out to say that, even though we know creators have major copyright concerns, we will proceed anyway. We felt that, for One Little Seed, this was a decision we needed to make ourselves.

2. Is AI trained on copyrighted material from around the web any different from humans taking inspiration from images, videos and marketing material they see in everyday life?

To me, the primary differences between my being inspired by a song I heard on the radio or a layout I saw in the weekend paper, and an AI generator trained on scraped material, are a) scale and b) intention. I am a single person using my naturally occurring lived experience to produce copy or a marketing strategy. In contrast, an AI image generator is deliberately designed to harvest vast amounts of material to generate large-scale profit for an organisation.

3. Is it a case of client preference?

Currently, there is no legislation to guide businesses, only voluntary guidelines, which leaves the decision in something of an ethical and copyright black hole. One option is to let clients decide, but as an organisation built around giving expert advice, that doesn’t align with our approach.

4. Does it matter where the images used to train the algorithm come from as long as the generated image doesn’t infringe on any copyright?

Disclaimer: this blog post is opinion and not legal advice! However, it is worth noting that there have been multiple cases before the courts where artists allege that AI generators have breached copyright laws by using their images without permission. You can read more about some of these cases here and here. While the legal implications are still being worked through, existing copyright law says that you need permission to use even a tiny part of someone else’s work, contrary to the myth that you can bypass copyright by changing the work by a certain percentage.

5. Does taking an ethical standpoint mean being put at a commercial disadvantage?

This question is perhaps the biggest one for many organisations. If not implementing AI while your competitors do has a negative commercial impact, will big businesses still choose to take an ethical standpoint to protect photographers’ and artists’ copyrights?

I know that I’ve asked more questions than I’ve answered, but the truth is that we don’t have a clear-cut stance. As I discussed in this previous post, we don’t use generative AI to write any of our copy. That decision was more straightforward, since as long as AI-generated copy still reads like you are talking to a robot, it doesn’t meet our requirements. But the jury is still out for us on AI-generated images. For now, I would say that we are in the "watch and see" camp. But I am curious to know how others are approaching it. Have you considered the implications? Are you embracing it wholeheartedly? Or are you, like us, monitoring closely for what comes next?