A New Kind of Creative Power
In 2022, an AI-generated painting won first prize in the digital arts category at the Colorado State Fair. The backlash was immediate and predictable — artists felt blindsided, judges felt duped, and the creator of the image, Jason Allen, defended his choice to use Midjourney as simply a new kind of tool. What made this moment significant wasn’t the controversy itself but what it revealed: AI image generation had crossed a quality threshold that made the ethical questions unavoidable.
These tools — DALL-E, Midjourney, Stable Diffusion, and others — can now produce photorealistic images, detailed illustrations, and convincing artwork in seconds. That capability raises questions that reach beyond copyright into territory about authenticity, consent, labour, and what we think creativity actually is.
The Training Data Problem
AI image models are trained on billions of images scraped from the internet — the vast majority without the explicit consent of the creators. When a model learns to paint in the style of a specific living artist, it has done so by ingesting that artist’s work without permission, compensation, or credit. The artist’s labour contributed directly to a product that can now be used to undercut them commercially.
Legal frameworks haven’t kept pace. In the US, AI companies have generally argued that training on publicly available data is fair use, but the question remains unsettled and is increasingly contested in court. Several class-action lawsuits have been filed against AI image companies by artists who argue their work was used without consent. The outcomes of these cases will shape not just image generation but the entire AI industry’s relationship with creative work.
Even setting legality aside, there’s an ethical question worth sitting with: if a human artist spent ten years developing a recognisable visual style, and an AI can replicate that style on demand, what exactly is owed to that artist?
Deepfakes, Misinformation, and Non-Consensual Content
The most immediately harmful applications aren’t artistic. Deepfake technology — AI-generated images and videos that place real people in fabricated scenarios — has been used primarily to create non-consensual intimate imagery. A 2023 report found that over 96% of deepfake videos online were non-consensual, with women as the overwhelming majority of targets. This is not a niche problem.
AI-generated disinformation presents a different but equally serious challenge. The ability to create convincing images of events that never happened — a politician signing a document, a public figure at a location they never visited — fundamentally changes what it means to see something. When photographic evidence can be fabricated cheaply and at scale, the epistemic foundations of journalism, law, and democratic accountability are genuinely threatened.
The Question of Style vs. Authorship
Art has always involved influence and imitation. Every artist learns by absorbing the work of predecessors. The Impressionists influenced each other; Picasso’s cubism built on Cézanne; countless contemporary illustrators work in styles that clearly derive from specific influences. Is AI image generation fundamentally different from a human artist learning from others’ work?
Many artists argue yes — primarily on grounds of scale and consent. A human artist absorbs influences slowly, transforms them through individual experience, and produces something genuinely new. An AI can be prompted to generate “in the style of [living artist’s name]” and produce thousands of images on demand, potentially flooding the market with work that competes directly with that artist.
Others argue that style itself is not copyrightable — and they’re legally correct, though the moral weight of that argument remains contested. The gap between what is legal and what is ethical is precisely where this debate lives.
Disclosure and Authenticity
One of the more tractable ethical questions concerns disclosure: should AI-generated images be labelled? The answer, almost universally agreed upon, is yes — but implementation remains haphazard. Major platforms have begun requiring labels on AI-generated political content. Several news organisations have adopted explicit policies prohibiting AI images without disclosure. Adobe’s Content Authenticity Initiative is developing technical standards for provenance metadata that would allow images to carry verifiable information about how they were made.
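The core idea behind provenance metadata can be sketched in a few lines. Real CAI/C2PA manifests are cryptographically signed and embedded in the image file itself; the simplified sketch below (all names hypothetical, not the actual C2PA format) only illustrates the essential mechanism — binding a content hash to a claim about how the image was made, so that any alteration of the image breaks the record:

```python
import hashlib
import json

def make_provenance_record(image_bytes: bytes, generator: str) -> str:
    """Build a simplified provenance record: a content hash bound to a
    claim about the image's origin. (Real C2PA manifests are signed;
    this sketch omits signatures for brevity.)"""
    record = {
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "claim": {"generator": generator, "ai_generated": True},
    }
    return json.dumps(record, sort_keys=True)

def verify_provenance(image_bytes: bytes, record_json: str) -> bool:
    """Check that a provenance record still matches the image bytes."""
    record = json.loads(record_json)
    return record["content_sha256"] == hashlib.sha256(image_bytes).hexdigest()

# A record stops matching the moment the image is altered.
original = b"...image bytes..."
record = make_provenance_record(original, "hypothetical-model-v1")
print(verify_provenance(original, record))              # True
print(verify_provenance(original + b"edit", record))    # False
```

What a hash alone cannot do is prove *who* made the claim — that is why the real standards add cryptographic signatures on top of this basic binding.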
The harder cases involve advertising, stock photography, and editorial illustration, where AI-generated images are beginning to replace work that would previously have been commissioned from human artists. Whether this requires disclosure is less clear — and the economic consequences for working illustrators are real regardless of what disclosure policies ultimately say.
Where Thoughtful Lines Can Be Drawn
Most serious proposals converge on a few principles: consent and compensation for artists whose work is used in training data; robust legal prohibitions on non-consensual intimate imagery generated by AI; mandatory disclosure for AI-generated content in contexts where it might mislead; and opt-out mechanisms that allow creators to exclude their work from training datasets.
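The last of those principles, opt-out mechanisms, already has a working precedent in the web's robots exclusion convention, which some AI crawlers honour. A minimal sketch using Python's standard library — where "ImageTrainingBot" is a hypothetical crawler name, not a real agent — shows how a site could exclude its gallery from training crawls while remaining open to everything else:

```python
from urllib.robotparser import RobotFileParser

# A site's robots.txt opting its gallery out of a (hypothetical)
# AI-training crawler while allowing all other agents.
robots_txt = """\
User-agent: ImageTrainingBot
Disallow: /gallery/

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The training crawler is blocked from the gallery...
print(parser.can_fetch("ImageTrainingBot", "https://example.com/gallery/art.png"))  # False
# ...while an ordinary crawler is not.
print(parser.can_fetch("SearchBot", "https://example.com/gallery/art.png"))  # True
```

The limitation is the same one the governance debate turns on: robots.txt is a request, not an enforcement mechanism — it only works if crawlers choose to respect it.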
None of these are technically insurmountable. They are primarily governance problems — questions about who gets to write the rules and who enforces them. The speed of capability development has outpaced the development of governance, which is exactly what makes the current moment both urgent and uncertain. The technology itself is not the villain here, but its neutrality places unusual responsibility on the humans who choose how to deploy it.
Sources
- Sensity AI. (2023). The State of Deepfakes Report.
- Adobe Content Authenticity Initiative. (2024). CAI Overview. contentauthenticity.org.
- Andersen v. Stability AI Ltd. (2023). US District Court, N.D. California.