When We Drew the Line at Photoshop But Not at AI: A Reckoning with What We Value
Is a fundamental redefinition of what value means in the commercial sense underway?
We’re witnessing something peculiar in our current moment: regulators are scrambling to mandate AI image disclosures while photoshopped images continue their reign largely uncontested. California’s SB 942, effective January 2026, will require major AI platforms to label generated or altered images with both visible and embedded disclosures. Yet the same logic that justifies this requirement, that consumers deserve to know if an image was manipulated, was conspicuously absent when digital retouching became ubiquitous in advertising.
The irony is sharp, and it reveals something far deeper than a regulatory lag. It suggests we’re not actually concerned about manipulation itself. We’re concerned about how the manipulation happened, and in that distinction lies a fundamental misconception about what we should be regulating.
The Photoshop Paradox
Here’s where the story becomes uncomfortable. Israel introduced the world’s first “Photoshop Law” in 2013, requiring disclosure when images in advertisements had been digitally altered. Ireland’s ASAI Code now mandates that influencers declare photoshopped content. The UK and EU have banned advertisements that exaggerate a product’s effects through digital alteration. Yet in the United States, the birthplace of digital image manipulation at scale, no comprehensive federal disclosure requirement ever took root, despite a 2014 Congressional proposal for a Truth in Advertising Act that would have done exactly that.
Why? Because we convinced ourselves photoshopping was different. A photoshopped image, the logic went, is still a variation of something real. A photograph was taken, a moment was captured, and then refined. It’s an enhancement, a lie perhaps, but an honest lie built on a foundation of truth.
The AI image, by contrast, has no moment. It emerges from pure statistical probability, trained on thousands or millions of images compressed into mathematical patterns.
But here’s the rub: if we accepted photoshopped images, which demonstrably harm consumers by creating impossible beauty standards and misleading them about what products actually do, why shouldn’t we also accept AI images? If the manipulation itself wasn’t the problem, only the degree of manipulation, where exactly should the line fall?
The Authenticity Illusion
The answer that emerges is almost too obvious to articulate! We’re not regulating based on what consumers can verify. We’re regulating based on what makes us uncomfortable about the nature of the technology itself.
An AI system trained on images of 100 models, generating a face that never existed but resembles elements of many real people, feels like fraud in a way that a slimmed-down Kate Winslet cover, acknowledged or not, doesn’t. We have an intuitive sense that if photoshopping is a conversation with reality, then AI generation is talking to itself in a mirror.
But that intuition collapses the moment you examine the economic reality of training data.
The Royalty Question That Breaks Everything
The regulatory anxiety around AI images is beginning to expose something more fundamental: a system of rights, compensation, and ownership that was never built for a world where creativity emerges from collaborative statistical patterns.
India has just proposed something radical: mandatory royalties for AI training data. Under their ‘One Nation, One Licence, One Payment’ framework, AI developers would automatically gain access to copyrighted works for training, while a central collective distributes statutory payments to creators. It’s an elegant solution to an impossible problem - until you confront the actual impossible problem.
If an AI system is trained on the works of 100 artists, photographers, or creators, how do you distribute royalties fairly? How do you even identify all 100? What if the AI’s inspiration doesn’t come from direct copying but from statistical patterns that emerged from thousands of works, none of them individually recognizable?
The answer, of course, is that you can’t, not in a way that honors the original intent of copyright, which assumed that creative works were discrete, identifiable, and traceable to specific creators.
This is where photoshopping never had to confront itself: a retoucher uses a specific image, modifies it, and the original creator can be identified. The chain of value is knowable. With AI training, the chain explodes into a thousand invisible threads, and we’re left with a choice: either pay everyone a tiny fraction, or pay no one anything, or create an entirely new system for thinking about ownership and value.
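The “pay everyone a tiny fraction” arm of that choice is easy to make concrete. A minimal sketch, using entirely hypothetical figures for the pool size, corpus size, and a typical creator’s contribution, shows the arithmetic of pro-rata distribution:

```python
def pro_rata_payouts(pool: float, contributions: dict[str, int]) -> dict[str, float]:
    """Split a statutory royalty pool across creators in proportion
    to how many of their works appear in the training corpus."""
    total_works = sum(contributions.values())
    return {creator: pool * n / total_works for creator, n in contributions.items()}

# Hypothetical numbers: a $10M annual pool, a training corpus of
# 500M works, and a typical creator contributing 200 images.
pool = 10_000_000
corpus_size = 500_000_000
per_work = pool / corpus_size       # $0.02 per work per year
typical_creator = 200 * per_work    # $4.00 per year
```

Even with generous assumptions, the per-creator payout collapses toward pocket change, and that is before the harder problem of identifying which works shaped which outputs at all.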
What Humans Can No Longer Expect to Be Paid For
This regulatory moment is, I’d argue, less about AI images and more about the collapse of a payment model that has governed creative work for the last 200 years: pay for skills, pay for looks, pay for the time and effort required to create something.
Machine learning has already begun dismantling this. AI can generate images, write copy, design layouts, compose music, all tasks that once commanded premium rates because they required rare human skills. The skills are no longer rare. They’re abundant. They’re available at the cost of compute.
So what remains? What can humans still expect to be paid for when looks and skills have been commoditized?
The evidence is beginning to crystallize, and it points toward something unexpected: humans will be paid for judgment, curation, authenticity, and relationship.
The research is telling: 81% of consumers say authenticity is a key factor in choosing brands, and 86% value it in their purchasing decisions. Authentic human storytelling, a founder explaining why she built something, a creator sharing their perspective, outperforms algorithmic content reliably and measurably. Not because it’s “better” in any objective sense, but because it’s evidently human. It carries the mark of consciousness, intentionality, and accountability. Even irrationality!
Workers who combine AI literacy with emotional intelligence, ethical judgment, and creative synthesis command wage premiums of 21-40%. The jobs of the future aren’t those that avoid AI; they’re those that orchestrate it. AI Product Manager, Human-AI Interaction Designer, AI Ethics Officer, Automation Strategist. These are roles that place humans at the decision point, using machines as tools rather than being replaced by them.
The shift is profound: you’re no longer paid for what you can make. You’re paid for what you decide to make, why you decide to make it, and who you convince to care.
The Unexpected Wisdom of Animals
There’s a peculiar footnote to all of this that deserves contemplation. Animal models and influencers, from beloved dogs in advertisements to cats with millions of Instagram followers, have never negotiated for compensation in the way humans do. They don’t care about copyright. They don’t demand royalties. They don’t even understand that they’re being used for commercial purposes.
Yet they often generate extraordinary value.
A golden retriever in a car commercial doesn’t earn residuals, but the authenticity of watching a real animal be a real animal, no acting or pretense, is worth millions to advertisers. We trust it implicitly because we understand the dog has no economic incentive to deceive us.
What if this reveals something about the future of human value in an AI-saturated world? What if the model isn’t compensation for performance, but rather incentive for authenticity?
A creator who builds genuine trust with their audience doesn’t need to be paid for every piece of content. They benefit from a system of reciprocal relationship - attention, loyalty, long-term engagement. A researcher whose judgment is trusted doesn’t need to be compensated for advice alone; their credibility becomes their economic moat. A founder whose vision resonates doesn’t extract maximum value from each transaction; they build a stakeholder base that grows in value over time.
The treats that incentivize the dog aren’t payment. They’re the mechanism that keeps the dog showing up, being itself, and generating value through mere presence.
We may be moving toward a world where human value works similarly, not compensation for a discrete service rendered, but rather a system of mutual incentive where authenticity, judgment, and relationship are the actual currency.
Drawing the Line, Finally
So let’s return to the original question: why did we require disclosures for AI images but not photoshopped ones?
The honest answer is: we’re not sure yet. Regulatory bodies are reaching for a sensible rule, transparency about how content was made, but they’re applying it inconsistently because they’re still working from old assumptions about value, ownership, and trust.
What they should be regulating, I’d argue, isn’t the technology. It’s the claim to authenticity.
A photoshopped image doesn’t claim to be real; it claims to show you what a product can do, and the law has mostly accepted that as a form of rhetorical persuasion (though not without debate). An AI image, when passed off without disclosure, claims to show you something that might exist, a person, a moment, a scene, and that claim is false in a way that a slimmed-down model shot isn’t.
But this too is a fragile distinction, and it won’t hold as AI becomes more prevalent and seamless. The line we draw today, between photoshopped images and generated ones, will eventually seem as quaint as the distinction between hand-painted advertising and printed advertising once did.
The real regulation we need isn’t about images; it’s about the economic and social contract we want to build when what the traditional payment model rewarded - expertise, time, looks, effort - is no longer scarce.
Do we want a world where creators are guaranteed a share of every dataset their work appears in, even if it’s infinitesimally small and impossible to track? Or do we want a world where value flows toward those who can orchestrate, judge, and authenticate, who can tell you not just what is possible, but why it matters?
The answers we give will shape not just AI regulation, but the future of human work itself. And unlike images, photoshopped or otherwise, that’s something we can’t afford to get wrong.

