To Ai? Or Not To Ai? That is the Question (And the Lawsuit)

If Shakespeare were alive today, he’d probably be less concerned with daggers and Danish royalty and more with whether he needed to slap a disclosure label on his AI-generated LinkedIn post. Welcome to 2025, where the existential crisis isn’t whether to be, but whether to be replaced—by code.

Let’s talk turkey. Or maybe pixels.

As a commercial photographer, I’ve spent decades mastering light, glass, and the glorious chaos of liquid. Then along comes generative AI, whispering, “Hey, I can create that whiskey shot without you getting your feet wet… or charging licensing fees.” Cute. But behind the novelty, there’s a three-headed beast: morality, legality, and copyright. And depending on whether you’re in the U.S., the EU, or the state of California… the rules are as clear as a shaken Negroni.

Morality: Just Because You Can Doesn’t Mean You Should

Let’s start with the moral elephant in the server farm.

AI art tools are trained on billions of images… many from working artists, photographers, and designers. Most of us never opted in. We just woke up one day to see our work regurgitated into a “Bold Product Photo in the Style of Rob Grimm” by a machine that’s never spilled coffee on a set.

Now, I’m not anti-AI. I use it. I blend it. But the ethics? They’re slippery. If your AI composite starts with a foundation of stolen work, can you call it original? Or are we just remixing without credit?

If your business is built on visuals, here’s a keyphrase for your moral search engine: ethical use of AI-generated images for branding. It’s trending—and not just in your conscience.

Copyright: Who Owns the Pixels?

Here’s where things go from “huh” to “lawyer up.”

In the United States, the Copyright Office has made it clear: only humans can hold copyrights. So if your AI generates a killer product shot, no matter how much you massaged the prompt, it’s technically not protected. That’s a problem for businesses that want to license, control usage, or just keep competitors from swiping their content like a bored intern on Shutterstock.

Meanwhile, the European Union is taking a different route. The EU AI Act (already in play) doesn’t just ask where the content came from; it demands transparency. If AI touched your image, you may be required to disclose it. And that’s not optional. Think “This image contains AI-generated content” as the new nutritional label for marketing.

The EU is also more aggressive about training data. Between GDPR (for personal data) and the EU’s text-and-data-mining copyright rules, scraping copyrighted work to train your AI model could be considered a violation. And lawsuits are rolling in.

SEO phrase to know: AI copyright law in Europe vs. United States. Because clients will ask.

State-Level Chaos: California Dreamin’ (of Regulation)

If you think things are messy nationally, zoom in.

California, naturally, leads the parade with its AI transparency bills and deepfake disclosure laws. If you create AI-generated video or imagery that could “mislead,” you might be legally required to label it. Think political ads, but it could easily extend to commercial advertising next.

But the Golden State isn’t alone. Enter Colorado, which passed one of the first comprehensive AI accountability laws in the country. The Colorado Artificial Intelligence Act goes into effect in 2026 and applies to “high-risk” AI systems… including anything that could impact employment, housing, or legal rights. That sounds like HR tools, but as photographers working with AI-enhanced headshots or facial composites, we’re inching dangerously close to that territory.

If you’re using AI to manipulate or generate images of people – especially in commercial, real estate, or political contexts – you’ll need to be able to explain how that AI works, mitigate bias, and possibly notify the subject. And no, “Photoshop but faster” won’t cut it.

New York and Illinois are sniffing around too, particularly around biometric data and AI face generation. For photographers using AI to “enhance” faces or skin (you know who you are), that could mean compliance with facial recognition laws, even if your AI doesn’t technically “recognize” anything.

So yeah… “state-level AI image regulations for photographers” isn’t just a mouthful, it’s your long-tail keyword of the day. Especially if you’re working in Colorado, where “Colorado AI law for creative professionals” might be the next thing your client Googles… right before asking why their headshot needs a disclosure.

So, Should You Use AI in Commercial Photography?

Yes…and no.

Use it smartly. Use it ethically. Use it like a tool, not a shortcut. When it complements your creative vision, saves time in retouching, or helps visualize comps? Fantastic. But when it replaces human artistry, licenses, or puts someone else’s unpaid labor in your portfolio? That’s not innovation—it’s theft with good lighting.

Remember: Just because AI can fake a drink doesn’t mean it knows how to pour one.

And that’s where we still have the edge.

Image Disclosure: Featured image above made with Midjourney V7

FAQs: AI Legal Considerations

What are the biggest legal risks of using AI-generated images?

AI-generated images can create legal headaches if you’re not careful. The biggest risks? Copyright ambiguity, misleading visuals, and lack of disclosure. In the U.S., clients might assume they “own” the image when they actually don’t. In Europe, failing to disclose AI use could put you in violation of the EU AI Act. And if you’re using AI to replicate or remix existing styles or likenesses? That opens a whole other can of legal worms, including potential identity or publicity rights violations.

Who owns the copyright to an AI-generated image?

In the United States, copyright only applies to human-authored work. If a machine generated the image with no meaningful human input, it’s considered public domain. That means no one technically owns it and anyone could reuse it. In Europe, copyright law is a bit more nuanced, but the general consensus is the same: if the final creative decisions weren’t made by a person, it likely isn’t copyrightable. So if you’re using AI-generated images in your client work, they may not be protected in the way traditional photos are.

Is it legal to use AI-generated images in advertising?

Yes, it’s legal, but there are strings attached. In the U.S., the FTC has already warned that misleading AI-generated content in ads can lead to regulatory action. Europe’s even stricter: the EU AI Act requires clear labeling and documentation when AI is used, especially in anything that could manipulate public perception. If you’re blending AI-generated visuals with product photography or promotional content, you may be legally obligated to tell your audience or risk fines, pulled campaigns, or worse.

How does Colorado’s AI law affect creative professionals?

Colorado is ahead of the curve with the Colorado Artificial Intelligence Act, which takes effect in 2026. It classifies “high-risk AI systems” as anything affecting decisions related to housing, employment, or access to services… which could include things like AI-enhanced headshots used in job applications. As a creative pro, if you’re using AI in any way that could influence how someone is perceived or hired, you may need to provide transparency about the tool, how it works, and what bias mitigation steps you’ve taken.

Do I need to disclose when I’ve used AI in my work?

Depends on where you live, and who you’re working for. In California, disclosure laws already apply to certain deepfake-style content. In Colorado, disclosure may be required for anything that could affect personal outcomes. In the EU, full transparency is a legal requirement under the AI Act. Even if your location doesn’t mandate it yet, disclosing AI use in your workflow can build trust, show integrity, and protect you legally down the line.
