Insights
“AI companies should build barriers against copyright violation”
Next up in Media4Growth’s ‘AI & Creative Ownership’ series, Vinit Patil, Creative Director at Sociowash, speaks about the viral AI-generated Amul creative controversy, copyright ambiguity, AI-generated ideas, and why brands need to publicly clarify their stance in the age of generative content.
When AI can replicate a brand’s identity, visual language, and tonality within minutes, where does originality end and imitation begin? It is a question that has gained new urgency.
Recently, an AI-generated creative on social media was assumed to be an official Amul topical, until the brand publicly clarified that the creative was not associated with it in any way. The incident quickly sparked a larger conversation around AI-generated advertising, misinformation, copyright, and brand accountability.
For Vinit Patil, Creative Director at Sociowash, the issue reflects both the accessibility and the unpredictability of AI today.
“AI is accessible to everybody, it’s not just restricted to brands or agencies,” he says.
According to Vinit, while AI has become a powerful creative tool, the industry still lacks awareness around the responsible use of brand identities and copyrighted communication.
“I think copyright laws are not taken very seriously,” he says. “But that’s just one part. The other fact is that people are just not aware that they can’t just randomly use some brand’s ideology or identity and make something out of it.” As Vinit points out, there is little conscious thinking about the ethics of simply borrowing a brand’s identity and creating something out of it.
At the same time, he points out that speculative advertising itself is not new to the industry. “Spec ads have always existed in advertising as a very fun medium,” he says. “But if done responsibly, it works well.”
However, AI changes the scale and realism of those creations dramatically, making it harder for audiences to distinguish between official campaigns and AI-generated replicas.
For Vinit, this makes public clarification essential for brands whenever such incidents occur. “It’s very important to come out and say something because not saying is also agreeing with it or letting it slide,” he says.
He believes silence only fuels public assumption. “It becomes imperative that you come forward and give your two cents on it. Make your position clear or your stance clear. This is how we stand with it. This is not something that we condone.”
Because once assumptions begin circulating online, controlling perception becomes significantly harder. “Unless you clear it, people are going to assume things,” he says. “And letting people assume something is a very dangerous place to be in.”
Despite the growing concerns around AI-generated communication, Vinit is clear that he does not see AI itself as the enemy of creativity. “For me, AI is always something that helps you do things, not does things for you,” he says.
According to him, AI works best when it assists execution after a human insight or idea already exists. “If you have an idea or an insight that you think of, sure, you can put that into AI and get it to do a screenplay and different versions of it. That works.”
But relying on AI to originate ideas defeats the very purpose of creativity, he feels. “Using it to generate ideas defeats the purpose,” he says. “Technologically also, AI basically delves into the past, draws data from the past, and gives you something.”
He contends that AI-generated “ideas” are fundamentally derived from existing patterns rather than true originality. “So if you are asking AI for an idea, it’s not a real idea anyway,” he says. “It’s an inspiration or a copy by default. It’s literally how AI works.”
Vinit also believes responsibility cannot rest only with brands and agencies. AI platforms themselves, according to him, will eventually need to introduce stronger safeguards around copyrighted content and brand replication.
He adds, “There are some AI agents that take copyright laws into account. If you ask them to generate a Hermès bag with XYZ things on it, it will refuse to generate it because it holds a copyright law.”
However, he acknowledges that implementing copyright enforcement inside AI systems is far more complicated than it appears. “The copyright definition is vague,” he says. “Sometimes a slight deviation from the law makes it original.”
He compares this ambiguity to music copyright structures. “With a song, if you change three tones in a song, it becomes a completely new song with no copyright, even if everything else is the same.”
Even so, he believes AI companies should still attempt to build barriers wherever possible.
“AI agents should implement it to whatever capacity they can,” he says. “That will at least reduce the masses from falling into the trap.” At the same time, he admits that no system will ever completely eliminate misuse.
“If someone has to get past it, they will get past it anyway,” he says. “But at least it will become a gateway to not do it. Like a first-level barrier.”
For Vinit, the Amul controversy is ultimately not just about one viral AI-generated creative. It represents a much larger transition the industry is now entering, one where technology is making imitation easier than ever, while brands, agencies, and platforms are still trying to define what originality, ownership, and accountability actually look like in the age of AI.