Why in the news?
A recent legal action by well-known Indian actors against major platforms has thrust the question of personality rights versus generative AI into public view. AI systems can now synthesise faces, voices and mannerisms so convincingly that they produce lifelike but fake content capable of harming reputation, market value, dignity and privacy. The incident has exposed gaps in existing law and judicial practice, and it has prompted urgent debate on how the law should treat personality (or publicity) rights, digital impersonation, platform responsibility and remedies in the era of deepfakes and large-scale AI training.
Background: what personality rights are and why they matter now
Personality rights protect a person’s control over their image, name, voice and other identity markers. Historically grounded in privacy, dignity and property doctrines, they have two principal aspects:
- Personal dignity/privacy: protection from misuse of one’s identity or from humiliating depictions.
- Economic/publicity: the right to monetise commercial uses of likeness or to prevent unauthorised commercial exploitation.
Why AI is a watershed:
- Generative models can produce high-fidelity replicas of an individual’s face, voice or behaviour from limited data.
- Training data often includes public images, videos and audio that are scraped without consent.
- Synthetic content is cheap to produce, viral in distribution, and technically hard to attribute or remove once spread.
Legal landscape — current position and its limits
India: hybrid and evolving
- India recognises privacy and dignity (the landmark Puttaswamy judgment of 2017 on the right to privacy), and courts have dealt with individual instances of personality-related harms.
- No consolidated statutory right to publicity/personality: enforcement relies on privacy jurisprudence, defamation law, certain intellectual property claims and intermediary rules.
- Result: case-by-case remedies, uneven protection, and enforcement gaps (especially for digital impersonation and cross-institutional harms).
Comparative snapshot
| Jurisdiction | Legal focus | Strengths | Limits |
| --- | --- | --- | --- |
| United States | Right of publicity (state law); torts, IP | Robust commercial protection in many states; monetary remedies | Patchwork that varies by state; First Amendment balancing issues |
| European Union | Data protection (GDPR), dignity & personality | Consent and data-processing safeguards; emphasis on dignity | GDPR is process- and data-centric, not a dedicated personality right |
| China | Emerging regulation on synthetic content | Moves toward strict controls on deepfakes and authenticity | State-centric approach; human rights concerns |
| India | Privacy + torts + limited statutory rules | Strong privacy precedents; flexible remedies | No single statutory right; enforcement and cross-border issues |
Key legal and operational issues highlighted by recent events
1. Definition and scope of “consent”
- In the AI context, consent to use a person’s data for model training or to reproduce their likeness must be informed, specific and revocable.
- Problem: publicly available content (performances, social media posts) is often treated as free fodder for training, and power asymmetries (e.g., student/teacher, celebrity/platform) mean consent is meaningful only when accompanied by safeguards.
2. Personality vs. privacy vs. copyright
- Many cases sit at the intersection: is misuse a copyright issue (wrongful copying), a privacy/dignity violation, or an economic/publicity wrong?
- Courts sometimes apply different doctrines inconsistently; statutory clarity would reduce friction.
3. Attribution, provenance and technical evidentiary challenges
- Synthetic content can be anonymised and distributed globally. Authenticating origin, proving training sources and demonstrating causation are technically demanding for courts and investigators.
4. Platform liability and takedown mechanisms
- Fast removal and transparency around content moderation are essential — current intermediary frameworks are partial and reactive.
- Platforms need protocols for detection, provenance tagging/watermarking and expedited notice-and-action for AI-generated abuse.
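The provenance tagging that such protocols rely on can be pictured as signed, machine-readable metadata attached to synthetic media at generation time, so that any later edit to the file or its manifest is detectable. Below is a minimal Python sketch under stated assumptions: the `SIGNING_KEY`, manifest fields and function names are illustrative, and a real deployment would use an open provenance standard and asymmetric, hardware-backed keys rather than a shared secret.

```python
import hashlib
import hmac
import json
import time

# Hypothetical platform signing key; real systems would use an asymmetric
# key pair held in secure hardware, not a shared secret baked into code.
SIGNING_KEY = b"platform-secret-key"

def tag_provenance(media_bytes: bytes, generator: str) -> dict:
    """Build a signed provenance manifest for a piece of synthetic media."""
    manifest = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "generator": generator,    # which model or tool produced the content
        "ai_generated": True,      # explicit synthetic-content flag
        "created_at": int(time.time()),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    # The HMAC binds the manifest to the key holder; altering either the
    # media or the manifest invalidates the signature.
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_provenance(media_bytes: bytes, manifest: dict) -> bool:
    """Check that the media matches the manifest and the signature is intact."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    if hashlib.sha256(media_bytes).hexdigest() != claimed["content_sha256"]:
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

media = b"...synthetic video bytes..."
tag = tag_provenance(media, generator="example-model-v1")
assert verify_provenance(media, tag)              # intact content passes
assert not verify_provenance(media + b"x", tag)   # any edit is detectable
```

The design point is that verification needs nothing but the file and its manifest, which is what makes expedited notice-and-action workflows practical at platform scale.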
5. Cross-institutional harms and jurisdictional gaps
- In creative and academic ecosystems, perpetrators and victims move across platforms and institutions, yet there is a policy void for multi-institutional complaints and for continuity of remedies.
6. Digital evidence and forensic capacity
- Internal Complaints Committees (ICCs), tribunals and trial courts require technical SOPs and accredited forensic procedures to handle encrypted or ephemeral digital evidence.
Judicial trends and doctrinal pointers (what courts are doing)
- Courts globally are increasingly recognising personality/publicity interests distinct from privacy; some decisions protect voice and likeness from AI replication.
- Indian courts have been reactive—granting relief case by case—without a comprehensive statutory benchmark.
- Practical lesson: judicial attitudes are shifting toward protection of identity as both dignity and property, but the legal framework must catch up.
Policy and legislative gaps
| Gap | Consequence | Recommended legal/administrative reform |
| --- | --- | --- |
| No unified statutory personality/publicity right | Fragmented remedies, inconsistent outcomes | Enact a statutory framework recognising personality rights (commercial + dignity aspects) |
| Few platform obligations on provenance | Difficulty tracing and removing deepfakes | Mandatory watermarking/provenance tech; transparency reports; platform liability norms |
| Weak institutional capacity (ICCs, courts) | Re-traumatisation; poor evidence handling | Mandatory training; technical forensic cells; SOPs for digital evidence |
| Limitation periods not suited to psychological harms | Claims for late-recognised abuse become time-barred | Extend limitation periods; allow exceptions for pattern or psychological abuse |
| No coordination across institutions | Perpetrators exploit jurisdictional gaps | Mechanism for inter-institutional complaints and cross-border cooperation |
| Ambiguity around “malicious complaint” provisions | Chilling effect on genuine complaints | Clear, narrow definitions and safeguards against misuse |
Recommended legal design
- Statute defining personality rights — cover name, image, voice, likeness, distinctive style and algorithmic replicas; recognise both non-economic (dignity) and economic (publicity) dimensions.
- Consent & data-use rules — informed, auditable consent for training data; special protection where power imbalance exists.
- Provenance & watermarking mandate — AI-generated audiovisual content must carry robust, tamper-resistant provenance metadata.
- Platform duties — detection, expedited takedown, human review, transparency; penalties for wilful non-compliance.
- Evidence & forensic protocols — lab accreditation, chain-of-custody rules for digital proofs (see the sketch after this list), training for adjudicators.
- Remedies — interim injunctions, takedowns, damages (including reputational/market harm), correction orders, and criminal sanctions for fraud/blackmail.
- Inter-institutional & cross-border coordination — formal channels for complaints across universities, media houses and platforms; bilateral cooperation with other jurisdictions.
- Safeguards — carve-outs for satire, legitimate criticism, and freedom of expression with clear balancing tests.
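A chain-of-custody rule for digital proofs is, at bottom, a tamper-evident log: each custody event records a hash of the evidence together with the hash of the previous entry, so any after-the-fact alteration breaks the chain. The sketch below is a toy illustration in Python; the `EvidenceLog` class and its field names are assumptions for exposition, not drawn from any statute or forensic standard.

```python
import hashlib
import json
import time

class EvidenceLog:
    """Tamper-evident custody log: each entry chains to the previous one."""

    def __init__(self):
        self.entries = []

    def record(self, evidence: bytes, handler: str, action: str) -> dict:
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "evidence_sha256": hashlib.sha256(evidence).hexdigest(),
            "handler": handler,      # who handled the evidence
            "action": action,        # e.g. "seized", "imaged", "transferred"
            "timestamp": int(time.time()),
            "prev_hash": prev_hash,  # link to the previous custody event
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the whole chain; any edited or deleted entry breaks it."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "entry_hash"}
            if body["prev_hash"] != prev_hash:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != entry["entry_hash"]:
                return False
            prev_hash = entry["entry_hash"]
        return True

log = EvidenceLog()
log.record(b"deepfake.mp4 bytes", handler="Investigator A", action="seized")
log.record(b"deepfake.mp4 bytes", handler="Forensic Lab", action="imaged")
assert log.verify()
log.entries[0]["handler"] = "Someone Else"  # tampering with the record...
assert not log.verify()                     # ...is immediately detectable
```

Production systems would add digital signatures and secure timestamping, but the chaining idea above is what lets an adjudicator check integrity without trusting any single custodian.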
Institutional measures and best practices (beyond law)
- Capacity building: mandatory training for complaint committees, judges and investigators in digital forensics and trauma-informed interviewing.
- Model ICC protocols: privacy-protective, time-bound inquiries, counselling and interim protection (no-contact orders).
- Public awareness: digital literacy campaigns to help users spot and report synthetic content.
- Industry standards: encourage AI developers to adopt “responsible datasets” and opt-in licensing for celebrities/creators.
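The opt-in licensing idea inverts today’s default: absence of consent blocks use rather than permitting it. A toy Python sketch of that default, assuming a hypothetical in-memory registry (the `Licence` fields are illustrative; a real registry would be authenticated, auditable and revocation-aware):

```python
from dataclasses import dataclass

@dataclass
class Licence:
    person_id: str
    allow_training: bool   # opted in to model training?
    allow_likeness: bool   # opted in to likeness reproduction?

# Hypothetical registry mapping person identifiers to licensing terms.
registry: dict[str, Licence] = {
    "artist-001": Licence("artist-001", allow_training=True, allow_likeness=False),
}

def may_use_for_training(person_id: str) -> bool:
    """Opt-in means absence from the registry is a 'no', not a 'yes'."""
    licence = registry.get(person_id)
    return licence is not None and licence.allow_training

print(may_use_for_training("artist-001"))  # True: explicit opt-in on file
print(may_use_for_training("artist-002"))  # False: never consented
```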
How courts and regulators should balance rights
- Freedom of expression vs. personality and dignity: allow legitimate creative expression and reporting while restricting deceptive impersonation that causes harm.
- Innovation vs. protection: regulations should be proportionate and technology-neutral — avoid blanket bans that stifle research while ensuring accountability for commercial misuse.
- Due process in takedowns: notice and an opportunity to contest removals, combined with speed where harms are time-sensitive.
Use-case scenarios and suggested remedies
| Scenario | Harm | Immediate remedy | Longer-term fix |
| --- | --- | --- | --- |
| Deepfake-enabled sexual harassment | Reputational harm, trauma, career loss | Emergency takedown, interim injunction, counselling | Criminalisation of non-consensual intimate deepfakes; fast-track tribunals |
| AI-cloned voice used for fraud | Financial loss | Freeze offending accounts, injunctive relief, financial restitution | Mandatory voice provenance standards; platform liability for marketplace misuse |
| AI-generated fake testimonial misusing a celebrity’s image | Market dilution | Cease-and-desist order; damages | Right-of-publicity statute; watermarking of synthetic ads |
| Academic advisor’s voice replicated to coerce a student | Psychological harm, abuse of power | Institutional protective measures; inquiry | Multi-institution complaint mechanism; campus policies on AI misuse |
Institutional response and safeguards
- Adopt clear Acceptable Use Policies (AUPs) for AI-generated content and explicit consent rules for the use of student/employee data.
- Create an AI misuse response team to take down offending content, notify victims, preserve evidence and coordinate with law enforcement.
- Mandate ethical clauses in conferences and collaborations — no unauthorised recording or modelling of participants.
Conclusion — the way forward
The episode that brought personality rights and AI into the headlines is not an isolated controversy; it is a symptom of a systemic mismatch between fast-evolving AI capabilities and slow, fragmented legal responses. Protecting identity in the digital age requires a coherent mix of law, technology standards and institution-building:
- Statutory clarity on personality rights and algorithmic impersonation.
- Robust platform accountability and technical provenance.
- Capacity building for institutions and the justice system to handle digital, cross-border harms.
- Balanced rules that protect dignity and commerce without killing legitimate speech or innovation.
Only with this “spine” of coherent rules, institutions and technical standards will societies be able to harness the creative potential of AI while preventing the exploitation of human identity.
Source: Decoding personality rights in the age of AI – The Hindu
UPSC CSE PYQ
| Year | Question |
| --- | --- |
| 2023 | What are the main socio-economic implications arising out of the development of Artificial Intelligence? |
| 2023 | Do you think that the Constitution of India does not accept principle of strict separation of powers rather it is based on the principle of ‘checks and balances’? Explain. |
| 2022 | Discuss the vulnerability of Indian society to deepfakes and the challenges posed by them. |
| 2021 | Right to privacy is protected as an intrinsic part of Right to Life and Personal Liberty. Elaborate in the context of the digital world. |
| 2021 | What are the challenges to our cultural practices in the name of Secularism? |
| 2020 | Critically examine the role of Social Media in facilitating social movements in India. |
| 2019 | Data protection has emerged as an essential requirement… Examine the challenges. |
| 2019 | Do you agree with the view that protection of cultural rights is as important as protection of human rights? Explain. |