Women’s Digital Safety in the Age of Artificial Intelligence

While Artificial Intelligence offers new opportunities for women’s empowerment, it also creates emerging risks to their digital safety. Examine the challenges and suggest measures to ensure safe and inclusive digital spaces for women. (GS Paper II – Social Issues)

Context

  • The rapid integration of artificial intelligence into daily life has transformed societies, yet it has simultaneously amplified vulnerabilities, particularly for women in the digital realm.
  • On the eve of International Women’s Day 2026, the imperative to align technological progress with ethical AI and robust safeguards for women’s digital safety has gained urgency.

How AI and Digital Technologies Are Empowering Women

  • Expanding Economic Opportunities: Digital platforms enable women to participate in e-commerce, freelancing, and home-based entrepreneurship. For instance, platforms such as Meesho have enabled many women from small towns to earn income through online reselling.
  • Improving Access to Education and Skills: Digital learning initiatives under Digital India and Pradhan Mantri Gramin Digital Saksharta Abhiyan (PMGDISHA) have improved digital literacy and access to online education, especially for rural women.
  • Strengthening Financial Inclusion: Schemes such as Pradhan Mantri Jan Dhan Yojana have enabled millions of women to open bank accounts and access digital banking and Direct Benefit Transfers (DBT).
  • Enhancing Access to Healthcare: Telemedicine platforms such as eSanjeevani allow women, particularly in rural areas, to consult doctors remotely and access health services.
  • Promoting Social and Political Participation: Digital platforms help women express opinions, participate in public debates, and mobilise social movements, as seen in campaigns such as the #MeToo movement.
  • Encouraging Women’s Participation in Technology: Increasing women’s representation in AI research and STEM (Science, Technology, Engineering, and Mathematics) fields can lead to more inclusive and ethical technology design, helping address gender-specific concerns in digital systems.

Key Challenges in Ensuring Women’s Digital Safety in the AI Era

1. Rising Online Harassment and Digital Abuse

  • Expansion of internet access has increased women’s exposure to cyberbullying, trolling, stalking, and doxxing (revealing personal information online without consent).
  • A study by UN Women and The Economist Intelligence Unit (2021) found that 38% of women globally have personally experienced online violence, while 85% have witnessed digital abuse directed at other women.
  • In India, the National Crime Records Bureau recorded over 65,000 cybercrime cases in 2022, indicating the growing shift of gender-based violence into digital spaces.

2. Misuse of Deepfake and AI Technologies

  • Advances in Artificial Intelligence have enabled deepfakes—manipulated videos, images, or audio that falsely portray individuals.
  • Research by Deeptrace shows around 96% of deepfake videos online are non-consensual pornographic content, largely targeting women.
  • A 2023 analysis by Sensity AI also found that women constitute the primary victims of deepfake-based sexual exploitation.

3. Anonymity and Weak Platform Accountability

  • Online anonymity and pseudonymity make identifying perpetrators of cyber abuse difficult. Harmful content spreads rapidly across platforms, often outpacing moderation efforts.
  • Despite the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, enforcement and accountability mechanisms remain uneven, allowing abusive content to persist.

4. Gender Gap in AI Development

  • Women are significantly underrepresented in AI research and leadership. According to the United Nations Development Programme and UNESCO, women constitute about 22% of AI professionals globally and hold fewer than 14% of senior roles.
  • Limited diversity in AI development can lead to algorithmic bias and inadequate safeguards against gender-based misuse.

5. Absence of Specific Legal Provisions for Deepfakes

  • India addresses cyber offences through laws such as the Information Technology Act, 2000 and the Bharatiya Nyaya Sanhita, 2023, covering obscenity, impersonation, and privacy violations.
  • However, no dedicated legislation currently regulates AI-generated deepfakes, creating challenges in addressing AI-enabled harassment.

6. Lack of Digital Awareness and Education

  • Many users, particularly youth and new internet users, lack awareness of cyber safety, AI misuse, and reporting mechanisms.
  • According to the Internet and Mobile Association of India, India had over 820 million internet users in 2023, but digital literacy remains uneven.
  • This limits the reporting of cyber offences through platforms like the National Cyber Crime Reporting Portal, increasing vulnerability to online exploitation.
  • India has a 40% gender gap in mobile internet use; women’s lower digital access translates into lower digital safety awareness and fewer tools to protect themselves (GSMA Mobile Gender Gap Report, 2024).

Existing Policy and Institutional Measures in India

  • IT Act, 2000 (Section 66E): Penalises the violation of privacy through capturing, publishing, or transmitting images of a person’s private area without consent, with punishment of up to three years’ imprisonment, a fine of up to ₹2 lakh, or both.
  • IT Act, 2000 (Section 67A): This provision under the Information Technology Act, 2000 criminalises the publication or transmission of sexually explicit material in electronic form. Although it can be applied in cases involving deepfake content, it does not specifically address AI-generated media.
  • BNS, 2023: The Bharatiya Nyaya Sanhita, 2023 criminalises the non-consensual distribution of intimate images, strengthening protections compared to the earlier IPC framework. However, the law still lacks specific provisions dealing with AI-generated deepfakes.
  • MeitY Deepfake Advisory, 2023: The Ministry of Electronics and Information Technology has directed online intermediaries to remove reported deepfake content within 36 hours of receiving a complaint, in line with the IT Rules, 2021.

International Best Practices for AI Governance

  • EU AI Act, 2024: The Artificial Intelligence Act is the first comprehensive legislation on Artificial Intelligence globally. It adopts a risk-based approach, imposing strict transparency and disclosure obligations on deepfakes and other AI-generated content, and prohibiting AI practices classified as posing unacceptable risk.
  • AI Ethics Framework: The UNESCO Recommendation on the Ethics of Artificial Intelligence (2021) emphasises conducting gender impact assessments before deploying AI systems and promoting diversity in AI development teams.
  • Responsible AI Principles: The Organisation for Economic Co-operation and Development outlines principles of inclusive growth, human-centred values, transparency, and accountability in AI governance, which India has endorsed but still needs to translate effectively into domestic regulatory frameworks.

Way Forward: Strategic Policy Interventions for Ensuring Women’s Digital Safety

A multi-stakeholder strategy is required to align AI innovation with women’s digital safety through focused, actionable measures.

1. Strengthening the Legal and Institutional Framework

  • Dedicated Deepfake Legislation: Enact a specific law to regulate deepfakes and synthetic media, criminalising the creation, distribution, or hosting of non-consensual AI-generated intimate content and ensuring stronger victim protection.
  • Enhanced Platform Accountability: Amend provisions such as Section 79 of the Information Technology Act, 2000 to require proactive AI-based detection and moderation of harmful content, with stricter liability for platforms in cases of systemic negligence.
  • Fast-Track Cyber Justice Mechanisms: Establish specialised cybercrime courts and trained digital-forensics units in states to ensure speedy investigation and timely disposal of cyber offences, including deepfake-related crimes.

2. Increasing Women’s Participation in AI Development

  • Promoting Women in AI Research: Encourage greater participation of women in government-supported AI initiatives, including programmes under the IndiaAI Mission, through targeted fellowships and research incentives.
  • Strengthening the STEM-to-AI Pipeline: Expand scholarships, mentorship programmes, and industry internships to promote women’s entry into STEM (Science, Technology, Engineering and Mathematics) and AI-related careers.
  • Encouraging Diversity in the Tech Sector: Promote gender diversity audits and inclusive hiring practices in technology companies to reduce algorithmic bias and strengthen ethical AI development.

3. Enhancing Digital Safety and Awareness

  • Digital Safety Education: Integrate AI ethics, deepfake awareness, and cyber safety modules into school curricula developed by the National Council of Educational Research and Training (NCERT) to build early awareness.
  • Strengthening Reporting and Support Systems: Improve cybercrime response mechanisms, including dedicated helplines and rapid response teams for victims of AI-enabled harassment.
  • Community-Level Digital Literacy: Use grassroots platforms such as Self-Help Groups under the Deendayal Antyodaya Yojana – National Rural Livelihoods Mission to promote digital literacy, reporting awareness, and support networks for women.

Conclusion

Balancing AI innovation with women’s digital safety requires urgent and coordinated action to harness technological benefits without compromising dignity and rights. Prioritising ethical AI governance, stronger legal enforcement, and greater gender diversity in technology development can ensure that women become equal stakeholders in the digital ecosystem.