The Right to Be Forgotten vs. AI's Infinite Memory: A Regulatory Dilemma

POSTED ON JUNE 27, 2025 BY DATA SECURE

Introduction

The “Right to Be Forgotten” (RTBF), enshrined in Article 17 of the General Data Protection Regulation (GDPR), allows individuals to request the erasure of their personal data, asserting control over their digital identities in an era where privacy is increasingly under threat. At the same time, artificial intelligence (AI), especially systems built on machine learning, depends on vast, persistent datasets to learn, optimize, and evolve. Once data is used in training, its influence becomes embedded in the model’s learned parameters, making its removal complex and, in many cases, technically infeasible. This creates a fundamental tension: Can a system designed to remember everything ever truly forget?

This dilemma lies at the crossroads of privacy law and technological design. As AI continues to play a central role in critical sectors such as governance, healthcare, education, and commerce, questions around the enforceability of the RTBF become more pressing. This article explores the origins and legal foundations of the right to be forgotten, its theoretical underpinnings, and the formidable challenges posed by AI’s infinite memory.

The Concept of the Right to Be Forgotten

The right to be forgotten finds its roots in the European Union’s Directive 95/46/EC on Data Protection and the Directive 2000/31/EC on Electronic Commerce. These instruments established the legal framework obligating search engine providers to remove or de-index certain links. A landmark moment arrived in 2014 with the Google Spain v. González case, where the European Court of Justice affirmed the right to have outdated or irrelevant personal information removed from search engine results.

In this case, Mr. González sought the removal of search results linked to a real estate auction notice published years earlier. The Court cited Article 12(b) of Directive 95/46/EC, stating that when the processing of information is deemed inadequate, irrelevant, or excessive, both the data and its associated links must be erased. This judgment provided a definitive legal foundation for the RTBF and shaped its future application across the EU.

Initially, the right to be forgotten applied only to minors, as introduced in the 2012 draft of the GDPR. However, when the GDPR became applicable in 2018, the right was extended to all individuals. The essence of the RTBF lies in the idea that information, though once legitimate, can lose its relevance or legality over time. Its legal interpretation involves both temporal and spatial dimensions, relying not just on the subjective will of the data subject but also on objective contextual facts.

Legally, the RTBF comprises two interrelated dimensions: the right to forget and the right to delete. The right to forget protects individuals from being indefinitely tied to past actions or events, thus safeguarding human dignity. The right to delete provides individuals with agency over their own information, allowing them to control its existence and dissemination. These rights are anchored in broader legal principles such as informational self-determination, the right to privacy, and the right to control one’s personal data. Together, they form the basis of a right that, if no legitimate grounds exist to retain the data, ensures that personal information should no longer remain publicly accessible.

Personal Information Self-Determination and the Right to Be Forgotten

While privacy rights traditionally aim to prevent exposure of confidential information and identity rights focus on safeguarding reputation, neither fully addresses the complexities of outdated but publicly available information. The right to be forgotten goes further; it seeks to restore control over personal data that, while once legitimate, may no longer serve a public interest or may harm individual dignity when left accessible.

Here, the theory of personal information self-determination becomes crucial. Unlike privacy or identity, which deal with exposure and social perception, this principle asserts that individuals should have control over how their data is used, disseminated, and stored. As a civil and personality right, the RTBF helps ensure data integrity, transparency, and individual agency.

This conceptual shift is supported by scholars like Wilhelm Steinmüller, whose theory of informational self-determination strongly influenced legislative developments such as the GDPR and France’s Digital Republic Act. His argument that individuals must hold ultimate authority over their own data is more relevant than ever in today’s digital ecosystem. Ultimately, the RTBF should stand as an independent right, rooted in the principle of information self-determination. However, its application must be carefully balanced with legitimate boundaries to prevent its misuse or overextension.

Generative AI and the Limits of Data Erasure

Although sweeping in intent, the right to be forgotten faces significant limitations in practice, especially in the context of modern AI technologies. The nature of digital data, with its ease of duplication, its persistence, and its integration into complex systems, makes it difficult to implement uniform deletion mechanisms that adequately protect individual interests. This challenge becomes even more pronounced with the rise of generative AI.

Generative AI, a subset of artificial intelligence, refers to algorithms capable of producing entirely new content, such as text, images, videos, and even code, based on the data they’ve been trained on. Large Language Models (LLMs) such as ChatGPT exemplify this technology, generating coherent, human-like outputs across various domains. These models are trained on massive datasets, often harvested from publicly available sources, and can unwittingly retain and reproduce personal information found within those datasets.

The implications for data privacy are profound. In an environment where AI systems might indefinitely store or replicate personal data, enforcing the RTBF becomes both a legal and technical challenge. Given the opaque nature of many AI models, especially deep learning architectures, it is often impossible to isolate and erase specific data influences once training is complete. Therefore, the evolving capabilities of generative AI demand a re-examination of how data deletion and data minimisation are defined and enforced in law.
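Why can a trained model not simply "delete" one record? A toy sketch (not any specific system, and all names here are illustrative) makes the point: fit a one-parameter model by gradient descent on three records, and the result is a single blended number with no per-record component that could be erased. The only faithful way to remove a record's influence is to retrain without it.

```python
# Illustrative sketch: training blends every record into shared parameters.
# Here a single weight w is fitted to y ≈ w * x by gradient descent.

def train(records, lr=0.01, steps=2000):
    """Fit y ≈ w * x by gradient descent on mean squared error."""
    w = 0.0
    for _ in range(steps):
        # Each update folds EVERY record's gradient into the same scalar w.
        grad = sum(2 * (w * x - y) * x for x, y in records) / len(records)
        w -= lr * grad
    return w

records = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs
w_full = train(records)

# There is nothing inside w_full to "delete"; honouring an erasure request
# for record 0 means retraining from scratch on the remaining data.
w_without_first = train(records[1:])
print(round(w_full, 3), round(w_without_first, 3))  # the two weights differ
```

Real deep-learning models have billions of such blended parameters rather than one, which is why isolating a single data subject's influence after training is generally considered infeasible.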

Safeguarding Information Rights in AI’s Era of Forgotten Data

The exponential growth in computing power and data collection has enabled AI to perform tasks once thought to be uniquely human. From AlphaGo’s historic victory over Go champion Lee Sedol to AI-driven developments in medicine, transport, and legal services, AI is now a foundational technology across industries. These capabilities, however, come with serious privacy concerns, particularly when it comes to the collection, use, and storage of personal data.

Unlike conventional digital services, AI systems process data in increasingly abstract and inaccessible ways. This complexity necessitates robust legal safeguards, including a modernised and enforceable right to be forgotten. When paired with the right to deletion, the RTBF enables individuals to reclaim control over their digital footprint, countering the imbalance that often favours powerful data controllers.

To effectively mitigate the risks posed by AI, a broader interpretation of the RTBF is necessary, one that spans the entire data lifecycle. This means considering the legal and technical implications of data not only at the point of deletion but also during collection, storage, and processing. Each stage presents unique challenges that require tailored regulatory responses. Expanding the scope of the RTBF to address these lifecycle stages is essential if the right is to remain meaningful in an AI-dominated era.
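A lifecycle-wide reading of the RTBF can be sketched in code. In this hypothetical model (the stage names and registry API are assumptions for illustration, not GDPR text), every record carries its lifecycle stage, and an erasure request must succeed at any stage, not only at end of life.

```python
# Hypothetical sketch: erasure obligations span the whole data lifecycle.
from enum import Enum

class Stage(Enum):
    COLLECTED = "collected"
    STORED = "stored"
    PROCESSED = "processed"

class DataRegistry:
    def __init__(self):
        self._records = {}  # subject_id -> (data, Stage)

    def collect(self, subject_id, data):
        self._records[subject_id] = (data, Stage.COLLECTED)

    def advance(self, subject_id, stage):
        data, _ = self._records[subject_id]
        self._records[subject_id] = (data, stage)

    def erase(self, subject_id):
        # The request succeeds regardless of the record's current stage.
        return self._records.pop(subject_id, None) is not None

registry = DataRegistry()
registry.collect("alice", {"email": "alice@example.com"})
registry.advance("alice", Stage.PROCESSED)
print(registry.erase("alice"))  # erasure honoured mid-lifecycle
```

The design point is that deletion is not a terminal step bolted on at the end: the registry treats erasure as valid at collection, storage, and processing alike, mirroring the lifecycle-wide interpretation argued for above.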

Way Forward

To reconcile the Right to Be Forgotten with the persistent nature of artificial intelligence, a balanced and forward-looking approach is essential. First, AI systems must be developed with built-in mechanisms that support privacy by design, ensuring that data minimisation and user control are considered from the outset. This would allow for easier compliance with data erasure requests without compromising system integrity. Additionally, the advancement of machine unlearning techniques, capable of removing specific data influences from trained models, offers a promising technological solution, though it remains in its early stages.

On the legal front, existing data protection laws must evolve to address the unique challenges posed by AI. This includes establishing clearer standards for data deletion in AI contexts, especially in cases where models continue to produce outputs based on erased data. Regulatory bodies should also mandate regular impact assessments and audits to ensure that AI systems are not retaining or reproducing data that should be forgotten.

Finally, user empowerment through transparency is critical. Individuals must have access to understandable information about how their data is used, along with accessible mechanisms to request deletion. Without such reforms, the RTBF risks becoming a symbolic right in an era defined by AI’s infinite memory.
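One family of machine unlearning techniques trains on disjoint shards of data and aggregates the sub-models, so that erasing a record only requires refitting the single shard that held it. The sketch below is a toy illustration of that sharding idea (the class, the per-shard "mean" sub-model, and all names are assumptions for exposition, not a production unlearning method).

```python
# Toy sketch of shard-based unlearning: split data into shards, fit one
# sub-model per shard, average the predictions. Forgetting a record then
# means refitting only the affected shard, not the whole ensemble.

class ShardedModel:
    def __init__(self, records, n_shards=3):
        # Assign each record to a fixed shard.
        self.shards = [records[i::n_shards] for i in range(n_shards)]
        self.sub_models = [self._fit(s) for s in self.shards]

    @staticmethod
    def _fit(shard):
        # Toy sub-model: just the mean of the shard's values.
        return sum(shard) / len(shard) if shard else 0.0

    def predict(self):
        return sum(self.sub_models) / len(self.sub_models)

    def forget(self, value):
        # Honour an erasure request by refitting only the affected shard.
        for i, shard in enumerate(self.shards):
            if value in shard:
                shard.remove(value)
                self.sub_models[i] = self._fit(shard)
                return True
        return False

data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
model = ShardedModel(data)
model.forget(4.0)  # only the shard containing 4.0 is refitted
```

The cost of erasure drops from retraining the entire model to retraining one shard, which is the trade-off that makes such designs attractive for RTBF compliance, though real systems must also handle the harder statistical questions that this toy example omits.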

Conclusion

The right to be forgotten was conceived as a powerful tool for individual empowerment in the digital realm. However, its enforcement in the age of AI, especially generative models that thrive on data retention, reveals significant legal and technical obstacles. While grounded in principles of privacy, dignity, and self-determination, the RTBF must evolve to address the enduring memory and complexity of AI systems. A broader, lifecycle-based approach, supported by both regulatory reform and technological innovation, may provide the clarity and effectiveness needed to safeguard individual rights in an era where forgetting is no longer easy.

We at Data Secure (Data Privacy Automation Solution) can help you understand the EU GDPR and its ramifications, and design a solution to meet compliance with the regulatory framework of the EU GDPR and avoid potentially costly fines.

We can design and implement RoPA, DPIA and PIA assessments for meeting compliance and mitigating risks as per the requirements of legal and regulatory frameworks on privacy regulations across the globe, especially the GDPR, UK DPA 2018, CCPA, and India's Digital Personal Data Protection Act 2023. For more details, kindly visit DPO India – Your outsourced DPO Partner in 2025 (dpo-india.com).

For any demo/presentation of solutions on Data Privacy and Privacy Management as per EU GDPR, CCPA, CPRA or India DPDP Act 2023 and Secure Email transmission, kindly write to us at info@datasecure.ind.in or dpo@dpo-india.com.

For downloading the various Global Privacy Laws kindly visit the Resources page of DPO India - Your Outsourced DPO Partner in 2025

We serve as a comprehensive resource on the Digital Personal Data Protection Act, 2023 (Digital Personal Data Protection Act 2023 & Draft DPDP Rules 2025), India's landmark legislation on digital personal data protection. We provide access to the full text of the Act, the Draft DPDP Rules 2025, and detailed breakdowns of each chapter, covering topics such as data fiduciary obligations, rights of data principals, and the establishment of the Data Protection Board of India. For more details, kindly visit DPDP Act 2023 – Digital Personal Data Protection Act 2023 & Draft DPDP Rules 2025

We provide in-depth solutions and content on AI Risk Assessment and compliance, privacy regulations, and emerging industry trends. Our goal is to establish a credible platform that keeps businesses and professionals informed while also paving the way for future services in AI and privacy assessments. To Know More, Kindly Visit – AI Nexus Home|AI-Nexus