As artificial intelligence (AI) becomes increasingly embedded in society, at home and in the office, through virtual assistants, automated services and workplace productivity tools, there is a pressing need for regulatory guardrails and governance policies to protect consumer privacy and data rights. Corporate AI adoption, in particular, exposes organizations to a growing risk of privacy violations.

Shockingly, 45% of surveyed organizations have already experienced unintended data exposure when implementing AI systems, turning one of their biggest pre-deployment fears into reality. And although 88% of respondents claim to have data management strategies in place, a troubling 44% lack even basic measures such as archiving policies and data lifecycle controls.

More data, more problems

This lapse in data management is particularly concerning given the amount of data AI solutions consume. Currently, 64% of organizations grapple with more than one petabyte of data, and over 41% handle a staggering 500 petabytes or more. AI adoption dramatically expands the potential for mishandling the troves of personal data buried within these datasets.

Compounding these risks, only 38% of executives currently offer AI training and education opportunities to employees, even as many cite generative AI as a “top investment priority.” This lack of formal education could lead to preventable errors that compromise privacy. The stakes, however, extend far beyond any individual corporation’s security protocols. Generative AI models, such as those from OpenAI and Google, are trained on massive volumes of data scraped from across the internet, data that likely contains sensitive personal communications, transaction histories and demographic details, all collected without explicit user consent.

What’s more, recent research found that more than 74% of workplace ChatGPT use, and more than 90% of Gemini (and its predecessor Bard) use, occurred through non-corporate accounts. This underscores the need for comprehensive education and policies around the safe use of these tools, especially as bring-your-own-device (BYOD) arrangements become more common.

Governing the ungoverned: AI’s data consumption needs rules

As businesses rush to capitalize on AI’s competitive advantages with minimal guardrails, consumer trust in data privacy protections risks becoming an unintended casualty. Consequently, there is now an urgent need for national and international regulatory frameworks that enforce robust data management, transparency about AI data practices and penalties for privacy violations. Stricter data governance policies, at both the individual and company-wide level, are also essential.

Europe’s AI Act, the world’s first comprehensive AI law, is a strong start. It sorts AI systems into risk tiers, ranging from “unacceptable” (which sees the technology banned outright) down through high, limited and minimal risk. Additionally, companies like Google have attempted self-regulation to avoid contributing to the spread of misinformation. While the United States has historically lagged behind Europe in data privacy regulation, individuals and organizations alike must push for federal standards for trust and transparency in the coming years.

Beyond industry-wide regulation

Organizations must also do their due diligence to protect users’ privacy and workplace data when using AI. An advanced information management strategy is crucial: one that prioritizes consistent, structured data classification and archiving policies, along with automated processes to manage snowballing volumes of data (a minimal sketch of such automation follows below). With strong data privacy protections and a clear understanding of who owns and uses our data, society can develop trust in AI and work towards productive, powerful and ethical use of these tools.
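To make the automation piece concrete, here is a minimal sketch, in Python, of what an automated lifecycle check might look like: records carry a sensitivity class, and anything held past its retention window is flagged for archiving or deletion. Everything here, from the Record type to the sensitivity labels and retention periods, is an illustrative assumption rather than a prescription from any particular regulation or product.

```python
# Illustrative sketch only: a hypothetical data lifecycle check.
# The Record type, sensitivity labels and retention windows are
# assumptions for demonstration, not any standard's requirements.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical retention windows per sensitivity class.
RETENTION = {
    "public": timedelta(days=3650),
    "internal": timedelta(days=1825),
    "personal": timedelta(days=365),  # personal data kept no longer than needed
}

@dataclass
class Record:
    record_id: str
    sensitivity: str        # "public" | "internal" | "personal"
    last_accessed: datetime

def lifecycle_action(record: Record, now: datetime) -> str:
    """Return the action an automated archiving job would take."""
    age = now - record.last_accessed
    if age > RETENTION[record.sensitivity]:
        # Personal data past retention is deleted; other classes are archived.
        return "delete" if record.sensitivity == "personal" else "archive"
    return "retain"

if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    records = [
        Record("r1", "personal", now - timedelta(days=400)),
        Record("r2", "internal", now - timedelta(days=30)),
    ]
    for r in records:
        print(r.record_id, lifecycle_action(r, now))  # r1 -> delete, r2 -> retain
```

In practice, a job like this would run on a schedule against an organization’s actual data stores, with retention periods set by legal and compliance teams rather than hard-coded constants.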

As AI continues to shape our world, safeguarding consumer privacy becomes paramount. Regulatory frameworks, education and robust data management are essential to ensure responsible and transparent AI practices as society strives for a future where AI empowers individuals without compromising their fundamental rights.