Everyday business users are experimenting with ChatGPT and other generative AI tools. In fact, Gartner predicts that by 2025, 30% of marketing content will be created by generative AI and augmented by humans. Companies like Samsung, however, have discovered the hard way that users who don't understand the risks of the new technology can become unwitting insider threats: employees pasted confidential source code into ChatGPT, sending it outside the company's control. Rushing ahead without building guardrails invites data breaches and other security incidents.
There's no doubt that generative AI will be a useful tool for businesses, but it requires careful implementation. As unstructured data proliferates and feeds new algorithms and the apps built on them, businesses must establish a strategy for responsible AI use and data protection that holds up in this new era.
The insider threat is bigger than ever
The Samsung data leak is not an isolated incident. The Ponemon Institute's 2022 Cost of Insider Threats report shows insider threat incidents have risen 44% over the last two years. While there will always be some level of human error, this unprecedented level of risk can be mitigated. Many CIOs have been reluctant to put rules around generative AI, concerned that employees will feel untrusted; however, a lenient approach leaves organizations vulnerable to exposure.
Ultimately, IT needs to balance giving employees access to the tools and data they need against the risk that people will make mistakes; human error remains one of the biggest security risks of all. The best way to protect data is to ensure that every user feels responsible for it and knows how to protect it.
Companies should train employees, based on their roles and levels of access, to be data stewards. CMOs, developers, database administrators, and HR associates all have different relationships with the data they work with, so each employee needs to understand the specific risks they pose and how to better protect data. Just as active citizens follow rules like traffic laws to keep their communities safe, employees should bring a data security and safety mindset to everything they do.
Rooting out shadow IT
In some organizations, generative AI is just one of myriad applications moving data in and out that IT should monitor but doesn't always have full visibility into. The reality is that a "shadow IT" apparatus exists in most companies, letting unstructured data pass through IT landscapes unaccounted for and unprotected. Shockingly, Quest Software research found that 42% of IT leaders say at least half of their data is in the shadows, meaning it cannot be located, managed, or secured.
Employees using unsanctioned apps, like generative AI tools, may unknowingly introduce rogue IT and dark data assets into the environment, creating a scenario where businesses lack the insight to prevent unintentional data breaches. Access to data and apps remains important, but ensuring proper visibility into these assets, and managing access to them carefully, helps businesses maintain that balance.
This can be done on a few different levels:
- Access privileges: IT departments should regularly review who really needs access to each dataset or application and make updates when roles or employment status change (a minimal audit sketch appears after this list).
- Preventing data scraping: Generative AI tools are trained on internet data. If an employee drops sensitive company data into a chatbot, that information could be exposed and inadvertently absorbed into future training sets. Vendors like OpenAI and Anthropic are working to put privacy controls in place, but businesses should not wait for, or rely on, outsiders to protect their assets; they need to create their own controls (see the redaction sketch after this list).
- Active observability: Regular scanning of data environments enables greater understanding of threats, from discovering shadow data and third-party apps to gauging how data flows across the organization. For example, IT professionals can catch when an overeager employee is putting data into a third-party extension, such as SlackGPT, and take steps to both remedy third-party access and educate that employee about their role in data protection (see the log-scanning sketch below).
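To make the access-privileges point concrete, here is a minimal sketch of a stale-grant audit in Python. The data shapes (an HR roster and a list of grants) and all names are hypothetical; a real review would pull from an identity provider and the HR system of record.

```python
from dataclasses import dataclass

@dataclass
class AccessGrant:
    user: str           # account holder
    resource: str       # dataset or application
    role_required: str  # role the grant was issued for

# Hypothetical HR roster: user -> (current role, is_active)
hr_roster = {
    "jdoe": ("marketing_analyst", True),
    "asmith": ("dba", False),            # left the company
    "bwong": ("hr_associate", True),     # changed roles
}

grants = [
    AccessGrant("jdoe", "campaign_db", "marketing_analyst"),
    AccessGrant("asmith", "prod_db", "dba"),
    AccessGrant("bwong", "payroll_app", "finance_manager"),
]

def stale_grants(grants, roster):
    """Flag grants whose owner is inactive or whose role no longer matches."""
    flagged = []
    for g in grants:
        role, active = roster.get(g.user, (None, False))
        if not active or role != g.role_required:
            flagged.append(g)
    return flagged

for g in stale_grants(grants, hr_roster):
    print(f"REVIEW: {g.user} -> {g.resource} (granted for {g.role_required})")
```

Run on the sample data, this flags the departed DBA and the role change, exactly the two cases the review process should catch.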
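For the data-scraping control, here is a sketch of an in-house guard that scrubs prompts before they reach an external chatbot. The patterns below are illustrative assumptions; a production control would use a proper DLP engine with patterns tuned to the organization's own sensitive data (API keys, customer IDs, source code, and so on).

```python
import re

# Illustrative patterns only, not a complete DLP ruleset.
SENSITIVE_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b(?:AKIA|ASIA)[A-Z0-9]{16}\b"), "[REDACTED-AWS-KEY]"),
    # Redact everything from a "confidential" marker onward.
    (re.compile(r"(?i)\bconfidential\b.*", re.DOTALL), "[REDACTED-CONFIDENTIAL]"),
]

def redact(prompt: str) -> tuple[str, bool]:
    """Return a scrubbed prompt and whether anything was redacted."""
    hit = False
    for pattern, replacement in SENSITIVE_PATTERNS:
        prompt, n = pattern.subn(replacement, prompt)
        hit = hit or n > 0
    return prompt, hit

user_prompt = "Summarize this: CONFIDENTIAL roadmap for Q3..."
clean, was_redacted = redact(user_prompt)
if was_redacted:
    print("Scrubbed before reaching the external model:")
print(clean)
```

The design choice worth noting is that the guard sits between the employee and the vendor, so protection does not depend on what the vendor retains or trains on.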
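And for active observability, a sketch of scanning outbound traffic logs for unsanctioned AI endpoints. The log format, hostnames, and blocklist are all hypothetical; a real deployment would read from a secure web gateway or CASB export.

```python
import csv
from collections import Counter
from io import StringIO

# Hypothetical proxy log export. Columns: user, destination_host, bytes_out
PROXY_LOG = """user,destination_host,bytes_out
jdoe,api.example-chatbot.com,48213
bwong,intranet.example.com,1024
jdoe,slackgpt.example-extension.com,91002
"""

# Endpoints IT has not sanctioned (illustrative list).
UNSANCTIONED = {"api.example-chatbot.com", "slackgpt.example-extension.com"}

def flag_shadow_ai(log_text):
    """Total outbound bytes per user to unsanctioned AI endpoints."""
    totals = Counter()
    for row in csv.DictReader(StringIO(log_text)):
        if row["destination_host"] in UNSANCTIONED:
            totals[(row["user"], row["destination_host"])] += int(row["bytes_out"])
    return totals

for (user, host), sent in flag_shadow_ai(PROXY_LOG).items():
    print(f"ALERT: {user} sent {sent} bytes to {host}; follow up and educate")
```

Each alert maps to the two follow-ups described above: remediate the third-party access and educate the employee.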
An organization-wide effort
The best way to prevent proprietary data leaks stemming from generative AI is with responsible AI use. Implementing an intentional data strategy that balances security and access is imperative for every organization.
Businesses can protect themselves from today's security threats while taking advantage of the best aspects of AI, but only if data security becomes a priority: a coordinated effort that pairs technical guardrails and accountability processes with tailored employee education across the entire organization.