Artificial Intelligence (AI) has catapulted further into the corporate conversation with the introduction of ChatGPT and other large language and generative AI models, now mainstream and accessible to the general public. In February 2024, a Canadian court ordered Air Canada to pay damages to a customer after the company’s virtual AI assistant gave the customer incorrect information about bereavement pricing for tickets. In a criminal case in Maryland, a former high school athletic director was charged with using AI to impersonate the school principal in an audio recording that included discriminatory remarks. During the 2024 New Hampshire primary, AI-generated phone calls impersonated President Joe Biden’s voice in an apparent attempt to discourage voting.

AI automates business tasks and can sift through vast amounts of data. But for all the ways AI helps, attackers and other bad actors are exploiting the technology and using it to their advantage. The risks to organizations and greater society are real and can translate to reputational damage, lost revenue, data exploitation or worse.

Of course, AI exploitation goes beyond misinformation and audio fakes and can target company data, personal information, surveillance images and more. In many ways, AI is still in its infancy in terms of supply, use, legislation and the guidance surrounding protective frameworks, and early debate centers on its privacy and ethical implications. To adapt as AI continues to evolve rapidly, it’s imperative that security professionals understand how AI is being used both by their organization and against it.

AI Use Within the Organization

AI comes in many forms, and organizational stakeholders must understand how their organization and employees are using it, what data and metadata they are collecting, and how they are protecting it. As it applies to physical security, AI often powers high-performing analytics for security equipment, such as face recognition, license plate recognition, object detection or gunshot detection. Machine learning and large language models also aid video surveillance forensics, search and investigations, delivering quick, efficient results.

AI-powered analytics are also being used by organizations to increase operational efficiencies and business intelligence beyond traditional security use cases, and many of these use cases can translate to large losses if compromised. For example, at one major automobile manufacturer, video analytics on the production line detect deviations from set quality controls during assembly. Other organizations use video analytics to detect OSHA violations or identify vehicles without parking stickers. In retail, marketing and business operations use analytics to manage queue lines and measure display performance.

AI is beneficial to IT and cybersecurity departments too, automating incident response and correlating threat intelligence. For example, machine learning is highly efficient at combing through data logs and network traffic for vulnerabilities and anomalies, such as brute force attacks, to aid incident detection.
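
To make this concrete, here is a minimal, illustrative sketch of that kind of log-based anomaly detection. It is not any specific product’s implementation; the log fields, the features and the contamination setting are assumptions chosen for the example.

```python
# Illustrative sketch: flagging possible brute force activity in authentication
# logs with an unsupervised anomaly detector (scikit-learn's IsolationForest).
# Log format, features and threshold are assumptions for this example.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical auth log: one row per login attempt (1 = success, 0 = failure).
logs = pd.DataFrame({
    "source_ip": ["10.0.0.5"] * 3 + ["203.0.113.7"] * 40 + ["10.0.0.9"] * 2,
    "success":   [1, 1, 0] + [0] * 40 + [1, 1],
})

# Aggregate per source IP: attempt volume and failure rate are simple features
# that tend to separate brute force sources from normal users.
features = logs.groupby("source_ip").agg(
    attempts=("success", "size"),
    failure_rate=("success", lambda s: 1 - s.mean()),
)

# Fit the model and flag outliers (-1 = anomalous).
model = IsolationForest(contamination=0.34, random_state=0)
features["anomaly"] = model.fit_predict(features[["attempts", "failure_rate"]])

# Candidate brute force sources for an analyst or SOAR playbook to review.
print(features[features["anomaly"] == -1])
```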

Outside of AI powering physical and cyber security departments, generative AI is being used to increase operational efficiencies and improve day-to-day office tasks such as crafting emails, sitting in on web meetings, taking notes, performing basic coding, and generating ideas or presentations.

Exploitation of AI

Just as AI is helping organizations boost protections and efficiencies, it’s also helping bad actors increase their attack efficiency. Social engineering, including deepfake video or audio, can be used in a number of nefarious ways. In the cybersecurity realm, faked emergency audio and tampered video evidence are real threats.

Cybersecurity vulnerabilities in AI-powered surveillance systems may pose risks to the security and integrity of collected data and potentially enable espionage or cyberattacks. Another vulnerability may lie in the way organizations process video analytics. If video analytics are processed in the cloud rather than on the edge, compression algorithms must reduce the data of the original image; because the video is no longer in its raw, cleanest form, this can lead to more errors or lower confidence in a match by the AI algorithm.

AI models are even aiding more traditional phishing campaigns, allowing them to perform more effectively than ever before by producing flawless grammar and convincing details that can be difficult to detect.

AI-powered intelligence models also allow bad actors to automate vulnerability detection for exploitation, as well as aid in writing the code for the exploit itself.

Considerations Surrounding AI and Building Blocks for Protection

To manage the risks that AI and its data pose to an organization, stakeholders must take care to build a responsible framework for AI use, defining acceptable levels of risk and how the organization will manage those risks. Taking a risk-based approach to AI deployment and protection will help the organization identify potential threats and lay out steps to ensure ethical AI adoption, accountability and transparency.

With this rapidly evolving landscape, it’s crucial that security stakeholders start the conversation surrounding AI now. Here are a few considerations to begin.

1. Advocate for a seat at the table.

As analytics and other AI continue to boost operational efficiencies across departments, the personal information, data and metadata collected become more important to the organization. A deep understanding of the data collected and its use within both physical security and cybersecurity is a critical element to bring to the conversation surrounding AI protections.

In addition to physical and cyber security, stakeholders should include governance, legal, privacy, IT and operations, to name a few. Each stakeholder can bring meaningful perspective and specific expertise to discussions on risk, where responsibilities lie, and exposure to hefty fines under new regulations.

2. Take a hard look at what AI solutions, sources and vendors are being deployed.

Understand what AI-powered solutions and sources are being deployed within the organization, from cybersecurity solutions to video analytics to operational efficiencies. If the organization is deploying AI-powered vendor solutions in any department, including video analytics, it’s critical to ensure all partners are properly vetted and all vulnerabilities assessed.

When working with vendors to deploy AI solutions within the organization, look for systems with embedded technology built on security- and privacy-by-design principles for increased data protection. For example, Axis cameras offer a Signed Video feature, which protects video data from manipulation, ensuring the organization can distinguish true capture from altered or fake video.
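
As a conceptual illustration only (this is not Axis’s actual Signed Video implementation; the key handling, segment format and helper names are assumptions), the sketch below shows the general idea behind signed video: the capture device signs each video segment, and a verifier can later prove whether the footage has been altered.

```python
# Conceptual sketch of "signed video" style integrity protection, NOT a
# vendor's actual implementation: the camera signs a hash of each recorded
# segment at capture time, and a verifier later checks the signature.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Camera side (in practice the key would live in the device's secure hardware).
camera_key = Ed25519PrivateKey.generate()
public_key = camera_key.public_key()

segment = b"...raw bytes of a recorded video segment..."
signature = camera_key.sign(hashlib.sha256(segment).digest())

# Verifier side: recompute the hash and check the signature.
def is_authentic(video_bytes: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, hashlib.sha256(video_bytes).digest())
        return True
    except InvalidSignature:
        return False

print(is_authentic(segment, signature))                # True: untouched footage
print(is_authentic(segment + b"tampered", signature))  # False: manipulation detected
```

In a real deployment, the signing key would typically reside in the camera’s secure hardware and the signatures would travel with the video stream, so any later edit to the footage breaks verification.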

3. Conduct risk assessments to determine potential threats of deployed use and attacks using AI.

Evaluating existing and potential risks, and the levels of those risks, is the cornerstone of a framework for protection. In addition to regular risk assessments, stakeholders should ask questions about AI deployment within the organization, including where training data comes from, what data is being collected and what level of encryption is deployed; ask for proof of the AI model’s ethical training; and ask whether training data has been anonymized to address privacy concerns.

4. Consider the risks to privacy, civil liberties and potential abuse of power with deployed AI-driven surveillance.

The proliferation of AI in the surveillance industry poses several potential risks that organizations must carefully consider when building a responsible framework of transparency and accountability. For instance, AI-driven surveillance may compromise individuals’ privacy by analyzing personal data without consent. Certain AI algorithms can perpetuate biases, resulting in discriminatory outcomes. Without precautions, AI surveillance may infringe on civil liberties and an individual’s freedom of expression.

It’s imperative that organizations consider how they are protecting personal data from misuse or unauthorized access. Additionally, transparency and accountability should remain top-of-mind during framework discussions, as opacity in any AI program can lead to abuse of power.

Finding the Right Resources

In today’s global economy, it’s essential for organizations to look at AI risk and compliance through a global lens, regardless of where the organization does business. It’s equally essential to stay up to date on emerging guidance, rules and regulations globally whether the organization is a manufacturer or consumer of AI.

Though legislation surrounding AI is just beginning, there are a number of resources that organizations can use to facilitate the conversation and educate their stakeholders on the current landscape.

  • U.S. Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence — The Executive Order signed by President Biden in the fall of 2023 establishes standards for AI safety and security, while also aiming to tackle privacy concerns surrounding the technology.
  • EU AI Act — The European Union’s Artificial Intelligence (AI) Act, approved by the European Parliament in March 2024, is the world’s first major law to regulate AI. The risk-based legislation classifies AI systems into risk levels and applies to developers, providers, distributors and deployers of AI systems marketed, sold or used within the EU, though it can be a useful starting point for any organization.
  • The White House’s Blueprint for an AI Bill of Rights — In the United States, the White House Office of Science and Technology Policy issued the Blueprint for an AI Bill of Rights in the fall of 2022. The non-binding document aims to give guidance surrounding access and use of AI systems and solutions. The rights-based framework includes best practices and assorted resources for discussion.
  • NIST AI RMF — The National Institute of Standards and Technology’s (NIST) Artificial Intelligence Risk Management Framework (AI RMF) is a useful resource to help organizations manage the risks associated with AI. The framework focuses on transparency, accountability and considerations for design, development, use and evaluation of AI solutions.

Conversations surrounding transparency, accountability and the ethical deployment of AI are a good place for organizations to start as they delve into creating a risk-based approach for both deployment and protection. Preparing now will position organizations to remain vigilant and ready to adapt as the AI landscape continues to evolve.