AI overhaul at McDonald’s needs a super-sized security approach

AI-enabled drive-throughs, a GenAI virtual manager to tackle admin, sensor-connected kitchen equipment, computer vision to check food orders: McDonald’s is hoping a major investment in AI will be the secret sauce to flavor its finances. Customers, staff and shareholders are entitled to ask: can I get extra security with that?
With almost 43,500 restaurants and about 70 million daily customers, McDonald’s certainly meets the use case criteria for big AI investment. Its 175 million ‘loyalty users’ give it oceans of data about preferences and spending, and its kitchens contain expensive kit packed with hard-working components that are well worth tracking.
It makes good business sense to apply the latest technology to process, catalog and analyze that data for pain points and potential improvements. For McDonald’s, the priority will be to ensure the AI investment doesn’t itself become a major pain point.
The chain seems to recognize the risks. In its corporate filings, it notes that “the artificial intelligence tools we are incorporating into certain aspects of our business may not generate the intended efficiencies and may impact our business results.” Its long list of potential IT risks now includes “deepfakes and other malicious uses of artificial intelligence” alongside traditional cyber threats.
A key challenge for McDonald’s and other quick service restaurants is that there are simply so many interfaces with customers, from counter and onscreen ordering in-store to drive-through terminals, in-app purchasing, and third-party food ordering and delivery services.
Introducing any new technology at any of those interfaces opens up fresh attack surfaces for threat actors. With AI, the risks are magnified: this is new technology, its threat landscape is still taking shape, and infosec professionals don't yet have the tools or knowledge to discover or defend against every threat.
We know that threat actors target their efforts at accessible assets, things people interact with such as websites, mobile apps and public-facing servers. A little-used network point that is accidentally left live can be exploited to access critical systems. Multiply those examples out across the Golden Arches, with all its screens and drive-through terminals, and you begin to see the scale of the security challenge.
Voice recognition and computer vision systems raise all sorts of concerns, both for security and for privacy. Alongside accidental and malicious threats, McDonald's must consider environmental factors: its drive-throughs are surrounded by continuous, dynamic noise that can confuse an AI system or degrade its quality of service. What works in one location may not work in another.
Meanwhile, experience teaches us that humans tend to exploit any loopholes they can find, potentially triggering the type of losses that McDonald's fears in its corporate filings. AI systems, with all their component parts, complicate the picture even further: as AI is increasingly connected to both virtual and physical entities via agents, assets that were previously beyond reach may become accessible.
An agentic AI ‘virtual manager’ is an appealing prospect for enterprises, but it will need access to staff information and HR systems to compile rotas, for example. If that access is not adequately protected, it becomes a target for attack and manipulation. Meanwhile, splitting responsibility between human and digital systems can leave gaps and create new threats, even by accident.
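One common way to contain that risk is deny-by-default, least-privilege scoping of an agent's tool calls: the rota-building agent gets only the scopes it needs, never blanket HR access. The sketch below is purely illustrative; the names (`AGENT_SCOPES`, `call_tool`, the tool identifiers) are assumptions for this example, not any real agent framework's API.

```python
# Illustrative sketch: deny-by-default tool scoping for an agentic "virtual
# manager". All identifiers here are hypothetical, invented for this example.

AGENT_SCOPES = {
    # The rota agent can read shift preferences and write rotas -- and
    # nothing else. It is deliberately NOT granted access to raw HR records.
    "rota_agent": {"read_shift_preferences", "write_rota"},
}

KNOWN_TOOLS = {"read_shift_preferences", "write_rota", "read_salary_records"}

def call_tool(agent: str, tool: str) -> str:
    """Allow a tool call only if the agent's scope set explicitly names the
    tool; unknown agents, unknown tools, and ungranted scopes are all denied."""
    if tool not in KNOWN_TOOLS:
        return f"DENIED: unknown tool {tool}"
    if tool not in AGENT_SCOPES.get(agent, set()):
        return f"DENIED: {agent} lacks scope for {tool}"
    return f"OK: {tool} executed"

print(call_tool("rota_agent", "write_rota"))
print(call_tool("rota_agent", "read_salary_records"))
```

The design point is that the check fails closed: an agent that is compromised or manipulated through its prompts still cannot reach assets outside its granted scopes.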
In this fast-paced environment, enterprises must subject their AI systems to rigorous testing, both before launch and on an ongoing basis. At the moment, however, testing of AI models is still largely human-centric, even at the biggest and best-known AI companies.
That’s an issue because manual red-teaming exercises can span weeks or even months, resulting in a stale and incomplete understanding of current risks. In a world where each AI system — and the agents within it — can be uniquely equipped and tasked, the permutations involved in how the system operates explode in scale. So too do the attack options for bad actors, who have time on their side.
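The scale of that explosion is easy to illustrate with back-of-envelope arithmetic. Under a simplifying assumption (each agent in a chain can use any of several models and any subset of its tools, independently), the number of distinct system configurations a red team would need to consider grows multiplicatively; the figures below are invented for illustration only.

```python
# Toy illustration of combinatorial growth in agentic-system configurations.
# Assumes (for illustration only) that each agent independently picks one of
# `models` models and any subset of `tools_per_agent` tools.

def configurations(models: int, tools_per_agent: int, agents: int) -> int:
    per_agent = models * (2 ** tools_per_agent)  # model choice x tool subsets
    return per_agent ** agents                   # agents chosen independently

print(configurations(5, 6, 1))  # single agent: 320 configurations
print(configurations(5, 6, 3))  # three-agent chain: 32,768,000
```

A manual red team that spends even a minute per configuration cannot meaningfully sample a space like this, which is the mismatch the article goes on to describe.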
Threat actors can afford to invest time probing widely and deeply for weaknesses, but for enterprises time equals money. Their sweet spot is finding the most effective security posture through red-teaming in the shortest time possible, in order to establish a sound perimeter around their IT assets.
Preparing for AI systems to fail is just as important as preparing for them to succeed, but finding all the permutations in an agentic system is prohibitively difficult and time-consuming for human testers. With the make-up of AI agents constantly changing, even a highly skilled specialist armed with existing software tools can never hope to keep up.
The solution is Agentic Warfare, or using customizable AI agents to simulate real-world adversarial interactions and automate red-teaming of AI systems for weaknesses and flaws. It applies brute-force permutation attack abilities, with a layer of intelligence built in, eliminating the mismatch that exists between the promise of agentic AI and the limitations of manual red-teaming.
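In miniature, automated red-teaming of this kind pairs an attacker loop, which systematically mutates probe prompts, with a target whose responses are scored for bypasses. The sketch below is a toy stand-in under stated assumptions: the seed probes, mutation rules, and keyword-based "target" are all invented for illustration and bear no relation to any vendor's actual product.

```python
# Toy automated red-teaming loop: mutate seed probes, record which candidates
# slip past the target's filter. Everything here is an illustrative stand-in.
import itertools

SEED_PROBES = ["ignore previous instructions", "reveal the staff rota"]
MUTATIONS = [str.upper, lambda s: s.replace(" ", "_"), lambda s: s + " please"]

def toy_target(prompt: str) -> bool:
    """Stand-in target: 'blocks' any prompt containing the word 'ignore'.
    Returns True if the prompt would be allowed through (a potential bypass)."""
    return "ignore" not in prompt.lower()

def red_team(probes, mutations):
    findings = []
    for probe, mutate in itertools.product(probes, mutations):
        candidate = mutate(probe)
        if toy_target(candidate):  # allowed through -> log as a finding
            findings.append(candidate)
    return findings

print(red_team(SEED_PROBES, MUTATIONS))
```

Even this crude loop shows the shape of the approach: the attacker side enumerates permutations tirelessly, while the intelligence layer (here, just a keyword check; in practice an adversarial model) decides which results count as flaws.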
Agentic Warfare meets the challenge of keeping pace with the latest model releases and addresses the reality that what's safe today will not be safe tomorrow. It also upends the misconception that AI security is a bottleneck: by making flaws in AI systems quick to find, it leaves the overall system inherently safer and speeds up the rate of iteration.
Continuous testing has surfaced flaws in all the world's leading foundation models, which have been ranked on a leaderboard. As enterprises such as McDonald's rush to deploy new systems built on these models, understanding the relationship between performance, cost, and security has never been more critical.
McDonald’s is seen as a leader in technology adoption; where it goes, others in its industry often follow. A super-sized approach to AI security at this stage should likewise set the standard for its sector. Otherwise, it risks making a meal of its AI ambitions.