The philosopher Heraclitus once opined – "You can't step in the same river twice," implying that no concept remains static but is always churning and evolving. For no concept is that truer than trust. More than 23 years ago, while assigned to the FBI's Computer Investigations & Infrastructure Threat Assessment Center, we partnered with the Cleveland office in launching InfraGard, a public-private partnership intended to facilitate the exchange of threat data, indicators of compromise and other critical pieces of information calculated to protect our mutual interests. At the heart of this exchange between humans – or what I affectionately call "carbon units" – was the notion of "trust." If the individuals coming together did not trust each other, they were not going to share this information, let alone at a pace that would keep abreast of the threat actors. How to engender trust among these participants emerged as a core challenge if we were to capture the benefits of speed.
Stephen M.R. Covey, in his iconic work “The Speed of Trust”, artfully captured the liberating efficiencies our processes can enjoy once infused with trust. Getting to that point was challenging enough when just humans were involved, engendering no small quantity of angst as we moved to jettison the debilitating controls historically required in the absence of trust. Add to that environment processes and threats that now morph at almost the speed of light or at least the speed of machine learning, and the challenge compounds exponentially.
Historically, there was a sense of liberation that we enjoyed once trust was established. Recent articles characterize those days as gone as a new mantra of "Zero Trust" is advanced. This concept, coined by Forrester and later formalized by the National Institute of Standards and Technology (NIST), demands that organizations no longer inherently trust anything inside or outside their perimeters. Instead, the model requires organizations to verify every request to connect to their systems, whether that request originates from an inside or outside party.
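To make "verify every request" concrete, here is a minimal sketch in Python of what such a policy check might look like. The names and checks (AccessRequest, is_token_valid, device_is_compliant, the entitlement table) are illustrative assumptions, not any particular vendor's API; the point is simply that no request is waved through because it arrives from an "internal" address.

```python
# Minimal Zero Trust policy sketch: every request is verified,
# whether it originates inside or outside the network perimeter.
# All names and checks here are illustrative, not a real product API.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    device_id: str
    source_ip: str
    resource: str
    token: str

def is_token_valid(token: str) -> bool:
    # Placeholder for real identity verification (e.g., OIDC/SAML validation).
    return token.startswith("valid-")

def device_is_compliant(device_id: str) -> bool:
    # Placeholder for a device-posture check (patch level, EDR agent, etc.).
    return device_id in {"laptop-042", "phone-117"}

def authorize(request: AccessRequest) -> bool:
    """Verify identity, device posture, and least-privilege scope on every request.
    Note there is no shortcut for 'internal' source IPs -- that is the point."""
    if not is_token_valid(request.token):
        return False
    if not device_is_compliant(request.device_id):
        return False
    # Least privilege: the user must be explicitly entitled to this resource.
    entitlements = {"alice": {"payroll-db"}, "bob": {"wiki"}}
    return request.resource in entitlements.get(request.user_id, set())

if __name__ == "__main__":
    req = AccessRequest("alice", "laptop-042", "10.0.0.5", "payroll-db", "valid-abc")
    print("allow" if authorize(req) else "deny")
```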
With new forms of technology sprouting up continuously and the growing adoption of mobile computing, cloud integration and BYOD, the number of endpoints on corporate networks needing protection is ever-increasing. This is where Zero Trust is said to matter most, shifting the focus from a single, large perimeter to every endpoint and user within the company. The model is designed to prevent a breach of one endpoint from spreading to the entire company. And while the implications of a Zero Trust model can seem daunting at first, implementing this framework can ultimately spare an enterprise a security headache of monumental proportions in the future.
With the help of machine learning and what is now characterized as "continuous authentication," enterprises can get a leg up on malicious actors and prevent corporate data from being stolen or lost. Machine learning helps enterprises analyze the locations, devices, IP addresses and times at which a typical employee logs into a system, developing insights over time to pinpoint unusual account behavior. When abnormal behavior is detected, access can be denied automatically, keeping corporate systems secure at a speed that does not encumber operations. In fact, rather than jettisoning the concept of trust altogether, this shift is better characterized as one in which trust is established continuously, almost instantaneously – as ever with us. Rather than thinking of controls as something trust allows us to jettison, these new controls engender trust at any time and at all times.
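As an illustration of how such behavioral baselining might work, the sketch below uses scikit-learn's IsolationForest to learn one user's typical login pattern (hour, device, IP subnet, location) and flag attempts that deviate from it. The features, sample data and threshold are assumptions for demonstration only; a production continuous-authentication system would draw on far richer telemetry and would typically step up authentication rather than deny access outright.

```python
# A minimal sketch of ML-driven continuous authentication, assuming
# scikit-learn's IsolationForest as the anomaly detector. The feature
# encodings and historical data below are illustrative, not real telemetry.

import numpy as np
from sklearn.ensemble import IsolationForest

# Historical logins for one user: [hour_of_day, device_code, ip_subnet_code, location_code]
history = np.array([
    [9, 1, 10, 3], [9, 1, 10, 3], [10, 1, 10, 3], [8, 2, 10, 3],
    [17, 1, 10, 3], [9, 1, 11, 3], [10, 2, 10, 3], [9, 1, 10, 3],
])

# Learn this user's "normal" behavior; contamination sets the expected outlier share.
model = IsolationForest(contamination=0.05, random_state=0).fit(history)

def allow_login(hour: int, device: int, ip_subnet: int, location: int) -> bool:
    """Return True if the attempt resembles the user's usual behavior.
    predict() yields 1 for inliers (allow) and -1 for outliers (deny or step up)."""
    attempt = np.array([[hour, device, ip_subnet, location]])
    return model.predict(attempt)[0] == 1

print(allow_login(9, 1, 10, 3))   # typical morning login from a known device -> likely allowed
print(allow_login(3, 7, 99, 42))  # 3 a.m. login from an unknown device and location -> likely denied
```

The design choice worth noting is that the model is retrained as new logins accumulate, so "normal" is re-established continuously rather than fixed at enrollment, which is what allows trust to be granted at machine speed.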
As the cybersphere continues to evolve, it is critical to keep in mind the risks involved with sharing mechanisms. By adopting the mantra of "continuous authentication," we acknowledge that the speed of trust is now nearly instantaneous, allowing us to regain much of the confidence we once lost in the security of our enterprise's devices and endpoints.