From almost the dawn of the public internet, threat actors have identified the user-browser-device combination as the most direct access route between the web and enterprise systems and, therefore, the best conduit for delivering malware and executing cyberattacks.
Protecting endpoints from known enemies
By the late 1980s, security-conscious organizations were sufficiently concerned about network intrusions to begin developing anti-virus software to scan their systems and remove any threats found. Firewalls were created to restrict the spread of viruses that got in. Both of these technologies, which by the early 1990s were being offered commercially by early entrants into the cybersecurity space, took a “know-your-enemy” approach of implementing defenses against specific forms of known malware.
Within a very short while, however, it became apparent that detection-based approaches, which initially seemed reasonable, were anything but. There was no way to keep up with the rapidly expanding pool of viruses and malicious software that threat actors were constantly creating and refining. Requiring that malware be known and identifiable in order to defend against it meant that cyber defenses were always at least one step – and more often many steps – behind cyberthreat actors.
Taking a page from cybercriminals who, for the most part, do not create new malware but rather alter existing strains sufficiently to evade detection, leading cybersecurity industry players broadened their detection-based approaches to recognize malware signatures and patterns, rather than just specific strains. This represented a significant defensive improvement.
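The difference between strain-specific detection and broader signature matching, and the gap that both leave, can be illustrated with a toy sketch. This is a hypothetical illustration, not a real anti-virus engine; the strain bytes, hash set and byte pattern are all invented for the example:

```python
import hashlib

# Toy illustration of two detection styles:
#  - exact-match detection knows only specific, previously seen strains
#  - signature matching also catches variants that share a byte pattern
# Anything altered enough to break both checks still slips through.
KNOWN_STRAIN = b"MALWARE-V1 payload"                       # hypothetical sample
KNOWN_HASHES = {hashlib.sha256(KNOWN_STRAIN).hexdigest()}  # exact strains
SIGNATURES = [b"MALWARE"]                                  # shared byte pattern

def scan(sample: bytes) -> str:
    """Classify a sample the way a simple detection-based scanner might."""
    if hashlib.sha256(sample).hexdigest() in KNOWN_HASHES:
        return "known strain"
    if any(sig in sample for sig in SIGNATURES):
        return "signature match"
    return "clean"

print(scan(KNOWN_STRAIN))                 # exact hash hit
print(scan(b"MALWARE-V2 altered bytes"))  # variant caught by the signature
print(scan(b"XOR-obfuscated variant"))    # novel strain reported "clean"
```

A variant altered enough to break both the hash and the pattern match is reported “clean”, which is precisely the defensive gap that keeps detection a step behind attackers.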
Responding to this move with a hearty “Game on!”, criminals developed malware with new signatures, patterns and delivery methods. In addition to suffering from the same defensive gap as earlier detection-based solutions, signature-based detection is ineffective against zero-day, distributed denial-of-service (DDoS) and social engineering attacks.
Shutting down unknown threat access
As early as a decade ago, in an attempt to escape this whack-a-mole approach to cybersecurity, the Defense Information Systems Agency (DISA) sought a solution that would harden its internet-facing surfaces (users, web browsers and the network-connected devices they use) rather than focusing solely on what kinds of “bullets” were being fired. The idea was to prevent delivery of all executable processes from the web, allowing only the desired, legitimate results of those processes to reach endpoint browsers.
This approach, now known as remote browser isolation (RBI), executes website code in a browser that is remote from the endpoint device and sends only safe, interactive rendering information to the endpoint browser. It has proven highly effective at keeping unknown malware and zero-day threats off endpoints and networks, exactly the kinds of code that detection-based approaches let through. Despite its effectiveness, however, early versions of RBI required significant computing resources to render web pages remotely and deliver them to devices as a safe stream of data.
In many cases, slow browsing speeds and poor rendering results exacted a “security tax” that end users were simply unwilling to pay. More recently, however, innovative rendering approaches, combined with the cost efficiencies of the public cloud, have addressed this issue, vastly increasing the speed and quality of RBI-based browsing. As a result, RBI has seen broad adoption in recent years, especially as part of a layered, defense-in-depth security strategy.
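Conceptually, RBI moves execution off the endpoint and forwards only the safe result. Real RBI products run a full remote browser and stream rendering data to the endpoint; the minimal sketch below merely illustrates the principle by discarding executable content on the isolated side before anything reaches the endpoint browser (the function name and regex patterns are hypothetical, for illustration only):

```python
import re

def remote_render(untrusted_html: str) -> str:
    """Toy stand-in for the isolated rendering step: active content is
    consumed (here, simply stripped) on the remote side, so the endpoint
    only ever receives the safe rendered result."""
    # Remove script blocks: executable code never leaves the isolation layer.
    safe = re.sub(r"<script\b.*?</script>", "", untrusted_html,
                  flags=re.S | re.I)
    # Remove inline event handlers (onclick, onload, ...) for the same reason.
    safe = re.sub(r"\son\w+\s*=\s*(\"[^\"]*\"|'[^']*'|\S+)", "", safe,
                  flags=re.I)
    return safe

page = '<div onclick="evil()">Hello</div><script>attack()</script>'
print(remote_render(page))  # prints: <div>Hello</div>
```

The design point is that safety does not depend on recognizing the malware: whatever the script would have done, only the rendered output crosses to the endpoint.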
A new target for known and unknown threats
Today, almost 35 years after the start of a still-ongoing, resource-draining cat-and-mouse game around browser-delivered endpoint malware, Web 2.0 and the move to the cloud have opened a new “front” in the cybercrime wars.
Much to their delight, cybercriminals are no longer limited to weaseling their way into user endpoints to reach corporate resources, nor are they reliant on brute-force attacks on a dwindling number of vulnerable VPNs. The increased use of Software-as-a-Service (SaaS)/cloud apps and the proliferation of web-facing corporate apps offer an attractive, more direct path into organizational networks, one with ample vulnerabilities to breach.
While the attack path has changed, the defensive responses now being deployed to address this new web-app attack surface closely parallel those described above for user-browser-endpoint attacks: identifying flaws and vulnerabilities, per the Open Web Application Security Project (OWASP) Top 10, and trying to patch, code and update them away.
Of course, patching, updating and repairing known flaws are part of essential app hygiene. The problem is that when it comes to apps and code, there are always new vulnerabilities that can be exploited, new ways to abuse legitimate app capabilities, and updates and patches that are not (or cannot be) applied quickly or completely enough. Similarly, when it comes to endpoint attacks, there are always new (or newly discovered) browser vulnerabilities, new malware strains, new social engineering and phishing techniques and new ways to exploit legitimate sites to deliver malware.
And as with browser-delivered attacks, no single magic bullet can stop web app exploits. Today’s code is simply too complex and generated too quickly to ensure that every “i” is dotted and every “t” crossed. Zero trust is a laudable ideal, but in today’s fast-paced digital environment, “always verify” frequently falls by the wayside under the pressure of getting new versions out.
That’s why defense-in-depth is key for attacks leveraging vulnerabilities in the user-browser-endpoint constellation. To protect against attacks that leverage vulnerabilities in corporate, cloud and SaaS apps, web app defenses should likewise be layered: web application firewalls (WAFs) to raise alarms about known threats to web apps; procedures for rapidly resolving, patching and updating known vulnerabilities; and “inverse” isolation, which hides app services from criminals scanning for flaws and prevents malware on unmanaged devices from penetrating app surfaces when users log in.
Roughly 40 years after the internet emerged from its Advanced Research Projects Agency Network (ARPANET) roots, it has hit middle age. For humans, that stage is a decidedly mixed blessing. In this case, however, it means we’ve accumulated enough internet experience to do things faster, smarter and better than in earlier days. For addressing the challenges of web app cyberattacks, that means combining isolation-based protection with detection-based response, right from the start.