In the early days of computer security, researchers set out in search of the holy grail of perfect code. They sought formal methods to analyze code and assure that software was doing all and only what it was supposed to do. There would be no such thing as software vulnerabilities, and hacking would be a thing of science fiction. Significant advances were made along this quest, and the approach remains enticing as a theoretical possibility. But despite years of research and substantial investment, the approach remains impractical to this day. In addition, the developers who must bear the cost of using these methods are often not the users who reap the benefits of less vulnerable code.

The economics of the software market do not favor investment in software assurance methods. So, are we doomed to vulnerable code and the dogged pursuit of finding and fixing vulnerabilities? Or should we aim for better, not perfect, and look for protections from our imperfect code elsewhere in the cybersecurity ecosystem?

If we start with the premise that code will never be perfect, that vulnerabilities are a permanent feature of our codebases, a host of opportunities to understand those vulnerabilities and minimize their impact on our systems becomes apparent. Importantly, the technology to address these opportunities is affordable and fits into existing development processes, which partially addresses the economic question of who bears the cost of software assurance.

The development process is the first line of defense. The DevOps process, the speed of which is often cited as a cause of vulnerable code, can be instrumented to help developers produce less vulnerable code. Static application security testing (SAST) can be run when code is checked in. There are fast and accurate products on the market to address this need. Context-sensitive analysis can determine which of the vulnerabilities identified by SAST need to be fixed and which pose a low threat to the system and can be left in place (e.g., because they are in parts of code rarely executed or because they can be contained in other ways). The key here is that a risk analysis is performed to determine which vulnerabilities should be addressed; the developer is not faced with the impossible chore or exorbitant cost of producing perfect code. 
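To make the triage step concrete, here is a minimal sketch in Python. The finding format, severity scale and thresholds are all hypothetical and for illustration only; real SAST tools report findings in their own schemas, and the risk policy would be tuned to the system at hand.

```python
# Minimal sketch of risk-based triage over SAST findings.
# The Finding fields and the scoring policy are hypothetical;
# real tools have their own output schemas and severity scales.

from dataclasses import dataclass

@dataclass
class Finding:
    rule_id: str      # which SAST rule fired
    severity: int     # 1 (low) .. 5 (critical), as reported by the tool
    reachable: bool   # can the code path be reached from an entry point?
    mitigated: bool   # is the risk contained elsewhere (sandbox, WAF, etc.)?

def must_fix(f: Finding) -> bool:
    """Block the check-in only for reachable, unmitigated, high-severity
    findings; everything else is recorded for monitoring instead of
    forcing the developer to chase perfect code."""
    return f.severity >= 4 and f.reachable and not f.mitigated

findings = [
    Finding("sql-injection", 5, reachable=True, mitigated=False),
    Finding("weak-hash", 3, reachable=True, mitigated=False),
    Finding("buffer-overflow", 5, reachable=False, mitigated=True),
]

print("fix now:", [f.rule_id for f in findings if must_fix(f)])
print("monitor:", [f.rule_id for f in findings if not must_fix(f)])
```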

This process improves the code base in several ways: it fixes the critical vulnerabilities, which shrinks the attack surface; it identifies the remaining vulnerabilities that must be monitored or addressed by other system components; and it encourages developers to take responsibility for the security of their code, not by becoming security experts, but by becoming better programmers.

The developer’s responsibility for the security of their code must extend to the third-party libraries they incorporate into their applications. Several information resources and products identify the known vulnerabilities in software libraries and other third-party code. Development teams should understand the risk profile of the software they are incorporating and make sure it aligns with the risk profile of their application. In this case, the developer usually does not have the option of fixing the vulnerabilities directly and must ensure that protections elsewhere in the application, or in the system as a whole, mitigate the code’s risk to an acceptable level.
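As one illustration of such a resource, the public OSV database (osv.dev) exposes a free query API that reports known vulnerabilities for a given package version. The sketch below checks a single, deliberately old PyPI package version; a real pipeline would iterate over the application’s full dependency list, for example from a lockfile.

```python
# Sketch: check one third-party dependency against the public OSV
# vulnerability database (https://osv.dev). A real pipeline would
# loop over every entry in the application's dependency list.

import json
import urllib.request

def osv_query(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return known vulnerabilities for one package version via OSV's query API."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

# An intentionally outdated version, used here only as an example.
for vuln in osv_query("requests", "2.19.0"):
    print(vuln["id"], "-", vuln.get("summary", "(no summary)"))
```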

Interestingly, frequently used libraries may be one place where the quest for perfect code makes sense. Given how widely such code is used, it could be worth the cost to completely address its vulnerabilities. Unfortunately, the developer of a library and its users are generally not the same, and the library’s wide use is spread across many development teams, so the cost of a fix and its benefits accrue to different parties, complicating the economics of investing in it. For enterprises that write software for their own use rather than for sale, these more affordable methods and targeted approaches go a long way toward making the economics of software assurance work. For companies that produce software for sale to others, the question of whether these methods are worth the investment remains open.

The consumer plays an important role in making the investment in software assurance worthwhile for code providers. An educated consumer (or one influenced by the recent Executive Order on cybersecurity) should demand information about the risk profile of the software they are buying. Software supply chain security tools can help assure that consumers know what risks are inherent in the software they are buying, and can give them leverage to insist on remedies for vulnerable code (in the form of discounts, update requirements or warranties).
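A software bill of materials (SBOM) is one concrete form this information can take. As a minimal sketch, assuming the vendor supplies a CycloneDX-style JSON SBOM, a consumer could inventory what is actually inside the product before accepting it; the field names follow the CycloneDX format, but the file name is illustrative.

```python
# Sketch: inventory a vendor-supplied SBOM so the consumer knows
# what third-party code they are buying. Assumes a CycloneDX-style
# JSON document; real SBOMs may also be SPDX or differ in detail.

import json

def summarize_sbom(path: str) -> None:
    with open(path) as f:
        bom = json.load(f)
    components = bom.get("components", [])
    print(f"{len(components)} third-party components declared:")
    for c in components:
        licenses = ", ".join(
            entry.get("license", {}).get("id", "?")
            for entry in c.get("licenses", [])
        ) or "unknown license"
        print(f"  {c.get('name', '?')} {c.get('version', '?')} ({licenses})")

# summarize_sbom("vendor-app.cdx.json")  # file name is illustrative
```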

These tools can also help consumers determine whether they have a security architecture in place that lets them take on those risks safely. It would be irresponsible to introduce code with known vulnerabilities into a system without taking steps to mitigate the risk of those vulnerabilities. Runtime monitoring can be an effective way to confirm that deployed code is executing as expected, and it becomes even more effective when coupled with information on the known vulnerabilities found by SAST. Another way to address these vulnerabilities is simply to defend against the malware used to exploit them; an unexploited vulnerability poses no risk. Here, again, knowledge from the SAST can aid our defense by alerting us when a vulnerable part of the code is executed, but even threat-agnostic approaches can be effective.
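As a simple illustration of that coupling, the Python sketch below uses the interpreter’s tracing hook to raise an alert whenever a function that static analysis flagged actually executes. The flagged names are hypothetical, and a production monitor would key on precise code locations and use much cheaper instrumentation, but the principle is the same: watch the risky paths rather than the whole program.

```python
# Sketch: couple SAST results with runtime monitoring. Functions the
# static analysis flagged as vulnerable trigger an alert the moment
# they actually run. The FLAGGED names are hypothetical examples.

import sys

FLAGGED = {"parse_header", "render_html"}  # as reported by the SAST tool

def alert_on_flagged(frame, event, arg):
    if event == "call" and frame.f_code.co_name in FLAGGED:
        print(f"ALERT: known-vulnerable function executing: {frame.f_code.co_name}")
    return None  # no per-line tracing; each new call still fires this hook

sys.settrace(alert_on_flagged)

def parse_header(raw: bytes) -> str:   # imagine SAST flagged this function
    return raw.decode(errors="replace")

parse_header(b"GET / HTTP/1.1")        # prints the alert, then runs normally
```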

This article originally ran in Today’s Cybersecurity Leader, a monthly cybersecurity-focused eNewsletter for security end users, brought to you by Security magazine.