This is the second part of a three-part blog series on startup security. Please check out part one too.
The anatomy of a software vulnerability is a bit like mercury accumulation in seafood. Trace amounts of naturally occurring mercury in seawater are absorbed by algae and bioaccumulate up the food chain. Large fish at the top of the food chain contain the most mercury and should be consumed in limited quantities. Software vulnerabilities similarly propagate and accumulate throughout the development ecosystem, from small snippets of code to large programs.
The largest software products must contend with a great number of vulnerabilities just to stay afloat. For example, Microsoft typically patches between 50 and 100 security vulnerabilities in Windows every month. As a user, the constant need to update applications can be fatiguing. You might be wondering why your music player app keeps bugging you to install security updates or why your smart TV will not let you launch Netflix without updating. Understanding where software vulnerabilities come from helps security professionals and developers effectively manage, communicate, and avoid them.
At the lowest level are vulnerabilities affecting programming languages, compilers, and development and runtime environments. This means that your application may already be vulnerable before you even begin writing it. Even a "Hello World" program may be susceptible to vulnerabilities depending on how it runs. While severe vulnerabilities at this level are not very common, they can have far-reaching consequences due to the number of software products affected.
A bit further up the chain are vulnerabilities affecting other parts of the programming stack. Front-end and back-end frameworks, content management systems (CMS), databases, etc. can all introduce vulnerabilities of their own. Therefore, the decisions made before writing your first line of code may impact your ability to create and maintain a secure application.
Next up are open-source libraries. The individuals or small teams developing open-source libraries provide an invaluable service to the software development ecosystem by creating freely reusable programs for little or no compensation. Virtually all the software tools we depend on daily make use of open-source libraries, and the most widely used libraries are integrated into a significant percentage of all commercial software. By importing open-source libraries, developers can instantly add new features to their software without having to write the code themselves. Simple applications can be completed in mere hours just by stringing together existing libraries and writing a small amount of integrating code.
The use of open-source libraries has some security benefits. Opting for a well-known library instead of writing custom code can often result in more mature, better-vetted code with fewer vulnerabilities. The old adage "Don't roll your own crypto" applies here. However, this does mean that any vulnerabilities that are present in a single open-source library can potentially affect many software products. In the past decade, some of the most widely proliferated vulnerabilities were tied to open-source libraries used by many commercial products.
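As a small illustration of that adage (a sketch of mine, not from the article): Python's standard library already ships a vetted way to generate security tokens, so there is no reason to improvise one with a general-purpose random number generator.

```python
import random
import secrets

# Risky: random is a predictable PRNG seeded from limited state,
# making "home-rolled" tokens guessable by an attacker.
weak_token = "".join(random.choice("0123456789abcdef") for _ in range(32))

# Better: the secrets module draws from the OS CSPRNG and is
# purpose-built (and vetted) for exactly this use case.
strong_token = secrets.token_hex(16)  # 32 hex characters

print(len(weak_token), len(strong_token))  # 32 32
```

The point is not the two lines of code themselves but the decision behind them: preferring a mature, widely reviewed implementation over custom code removes an entire class of subtle mistakes.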
Once you finally begin writing your own code, there are countless ways in which vulnerabilities may be introduced. I will not discuss all the programming pitfalls that result in exploitable vulnerabilities as there are plenty of resources that cover the topic in detail (e.g., the OWASP Top 10). To create a fully functioning application, even one that heavily relies on open-source libraries, custom code is usually required to pass data from the front-end to back-end functions, manage database read/write operations, present user-specific UI elements, etc.
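To make one of those pitfalls concrete, here is a hypothetical sketch (names and data are mine, not the article's) of the classic database mistake: building a SQL query by string concatenation versus using a parameterized query.

```python
import sqlite3

# Toy in-memory database standing in for a real back end.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # attacker-controlled value

# Vulnerable: the input is spliced directly into the SQL string,
# so the WHERE clause becomes always-true (SQL injection).
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safer: a parameterized query treats the input as data, not SQL,
# so the injection string matches no rows.
parameterized = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)     # [('alice',)] -- injection succeeded
print(parameterized)  # [] -- input treated as a literal string
```

The same pattern of "glue code between layers" recurs for file paths, HTML output, and shell commands, which is why the custom integration code is so often where vulnerabilities creep in.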
All of these could potentially cause security issues and every code commit must be sufficiently reviewed and tested to prevent new vulnerabilities. In addition, the act of integrating code, including libraries, means potentially combining vulnerabilities to produce new or amplified issues. For example, improper logging practices in one section of code combined with a directory traversal vulnerability in another can turn two relatively low-severity issues into a critical authentication bypass vulnerability.
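To illustrate the directory-traversal half of that example, here is a minimal, hypothetical sketch of the kind of check a file-serving function needs to block `../` escapes (the base directory and function name are my own, purely illustrative):

```python
import os

BASE_DIR = "/var/app/public"  # hypothetical document root

def is_safe_path(requested: str, base: str = BASE_DIR) -> bool:
    """Reject any requested path that resolves outside the document root."""
    # Resolve both paths so ".." components and symlinks are normalized
    # consistently before comparison.
    base = os.path.realpath(base)
    full = os.path.realpath(os.path.join(base, requested))
    # The resolved path must be the base itself or live beneath it.
    return full == base or full.startswith(base + os.sep)

print(is_safe_path("css/site.css"))      # True
print(is_safe_path("../../etc/passwd"))  # False
```

Without a check like this, an attacker can read files the logger (or any other component) happened to write, which is how two modest flaws combine into something far worse.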
Commercial software products
Things get quite interesting once an application enters the commercial software market. The eventual goal for many new software companies is to be acquired by a larger company or to grow into a large company themselves. Along the way, the software matures alongside the business through refactoring.
It is common for an application to be completely rewritten several times between its initial release and its post-IPO or post-acquisition form. At the same time, rearchitecting code from scratch is very time-consuming, so it can be eye-opening just how much of the design and code of a mature software product dates back to the initial proof of concept developed by the founding team.
As a software company grows in size and revenue, so does its ability to invest in detecting and mitigating vulnerabilities in its products. The added investment is necessary to defend against increasing attacker interest driven by user growth. However, not all code receives the same care.
Legacy code, or code that is left untouched and is often not well understood by the development team, can present a significant security risk. Legacy code may be tied to specific features that rarely require updates. It could also be the result of a developer or team that left without a proper handoff. Mergers and acquisitions, partnerships, abandoned features, and pivots can also result in pieces of poorly maintained code if handled incorrectly.
As the rest of the codebase is maintained to current security standards, legacy code is left behind, presumed to be sufficiently secure due to its stability. The quality of legacy code may also not reflect the current maturity and userbase of the software product, potentially resulting in security issues that are uncharacteristic of a mature product.
When a vulnerability is discovered in legacy code of an otherwise well-maintained and widely used software product, it can have a devastating effect. Because the code is not updated to current security standards, the types of vulnerabilities present may include severe issues that were previously common but are now well understood and mostly mitigated in newer code. These types of vulnerabilities tend to be the easiest to exploit with readily available tools.
When a critical vulnerability is discovered in legacy code, related vulnerabilities are often discovered soon after because the associated feature or function becomes an easy target for attackers. The Windows Print Spooler vulnerabilities of 2021, including PrintNightmare, are one example of this and highlight the dangers of unmaintained code.
There are many other potential sources of vulnerabilities that I have not covered, but it should be clear that vulnerabilities can arise during the earliest stages of development and propagate and persist far longer than one might expect. It should be no surprise then, that even seemingly simple applications may require frequent security updates.
A long list of CVEs for a software product does not necessarily mean that the product is insecure but is rather an indication that security concerns are regularly being identified and addressed. Still, if patches are frequently required for the types of vulnerabilities that should not be present in mature code, it could indicate that the vendor carries unresolved technical debt. To reduce the number and impact of avoidable vulnerabilities, secure development practices must be implemented early, reevaluated regularly, and applied diligently through the entire codebase.
This article is part 2 of a 3-part series on startup security. Part 1 discussed how startup culture is creating security gaps in new companies. Part 3 will focus on how to approach security at the earliest stages of a new company.