Searching for Bugs in Open Source Code

Let’s dispel the myth first: Open source software isn’t any less secure than closed source software. However, once a vulnerability is found in an open source program, it tends to be much easier to weaponize and exploit than one found in closed source software.

“The biggest risks of open source come from the fact that open source projects are often resourced as a labor of love, run by people in their spare time,” explained Casey Ellis, founder and CTO at Bugcrowd, in an email interview. “It’s not uncommon for a piece of software maintained by two-thirds of a person and their cat to end up as something that undergirds a sizeable chunk of the internet.”

This creates two big risks, Ellis added. The first is that there is limited bandwidth to remediate issues, which delays fix time and, ultimately, bottlenecks the capacity for the open source ecosystem to self-heal. The second, related to the need for help, is the relative ease with which an adversary can subversively work their code into a project or even socially engineer themselves into a position where they are the “evil” deputy responsible for its upkeep.

All organizations are relying more heavily on open source. As a GitHub report stated, it has become nearly impossible to find a situation where data isn’t passing through at least one open source component. And all industry verticals are using open source code, meaning one exploited vulnerability could do major damage. Yet, GitHub reported, it takes an average of four years to discover a vulnerability in open source software.

Enter Bug Bounties

Four years is far too long for a vulnerability to sit undiscovered, which is why more and more companies have introduced bug bounty programs. In the Big Tech world, Apple, Facebook, Microsoft, AWS and Google all have programs that pay out millions to successful security researchers and ethical hackers who find serious problems.

Another example is Clubhouse, the audio social networking app, which just partnered with HackerOne. “Clubhouse’s public bug bounty program will offer their in-house security team continuous testing support from a diverse pool of talent through our global community of more than 1 million hackers,” said Michiel Prins, HackerOne co-founder.

The goal of this bug bounty program is to find and fix as many vulnerabilities as possible before they impact the company’s open source code.

Bug bounties are important for a couple of reasons, said Jake Williams, co-founder and CTO at BreachQuest, in an email comment. First, they incentivize more researchers to analyze code and find vulnerabilities. Second, they give researchers an incentive to disclose those vulnerabilities publicly only after they’ve been patched, leading to a safer cybersecurity landscape.

Starting a Bug Bounty Program

Bug bounty programs are effective at identifying vulnerabilities (even more so when source code is available), which is half of the problem, according to Ellis. They can also draw in security-minded engineers with the ability to contribute open source fixes to projects in need of that sort of help.

Anyone can run a bug bounty themselves; however, running a program is extremely complex. “Organizations without a good bug bounty implementation plan can run into significant challenges, including reputational harm if they don’t patch reported vulnerabilities quickly enough,” said Williams. “Any organization looking to implement a bug bounty program for the first time should engage a third party.”

Aside from funneling greater security scrutiny and help into the products that often power their own organizations, there’s a tremendous ecosystem benefit, and a degree of internet-protector prestige, in funding projects whose value extends beyond the company writing the check, Ellis added. And it doesn’t matter much who makes the discovery, even if it is a threat actor revealing the vulnerability.

Open source offers a clear demonstration of why “black hat” and “white hat” designations quickly become irrelevant with respect to pure vulnerability management: The codebase is publicly available, so there’s no practical gain in gating who you might be able to get actionable feedback from, Ellis pointed out.

“Ultimately, organizations cannot reliably control the potential actions of an adversary, but they can influence how difficult it will be for the attacker who eventually shows up. In this case, the information is what is important—not where it came from,” said Ellis.

Sue Poremba

Sue Poremba is a freelance writer based in central Pennsylvania. She’s been writing about cybersecurity and technology trends since 2008.