Just more than a week ago, Wikileaks released “Vault 7,” a collection of information on various digital security exploits used by the CIA.
This release included information on methods for spying on people through their phones, laptops and smart TVs. It also included information on vulnerabilities in commonly used software that is supposed to provide secure and private communication.
Notably, this release did not include the vulnerabilities themselves; Wikileaks wants to release them in a way that lets the affected companies fix their software quickly.
To understand what the CIA had, and what Wikileaks (and who knows who else) now has, let's look at one of the most publicized security flaws of the last few years. It's known as "Heartbleed" because it was a bug in the heartbeat system of OpenSSL, one of the most widely used pieces of software that servers rely on to encrypt incoming and outgoing traffic. That encryption is what makes things like online banking and safely buying and selling online possible.
Heartbeat systems check if a computer such as a website server is online and able to receive and send data.
The Heartbleed bug let a client claim its heartbeat message was longer than it actually was; the server would echo back the claimed number of bytes, including whatever happened to be sitting in nearby memory. It's kind of like calling someone and saying, "Hello, tell me you're awake and then the next hundred random things you think of."
If a hacker tried this enough times, they could potentially get passwords and security codes that would allow them to access a system. This is known as a buffer over-read vulnerability, in which a system can be made to return more data than it was supposed to.
The part of the code for the heartbeat system in OpenSSL that caused the vulnerability was shorter than this sentence. The fix was shorter than this paragraph. Upsettingly, though the fix was created in 2014 when Heartbleed was discovered, almost 200,000 websites had not applied it as of January 2017. Not all software vulnerabilities are buffer over-reads, but most are similarly small coding errors, places where a human programmer forgot to do something or did it incorrectly.
In comes the United States government with its vast surveillance apparatus and apparently large databases of software vulnerabilities and the methods to exploit them.
The Vault 7 leak is the second major leak of information on specific software vulnerabilities. The first was last August, with the "Shadow Brokers" release of NSA-held vulnerabilities.
There is also strong evidence that these agencies do not disclose software vulnerabilities to the companies that produce the software, despite repeated insistence from government officials that they do so. They just leave them open so they can continue to use them.
The existence of these leaks does not necessarily show that the CIA is spying on you or on anyone in particular. But the fact that these exploits have leaked means someone else could now use them against you or your employer, whether for spying, identity theft or ransomware.
With these leaks, we now know that not only does the government hoard software vulnerabilities, it also can't keep that hoard from being released. This is a serious problem.
This is a serious problem because the supposed goal of these programs is to make us safer. Even if we're safer from terrorist attacks in the US, which are relatively rare, is it really worth it if we're much more vulnerable to digital crime?
This is also a serious problem because our increasingly connected world means the chance grows year over year that the next terrorist attack that kills a large number of people will be a digital attack on computerized infrastructure. The software running that infrastructure is going to have vulnerabilities.
If our government continues to hoard software vulnerabilities, it is responsible in part for harms caused by future digital crime.