Hacks make headlines. But usually, the focus is on who did it -- notorious cyber criminals, hacktivists, or state-sponsored actors. Readers want to know who they are, where they're from, what they did, and why they did it. How they did it gets glossed over.
In fact, the "how" is the most important part -- and application vulnerabilities are common culprits. Subtle programming errors allow hackers to subvert security controls, steal user credentials, or run malicious instructions on a remote system. Programmers, like everyone else, screw up sometimes.
Screw up how, you ask? Here's a list of some of the most common (and egregious) security mistakes that coders make.
1. You trust third-party code that can't be trusted
If you program for a living, you rarely -- if ever -- build an app from scratch. It's much more likely that you're developing an application from a pastiche of proprietary code that you or your colleagues created, paired with open source or commercial third-party software or services that you rely on to perform critical functions. These functions could range from licensed presentation and graphical interface elements to user authentication and encryption (think OpenSSL).
Often, third-party components are poorly managed and rife with exploitable vulnerabilities that may have gone unnoticed. Yet most development organizations can't even say for sure what third-party components they're using, let alone whether they were audited for security holes.
What's a developer to do? Writing from scratch isn't an option, but neither is crossing your fingers and hoping for the best. At the least, review recent guidance on ensuring the reliability of third-party software. For example, check out the U.K.'s Trustworthy Software Initiative and the Financial Services ISAC's Appropriate Software Security Control Types for Third Party Service and Product Providers.
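A first, practical step is simply knowing what you depend on. The sketch below is a minimal, illustrative dependency audit: it parses a requirements-style list and flags anything that appears in a locally maintained advisory table. The package name and advisory entry are hypothetical; a real audit should query a live vulnerability feed such as OSV or the NVD.

```python
# Minimal sketch: inventory third-party dependencies and flag any that
# appear in a (hypothetical, hand-maintained) advisory table. A real
# audit should use a live vulnerability feed, e.g. OSV or the NVD.

# Hypothetical advisory data: package name -> versions with known holes.
KNOWN_BAD = {
    "openssl-wrapper": {"1.0.1"},   # illustrative entry, not a real CVE
}

def parse_requirements(text):
    """Parse 'name==version' lines into (name, version) pairs."""
    deps = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, version = line.partition("==")
        deps.append((name.lower(), version))
    return deps

def audit(deps):
    """Return the subset of dependencies with a known advisory."""
    return [(n, v) for n, v in deps if v in KNOWN_BAD.get(n, ())]

reqs = "requests==2.31.0\nopenssl-wrapper==1.0.1\n"
flagged = audit(parse_requirements(reqs))
```

Even a crude inventory like this answers the question most organizations can't: which third-party components are we actually shipping?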
2. You hard-code passwords and backdoor accounts
We know, we know. You put it in there for testing and forgot to take it out. Or maybe that backdoor administrative account was (quietly) suggested by folks higher up. After all, who's going to find out about the account, anyway? The short answer: maybe nobody -- or maybe the wrong people. As with extramarital affairs, in the case of backdoor accounts, you could argue that only fools get caught and everybody gets caught, yet be right both times.
Earlier this year, Cisco Systems acknowledged the existence of undocumented backdoors in several of its routers that could give a remote attacker "root" access to an affected device. In 2012, Project Basecamp, a volunteer effort to audit software used in the industrial control sector, found that login credentials for administrative accounts were frequently written into the actual firmware running ICS devices like programmable logic controllers. Rather than rush to fix the problem, many vendors argued that the accounts were "features" designed to make their products easier to manage. Nice try -- the truth is you don't know who or what will take an interest in your application or its security features.
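The alternative to baking a credential into source is reading it from the runtime environment and refusing to start without it. A minimal sketch, assuming an environment variable named ADMIN_PASSWORD (the name is illustrative; a secrets manager is a stronger choice in production):

```python
import os

# Anti-pattern: a credential baked into source, readable by anyone with
# access to the code, the repo history, or the shipped artifact.
# ADMIN_PASSWORD = "letmein"   # never do this

def get_admin_password():
    """Read the credential from the environment at runtime.

    The variable name ADMIN_PASSWORD is illustrative. Failing fast when
    it is absent beats silently falling back to a built-in default.
    """
    password = os.environ.get("ADMIN_PASSWORD")
    if not password:
        raise RuntimeError("ADMIN_PASSWORD not set -- refusing to start")
    return password
```

The point is less the mechanism than the posture: there is no account, and no password, that exists only in the source tree.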
3. You don't check inputs
SQL injection and remote file inclusion are two of the most common and most dangerous security vulnerabilities around. SQL injection in particular is the common thread that ties together almost every major breach in the last 10 years, from Sony Pictures to MySQL to LinkedIn to the security firm Bit9. The cause: Application developers trust data input from an external source like a Web-based form or even a database. By manipulating the SQL query submitted, a malicious actor can cause the database to perform an action that the programmer did not intend, such as dropping database tables containing user logins and passwords, credit card numbers, and so on.
Many fixes are available to mitigate these threats. Organizations like Mitre recommend that developers take the position that "all input is malicious" and design around that. At a practical level, programmers must make sure that applications accepting input run with the least privileges needed to accomplish the task at hand. Queries and commands must use properly quoted arguments and escape special characters within those arguments.
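The standard defense against SQL injection is parameterized queries, where the driver binds user input as data rather than splicing it into the statement text. A minimal sketch using Python's built-in sqlite3 module:

```python
import sqlite3

# Parameterized queries with the stdlib sqlite3 module: the ? placeholder
# binds the input as data, so a payload like "' OR '1'='1" cannot change
# the structure of the SQL statement.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def find_user(name):
    # The driver handles quoting and escaping of the bound value.
    cur = conn.execute("SELECT name FROM users WHERE name = ?", (name,))
    return cur.fetchall()

# A classic injection string is treated as a literal (nonexistent) name.
rows = find_user("' OR '1'='1")
```

The same placeholder discipline applies whatever the database; only the placeholder syntax (`?`, `%s`, `:name`) varies by driver.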
4. You don't secure your data
After input handling, data security is probably the single biggest category of insecurity in the programming world. Insecure data handling takes many different forms and turns up on many different checklists of programming no-nos. Missing encryption comes in at No. 8 on the SANS Institute's list of the 25 most dangerous programming errors. On the OWASP list of the top 10 Web application security problems, "sensitive data exposure" is No. 6.
Simply put, it's not acceptable these days to handle sensitive data without encrypting it in transit and at rest. That includes user names; passwords; personally identifiable information; and data covered by state, federal, and international regulations, including (in the United States) HIPAA, Sarbanes Oxley, PCI, and their counterparts in the EU and elsewhere.
Merely employing encryption in your application isn't enough. It needs to be implemented properly, using robust encryption tools that are resistant to brute-force attacks -- and it needs to take likely attack scenarios into account so that the protection encryption provides can't be subverted. Outdated algorithms, in particular, are vulnerable to brute-force attacks.
The rash of major data breaches in the last year familiarized a surprisingly large swath of the public with the nuances of encryption. In the course of trying to figure out whether their data was stolen, consumers had to wade through crypto terms such as "symmetric algorithm," "hashing," and "salt." The episode exposed the fact that not every firm used due care in protecting the data it collected.
In one example, Adobe found itself in hot water when it admitted to using a reversible "symmetric" encryption algorithm to protect the pass codes for 130 million customers that were stolen by hackers. Unlike one-way hashed (and salted) passwords, the encrypted values could be reversed to recover the actual cleartext passwords by a knowledgeable attacker who worked out the key used to encrypt them.
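The one-way alternative looks like this in practice: a unique random salt per user, a slow key-derivation function rather than a plain hash, and a constant-time comparison at login. A minimal sketch with PBKDF2 from Python's standard library (the iteration count is illustrative; pick it to match current guidance):

```python
import hashlib
import hmac
import os

# One-way, salted password storage with PBKDF2 -- in contrast to the
# reversible symmetric encryption described above. Nothing stored here
# can be decrypted back to the cleartext password.
ITERATIONS = 200_000   # illustrative; tune to current guidance and hardware

def hash_password(password):
    salt = os.urandom(16)                    # unique random salt per user
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), salt, ITERATIONS)
    return salt, digest                      # store both; neither is secret

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = hash_password("correct horse")
```

Because each user gets a fresh salt, identical passwords produce different stored digests, which defeats precomputed lookup tables.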
Even if the crypto you use is rock solid, it matters little if the application code that surrounds it is filled with exploitable holes that give attackers access to data in the clear. The security firm Matasano, for example, has detailed the many ways in which Web browsers are hostile to cryptography. Given the inherent insecurities in platforms like Java, Matasano argues it is impossible to deploy a secure cryptographic system natively in Java.
5. You ignore layer 8
One of the biggest mistakes programmers make is to work without awareness of the carbon-based life forms who use software -- that is, humans, or what some folks refer to as OSI "layer 8." Much of the security advice you read is concerned with malicious hackers. But the truth is that well-meaning users and administrators are often the linchpins of attacks. The term of art is "social engineering," which refers to manipulating individuals toward trust and intimacy, sowing confusion, or playing upon modern workers' chronic distraction and need to get 10 things done at once.
How is that your problem as a programmer? Take a look at your user prompts, UI elements, and error messages. Are they as clear-cut as they should be? When users take an unusual or risky action, do you warn them in a way that would make them think twice, or do you barf up the names of registry keys and executables that 9.99 out of 10 users will click past or ignore? How about your default configuration? Can end-users interact with your application with "least privilege" accounts, or do they need administrative rights? If you ship with default credentials, do you force users to immediately update to unique (and strong) credentials, or can they stick with the default ... forever?
The Department of Homeland Security advises programmers to "assume that human behavior will introduce vulnerabilities into your system," noting simply that "people introduce vulnerability." You can't prevent that, but you can certainly plan for it.
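The default-credentials question above can be planned for in code. This is a minimal, illustrative sketch -- the account names, return values, and in-memory store are all hypothetical -- of a login flow in which a factory-default password works exactly once, and only to force its own replacement:

```python
# Sketch of "no permanent defaults": logging in with the factory-default
# credential succeeds only long enough to force a password change.
# Names, return values, and the in-memory store are illustrative.
DEFAULT_PASSWORD = "admin"

accounts = {"admin": {"password": DEFAULT_PASSWORD, "must_change": True}}

def login(user, password, new_password=None):
    acct = accounts.get(user)
    if acct is None or acct["password"] != password:
        return "denied"
    if acct["must_change"]:
        # Refuse to proceed until a new, non-default password is chosen.
        if not new_password or new_password == DEFAULT_PASSWORD:
            return "change-password-required"
        acct["password"] = new_password
        acct["must_change"] = False
    return "ok"
```

The design choice is that the human can't skip the step: there is no code path from a default credential to a working session.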
This story, "5 big security mistakes coders make" was originally published by InfoWorld.