You're a software engineer or architect. Imagine a security audit just successfully exploited holes in your system's defenses. Or worse, imagine your product was the victim of a data breach that received lots of press. Your immediate focus is likely providing answers to C-level executives showing the issue is patched with no additional customer impact. After the smoke clears, is your team asking and answering this fundamental question:
Why did this happen?
There are numerous reasons code can be vulnerable. The basic one is that writing software is hard, and writing secure software is even harder. As a software engineer, you just have to remember everything you can about authentication, confidentiality, buffer overflows, and the like when designing and implementing your systems, right? It certainly doesn't hurt to know as many of those concepts as possible. But to definitively answer the why question, three core reasons may provide the answer:
- You haven't spent time developing the mindset that the products and services you create need to be protected.
- You lack confidence that the work you've done is providing the necessary protection for your prized inventions.
- You're not totally sure who you should be protecting your software from.
Issue 1: Protecting Your Software Isn't As Important As It Should Be
As an engineer in a big software organization, you're focused on designing, implementing, and testing, say, a single component of the system, and you must complete your tasks on time. Focus is devoted to the features that meet a defined measure of quality. Thinking about security while implementing those features is often low on the priority list or disregarded entirely. For example:
Not my job:
- "We already have an internal security team."
- "We always hire those Carve consultants anyway."
- "Someone else will find any security issues in my code before it goes to production."
- "We're using the latest & greatest firewall, so we don't need to worry too much about it."
Too late now:
- "I can't think about that now, I'm already behind schedule."
- "Well, if someone's made it this far, there are bigger problems."
Most of these responses don't stem from unfamiliarity with security. Engineers are smart and create impressive works; you want to spend most of your time doing that, not debating whether to use a 128- or 256-bit encryption key. The problem is that security is your job, you may not already be protected, now is better than never, and your organization may end up with even bigger issues because your vulnerable code made it easy for an attacker to obtain sensitive data.
It is not uncommon for key stakeholders to be out of a job due to poor security choices or handling of security issues. While you don't need to become a crypto nerd, you don't want to have a lackadaisical attitude when it comes to the things you can do to ensure you're designing and implementing a secure system.
Engineers and architects should introduce security into their development life-cycle as early as possible. Review what it is you're creating and ask yourselves these questions:
- What kind of data will we process, and how does it get from point A to B?
- What kind of data will we have to store? Are there any rules or community best practices for handling this type of data?
- Are we performing the right activities at the right time?
- Do we have management buy-in to implement security in our development life cycle?
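To ground the stored-data question, here is a minimal sketch of one widely documented practice for one common data type, credentials at rest: salted, iterated password hashing. The function names and iteration count are illustrative assumptions, not a prescription for your stack.

```python
import hashlib
import hmac
import os

# Illustrative iteration count; consult current guidance for your algorithm.
ITERATIONS = 600_000

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A per-user random salt means identical passwords produce
    # different digests, defeating precomputed lookup tables.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking match length via timing.
    return hmac.compare_digest(candidate, digest)
```

The point is less the specific algorithm than the design-time decision: answering "how do we store this data?" before implementation begins, instead of defaulting to plaintext and patching later.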
Conscious decisions that incorporate security need to be made during design. This requires research, and during that time you should discover pillars like session management, authorization, and authentication. While it certainly helps to know beforehand that these pillars are needed, as engineers and architects you should motivate each other to take on a security perspective when evaluating product architecture and implementation, protecting your code, your data, and ultimately your organization and its customers.
Software engineers at all levels should impress upon management the importance of training that helps spur and develop secure practices. Architects should incorporate security into their delivery estimates to ensure the extra time needed to develop secure mechanisms and test for security issues is accounted for.
The desire to learn secure design and implementation needs to be present so that integrating it into your product feels natural and not an impediment to the success of the product. Less vulnerable code will result.
Issue 2: You Really Aren't Sure If Your Code is Secure
You've implemented some community practices when it comes to security, but are you sure you're doing it right? Do these thoughts come to mind:
- There were unhandled scenarios because there was no requirement for it, and it's easier to pretend the scenario doesn't exist.
- That StackOverflow answer got the job done, but I just copied & pasted and made sure it compiled; I really don't understand why it works.
- The API or package I included had documentation, but I only looked at functions I cared about. I don't know much about the package outside of that.
- It may be fragile, but my code works, so let's just wrap it up and call it done before something breaks.
- I don't remember if I read the RETURN VALUES or ERRORS of that man page.
- I don't think we had any security-relevant test cases in our integration test suites.
- Oops, I forgot there was a threat model document; I probably didn't capture the situations that it prioritized.
Not having total confidence is to be expected; writing good-quality, secure software isn't easy. However, security vulnerabilities are a by-product of low confidence. As an engineer, do your best to write code you and your co-workers understand. Take fewer shortcuts and invest the time it takes to be confident in what your code does and why. Make use of available internal and external documentation, and ensure the security activities from the design phase are actually tackled during implementation.
In addition, your tests should be easily repeatable processes that serve as an additional check on secure implementation, so make sure they exist. Complex code that no one understands well generally sees the least coverage during review and test.
Write code that works and makes sense. If it doesn't make much sense or is poorly understood, it's a breeding ground for security vulnerabilities.
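As one concrete illustration of the RETURN VALUES point above, here is a minimal Python sketch (the `list_dir` helper is hypothetical) that consults a subprocess's exit status instead of assuming success:

```python
import subprocess

def list_dir(path: str) -> list[str]:
    """Run `ls` and surface failures instead of silently ignoring them."""
    result = subprocess.run(["ls", path], capture_output=True, text=True)
    # Check the documented exit status rather than assuming the call worked;
    # an unchecked failure here would quietly return an empty listing.
    if result.returncode != 0:
        raise FileNotFoundError(result.stderr.strip() or f"ls failed for {path}")
    return result.stdout.split()
```

Code like this is easy to review and test precisely because its failure path is explicit, which is the opposite of the copy-pasted, "it compiles" snippet.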
Issue 3: You Think There Are Only Bad Guys and 'Everyone Else'
As an engineer, you may think your software only needs protection from some malicious hacker in a basement wearing a hoodie whose goal is to leak company information to Pastebin. While that hacker is a valid threat to your system, it doesn't mean "Everyone Else" should be labeled as trusted.
Part of protecting your assets is knowing who your threats are and where they come from. If you determined at design time that various authorization levels are needed, then ensure those authorization checks are verified at test time. Your users can be particularly clever, and their willingness to probe for vulnerabilities should not be discounted.
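An authorization scheme designed on paper can be pinned down with repeatable negative tests. The role names and policy table below are hypothetical, a sketch rather than any particular framework's API:

```python
# Hypothetical role-to-permission policy; real systems would load
# this from configuration or a policy service.
ROLE_PERMISSIONS = {
    "admin":  {"read_reports", "export_data", "manage_users"},
    "viewer": {"read_reports"},
}

def is_authorized(role: str, action: str) -> bool:
    # Deny by default: unknown roles and unlisted actions get nothing.
    return action in ROLE_PERMISSIONS.get(role, set())

# Negative tests matter as much as positive ones: verify what a
# low-privilege user cannot do, not just what an admin can.
assert is_authorized("admin", "export_data")
assert not is_authorized("viewer", "export_data")
assert not is_authorized("guest", "read_reports")  # unknown role is denied
```

Checks like the last two are the test-time counterpart of the design-time decision that different authorization levels exist at all.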
Insider threats and targeted victimization of authorized users are among the most widely used attack approaches. A user who isn't expected to do much should not be allowed to do much. If the "perimeter" is compromised, don't make sensitive data access a piece of cake for an attacker. Hacking has come a long way from simple website defacements; organized crime, ransomware operators, and commercial and academic espionage actors may not be easily seen or accounted for.
Humans generally trust each other in the absence of overt negatives. On the off-chance that trust is violated, how vulnerable is your system?
If you're a software engineer or architect who wants to produce vulnerable code less often, you can individually play a role that encourages:
- Having a security mindset and willingness to research security-relevant issues in the course of creation. The earlier, the better.
- Having verifiable confidence that the pursuit of secure design and code is being achieved and advised within the organization.
- Recognizing and defining your threat actors; you may be overlooking some.
When presented with a security vulnerability, try to understand why the vulnerability existed in the first place. The answer may be lurking within the above three issues.
Engineers do not need to write perfect code. Software development is a pipeline and Carve Systems encourages internal and external security integration at all stages of the pipeline.
But let's say you've followed each point, tried your best to do everything right, and a vulnerability is eventually discovered in code you wrote. Don't beat yourself up. At times, it takes a different set of eyes on your system to uncover security lapses. Take it as a learning experience: identify areas where you can improve your ability to detect and protect, build code you're confident in, or see things that may not be so obvious.
Jonathan Wrightsell is a Senior Consultant at Carve Systems with a software engineering background at small and large Fortune 500 organizations.