Don’t Trust Your Code Security, Verify It

It’s not nuclear weaponry, but when an enterprise develops a critical application in-house to handle sensitive data, that application can create existential-level institutional peril.

No One Plans To Leave Gaping Security Holes

…but they are there, year after year after year. An enterprise can (and should) layer on other security: Web application firewalls, for example, to block some kinds of common attacks, and client-side security to help reduce the risk of Magecart and similar attacks. These all have a part to play, and each reduces risk. But none is a reasonable substitute for secure development of the application itself.

At the base of any secure development effort is a secure development methodology, which should be formalized (and can even be trendy – hello, SecDevOps and DevSecOps!). That methodology rests on broadly applicable principles of secure development such as “Don’t trust user input to be valid” and “Don’t trust buffers to be big enough” (note the recurring theme).
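To make the first principle concrete, here is a minimal Python sketch of allow-list input validation; the permitted character set and length limit are assumptions for illustration, not a standard:

```python
import re

# Hypothetical validator: allow-list the characters and length of an
# externally supplied username instead of trusting it to be valid.
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{1,32}")

def parse_username(raw: str) -> str:
    """Reject anything that is not a short, known-safe identifier."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

parse_username("alice_01")       # accepted
# parse_username("alice'; --")   # raises ValueError
```

Allow-listing known-good input tends to hold up better than trying to enumerate every known-bad pattern.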

Development environments, languages, and libraries can make secure application development easier too, and should be selected with that in mind where choice is possible. Where developers do not control those variables, methodology is one backstop that carries over independent of tooling. The other, of course, is testing.

Use Tools for Security Testing

Independent of language and methodology, development security teams can and must insert security testing tools into the production pipeline. These start with code scanning as a part of unit testing: Does the code check that an input will fit in its buffer? That array bounds are checked and respected? That input strings are sanitized before they are fed into output of any sort? Looking at the code even before running it can head off a multitude of sins.
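As one concrete instance of that unit-level checking, a test can assert that user input is escaped before it reaches output. The sketch below is illustrative: render_comment is a hypothetical helper, and the payload is just a marker string.

```python
import html

def render_comment(user_text: str) -> str:
    # Hypothetical rendering helper: escape user input before it is fed
    # into HTML output of any sort.
    return f"<p>{html.escape(user_text)}</p>"

def test_comment_output_is_escaped():
    # Unit-level security check: a script payload must not survive into markup.
    rendered = render_comment('<script>alert("x")</script>')
    assert "<script>" not in rendered
    assert "&lt;script&gt;" in rendered
```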

Beyond checking form, there is checking function, stretching all the way to attempted compromise of a release candidate. Again, IT has to add tools for direct verification. These can be passive, watching the application to see what it generates in the course of interacting with users or functional testing scripts, or active, making attempts to break the application or to break in through it.

Tools such as OWASP’s Zed Attack Proxy (ZAP) can work both ways: watching passively for various gaffes (leaving security headers out of responses, for example) or actively attempting application-level compromises (cross-site scripting attacks against the site, for example).
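To illustrate the two modes outside of any particular tool, the following Python sketch (using the requests library against a hypothetical staging URL and endpoint) runs a passive-style check for missing security headers and an active-style probe for reflected cross-site scripting:

```python
import requests

TARGET = "https://staging.example.com"  # hypothetical test target

def test_security_headers_present():
    # Passive-style check: inspect an ordinary response for missing headers.
    resp = requests.get(TARGET, timeout=10)
    for header in ("Content-Security-Policy",
                   "X-Content-Type-Options",
                   "Strict-Transport-Security"):
        assert header in resp.headers, f"missing security header: {header}"

def test_reflected_xss_probe():
    # Active-style check: send a marker payload to a (hypothetical) search
    # endpoint and verify it is not reflected back unescaped.
    payload = '<script>alert("probe")</script>'
    resp = requests.get(f"{TARGET}/search", params={"q": payload}, timeout=10)
    assert payload not in resp.text
```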

Automate the Testing…and Commit

But testing with tools is not enough. Developers have to build the testing into every stage of the process. Since most companies and teams are pursuing agile/DevOps methodologies, that means building automated security testing into the production pipeline as one more part of the broader automated testing suite. It is important to have folks conversant with application security design and maintain the testing routines, even if they are on loan from cybersecurity teams rather than permanent members of the development team.

And, as important as it is to have Chef or Puppet or whatever perform security testing, it is just as important to abide by the results. When a module or a program fails the security testing, it is imperative to fix the security issues immediately rather than passing the buck to a later sprint. Developers must treat security as equal to functionality, and reject a build for a security flaw just as decisively as they would reject one that didn’t work.
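In pipeline terms, that commitment can be as simple as a gate script that runs the security checks and rejects the build on any failure. The sketch below assumes a pytest security suite and the Bandit scanner purely as examples; the tool choices and paths are illustrative:

```python
import subprocess
import sys

# Hypothetical pipeline gate: run each security check in order and fail
# the build on the first non-zero exit code.
CHECKS = [
    ["pytest", "tests/security", "-q"],  # automated security test suite
    ["bandit", "-r", "src/"],            # static security scan of the code
]

def main() -> int:
    for cmd in CHECKS:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"security gate failed: {' '.join(cmd)}", file=sys.stderr)
            return result.returncode  # non-zero exit rejects the build
    return 0

if __name__ == "__main__":
    sys.exit(main())
```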

In other words: If it is not working securely, it is not working, period.  Commit to that, and an organization should be able to vastly reduce the risk associated with whatever applications it wants to develop.
