My friend interviewed a job applicant the other day who had 5 years of experience in “cyber”, a master’s degree, and plenty of certifications, only to find that the only thing they knew how to do was run a few predefined scripts and kick the results upstairs to research.
The push to get better tooling for infosec is laudable, but I fear that we are making it all just a little too much about “repeatable process” and “automation” in a field where the whole point is to anticipate and react to adversaries doing things the system design never accounted for.
There are a lot of things security assessors look for that are clearly repeatable and testable in a pass/fail way, or that could be found by an automated tool. And those things should be automated unit tests and automated scans.
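As a concrete illustration of that repeatable subset, here is a minimal sketch in Python; the base URL and endpoint are hypothetical placeholders of mine, not anything from the thread:

```python
# Hypothetical pass/fail checks; run with pytest on every build.
import requests

BASE_URL = "https://staging.example.internal"  # hypothetical target

def test_unauthenticated_access_is_rejected():
    # A request with no credentials must never yield a 2xx response.
    resp = requests.get(f"{BASE_URL}/api/v1/accounts", timeout=10)
    assert resp.status_code in (401, 403)

def test_security_headers_present():
    # Header checks are mechanical and belong entirely to automation.
    resp = requests.get(BASE_URL, timeout=10)
    assert "max-age=" in resp.headers.get("Strict-Transport-Security", "")
    assert resp.headers.get("X-Content-Type-Options", "").lower() == "nosniff"
```

Checks like these earn their keep precisely because they are boring: the verdict is the same no matter who runs them.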
Doing those things should be considered a base level of due diligence, but the problem with making something part of a system is that systems are the targets of attackers: a fixed, predictable defense is itself something an adversary can study and route around. These things make the system more resilient, but they do not substitute for security engineering.
The way to keep ahead of attackers isn’t to ignore the tooling and do everything artistically by hand. It is to fix the bugs, implement the controls, but then continue to have people intelligently find and challenge assumptions on which the system relies.
You simply cannot deal with unknown unknowns by the use of more and better tools. You need someone who understands what is going on and who can make decisions.
That human process feeds tool development and refinement in turn, but only so that work is not repeated. The work itself is not tool development: not everything a security engineer does has value as a repeatable test procedure.
The way to secure systems is not to take the union of everything that every security engineer has ever tested for and every indicator that every incident responder has ever used, and alert on or block all of it. A daisy chain of middleboxes and scanners is not security.
Some attacker goals have well defined characteristics, but attacks in general do not. They depend on the social context of a system. And so, what to do depends on your threat model, and no general procedure for creating a threat model works. You have to know what you intend.
A general mechanical procedure for deciding what is good and what is bad will either deem the intended operation of AWS Lambda, which is remote code execution by design, a vulnerability, or will miss sandbox escapes in the same service. Someone has to figure out what is OK in each context.
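To make that concrete, here is a hypothetical sketch of mine (the thread gives no code) of a handler whose entire product is running customer-submitted code:

```python
# Hypothetical Lambda-style handler: think grading service or plugin runner.
# The event shape and "sandbox" boundary are illustrative only.
def handler(event, context):
    user_code = event["code"]   # code submitted by the customer
    namespace = {}              # real isolation lives outside this process
    exec(user_code, namespace)  # remote code execution: here, it is the feature
    return {"result": namespace.get("result")}
```

A context-free rule that flags every exec() of external input condemns this intended design; a rule loose enough to allow it will wave through real sandbox escapes in the same service.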
A general definition in OWASP style can document breakdowns in specific controls, but it cannot secure the entire web. Vulnerability is not just a state of a specific interface or API call. It is some reachable state in a machine, and deciding reachability in general is Turing-hard.
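That claim is not rhetorical flourish. A sketch of the standard argument, in hypothetical Python of my own: wire an arbitrary computation in front of an obviously bad call, and “is it vulnerable?” becomes “does it halt?”.

```python
import os

def simulate(machine, machine_input):
    # Stand-in for a universal interpreter: this call returns
    # if and only if `machine` halts on `machine_input`.
    machine(machine_input)

def build_program(machine, machine_input):
    # The returned handler contains an obvious command injection,
    # but that line is reachable only if simulate() ever returns.
    def program(request):
        simulate(machine, machine_input)
        os.system(request)  # "vulnerable" exactly when the machine halts
    return program
```

A tool that could decide, for every such program, whether the injection is reachable would also decide the halting problem. Pass/fail verdicts exist only for the well-characterized subset.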
If it weren’t so, then security testing would be repeatable, automated, and pass/fail. But in the world of the systems we design and use daily, security testing is a limited tour of the infinite possibilities in a system. People have to keep thinking about it.