I was featured in a UK article about the conference that I spoke at in Bristol a few weeks ago.
I’ve written a series of blog posts on the Veracode blog about application security. Check them out, if that sort of thing floats your boat, or if you just want to see what’s up in my professional life.
Note that I don’t generally write my own headlines, so I don’t claim responsibility for clickbaitiness or comma splices. 🙂
InfoWorld (Chris Wysopal): Election system hacks: we’re focused on the wrong things. Chris (who cofounded my company, Veracode) says that we should stop worrying about attribution:
Most of the headlines about these stories were quick to blame the Russians by name, but few mentioned the “SQL injection” vulnerability. And that’s a problem. Training the spotlight on the “foreign actors” is misguided and, frankly, unproductive. There is a lot of talk about the IP addresses related to the hacks pointing to certain foreign entities. But there is no solid evidence to make this link—attribution is hard and an IP address is not enough to go on.
The story here should be that there was a simple to find and fix vulnerability in a state government election website. Rather than figuring out who’s accountable for the breach, we should be worrying about who is accountable for putting public data at risk. Ultimately, it doesn’t matter who hacked the system because that doesn’t make the vulnerabilities any harder to exploit or the system any safer. The headlines should question why taxpayer money went into building a vulnerable system that shouldn’t have been approved for release in the first place.
I couldn’t agree more. In an otherwise mediocre webinar I delivered in June of 2015 on the OPM breach, I said the following:
After a breach there are a lot of questions the public, boards and other stakeholders ask. How did this happen? Could it have been prevented? What went wrong? And possibly the most focused on – who did this?
It is no surprise that there is such a strong focus on “who”. The media has sensationalized stories about Anonymous and their motives as well as the motives of cyber gangs both domestic and foreign. So, instead of asking the important questions of how can this be prevented, we focus on who the perpetrators may be and why they are stealing data.
It’s not so much about attribution (and retribution)…
…it’s about accepting that attacks can come at any time, from anywhere, and your responsibility is to be prepared to protect against them. If your whole game plan is about retribution rather than protecting records, you might as well just let everyone download the records for free.
So maybe we should stop worrying about which government is responsible for potential election hacking, and start hardening our systems against it. Now. After all, there’s no doubt about it: it’s the myth of fingerprints, but I’ve seen them all, and man, they’re all the same.
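The “simple to find and fix” vulnerability Chris names, SQL injection, really is that simple to find and fix. A minimal sketch (the table and data are invented for illustration):

```python
import sqlite3

# Toy stand-in for a state election database (schema invented).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE voters (name TEXT, county TEXT)")
conn.execute("INSERT INTO voters VALUES ('Alice', 'Suffolk')")

def find_voters_unsafe(county):
    # Vulnerable: attacker-controlled input is concatenated into the SQL,
    # so county = "x' OR '1'='1" rewrites the query and returns every row.
    return conn.execute(
        "SELECT name FROM voters WHERE county = '" + county + "'"
    ).fetchall()

def find_voters_safe(county):
    # Fixed: the driver binds the value; input is never parsed as SQL.
    return conn.execute(
        "SELECT name FROM voters WHERE county = ?", (county,)
    ).fetchall()

payload = "x' OR '1'='1"
assert find_voters_unsafe(payload) == [("Alice",)]  # leaks all rows
assert find_voters_safe(payload) == []              # returns nothing
```

The entire fix is letting the database driver bind the value instead of concatenating it into the query string, which is why “why was this approved for release?” is the better headline.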
It seems I’m falling into a pattern where at least one day a week, I end up posting two days’ worth of material. This is one of those days. At least I have a good excuse for not posting: it was Veracode’s Hackathon IX this week, and that means craziness.
Monday’s activity? Live-action Pac-Man. What you can’t see from the photos is that there was actually a player: Pac-Man wore an iPhone on his chest, connected to a WebEx session with the camera on and headphones in his ears, and someone on the other end of the WebEx gave him instructions on how to move through the maze.
The ghosts all had simple rules of how to move just like in a real video game. So the whole effect was very much like feeding quarters to Pac-Man machines as a 12-year-old. But it gave me a new appreciation for the life of the ghost—all left turns and no free will. It got, frankly, boring after a while… until random turns brought me in contact with Pac-Man.
It all reminded me of this:
Today is the eighth anniversary of my first day at Veracode. It’s something I don’t talk as much about here, primarily because it keeps me so busy that I can’t write here very much. But it’s interesting to step back and understand how much things have changed—and how much they haven’t.
Here’s one of the first things I wrote about Veracode, a few days after I started. What hasn’t changed is the fallacy of trying to stop exploitation of application layer vulnerabilities by going after the network, or as Chris Wysopal said, “doubling the number of neighborhood cops without repairing the broken locks that are on everyone’s front doors.”
What has changed? Well, we were a tiny, scrappy little company when we started. But we just picked up senior sales and marketing leadership with pedigrees from RSA and Sophos, and we’re a lot bigger than we were eight years ago. It’s a fun day to be at Veracode, realizing just how rapidly we’ve grown.
Springer has published a bunch of its books online for free. (Hundreds more were free until this morning, but the plug has been pulled.) I went looking to see what I could find. There are some interesting finds there, including a festschrift for Ted Nelson, the inventor of hypertext. And, relevant to my work interests, a text called The Infosec Handbook.
What’s that, you say? A free textbook on information security? Sign me up! Well, not so fast, pilgrim.
Admittedly, I come to the topic of information security with a very narrow perspective—a pretty tight focus on application security. But within that topic I think I’ve earned the right to cast a jaundiced eye on new offerings, as I’m going to celebrate my eighth year at Veracode next month. And I’m a little disappointed in this aspect of the book.
Why? Simple answer: it’s not practical. The authors (Umesh Hodeghatta Rao and Umesha Nayak) spend an entire chapter discussing various classes of threats, trying to provide a theoretical framework for application security considerations, and discussing in the most general terms the importance of a secure development lifecycle. But the SDLC discussion includes exactly one mention of testing, to wit, in the writeup of Figure 6-2: “Have strong testing.” And an accompanying note: “Similarly, testing should not only focus on what is expected to be done by the application but also what it should not do.”
It’s pretty widely understood in the industry that “focus(ing) on what is expected” and “[focusing] on what [the application] should not do” are two completely different skill sets and that even telling a functional tester what to look for does not ensure that they can find security vulnerabilities. The problem has been well known for so long that we’re nine years into the lifespan of the definitive work on the subject, Wysopal et al.’s The Art of Software Security Testing. But there’s no acknowledgment of any of the challenges raised by that book, including most notably the need to deploy automated security testing to ensure that vulnerabilities aren’t lurking in the software.
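The gap between those two skill sets is easy to illustrate. In this toy sketch (the function and markup are invented), the functional test checks what the application is expected to do, while the security test checks what it should not do:

```python
import html

def make_greeting(name: str) -> str:
    # Hypothetical app function: reflects user input into an HTML page.
    # html.escape is the fix; remove it and only the security check fails.
    return "<p>Hello, " + html.escape(name) + "</p>"

# Functional test: what the application is expected to do.
assert make_greeting("Ada") == "<p>Hello, Ada</p>"

# Security test: what it should *not* do -- reflect executable markup
# back to the browser (cross-site scripting).
assert "<script>" not in make_greeting("<script>alert(1)</script>")
```

A functional tester who was never told to try the second input would ship the vulnerable version with a green test suite, which is exactly why “have strong testing” is not actionable advice.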
As for the “eight characteristics” that supposedly ensure that an application is secure, take a look at the list for yourself:
- Completeness of the Inputs
- Correctness of the Inputs
- Completeness of Processing
- Correctness of Processing
- Completeness of the Updates
- Correctness of the Updates
- Maintenance of the Integrity of the Data in Storage
- Maintenance of the Integrity of the Data in Transmission
Really? Nothing about availability. Nothing about authorization (determining whether a user should be allowed to access information or execute functionality). Nothing about guarding against unintended leakage of application metadata, such as errors, identifying information, or implementation details that an attacker could use. And nothing about ensuring that a developer didn’t include malicious or unintended functionality.
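Two of those omissions, authorization and leakage through error messages, take only a few lines to demonstrate. A minimal sketch (the record store and names are invented):

```python
# Hypothetical in-memory record store; IDs and fields invented.
RECORDS = {"r1": {"owner": "alice", "ssn": "xxx-xx-1234"}}

def get_record(record_id: str, user: str) -> dict:
    record = RECORDS.get(record_id)
    # Authorization: presenting a valid record ID is not enough;
    # the requester must also be entitled to that record.
    if record is None or record["owner"] != user:
        # Anti-leakage: one generic error covers both "no such record"
        # and "not yours", so an attacker can't use the error text to
        # enumerate which record IDs exist.
        raise PermissionError("access denied")
    return record

assert get_record("r1", "alice")["ssn"] == "xxx-xx-1234"
for rid, user in [("r1", "mallory"), ("r2", "alice")]:
    try:
        get_record(rid, user)
        raise AssertionError("should have been denied")
    except PermissionError as err:
        assert str(err) == "access denied"
```

Neither check fits anywhere in the book’s eight completeness/correctness characteristics, which is the problem.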
The chapter also includes no mention of technologies that can be deployed against application attacks, though this may be a blessing in disguise given the poor track record of web application firewalls and the nascent state of runtime application self-protection technology.
All in all, if this is what passes for “state of the art” in a security curriculum from the second-biggest textbook publisher in the world, I’m sort of relieved that information security isn’t part of the required curriculum in a lot of CS programs. It might be better to learn about application security in particular from a source like OWASP, SANS, or your favorite blog than to read a text as shallow as this.
On the Veracode blog (where I now post from time to time), we had a retrospective on the Microsoft Trustworthy Computing memo, which had its ten year anniversary on the 15th. The retrospective spanned two posts and I’m quoted in the second:
On January 15, 2002, I was in business school and had just accepted a job offer from Microsoft. At the time it was a very different company: hip-deep in the fallout from the antitrust suit and the consent decree; having just launched Windows XP; figuring out where it was going on the web (remember Passport?). And the deep breath that the Trustworthy Computing memo signaled was the biggest sign that things were different at Microsoft.
And yet not. It’s important to remember that a big part of the context of TWC was the launch of .NET and the services around it (remember Passport?). Microsoft was positioning Passport (fka Hailstorm) as the solution for the Privacy component of their Availability, Security, Privacy triad, so TWC was at least partly a positioning memo for that new technology. And it’s pretty clear that they hadn’t thought through all the implications of the stance they were taking: witness BillG’s declaration that “Visual Studio .NET is the first multi-language tool that is optimized for the creation of secure code”. While .NET may have eliminated or mitigated the security issues related to memory management that Microsoft was drowning in at the time, it didn’t do anything fundamentally different with respect to web vulnerabilities like cross-site scripting or SQL injection.
But there was one thing about the TWC memo that was different and new and that did signal a significant shift at Microsoft: Gates’ assertion that “when we face a choice between adding features and resolving security issues, we need to choose security.” As an emerging product manager, I found that an important principle to absorb: security needs to be considered a requirement alongside user-facing features and prioritized accordingly. It’s a lesson that the rest of the industry is still learning.
To which I’ll add: it’s interesting to see what I blogged about this at the time and what I didn’t. As an independent developer I was very suspicious of Hailstorm (later Passport.NET) but hadn’t thought much about its security implications.
My software development lead and I are doing a webinar next week on how you do secure development within the Agile software development methodology (press release). To make the discussion more interesting, we aren’t talking in theoretical terms; we’ll be talking about what my company, Veracode, actually does during its secure development lifecycle.
No surprise: there’s a lot more to secure development in any methodology than simply “not writing bad code.” Some of the topics we’ll be including are:
- Secure architecture — and how to secure your architecture if it isn’t already
- Writing secure requirements and security requirements, and how the two differ
- Threat modeling for fun and profit
- Verification through QA automation
- Static binary testing, or how, when, and why Veracode eats its own dogfood
- Checking up–internal and independent pen testing
- Education–the role of certification and verification
- Oops–the threat landscape just changed. Now what?
- The not-so-agile process of integrating third-party code
It’ll be a brisk but fun stroll through how the world’s first SaaS-based application security firm does business. If you’re a developer or just work with one, it’ll be worth a listen.
You’ll be able to catch me in my professional capacity twice next week. I’ll be giving a talk on Tuesday in Austin, TX to the Austin chapter of ISACA (the Information Systems Audit and Control Association) on “Best Practices for Application Risk Management.” The argument: the current frontier in securing sensitive data and systems isn’t the network, it’s the applications that handle the data. But if it’s hard to write secure code even with conventional testing tools, it’s harder still to get a handle on the risk in code you didn’t write. And, of course, it’s the rare application these days that is 100% code you wrote. I’ll talk about ways that large and small enterprises can get their arms around the application security challenge.
I’ll also be joining one of our customers to talk in more depth about a key part of Veracode’s application risk management capability, our developer elearning program and platform, in a webinar. If you are interested in learning how to improve application security before the application even gets written, this is a good one to check out.
If you’ve ever wondered what it would be like to work at an amazing company in the security space, wonder no more. Veracode is growing, and we’ve got quite a few openings in sales, engineering, QA, research, and even (particularly) in product management.
If you’ve read my posts about security and product management, if you’ve read about us in the press, and if you think you’ve got what it takes, drop us a line. Of course you’re welcome to contact me and ask questions about the company too.