Cross-site scripting, illustrated

Wired ThreatLevel Blog: Look Ma, I’m on CIA.gov. Wired’s security blog reports a cross-site scripting vulnerability in the CIA’s web site and gives a convenient demo exploit. The exploit is benign enough, illustrating how JavaScript can be used to load an iframe on the CIA’s search results page containing arbitrary content. But the potential for mischief is significant. Imagine loading a phishing site this way. Or imagine this vulnerability on your bank’s home page.

Too often security vulnerabilities are abstract. This one, thanks to Wired, is pretty real. I’m surprised it’s still up, actually.
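For the curious, the flaw boils down to a results page that echoes whatever you typed into the search box back into the HTML without escaping it. Here's a minimal sketch in Python, with a made-up page that has nothing to do with the CIA's actual code, showing both the hole and the one-line fix:

```python
# Toy illustration of reflected XSS: the page template and the attack payload are invented.
import html

def results_page(query: str, escape: bool = True) -> str:
    # Echo the search term back into the page, optionally escaping it first.
    shown = html.escape(query) if escape else query
    return f"<h1>Results for: {shown}</h1>"

attack = '<iframe src="https://phishing.example.com"></iframe>'
print(results_page(attack, escape=False))  # the iframe lands in the page: reflected XSS
print(results_page(attack))                # escaped, it renders harmlessly as text
```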

Veracode: Cool Vendor

Quick pointers to a few awards Veracode has won recently:

  1. Readers' Choice Award, Information Security Magazine and SearchSecurity.com
  2. Gartner Cool Vendor Award, Application Security and Authentication category

It’s great for Veracode to get this kind of recognition. I’m really proud to work at a company that can make a difference to how companies address application security.

Oops. Almost forgot to mention: Looks like I’ll be at the Gartner IT Security Summit in early June in Washington, DC. I’m looking forward to getting the long view on the industry. And from the speaker list, it looks like I might get a chance to get Bruce Sterling’s signature next to William Gibson’s on my copy of The Difference Engine.

Why does Microsoft push unpatched software via Windows Update?

It is, for a change, a very good question from CNet. If you know that security vulnerabilities exist in your software, and you’ve already patched those vulnerabilities, and you have a well-documented process for slipstreaming patches into existing installs, and you have an automatic update process

… why in the hell would you have that automated update service push the unpatched software rather than fully patched versions?

The short time between install and patch isn’t a good enough reason. Even if Microsoft automatically forced a re-run of Windows Update after each update session, as Mac OS X does, history shows that it doesn’t take long for unpatched, vulnerable software to be exploited. There is relatively little cost to Microsoft to prepare fully patched downloads, and the payback is huge risk avoidance. Fix it, already, guys.

So the War on Liquids is the War on Tang

Normally I write about application security in this space, but occasionally I’m inspired to write about physical security as well. In this case: Remember the 2006 Heathrow incident that started the War on Liquids? The one in which people were supposed to be bringing the ingredients for a liquid bomb on a flight? Well, the Daily Mail says that they were planning to mix hydrogen peroxide with another unnamed compound, which Bruce Schneier and the Guardian name:

Tang.

That’s right. The drink that took the astronauts to the moon was supposed to blow up seven planes.

Heh. Read the thread on Schneier’s blog for information about the feasibility of this threat, and then ask yourself why we still have to carry liquids in 4-ounce portions and taste our babies’ breast milk.

Security theatre does not equal security.

PWN 2 OWN: platform battle or bad app showdown?

The recent coverage of the PWN 2 OWN contest, in which hackers broke into a MacBook Air and a Vista laptop, has generated a little blog heat—but in a misleading way. The headline of this InfoWorld post is an example: MacBook Air is Insecure. With all due respect to Mr. Hultquist, that’s like saying that water is wet. At this point, the way to look at it is not whether a platform is secure or insecure, but rather how much effort it takes to exploit the platform.

As long as software has flaws, it opens computers up to attack. The fact that the MacBook was hacked through a Safari vulnerability and the Vista machine through a Flash flaw, and that neither could be hacked directly from the network, says something good about both manufacturers’ networking code. More to the point, it says that this contest is not about whether the Mac is more secure than Vista or Ubuntu, but about the risks introduced by buggy applications.

So for software vendors it becomes much more critical to find and fix those flaws, and for users, as Hultquist rightly points out, the right approach is to be aware that these vulnerabilities may exist and to behave accordingly.

Ripples from SOURCE: Boston: how much security is optimal?

I wasn’t able to attend this week’s SOURCE: Boston conference, which my company is cosponsoring, but reading about some of the talks and looking at some of the papers that are coming out of it has been fascinating. A few points:

If you think protecting digital systems is hard, what about analog systems like the telephone?

The number of potential points of compromise is staggering… Once the X-rays of telephone equipment and close-ups of modified circuit boards came out (notice that there’s supposed to be a diode there, but someone replaced it with a capacitor…) we were headed into real spy vs. spy territory. Tracking down covert channels requires identifying, mapping, and physically and electronically testing every conductor out of an area. Even the conduit and grounds can be used to carry signal, and they have to be checked.

We don’t normally think about telephone security as an issue (although given the shenanigans that the FBI has been up to, with retroactive blanket wiretapping warrants, almost 200,000 National Security Letters demanding records without court approval in a four-year period from 2003 to 2006, and collection of data that they are specifically disallowed from collecting, maybe we should). Why? Because there’s an implicit cost-benefit calculation at play: given the size of the attack surface, that is, the vulnerable parts of the infrastructure, the cost of absolute security is staggering.

But very few people bother to follow that thought to its logical conclusion, which is that the optimum number of security violations is greater than zero. I’m not recommending hacks, mind, but if you use a cost-benefit approach to analyze security spending, you are constantly trading the cost of protection against the cost of attacks. If you spend so much on security that there are no breaches at all, you have spent more than the attacks themselves would have cost you.

Dan Geer makes this argument neatly, in graphical form, in the opening of his article “The Evolution of Security” in ACM Queue. The whole article is worth digesting and mulling over. He points out that as our networked world gets more complex, we start to replicate design patterns found in nature (central nervous systems, primitive immune systems, hive behavior), and perhaps we ought to look to those natural models to understand how to create more effective security responses.
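To make the trade-off concrete, here is a toy model with invented numbers (the curve is my own assumption, not Geer’s): total cost is what you spend on protection plus the losses you expect from the breaches that still get through, and the minimum of that curve does not sit at zero breaches.

```python
# Toy cost-benefit model: all figures are invented for illustration.
def total_cost(spend: float, loss_per_breach: float = 50_000,
               baseline_breaches: float = 20) -> float:
    # Assume diminishing returns: each extra dollar of protection prevents less than the last.
    expected_breaches = baseline_breaches / (1 + spend / 100_000)
    return spend + expected_breaches * loss_per_breach

best = min(range(0, 3_000_000, 10_000), key=total_cost)
print(best)              # the optimal spend is finite (roughly $220k in this toy model)...
print(total_cost(best))  # ...and it still leaves several expected breaches unprevented
```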

Getting back to SOURCE: Boston, Geer’s keynote there amplified some of his points from the Evolution paper and addressed other uncomfortable thoughts. Such as:

  1. The model for security used to be “I’m OK, you’re OK, the network is compromised,” which leads to the widespread use of encryption. But SSL and other network encryption technology has been famously likened (by Gene Spafford) to hiring an armored car to deliver rolls of pennies from someone living in a cardboard box to someone living on a park bench. Meaning: in a world of malware and botnets, maybe the model ought to be: “I’m OK, I think, but you’re not.”
  2. Epidemiologically, as malware and botnets become more prevalent, they will become less virulent. One of the L0pht team has said (as cited by Geer) that computers might be better off in botnets than in the wild, because the botmaster will want to keep them from being infected by other malware. (This is the gang membership theory of the inner city writ large.) Geer likens this to the evolution of beneficial parasites and symbiotes.
  3. So if botnets are here to stay and we need to assume everyone is compromised, why shouldn’t bots become a part of doing business? Why shouldn’t ETrade 0wnz0r my computer when I make a trade, if only to ensure that no one else can listen in? Suddenly the Sony BMG rootkit begins to make more sense, in a sick sort of way.

Geer closed his talk by bringing back the question of how much security we want. If the cost of absolute security is absolute surveillance, of having one’s computer routinely 0wnz0r3d by one’s chosen e-commerce sites, then perhaps we need to be prepared to tolerate a little insecurity. Maybe the digital equivalent of telephone equipment boxes “secured” with a single hex bolt makes sense after all.

Going through disk encryption like a knife through butter

CNET: Disk encryption may not be secure enough, new research finds. It’s one thing to read about theoretical ways to get access to secure data; it’s another to watch it happen in a slide show.

For those who don’t want to read the article, the upshot is: a laptop thief who knows enough can recover the secret key used to encrypt a hard drive—whether the drive is protected with Apple’s FileVault, Microsoft’s BitLocker, or any other solution that keeps the decryption key in memory. The particularly brilliant bit, the part that adds insult to injury, is when the research team (which includes usual suspects Ed Felten and Alex Halderman) demonstrates recovering the key after a reboot of the laptop. Yes, that’s right: even after rebooting the laptop, enough of the prior state of the machine remains in memory that the key can be recovered. And by chilling the RAM with liquid nitrogen—or canned air—the window in which the key remains recoverable can be stretched dramatically.

So yes, the feds have additional techniques that can be used to recover data from your laptop, if they come across it. So do identity thieves.

So the trick now would seem to be to find a way to encrypt data that is less subject to key recovery. The problem is that any scheme that decrypts the disk on the fly has to keep the key in memory. I like the article’s suggestion of PGP-encrypted USB sticks, if only I didn’t lose thumb drives so easily. There are also some interesting suggestions about limiting remote booting and about unmounting encrypted volumes, but they don’t get around the core issue: if the key is in memory, you can sniff it. So what to do?
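One partial answer, sketched below with made-up details, is plain key hygiene: keep key material in a mutable buffer and scrub it the moment you are done with it. It is no help while an encrypted volume is mounted, since the key has to be live in RAM then, which is exactly what the researchers exploit, but it does shrink the window afterward.

```python
# Best-effort key scrubbing sketch; the key here is a stand-in, not a real FileVault/BitLocker key.
import os

key = bytearray(os.urandom(32))      # mutable buffer, so it can actually be overwritten
try:
    pass                             # ... decrypt data with the key here ...
finally:
    for i in range(len(key)):        # zero the buffer; copies made by the runtime may still survive
        key[i] = 0

print(key == bytearray(32))          # True: the buffer itself has been wiped
```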

The application is the perimeter

An interesting trio of articles hit yesterday. One is a summary of industry response to the announcement that President Bush intends to fund a massive network security initiative. The money quote is from Veracode’s co-founder and CTO, Chris Wysopal, who compares the initiative to “posting police on every corner in a dangerous neighborhood, but failing to fix shoddy locks on the houses.” The second is an article by Chris about improving development processes to reduce the number of security-related flaws that go into your software in the first place. And the third is an interview with Chris, co-founder and chief scientist Christien Rioux, and senior director of security research Chris Eng (the three Chrises) about the founding of Veracode.

The interesting bit about President Bush’s proposal is that it proposes to put so much money at the network layer while neglecting to recognize a key fact: the application has become the perimeter. All the network security in the world won’t stop an attack against a flawed application that is sitting on an open Port 80—as it has to be to be accessible over the Web—and that, if compromised, can hand an attacker access to sensitive customer and transaction data as well as potentially compromise other machines inside the company’s infrastructure.

The world has changed. In the early 90s, the only security that anyone thought about was viruses and the attack vector was floppies. The advent of the Internet brought about the rise of the network firewall and other network layer technologies. When attackers proved to be able to route around that, the machine-level firewall appeared, signalling a shift of the perimeter to individual computers and servers. But no firewall can guard against an intrusion that occurs in the form of a fraudulent interaction with an application, using the application’s own interfaces and protocols. To guard against that, the application has to be hardened.
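To see why, consider two requests to a hypothetical banking application (the hostnames and paths below are invented). A port-and-protocol firewall sees both as perfectly ordinary HTTP on port 80; only the application itself is in a position to notice that the second one is hostile.

```python
# Illustration only: a network-layer filter cannot tell these two requests apart.
benign  = "GET /statement?account=12345 HTTP/1.1\r\nHost: bank.example.com\r\n\r\n"
hostile = "GET /statement?account=12345%20OR%201=1 HTTP/1.1\r\nHost: bank.example.com\r\n\r\n"

def firewall_allows(request: str) -> bool:
    # A port/protocol filter sees only "well-formed HTTP headed for port 80."
    return request.startswith("GET ") and " HTTP/1.1" in request

print(firewall_allows(benign), firewall_allows(hostile))  # True True: both sail through
```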

In short, the administration’s focus on network security proposes to spend lots of money on a problem that hasn’t been at the real perimeter for about eight years now. Worse, it proposes to do it by placing “sensors” into government and private networks, which sounds like the introduction of additional points of vulnerability to me.

Chris’s article proposes an alternative, one that has actually had a fair amount of success at Microsoft: focus on writing secure code so that the flaws never get introduced in the first place. Because once you recognize that the application is a critical part of the security perimeter, the question becomes: how do I catch those potentially exploitable flaws before they reach my production environment?

How secure is Mac OS X? And how long will it stay that way?

I have been struck by an occasional series of Daring Fireball posts in which the question is raised: what explains the relative lack of for-profit Mac malware? The conclusion seems to be that there are three factors: a system-dynamics answer, in which the overwhelming Windows market share keeps the attention of malware authors; an aspect of user culture; and some element of technical superiority in the platform.

I would argue that there is yet another factor: developer education. After all, no matter how secure the OS, if a widely distributed application on it has flaws, the machine is vulnerable. But Objective-C is not exactly a trivial language to pick up, and writing in Carbon doesn’t buy you any development ease points either. So Mac software vendors have to be fairly well educated, and presumably in the process they learn how to avoid introducing flaws like potential buffer overflows. Of course, Apple makes the majority of the software that many casual Mac users use every day, too: Mail, Safari, iTunes, iPhoto… and the company has a reasonable track record of avoiding exploitable issues there.

The big exception, though, is QuickTime: cross-platform, widely used on websites, widely installed on PCs thanks to iTunes, and vulnerable. There are in the range of 50-60 CERT vulnerability notes indicating potentially exploitable flaws in QuickTime. So clearly Apple doesn’t have a virtuous monopoly on programming skill.

Compare this, though, to the history for iTunes, also cross-platform, also widely installed: just two vulnerabilities that affect iTunes directly. The rest are vulnerabilities arising from QuickTime. The suggestion here is that Apple programmers defer much more to the lower-level frameworks provided by Apple than their Windows counterparts do—consider the wide impact of something like an exploitable buffer overflow in JPEG handling on Windows, which affected the OS, AND Microsoft Office, AND half a dozen other Microsoft products, because the vulnerable code was distributed with each application rather than centralized at the OS level.

So, OK, a new theory: because Mac OS X apps rely heavily on centralized system frameworks, more attention can be devoted to keeping those central resources secure. That in turn keeps the OS secure.

Which is why the news that Apple introduced a way to hide program behavior in its port of the DTrace tool is so alarming. For the uninitiated, DTrace is an open-source tracing and debugging utility that allows inspection of the internals of running programs—essential for software development, and also for security testing. Apple’s implementation introduces a way for software authors to flag their process so that it is ignored by DTrace (the soon-to-be-infamous PT_DENY_ATTACH request).
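For the curious, the request itself is a one-liner. The sketch below uses Python’s ctypes purely for illustration (a real application would make the call from C or Objective-C), and it assumes the header value of 31 for PT_DENY_ATTACH; once a process issues the call on Mac OS X, debuggers, and reportedly Apple’s DTrace, are told to leave it alone.

```python
# Illustrative only: assumes PT_DENY_ATTACH == 31, as defined in Apple's <sys/ptrace.h>.
import ctypes
import ctypes.util

PT_DENY_ATTACH = 31  # assumption; check the header on your system

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
libc.ptrace.argtypes = [ctypes.c_int, ctypes.c_int, ctypes.c_void_p, ctypes.c_int]
libc.ptrace.restype = ctypes.c_int

# On Mac OS X this marks the calling process as un-attachable; elsewhere the call simply fails.
result = libc.ptrace(PT_DENY_ATTACH, 0, None, 0)
print("ptrace(PT_DENY_ATTACH) returned", result)
```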

From a security perspective, this won’t have a lot of effect on researchers’ ability to observe Apple applications. Following Chris Wysopal’s admonition to use tools that are free from interaction with the platform being analyzed means that security researchers will use non-crippled tools to analyze what’s going on. But it sets a very bad precedent. By creating a way to hide what software is doing on Mac OS X, Apple creates something that smells a lot like a rootkit. And we know what sort of trouble that gets people into.

Rested and ready

A few days off from the blog were really necessary this week for me to recharge. My silence here belies the work that I was doing elsewhere, though I’m not quite ready to take that work public yet.

It’s been pretty quiet all around, though, and I’m definitely ready to start a new challenge tomorrow in my new position as director of product management at Veracode. What’s Veracode? Well, it’s a fairly unusual company—it provides an application security service that identifies vulnerabilities in software binaries. Because the service does not require access to application source code, the company can do some fairly interesting things—one-time scans on behalf of a company that is purchasing software, for instance, or single-shot audits for software vendors. I’m pretty excited about the opportunity to work with the team and to address the challenges of taking their services to the next level.

For a little more flavor of the type of world that Veracode operates in, check out the company blog, Zero in a Bit. Very interesting posts about security, application development, and the disclosure process.

Security: mass SQL injection hack

I’m starting a couple of new departments on the blog today. The first, the Security department, will be posts about computer security concepts and events as I attempt to educate myself about the field. I’m kicking off the department with this story about a mass SQL injection attack that recently hit more than 70,000 sites (via Slashdot). That’s a lot of compromised sites, but the really astonishing thing is the vector that was used to do it.

SQL injection—putting database command language into user input or request parameters so that it gets executed by the remote system—isn’t a new attack vector. It’s been around since at least 2004, when it was used to deface the Dremel website. It’s also a fairly well-understood attack—if you can explain a security vulnerability in a comic strip, you have something that developers should be able to figure out how to avoid.

So why are these vulnerabilities so widespread? One reason may be the ease of web development and its separation from more structured programming disciplines. Sanitizing inputs is second nature to a well-educated developer; self-taught scripters (PHP, ASP, whatever) may never have been exposed to the importance of the principle.
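For anyone who hasn’t seen the flaw up close, here is a minimal sketch using SQLite as a stand-in for whatever databases the compromised sites actually ran. The vulnerable version splices user input straight into the SQL statement; the fix is to let the driver bind it as a parameter.

```python
# Toy example: the table, the data, and the "user input" are all invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"

# Vulnerable: the input becomes part of the SQL statement itself.
rows = conn.execute("SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
print(len(rows))  # 1: the OR '1'='1' clause matched every row in the table

# Safe: the driver binds the input as a value, never as SQL.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(len(rows))  # 0: nobody is literally named "' OR '1'='1"
```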

Virii and Ecosystem Health

I rambled a little yesterday. It’s like that sometimes before 8 am. One of the points I forgot to make in my comparison of the IM wars and the single sign-on wars is that sometimes keeping a diverse ecosystem is critical to the ecosystem’s health. In English? Two or more strong players in a market are better than one. Common sense, really, except in the computer field, which has for the last ten years had a strong whiff of winner-take-all about it.

Economists talk about network externalities driving the explosive growth of Windows and of Microsoft Office. A “network externality” is an effect that makes a particular good or service valuable the more other people are using it–at least that’s the common definition.

But I think most people forget that occasionally network externalities can be negative as well as positive. And that’s how we get things like SirCam and other Outlook virii. The Outlook virii bother me because of what they say about email clients. What network externalities can be gained on a large scale from everyone being on just one email client? Within an organization’s firewall, sure, you get value by adding extra components like calendaring (and don’t get me wrong–for corporate email, this is a killer app). But I’m aware of very few companies who extend access to those apps to people outside the firewall. So why does everyone run the same email client?

The real externality in Internet mail is the mail format itself, good old RFC 822. It’s an open, shared format, much as SOAP is an open protocol: any client that can speak it can participate in the discussion. If you believe in the logic of network externalities, in the email realm there’s no reason that Outlook or any other mail client should have the overwhelming market share that it does.
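That openness is easy to demonstrate. The snippet below, a trivial example using Python’s standard email module, parses a hand-written RFC 822 message; any client that can do the same can join the conversation, which is exactly why no single client ought to be indispensable.

```python
# Minimal illustration: RFC 822 mail is just structured text that any conforming client can parse.
from email import message_from_string

raw = (
    "From: alice@example.com\r\n"
    "To: bob@example.com\r\n"
    "Subject: Interop, not lock-in\r\n"
    "\r\n"
    "The format is the shared standard; the client is interchangeable.\r\n"
)

msg = message_from_string(raw)
print(msg["Subject"], "/", msg.get_payload().strip())
```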

I used to feel fairly secure about email virii. I was the only Mac user in the MBA program at MIT’s Sloan School of Management, and we used Eudora as the school’s email application. There were always a few people who persisted in using Outlook in spite of the fact that there wasn’t a calendar server anywhere in the school. They spread their fair share of virii, but I was never infected. My diversity protected me. Unfortunately we’re changing over to Outlook in the fall, largely because of the lack of a unified calendar feature in the current setup. I’m grimly looking forward to discussions with my IT-savvy but Outlook-bigot classmates after the fourth or fifth email virus comes around.