Last week I indulged in a little live tweeting of a webinar my firm, Veracode, did with Chenxi Wang of Forrester, following up on our recent announcement of an independent survey in which 62% of the respondents reported being breached through at least one application vulnerability in 2008.
I’ve reposted the substance of my tweets below, followed by my $0.02 on the survey:
- (1) #Veracode & Forrester app risk mgmt survey: in 2008 62% of respondents were breached thru app vulns but don’t know their app risk.
- (2) As Kaspersky breach shows, 3rd party code is a big blind spot for most orgs.
- (3) open source, outsourced, and off-the-shelf code are used frequently, but 59% don’t do anything to secure OSS.
- (4) only 32% require security at all stages of the SDLC.
- (5) top training method in 37% of respondents is to learn on the job from experienced devs… who can’t be hired.
- (6) False sense of security pervasive. 94% think they know the security of their app portfolio, but 40% don’t know COTS risk.
- (7) ease of use plus security plus time savings is the driving factor for third-party assessments.
- (8) if you outsource code, consider outsourcing security assessments too.
Bottom line: the survey results suggest that application vulnerabilities pose real risk for a lot of companies, but most companies don’t have secure practices that adequately cover their development or training, to say nothing of the risk from third-party code.
As I mentioned yesterday, there were a few unfinished items left after the FiOS installation. I got two of them taken care of this morning, but I was a little disturbed at what I had to do to make things work.
After the installation was complete on Sunday, I connected to the administrative web page of the Actiontec router that Verizon had provided (and which is required with the Verizon TV package). I reconfigured the router to take over the network name (SSID) that I had been using on my AirPort Extreme, changed the security to WPA2, and set the passphrase to the one I had been using previously. Our laptops and my iPhone picked up the change, but my AirPort Express units (which provide wireless printer support and AirTunes) didn’t. They’re first-generation AirPort Expresses and do 802.11g and 802.11b only.
After pulling my hair out for a while this morning, I found a thread on the Apple support message boards suggesting that the original AirPort Express is incompatible with the Actiontec version of WPA2. I changed the Verizon router to use regular WPA and told the AirPort Express to use WPA/WPA2 for authentication. After rebooting, I finally got a good connection (green light) with the Express. My second Express didn’t need any reconfiguration–I simply unplugged it and plugged it back in, and it worked.
So there’s that. What’s left is getting my hard drive, with all my music, back on the network. I may have to run an Ethernet drop into the living room over Christmas. Or try one of the tricks for supplanting the Actiontec for wireless.
(It’s more than a little annoying, btw, that I had to use regular WPA instead of WPA2. WPA2 is a much more secure protocol and WPA has been cracked.)
No link, because I’m posting this from my iPhone. But it looks like WordPress 2.6.3, the latest version, has a cross-site request forgery (CSRF) vulnerability. The way CSRF works: if you have your WP site open and are logged in, an attacker can use another web page that’s open at the same time to perform actions on your blog, like deleting users. No word yet that I’ve seen about a fix. I’ll post more about CSRF in a while.
Update: Here’s the official published vulnerability (CVE-2008-5113) from the National Vulnerability Database. And here’s a good description of how CSRF works from OWASP. The scary bit is that if the application isn’t patched, there’s not a lot you can do to mitigate the attack. I haven’t seen anything official from WordPress yet on this vulnerability, but there’s an interesting discussion trail on the bug. Bottom line for app developers: don’t trust user input, and yes the HTTP request needs to be considered user input.
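For developers who do control the application, the usual mitigation is the synchronizer token pattern: embed a secret per-session token in every state-changing form, and reject any request that doesn’t echo it back. Here’s a minimal sketch in Python; the function names and the in-memory token store are illustrative, not WordPress APIs.

```python
# Sketch of the synchronizer token pattern, the standard CSRF defense.
# The dict-based token store and these function names are illustrative only.
import hmac
import secrets

def generate_token(store: dict, session_id: str) -> str:
    """Mint a per-session secret to embed in every state-changing form."""
    token = secrets.token_hex(16)
    store[session_id] = token
    return token

def validate_request(store: dict, session_id: str, submitted: str) -> bool:
    """A forged cross-site request can't read the token out of the page,
    so it can't supply the right value here."""
    expected = store.get(session_id)
    return expected is not None and hmac.compare_digest(expected, submitted)
```

Modern frameworks bake this in; the point is simply that the server must demand proof that the request originated from a page it actually served.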
A quick heads-up on the publication of WordPress 2.6.3, which I missed yesterday thanks to my site’s slowness. This is a straightforward patch release with an update for one PHP class, Snoopy, which had a now-patched command injection vulnerability. Mercifully, the patch files are available directly from the blog post, making this the easiest WordPress upgrade yet.
I wrote previously about “technical debt,” the concept that deferred necessary technical work (adopting an updated version of a component, refactoring code to reduce cruft, etc.) accumulates across releases until it absorbs a project team’s entire capacity to develop code. You “pay interest” on technical debt because a necessary technical change gets much harder and consumes many more resources the further downstream you get from the point where the change becomes necessary.
It occurred to me today that there’s a specific flavor of technical debt, security debt, that is both more insidious and much easier to see in operation, because we have so many prominent examples of it. It might not have cost the developers of Windows much more to make the OS secure at design time, but some of those decisions were deferred, to the point where whole features were introduced to address security deficiencies in prior features, and a six-month-long security push postponed Vista’s launch while the team took care of outstanding security issues in the already-shipped version of the OS.
What’s interesting about security debt to me is that it balloons over time. My once-favorite mix sharing site, Art of the Mix, is a good example. The guy who developed it didn’t really understand SQL injection or XSS, or at least didn’t code defensively against them, and it’s become a hive of malware as a result–and is now flagged as a “reported attack site” and blocked by Firefox 3. So, to carry the metaphor to its logical conclusion, the site’s security debt drove it into a kind of “bankruptcy” when it proved susceptible to drive-by SQL injection attacks.
So how do you avoid incurring security debt? Learning good development practices is a good start; keeping up on the prevalent attacks–the current risk space–is another. But there’s one key thing to remember about security debt: in many cases, fixing the underlying flaw that permits exploitation is far cheaper than getting hacked, or even than putting band-aids like web application firewalls in place.
After the difficulty I had with the WordPress 2.6 upgrade, I was both hopeful that 2.6.1 would fix some of the bugs and a little hesitant about upgrading. As it turned out, both expectations were off the mark. WordPress 2.6.1 was released yesterday, and while there’s no explicit mention of the admin cookie bug that I hit on the 2.6 upgrade, my own upgrade to 2.6.1 was pretty easy.
The full list of fixed bugs is on the WordPress Trac, so you may want to see if there are any fixes you need. As another commenter pointed out, few of the fixes are explicitly labeled as security fixes, but that doesn’t mean there aren’t any–the fix for a plugin without headers not appearing on the plugins page raises concerns about hidden malware that might be worth upgrading to avoid. Just remember to clear your cookies before you try to log back into the admin console after the upgrade.
I haven’t posted a new mix for a while, and there are a few reasons for that. So I’m jumpstarting by posting a largely unedited theme mix, based on Estaminet’s Sacrilicious mix of a while back. It’s called “Blasphemous Rumors,” and it hits songs with Old and New Testament themes as well as good old fashioned breaking of the third (or second, depending) commandment.
This will also be the last mix I post on Art of the Mix unless a few things change. The site has had some problems with SQL injection vulnerabilities, and the developer chose to fix the vulnerabilities by filtering input–which is fine, but it means that you can’t create a mix with the word “drop” in it, even in a song title (e.g. “Dropkick Me Jesus”). Tip to the developer: the best way to avoid SQL injection is by whitelisting input and parameterizing your queries, not by blacklisting.
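To illustrate the difference, here’s a minimal sketch using Python’s sqlite3 module (the table and titles are made up): with a parameterized query, the word “drop” is just data, and even a deliberate injection payload gets stored as a harmless string instead of being executed.

```python
# Parameterized queries treat user input strictly as data, never as SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mixes (title TEXT)")

def add_mix(title: str) -> None:
    # The driver binds the value; quotes, semicolons, and keywords
    # like DROP are stored literally rather than executed.
    conn.execute("INSERT INTO mixes (title) VALUES (?)", (title,))

add_mix("Dropkick Me Jesus")        # no blacklist needed
add_mix("'; DROP TABLE mixes; --")  # stored as an odd title, nothing more

titles = [row[0] for row in conn.execute("SELECT title FROM mixes")]
```

No filtering of song titles required, and the table survives the injection attempt intact.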
So does anyone have a recommendation for a replacement for Art of the Mix? It should ideally support uploading playlists from iTunes.
The challenge of developing secure code is two-pronged: first, understanding the threat landscape; second, coding defensively and following best practices to avoid creating security vulnerabilities. The WCF Security Guide, now available for download from Microsoft, is a pretty impressive document (600+ pages) that combines aspects of both threat landscape definition and specific coding practices, leveraging Microsoft’s Windows Communication Foundation (part of the .NET Framework in version 3 and later).
WCF is an impressive framework that allows the creation of applications that do everything from turnkey SOAP web services to custom communications channels, with tons of flexible configuration options. The downside of the flexibility of the framework is that a lot of the choices it offers have serious security considerations, and the tradeoffs aren’t necessarily clear at development time. For instance, WCF allows the definition of the security mechanism used to protect a communication stream–transport level, message level, or none; encryption, message signing, or both–and using some of the options can make deploying services more complex (must run the service as a user who belongs to a domain, for instance). The guide walks you through a lot of these decisions, as well as basic secure coding practices ranging from input and output sanitization to developing to survive a DoS attack.
Various parties in the Mac community have weighed in and suggested the best way to address the issue highlighted in last week’s advisory regarding an escalation-of-privilege vulnerability in ARDAgent. While some have suggested that enabling the remote access service may actually correct the privilege escalation, there’s enough evidence that it doesn’t really work. And clearing the setuid bit that allows ARDAgent to act as root appears to break it, at least for some commentators. That leaves only two options:
- If you don’t need to have anyone remotely manage your machine, just delete or archive ARDAgent.app.
- Restrict ARDAgent from being able to perform “do shell script” (as described in Martin Kou’s blog)
It would be nice if Apple just closed the hole, wouldn’t it?
While you’re at it, don’t forget to update Ruby (it’s part of the default Mac OS X installation), if you’re using it, to close a whole bunch of holes–from numeric errors to buffer overflows–in the core Ruby runtime.
And can we stop pretending that the Mac OS X platform is magically secure?
As I’ve been getting myself up to speed in learning about application security, a few resources have been extremely helpful.
A good general background on application security issues, unsurprisingly, is contained in The Art of Software Security Testing, co-authored by Veracode cofounder Chris Wysopal. The book goes beyond the basic description of classes of application security vulnerabilities into specific recommendations for testing strategies and ways to improve the software development lifecycle to avoid introducing vulnerabilities.
There have been a few pivotal written works about how certain classes of software vulnerability work. The canonical one is the Cult of the Dead Cow’s “The Tao of Windows Buffer Overflow,” written by veteran hacker Dildog. Written in a clear and easy to read (if profane) style, this work should scare the living bejeezus out of you.
There are some more business friendly summaries of other vulnerability classes available. One source for this information is Veracode’s own web site, which features clear explanations of SQL injection and cross-site scripting (XSS).
The CTO of Webroot is talking about applying Software as a Service to email and web security. It’s a good pitch, delivered to a small audience late in the afternoon.
- Because business-relevant content creation is shifting from “trusted providers” to semi-anonymous collaborations like wikis, blogs, and social networks, the focus is shifting away from blocking or allowing entire sites and toward figuring out how to deal with sites like Facebook as potential malware vectors.
- Spam messages per business user in 2008: 42,000, based on their internal statistics.
- Because of the first point above, outgoing URL filtering no longer works (at least by itself). You have to combine anti-spyware, anti-virus, anti-phishing, and access control while meeting high performance requirements.
I’m at the Gartner IT Security Summit today and tomorrow (alas, I missed Bruce Sterling on the panel yesterday), and have been splitting my time between the show floor and a few of the sessions. I attended a few sessions on application security testing and on ITIL v. 3 this afternoon that sparked a few responses based on my combined security and ITIL experience.
Basically, the challenge to IT organizations that do any level of application management — change management of internally managed apps, purchasing COTS apps — is to figure out how to integrate application security into their software development and purchasing lifecycles. The two concrete recommendations that jumped out for me were:
- Don’t treat purchased software differently, from a security perspective, than you treat internally developed software. Hold both to the same standards and demand the same security certification from both. While this has traditionally been harder for COTS software, where source code is usually unavailable, binary analysis techniques such as those provided by my firm enable some level of consistency across these two categories.
- Bake security into your service management lifecycle. From design to transition to continuous improvement, security should be architected in and designed into the process. One way to consider how security can dovetail with ITIL is considering the role of security audits, whether binary or otherwise, as part of change and release management criteria. While secure development practices and source code tools should certainly be part of the SDLC process, release criteria should include security testing as well as functional testing requirements. Again, automated scanning can greatly assist with this process.
Netcraft: Hacker redirects Barack Obama’s site to hillaryclinton.com. Okay, folks, here’s the thing: never trust any place where a user can enter text into your website and have it displayed back to other users. Never trust any text that comes from a form field on your site. Because if you do, smart and devious people like Mox here can use your trust to do embarrassing things to your visitors.
On the (very) slightly mitigating side, the attack was not against the main Obama website but his community blog platform, and the vulnerability that was exploited has already been closed. But this type of vulnerability, Cross Site Scripting, is insidious unless you begin your web application with the assumption that all user input needs to be sanitized. And even then, it’s not enough to check your code; you need to check all the third party code that makes up your site.
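Sanitizing in this context mostly means escaping on output. Here’s a minimal sketch using Python’s standard library (the render_comment helper and the payload are hypothetical, just to show the mechanism):

```python
# Escaping user input on output neutralizes injected markup (XSS).
import html

def render_comment(user_text: str) -> str:
    # Angle brackets and quotes become HTML entities, so an injected
    # <script> tag renders as inert text instead of executing.
    return "<p>" + html.escape(user_text, quote=True) + "</p>"

safe = render_comment('<script>document.location="http://evil.example"</script>')
```

Escape at the point of output, in every template, including the ones that ship inside your third-party code.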
It would be immodest of me to mention that my company’s service can do just such a check, for a modest fee and without requiring you to build security expertise in-house.
In what is shaping up to be a fine security trifecta (see yesterday’s post about an as-yet unpatched cross-site scripting vulnerability at CIA.gov), yesterday’s Daily WTF posting concerned a naked SQL Injection vulnerability on the Oklahoma Department of Corrections website. The vulnerability allowed anyone who cared to download lots of details from Oklahoma’s sex offender registry that shouldn’t have been accessible, including social security numbers (identity theft, anyone?), and also allowed access to other tables in the database, including information on corrections staff members. The page is now, mercifully, offline, though not before a commenter claimed that he was able to insert someone’s name into the database using a different SQL statement in the URL.
Little Bobby Tables at xkcd illustrates this type of vulnerability as well. Moral of the story: don’t trust user input!