The Almost Hopeless Challenge Of Web Security

Today we are trusting the web with our most personal and important data, from private photos and social graphs to finances and key work documents. Our hesitation to share such information has dropped over the years as our trust in our favorite services grows. Yet all the while, the web is actually growing less secure, as sites are left open to new attacks that can spread easily and leave users totally unaware when they’ve been compromised.

Looking back on the history of the web, classic security protection involved patching servers to ensure the latest versions were running, monitoring advisories from vendors, and maintaining some level of filtering and firewalling to keep basic attacks out. Simple moves on the part of an admin or developer could protect sites from 99% of automated scripts. But a few years ago, a new security can of worms was opened, as exploits that took advantage of simple oversights within web applications began being used to steal large amounts of user data. This new class of vulnerabilities targeted attack vectors within custom-built web applications, using techniques like passing JavaScript calls into web forms, which would then be published back to an unsuspecting user. This new breed of attack came to be known as Cross-Site Scripting (XSS): in short, the ability to manipulate a trusted website into running untrusted scripting code in a victim’s browser.
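To make the mechanism concrete, here is a minimal sketch (the page and function names are hypothetical, not from any real application) of how a reflected XSS hole arises when a form value is echoed back unescaped, and how HTML-escaping the output closes it:

```javascript
// Hypothetical search page: a naive handler echoes the user's query
// straight into the HTML response.
function renderNaive(query) {
  return '<p>You searched for: ' + query + '</p>';
}
// If query is '<script>alert("y0");</script>', the tag lands in the page
// verbatim and the victim's browser executes it in the trusted site's origin.

// The fix: HTML-escape untrusted input before it reaches the page.
// (& must be escaped first so later entities aren't double-escaped.)
function escapeHtml(text) {
  return text
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

function renderSafe(query) {
  return '<p>You searched for: ' + escapeHtml(query) + '</p>';
}
```

With escaping in place, the payload is displayed as inert text rather than parsed as markup.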

Cross-Site Scripting, and its related cousin, Cross-Site Request Forgery (XSRF), have led to attacks and exploits such as MySpace being taken down (via the Samy worm), data being stolen from 18 million users of a Korean auction site, a Gmail weakness being used to blackmail a domain owner, and even an exploit targeted at changing the settings on a user’s local broadband router. All of these exploits were accomplished by convincing the user to click a link, to open an email (where an embedded image containing an exploit payload was displayed), or simply by visiting a site they trusted and had previously visited.

Various statistics claim that up to 80% of security vulnerabilities (pdf link) in the past two years have been the result of XSS and XSRF. There are claims that, at various points, over 70% of websites were vulnerable to one or the other. Anybody who understands how these attacks work, and who knows how to conduct a simple test (i.e. feed something like '<script>alert('y0');</script>' into a web app and see if it pops back out somewhere unfiltered), would tend to agree that a large number of sites were, and still are, vulnerable.
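That simple test can be sketched in a few lines (a hypothetical helper, not a real scanner; serious tools also probe attribute, URL and script contexts with many payload variants): submit a marker payload through a form, then check whether the response echoes it back unescaped.

```javascript
// The marker payload from the paragraph above.
const MARKER = "<script>alert('y0');</script>";

// If the raw marker appears in the response body, the app reflected our
// input without filtering or escaping it: a strong XSS candidate.
function reflectsUnescaped(responseBody) {
  return responseBody.includes(MARKER);
}
```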

Complicating the XSRF and XSS problem is the fact that not only does it take time to inform and educate developers, but new ways of conducting such attacks against the most modern web apps and browsers are still being discovered. While application developers are busy cleaning up their code to protect against simple vectors discovered years ago (e.g. escaping input text with addslashes()), security researchers are discovering new ways of exploiting the trust relationship between a user, a website and the web browser. These new methods are being discovered all the time, and often fall outside previous thinking on what it takes to secure a web app.
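The addslashes() example is telling, because quote-escaping is aimed at SQL string literals, not HTML output, and it does nothing against payloads that contain no quotes at all. A quick illustration (a rough JavaScript stand-in for PHP’s addslashes(), written for this demonstration):

```javascript
// Approximation of PHP's addslashes(): backslash-escapes quotes and
// backslashes, a defense designed for SQL string contexts.
function addslashes(text) {
  return text.replace(/([\\'"])/g, '\\$1');
}

// A tag-based payload that uses no quotes passes through completely
// untouched, still executable once echoed into a page:
const payload = '<img src=x onerror=alert(1)>';
// addslashes(payload) === payload
// Output-context escaping (e.g. PHP's htmlspecialchars(), which encodes
// < and > as &lt; and &gt;) is what actually neutralizes this vector.
```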

For instance, today I read about (via dalmaer of Ajaxian) a newly discovered potential means for XSS and XSRF exploits: forcing a browser to speak HTTP to a non-HTTP service and having the response interpreted, bounced back and executed by the browser (that is my single-sentence attempt at condensing this brilliant description, which should be required reading for every app developer). It seems that every few weeks I stumble on yet another description of how to manipulate the trust relationship to exploit a user.

What is worrying is that these attacks exploit the foundation of the web: a network built with an implicit level of trust assumed between users and servers. Keeping up with security requires a fundamental rethink of how data is transported on the web, and abandoning the assumption that most data is safe data. Also worrying is that, in all likelihood, most successful attacks exploiting these methods go unreported, as they can be used to silently target an individual who would usually have no way of knowing what is occurring under the hood of their browser. The black hats have no incentive to share the new methods they discover, forever locking developers and corporate security researchers (those working on the ‘good’ side) into a race to stay in front.

Having performed bare-bones testing of new web applications I see, as well as monitoring the security announcement lists of web applications I use myself, I can safely say that most web application developers today are at least a year or more behind the latest vulnerability methods being discovered. Complicating this is that browser manufacturers themselves do not completely understand the issues involved, and in some cases are moving backwards (e.g. the new IE8 now allows XmlHttpRequest across ports). Scary? Yes. What to do about it? I have no idea, other than to get educated and attempt to stay on top of it.

Update: A somewhat ironic twist to this story: when I included the code example above (i.e. how to test for XSS), it actually passed through the CMS running this blog and kept triggering when I attempted to preview or publish this post.