Eli Grey


We are announcing Zerodrop, an open-source stealth URL toolkit optimized for bypassing censorship filters and dropping malware. Zerodrop is written in Go and features a powerful web UI that supports geofencing, datacenter IP filtering, blacklist training, manual blacklisting/whitelisting, and advanced payload configuration!

Zerodrop can help you elude the detection of the automatic URL scanners used on popular social media platforms. You can easily blacklist traffic from the datacenters and public Tor exit nodes commonly used by URL scanners. For scanners not included in our default blacklists, you can activate blacklist training mode to automatically log the IP addresses of subsequent requests to a blacklist.

When used for anti-forensic malware distribution, Zerodrop is most effective paired with a server-side compromise of a popular trusted domain. This further complicates incident analysis and breach detection.

Live demo

A live demo is available at dangerous.link. Please keep your usage legal. Infrastructural self-destruct has been disabled for the demo. To prevent automated abuse, users may be required to complete CAPTCHA challenges in order to create new entries.

Zerodrop geofencing & blacklist training


Google Inbox spoofing vulnerability

On May 4th, 2017 I discovered and privately reported a recipient spoofing vulnerability in Google Inbox. I noticed that the composition box always hid the email addresses of named recipients without providing a way to inspect the actual email address, and figured out how to abuse this with mailto: links containing named recipients.

The link mailto:"support@paypal.com"<scam@phisher.example> shows up as “support@paypal.com” in the Google Inbox composition window.

In order to exploit this vulnerability, the target user only needs to click on a malicious mailto: link. It can also be triggered by clicking on a direct link to Inbox’s mailto: handler page, as shown in this example exploit link.

This vulnerability was still unfixed in all Google Inbox apps as of May 4th, 2018, a year after private disclosure.

Update: This vulnerability has been fixed in the Google Inbox webapp as of May 18, 2018. All Google Inbox mobile apps remain vulnerable.

The recipient “support@paypal.com” being spoofed in the Google Inbox composition window. The actual recipient is “scam@phisher.example”.


Opera UXSS vulnerability regression

Opera users were vulnerable to a publicly-disclosed UXSS exploit for most of 2010-2012.

I privately disclosed a UXSS vulnerability (complete SOP bypass) to Opera Software in April 2010, and recently discovered that Opera suffered a regression of this issue and continued to be vulnerable for over two years after disclosure. The vulnerability was that data: URIs could attain same-origin privileges to non-opening origins across multiple redirects.

I asked for a status update 50 days after disclosing the vulnerability, as another Opera beta release was about to be published. Opera responded by saying that they were pushing back the fix.

I publicly disclosed the vulnerability with a PoC exploit on Twitter on June 15, 2010. This was slightly irresponsible of me (at least I included a kill switch), but please keep in mind that I was 16 at the time. The next week, Opera published new mainline releases (10.54 for Windows/Mac and 10.11 for Linux) and said that those releases should fix the vulnerability. I tested my PoC and it seemed to be fixed.

Shortly after, this vulnerability regressed back into Opera without me noticing. I suspect that this was due to the rush to fix their mainline branch, and lack of coordination between their security and release teams. The regression was caught two years later by M_script on the RDot Forums, and documented in English by Detectify Labs.

Opera Software’s management should not have allowed this major flaw to regress for so long.

Rainpaper 2.0

Version 2.0 of Rainpaper is now available.

What’s new:

  • Wallpaper scrolling
  • Muzei extension support
  • Cycle through multiple images from your gallery
  • Performance and stability improvements

In order to change the refresh interval to cycle through your own images, long press on the “My images” image source and tap “Settings”. There will be another update (2.1) with support for looping GIF/video wallpapers and additional memory and performance improvements.

I will also be launching a pair of Android and Windows apps later this year named Soundmesh. Soundmesh enables low-latency, high-quality wireless synchronization of audio outputs and inputs across multiple devices.

You can use Soundmesh to listen to your PC audio output on multiple Android phones and PCs, forward your Android microphone to your PC, and listen to your Android phone’s audio output on your PC.

Bedford/St. Martin’s data breach

Some time between Aug 27, 2012 and May 3, 2014, the Macmillan Publishers subsidiary Bedford/St. Martin’s suffered a data breach that leaked the unique email address that I provided to them. I have previously informed them of the breach and it appears that they do not care to investigate.

I don’t appreciate large companies getting away with not disclosing or investigating data breaches, so I’m disclosing it for them.


I just released an Android live wallpaper called Rainpaper on Google Play. Check it out!

Rainpaper features simulated rain, popular images from reddit, and synchronization with your local weather.

Also stay tuned for a new open source project that I’ve been working on called subscribe.js. Soon you will be able to easily retrofit push-like notifications onto any website that has a syndication feed. subscribe.js will be powered by Service Workers and run locally in your browser.

CPU core estimation with JavaScript

(Update) Standardization

I have standardized navigator.cores as navigator.hardwareConcurrency, and it is now supported natively in Chrome, Safari, Firefox, and Opera. Our polyfill has renamed the APIs accordingly. Since the initial blog post, Core Estimator has been updated to estimate much faster and now has instant estimation in Chrome through PNaCl.
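For completeness, here is a minimal sketch of how a webapp might read the standardized property today, with a fallback for environments that lack it. The fallback value of 4 is my own assumption, not part of any specification.

```javascript
// Read the standardized core-count API, falling back to an assumed
// default in environments that do not expose it.
function workerCount() {
    if (typeof navigator !== "undefined" && navigator.hardwareConcurrency) {
        return navigator.hardwareConcurrency;
    }
    return 4; // assumed sensible default, not spec-defined
}

console.log(workerCount());
```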


So you just built some cool scalable multithreaded feature into your webapp with web workers. Maybe it’s machine learning-based webcam object recognition, or a compression algorithm like LZMA2 that runs faster the more cores you have. Now, all you have to do is simply set the number of worker threads to use the user’s CPU as efficiently as possible…

You might be thinking “Easy, there’s probably a navigator.cores API that will tell me how many cores the user’s CPU has.” That was our thought while porting xz to JavaScript (which will be released in the future as xz.js), and we were amazed there was no such API or any equivalent whatsoever in any browser! With all the new features of HTML5 which give more control over native resources, there must be a way to find out how many cores a user possesses.

I immediately envisioned a timing attack that could estimate a user’s CPU core count in order to pick the optimal number of workers to spawn in parallel. It would scale from one to thousands of cores. With the help of Devin Samarin, Jon-Carlos Rivera, and Devyn Cairns, we created the open source library Core Estimator. It implements a navigator.cores value that is only computed when first accessed. Hopefully, this will be added to the HTML5 specification in the future.

Live demo

Try out Core Estimator with the live demo on our website.

Screenshot of the demo being run on an Intel Core i7-3930K

How the timing attack works and scales

The estimator works by performing a statistical test on running different numbers of simultaneous web workers. It measures the time it takes to run a single worker and compares this to the time it takes to run different numbers of workers simultaneously. As soon as this measurement starts to increase excessively, it has found the maximum number of web workers which can be run in parallel without degrading performance.

In the early stages of testing whether this would work, we did a few experiments on various desktops to visualize the data being produced. The resulting graphs clearly showed that the approach was feasible on the average machine. Pictured are the results of running an early version of Core Estimator on Google Chrome 26 on an Intel Core i5-3570K 3.4GHz quad-core processor, with 1,000 time samples taken for each core test. We used 1,000 samples to clearly show the spread of the data, but collecting them took over 15 minutes; for Core Estimator itself, 5 samples seem to be sufficient.

The astute observer will note that it doesn’t test each number of simultaneous workers by simply counting up. Instead, Core Estimator performs a binary search. This way the running time is logarithmic in the number of cores—O(log n) instead of O(n). At most, 2 * floor(log2(n)) + 1 tests will be done to find the number of cores.
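As an illustration, here is a minimal sketch of that search strategy. The measure(n) callback stands in for timing n simultaneous workers (the real library does this with web workers and statistical sampling); the 1.5× degradation threshold and the simulated workload below are my own assumptions for the example, not Core Estimator’s actual tuning.

```javascript
// Sketch: find the largest worker count whose run time stays close to
// the single-worker baseline. Phase 1 doubles the worker count until
// performance degrades; phase 2 binary-searches the bracketed range.
function estimateCores(measure, baseline) {
    var lo = 1, hi = 2;
    while (measure(hi) <= baseline * 1.5) { // still runs fully in parallel
        lo = hi;
        hi *= 2;
    }
    while (hi - lo > 1) { // binary search within (lo, hi)
        var mid = (lo + hi) >> 1;
        if (measure(mid) <= baseline * 1.5) {
            lo = mid;
        } else {
            hi = mid;
        }
    }
    return lo;
}

// Simulated 8-core machine: every batch of 8 workers adds one "pass".
var simulated = function (n) { return 100 * Math.ceil(n / 8); };
console.log(estimateCores(simulated, simulated(1))); // 8
```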


Previously, you had to either hard-code a thread count or ask the user how many cores they have, which can be difficult for less tech-savvy users. This can even be a problem with tech-savvy users; few people know how many cores their phone has. Core Estimator helps you simplify your APIs so that thread count parameters can be optional. The xz.js API will be as simple as xz.compress(Blob data, callback(Blob compressed), optional int preset=6, optional int threads=navigator.cores), making it easy to implement a “save .xz” button for your webapp (in conjunction with FileSaver.js):

save_button.addEventListener("click", function() {
    xz.compress(serializeDB(), function(compressed) {
        saveAs(compressed, "db.xz");
    });
}, false);

Supported browsers and platforms

Early Core Estimator has been tested to support all current release versions of IE, Firefox, Chrome, and Safari on ARM and x86 (as of May 2013). Core Estimator is somewhat less accurate on systems with Intel Hyper-Threading and Turbo Boost, as the time to complete a workload is less predictable there. In such cases it tends to estimate more cores than are physically available, which still yields a reasonable worker count.

Saving generated files on the client-side

Have you ever wanted to add a Save as… button to a webapp? Whether you’re making an advanced WebGL-powered CAD webapp and want to save 3D object files or you just want to save plain text files in a simple Markdown text editor, saving files in the browser has always been a tricky business.

Usually when you want to save a file generated with JavaScript, you have to send the data to your server and then return the data right back with a Content-disposition: attachment header. This is less than ideal for webapps that need to work offline. The W3C File API includes a FileSaver interface, which makes saving generated data as easy as saveAs(data, filename), though unfortunately it will eventually be removed from the spec.

I have written a JavaScript library called FileSaver.js, which implements FileSaver in all modern browsers. Now that it’s possible to generate any type of file you want right in the browser, document editors can have an instant save button that doesn’t rely on an online connection. When paired with the standard HTML5 canvas.toBlob() method, FileSaver.js lets you save canvases instantly and give them filenames, which is very useful for HTML5 image editing webapps. For browsers that don’t yet support canvas.toBlob(), Devin Samarin and I wrote canvas-toBlob.js. Saving a canvas is as simple as running the following code:

canvas.toBlob(function(blob) {
    saveAs(blob, filename);
});

I have created a demo of FileSaver.js in action that demonstrates saving a canvas doodle, plain text, and rich text. Please note that saving with custom filenames is only supported in browsers that either natively support FileSaver, or in browsers like Google Chrome 14 dev and Google Chrome Canary that support <a>.download or web filesystems via LocalFileSystem.

How to construct files for saving

First off, you want to instantiate a Blob. The Blob API isn’t supported in all current browsers, so I made Blob.js which implements it. The following example illustrates how to save an XHTML document with saveAs().

saveAs(
      new Blob(
          [(new XMLSerializer).serializeToString(document)]
        , {type: "application/xhtml+xml;charset=" + document.characterSet}
    )
    , "document.xhtml"
);

Not saving textual data? You can save multiple binary Blobs and ArrayBuffers to a Blob as well! The following is an example of generating some binary data and saving it.

var buffer = new ArrayBuffer(8) // allocates 8 bytes
  , data = new DataView(buffer);
// You can write uint8/16/32s and float32/64s to dataviews
data.setUint8 (0, 0x01);
data.setUint16(1, 0x2345);
data.setUint32(3, 0x6789ABCD);
data.setUint8 (7, 0xEF);
saveAs(new Blob([buffer], {type: "example/binary"}), "data.dat");
// The contents of data.dat are <01 23 45 67 89 AB CD EF>

If you’re generating large files, you can implement an abort button that aborts the FileSaver.

var filesaver = saveAs(blob, "video.webm");
abort_button.addEventListener("click", function() {
    filesaver.abort();
}, false);

Title image files in Opera

I recently discovered a method to title image files in Opera. I was experimenting with CSS generated content on the <title> element in various browsers, and discovered that, as long as the <head> and <title> elements are not display: none, generated content applied before and after the <title> element is added to the page title itself in Opera. It was obvious to me that I should combine this with HTTP Link: headers containing stylesheets, making it possible to modify the title of usually non-titleable media such as images, plain text, audio, and video.
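For instance, the server response for an image could attach a stylesheet via a Link header along these lines (the stylesheet path here is hypothetical):

```http
Content-Type: image/png
Link: </title-style.css>; rel="stylesheet"
```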

In this demo, the following CSS rules are applied in Opera.

head, title {
	display: block;
	width: 0;
	height: 0;
	visibility: hidden;
}

title::before {
	content: "Just an image — ";
}

Voice Search Google Chrome extension

Voice Search screenshot

Voice Search is an open source Google Chrome extension I made that allows you to search by speaking. For example, just click on the microphone and say “kittens” to search for kittens. If you specifically want pictures of kittens, say “google images kittens”. Want to learn more about World War II? Say “wikipedia world war two”. The source code for Voice Search is on GitHub.

Voice Search comes pre-loaded with many popular search engines by default, and you can add your own user-defined search engines. It also integrates a speech input button for all websites using HTML5 search boxes, all of the default search engines’ websites, Facebook, Twitter, reddit, and GitHub.

In later versions, I plan to introduce the ability to import/export settings, create aliases (e.g. mapping “map” to Google Maps and “calculate” to Wolfram|Alpha), OpenSearch description detection, and scripted terms that do more than open URLs.