Eli Grey

Big Tech’s role in enabling link fraud

Link fraud is increasingly undermining trust in major online platforms, including Google, Bing, and X (Twitter).

These platforms allow advertisers to spoof links with unverified ‘vanity URLs’, laundering trust in their systems, while simultaneously deflecting blame onto advertisers when these mechanisms are exploited for fraudulent purposes. 

I believe that this status quo must be abolished. Commercial entities that maintain advertising systems that systemically enable link fraud must contend with their net-negative impact on society.

What are vanity URLs and what is link fraud?

URL spoofing is the act of presenting an internet address that appears to lead to one destination but actually leads to another, unexpected location. The adtech industry commonly refers to this capability as “vanity URLs” or “display URLs” when it is offered as a first-class feature on advertising platforms.

Link fraud is the use of URL spoofing to achieve financial gain or other illicit objectives. It is a staple practice in spam emails and scam websites, where links may appear legitimate but lead to harmful content.

Examples of preventable link fraud

Link fraud is commonly used on popular websites to hijack ecommerce and software downloads. People who don’t block ads can encounter convincing link fraud whenever they search for web browsers[1][2], videoconferencing apps[3], password managers[4], task management apps[5], authenticator apps[6], open source image editing software[7], and cryptocurrency wallet managers[8]. People are more likely to enter sensitive account information when they see what appear to be valid links to trusted sites on ‘trustworthy’ search engines.

An industry-wide lack of effective technical protections allows scam operations to scale up to the point where it becomes practical to farm low-value targets en masse. There are instances where scammers use link fraud just to collect media streaming provider credentials[9], likely for resale.

This compilation thread on X includes all of the examples referenced above, with screenshots of fraudulent ads on Google Search, Microsoft Bing, and X (Twitter).

I’ve also personally witnessed link fraud twice on X but didn’t screenshot it.

Culprits

Google, Microsoft, and X all appear to have similarly ineffective enforcement methods and policies for vanity URL control on their advertising platforms.

I suspect that Facebook and Reddit have similar limitations, as they also allow link spoofing in ads, although I haven’t sufficiently confirmed a lack of effective domain ownership verification.

Implicit regulatory capture

Adtech companies play the victim by claiming that fraudsters and scammers are ‘abusing’ their unverified vanity URL systems. These companies should not be able to get away with creating systems that enable link fraud and then pretending that their hands are tied when asked to combat the issue. They have created systems for trust-laundered URL spoofing, and then disclaimed ethical or legal responsibility for the fundamental technical failures of those systems.

It is not possible to automatically prevent link fraud in systems that allow for unverified URL spoofing to occur. If adtech providers do not perform domain ownership verification on vanity URLs, advertisers are technically free to commit fraud as they please.

How did we get here?

The adtech industry may excuse these practices as an unavoidable consequence of the complexity of online advertising. However, this overlooks the responsibility that these companies bear for prioritizing profit over user safety and the integrity of their platforms.

Corporate greed has gotten so out of control that companies such as Google, Microsoft, and Brave now deeply integrate advertising technologies at the browser level, with effects ranging from battery drain to personal interest tracking, and even taking a cut of the value of your attention.

National security risks

The risk of malvertising and fraud through adtech platforms has become so concerning and prevalent that the FBI now recommends all citizens install ad blockers. Interestingly, some of the FBI’s advice for checking ad authenticity is inadequate in practice. The FBI suggests “Before clicking on an advertisement, check the URL to make sure the site is authentic. A malicious domain name may be similar to the intended URL but with typos or a misplaced letter.” — this is useless advice in the face of unverified vanity URLs. Instead of asking private citizens to block an entire ‘legal’ industry, the FBI should be investigating adtech platforms for systemically enabling link fraud.

Intelligence agencies such as the NSA and CIA also use adblockers in order to keep their personnel safe from malware threats. I anticipate that the US federal government may start requiring adblockers on all federal employee devices at some point in the future.

What can be done? Verification & enforcement

Companies are generally required by law to make truthful statements to consumers where technically possible. Offering unverified vanity URLs as a first-class feature flies in the face of these requirements.

Adtech providers should validate ownership of the domain names used within vanity URLs, or alternatively vanity URLs should be banned entirely. Validating domain ownership can easily be done through automated or manual processes where domain name owners place unique keys in their domain name’s DNS records.
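
As a rough sketch, automated verification could look something like the following hypothetical Node.js example. The record name and token scheme here are my own illustration, not any platform’s actual implementation:

const { resolveTxt } = require("node:dns/promises");

// Hypothetical verification scheme: the advertiser is asked to publish
//   _ad-verification.example.com  TXT  "ad-verify=<token>"
// before any vanity URL on that domain is accepted.
async function verifyVanityDomain(domain, expectedToken) {
  try {
    const records = await resolveTxt(`_ad-verification.${domain}`);
    return records
      .map((chunks) => chunks.join(""))
      .some((record) => record === `ad-verify=${expectedToken}`);
  } catch {
    return false; // lookup failure or missing record: treat as unverified
  }
}

// Usage: refuse to display the vanity URL until this resolves to true.
verifyVanityDomain("example.com", "1f7d3c2a").then(console.log);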

A common, yet fundamentally flawed ‘verification’ mechanism that adtech platforms such as Google Ads employ is sampled URL resolution, which involves visiting the advertised website at certain points in time from one or more known machines. This technique can easily be bypassed with dynamic redirection software that hides fraud and malware from URL scanning servers.
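
To make the flaw concrete, here is a hypothetical sketch of the kind of dynamic redirection that defeats sampled URL resolution; the IP prefixes and URLs are placeholders:

const http = require("node:http");

// Placeholder prefixes standing in for known scanner/datacenter IP ranges.
const SCANNER_PREFIXES = ["192.0.2.", "203.0.113."];

http.createServer((req, res) => {
  const ip = req.socket.remoteAddress ?? "";
  const isScanner = SCANNER_PREFIXES.some((prefix) => ip.startsWith(prefix));
  res.writeHead(302, {
    // URL scanners only ever see the benign destination.
    Location: isScanner
      ? "https://benign-destination.example/"
      : "https://fraudulent-destination.example/",
  });
  res.end();
}).listen(8080);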

Petition your elected government officials to let them know that big tech is willfully ignoring its role in the rise of effective link fraud, spurred by its support of unverified vanity URLs. The United States FTC should investigate companies that knowingly enable link fraud through unverified vanity URL systems that are fundamentally impossible to audit.

On a personal level, you can install an adblocker such as uBlock Origin to block advertising, which has a nice added side effect of increasing web browsing privacy and performance.

Favioli

🤯 Favioli is a productivity extension that makes it easier to recognize tabs within Chrome.

Favioli was originally inspired by two things. The first spark was Eli Grey’s personal site and its use of Emoji Favicon Toolkit to make randomized emoji favicons. The favicon of his site shows a different emoji on each load, which then persists within a session. It’s pretty creative and fun! This was the creative inspiration for Favioli.

There is an Emoji Favicon Toolkit usage example here on eligrey.com.

The practical inspiration for Favioli came from my day job. We have a lot of internal tools and sites, and they tend to either not have favicons, or have the standard Sony logo. For me this was a bit of a pain, because I love to pin my tabs. I couldn’t tell these sites apart.

I could use a Chrome extension that lets me set custom favicons, but then I’d have to go through and specify each one. If I were to use emojis, I wouldn’t have to deal with finding art for each individual site, and I could build something that automatically makes all of these pages recognizable at a glance.
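
Not Favioli’s actual implementation, but a rough sketch of the general idea as a content script: hash the hostname to pick a stable emoji and render it onto a canvas favicon.

// Sketch only: derive a stable emoji from the page's hostname and set it
// as the favicon via a canvas-generated data: URL.
const EMOJI = ["🐱", "🦊", "🐙", "🌵", "🚀", "🎲", "🍉", "🔧"];

function hashString(str) {
  let hash = 0;
  for (const char of str) {
    hash = (hash * 31 + char.codePointAt(0)) >>> 0;
  }
  return hash;
}

function setEmojiFavicon(emoji) {
  const canvas = document.createElement("canvas");
  canvas.width = canvas.height = 64;
  const ctx = canvas.getContext("2d");
  ctx.font = "56px serif";
  ctx.textAlign = "center";
  ctx.textBaseline = "middle";
  ctx.fillText(emoji, 32, 36);

  const link = document.createElement("link");
  link.rel = "icon";
  link.href = canvas.toDataURL("image/png");
  document.head.append(link);
}

setEmojiFavicon(EMOJI[hashString(location.hostname) % EMOJI.length]);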

This project has been a fun exploration of JavaScript strings, Chrome extensions, and browser/OS string support.

As we look through everything, feel free to follow along by looking at Favioli’s source code! All the Favioli code that is my own is licensed with the Unlicense, so feel free to go crazy with it.

(more…)

Zerodrop

We are announcing Zerodrop, an open-source stealth URL toolkit optimized for bypassing censorship filters and dropping malware. Zerodrop is written in Go and features a powerful web UI that supports geofencing, datacenter IP filtering, blocklist training, manual blocklisting/allowlisting, and advanced payload configuration!

Zerodrop can help you elude detection by the automatic URL scanners used on popular social media platforms. You can easily blocklist traffic from the datacenters and public Tor exit nodes commonly used by URL scanners. For scanners not included in our default blocklists, you can activate blocklist training mode to automatically log the IP addresses of subsequent requests to a blocklist.
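
Conceptually, blocklist training boils down to something like this (an illustrative sketch, not Zerodrop’s actual Go implementation):

// While an entry is in training mode, every requester is assumed to be a
// URL scanner: its IP is recorded and permanently served benign content.
const blocklist = new Set();
let trainingMode = true;

function handleRequest(ip) {
  if (trainingMode) {
    blocklist.add(ip);
    return "serve-benign-content";
  }
  return blocklist.has(ip) ? "serve-benign-content" : "serve-payload";
}

// Share the entry's URL with the target platform while training is on, let
// its scanners visit, then set trainingMode = false to arm the payload.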

When used for anti-forensic malware distribution, Zerodrop is most effective paired with a server-side compromise of a popular trusted domain. This further complicates incident analysis and breach detection.

Live demo

A live demo is available at dangerous.link. Please keep your usage legal. Infrastructural self-destruct has been disabled for the demo. To prevent automated abuse, users may be required to complete CAPTCHA challenges in order to create new entries.

Update: We have decided to restrict access to the demo to prevent abuse.

Zerodrop geofencing & blocklist training

(more…)

Google Inbox spoofing vulnerability

On May 4th, 2017 I discovered and privately reported a recipient spoofing vulnerability in Google Inbox. I noticed that the composition box always hid the email addresses of named recipients without providing a way to inspect the actual email address, and figured out how to abuse this with mailto: links containing named recipients.

The link mailto:"support@paypal.com"<scam@phisher.example> shows up as “support@paypal.com” in the Google Inbox composition window, visually identical to any email actually sent to PayPal.

In order to exploit this vulnerability, the target user only needs to click on a malicious mailto: link. It can also be triggered by clicking on a direct link to Inbox’s mailto: handler page, as shown in this example exploit link.
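
For reference, this is roughly how such a link could be embedded in a page (illustrative only, using the placeholder addresses from above):

// Illustrative only: the quoted "display name" is what Google Inbox showed,
// while the angle-bracketed address is the real recipient.
const link = document.createElement("a");
link.href = 'mailto:"support@paypal.com"<scam@phisher.example>';
link.textContent = "Email PayPal support";
document.body.append(link);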

This vulnerability was still unfixed in all Google Inbox apps as of May 4th, 2018, a year after private disclosure.

Update: This vulnerability has been fixed in the Google Inbox webapp as of May 18, 2018. The Android app still remains vulnerable.

The recipient “support@paypal.com” being spoofed in the Google Inbox composition window. The actual recipient is “scam@phisher.example”.

(more…)

Opera UXSS vulnerability regression

Opera users were vulnerable to a publicly disclosed UXSS exploit for most of 2010-2012.

I privately disclosed a UXSS vulnerability (complete SOP bypass) to Opera Software in April 2010, and recently discovered that Opera suffered a regression of this issue and continued to be vulnerable for over two years after disclosure. The vulnerability was that data: URIs could attain same-origin privileges to non-opening origins across multiple redirects.

I asked for a status update 50 days after disclosing the vulnerability, as another Opera beta release was about to be published. Opera responded by saying that they were pushing back the fix.

I publicly disclosed the vulnerability with a PoC exploit on Twitter on June 15, 2010. This was slightly irresponsible of me (at least I included a kill switch), but please keep in mind that I was 16 at the time. The next week, Opera published new mainline releases (10.54 for Windows/Mac and 10.11 for Linux) and said that those releases should fix the vulnerability. I tested my PoC and it seemed to be fixed.

Shortly after, this vulnerability regressed back into Opera without me noticing. I suspect that this was due to the rush to fix their mainline branch, and lack of coordination between their security and release teams. The regression was caught two years later by M_script on the RDot Forums, and documented in English by Detectify Labs.

Opera Software’s management should not have allowed this major flaw to regress for so long.

Security advisories

Bedford/St. Martin’s data breach

Some time between Aug 27, 2012 and May 3, 2014, the Macmillan Publishers subsidiary Bedford/St. Martin’s suffered a data breach that leaked the unique email address that I provided to them. I have previously informed them of the breach and it appears that they do not care to investigate.

I don’t appreciate large companies getting away with not disclosing or investigating data breaches, so I’m disclosing it for them.

CPU core estimation with JavaScript

(Update) Standardization

I have standardized navigator.cores as navigator.hardwareConcurrency, and it is now supported natively in Chrome, Safari, Firefox, and Opera. Our polyfill has renamed the APIs accordingly. Since the initial blog post, Core Estimator has been updated to estimate much faster and now has instant estimation in Chrome through PNaCl.
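
Using the standardized API is as simple as the following, with a conservative fallback for environments where neither the native property nor the polyfill is available ("worker.js" is a placeholder):

// navigator.hardwareConcurrency is the standardized form of navigator.cores.
const threads = navigator.hardwareConcurrency || 2;
const workers = [];
for (let i = 0; i < threads; i++) {
  workers.push(new Worker("worker.js")); // "worker.js" is a placeholder
}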

navigator.cores

So you just built some cool scalable multithreaded feature into your webapp with web workers. Maybe it’s machine learning-based webcam object recognition, or a compression algorithm like LZMA2 that runs faster the more cores you have. Now, all you have to do is simply set the number of worker threads to use the user’s CPU as efficiently as possible…

You might be thinking “Easy, there’s probably a navigator.cores API that will tell me how many cores the user’s CPU has.” That was our thought while porting xz to JavaScript (which will be released in the future as xz.js), and we were amazed there was no such API or any equivalent whatsoever in any browser! With all the new features of HTML5 which give more control over native resources, there must be a way to find out how many cores a user possesses.

I immediately envisioned a timing attack that could estimate a user’s CPU core count to provide the optimal number of workers to spawn in parallel. It would scale from one to thousands of cores. With the help of Devin Samarin, Jon-Carlos Rivera, and Devyn Cairns, I created the open source library Core Estimator. It implements a navigator.cores value that is only computed once it is accessed. Hopefully in the future, this will be added to the HTML5 specification.

Live demo

Try out Core Estimator with the live demo on our website.

screenshot of the demo being run on an i7 3930k

How the timing attack works and scales

The estimator works by performing a statistical test while running different numbers of simultaneous web workers. It measures the time it takes to run a single worker and compares this to the time it takes to run different numbers of workers simultaneously. As soon as this measurement starts to increase excessively, it has found the maximum number of web workers that can be run in parallel without degrading performance.
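
A stripped-down sketch of that measurement (not Core Estimator’s actual code): start n copies of a fixed busy-loop worker at once and time how long they take to finish.

// Simplified sketch of the measurement. Each worker spins on a fixed busy
// loop; we time how long n of them take when started simultaneously.
const workerSource = URL.createObjectURL(new Blob([`
  onmessage = () => {
    for (let i = 0; i < 1e8; i++); // fixed synthetic workload
    postMessage("done");
  };
`], { type: "text/javascript" }));

// Resolves with the wall-clock time (ms) for n simultaneous workers.
function timeWorkers(n) {
  return new Promise((resolve) => {
    const start = performance.now();
    let finished = 0;
    for (let i = 0; i < n; i++) {
      const worker = new Worker(workerSource);
      worker.onmessage = () => {
        worker.terminate();
        if (++finished === n) resolve(performance.now() - start);
      };
      worker.postMessage("start");
    }
  });
}

// If timeWorkers(n) stays close to timeWorkers(1), at least n workers are
// running truly in parallel.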

In the early stages of testing whether this would work, we did a few experiments on various desktops to visualize the data being produced. The resulting graphs clearly showed that the approach was feasible on the average machine. Pictured are the results of running an early version of Core Estimator on Google Chrome 26 on an Intel Core i5-3570K 3.4GHz Quad-Core Processor with 1,000 time samples taken for each core test. We used 1,000 samples just to be able to see the spread of the data, but it took over 15 minutes to collect. For Core Estimator, 5 samples seem to be sufficient.

The astute observer will note that it doesn’t test each number of simultaneous workers by simply counting up. Instead, Core Estimator performs a binary search. This way the running time is logarithmic in the number of cores—O(log n) instead of O(n). At most, 2 * floor(log2(n)) + 1 tests will be done to find the number of cores.
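
A sketch of that search, assuming a hypothetical async measure(n) that reports whether n simultaneous workers still ran without excessive slowdown (built on timings like the sketch above):

// Binary search over candidate core counts. measure(n) is a hypothetical
// async predicate derived from worker timing measurements.
async function estimateCores(measure, maxCores = 1024) {
  let low = 1;
  let high = maxCores;
  while (low < high) {
    const mid = Math.ceil((low + high) / 2);
    if (await measure(mid)) {
      low = mid; // no degradation: at least mid cores are available
    } else {
      high = mid - 1; // degradation: fewer than mid cores are available
    }
  }
  return low;
}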

Benefits

Previously, you had to either hard-code a thread count or ask the user how many cores they have, which can be pretty difficult for less tech-savvy users. This can even be a problem with tech-savvy users: few people know how many cores their phone has. Core Estimator helps you simplify your APIs so thread count parameters can be optional. The xz.js API will be as simple as xz.compress(Blob data, callback(Blob compressed), optional int preset=6, optional int threads=navigator.cores), making it easy to implement a “save .xz” button for your webapp (in conjunction with FileSaver.js):

save_button.addEventListener("click", function() {
    xz.compress(serializeDB(), function(compressed) {
        saveAs(compressed, "db.xz");
    });
});

Supported browsers and platforms

Early Core Estimator has been tested to support all current release versions of IE, Firefox, Chrome, and Safari on ARM and x86 (as of May 2013). The accuracy of Core Estimator is somewhat lower on systems with Intel hyper-threading and Turbo Boost technology, as the time to complete a workload is less predictable. In such cases it tends toward estimating a larger number of cores than are actually available, which still provides a somewhat reasonable number.

Saving generated files on the client-side

Have you ever wanted to add a Save as… button to a webapp? Whether you’re making an advanced WebGL-powered CAD webapp and want to save 3D object files or you just want to save plain text files in a simple Markdown text editor, saving files in the browser has always been a tricky business.

Usually when you want to save a file generated with JavaScript, you have to send the data to your server and then return the data right back with a Content-disposition: attachment header. This is less than ideal for webapps that need to work offline. The W3C File API includes a FileSaver interface, which makes saving generated data as easy as saveAs(data, filename), though unfortunately it will eventually be removed from the spec.
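
For contrast, the traditional server roundtrip looks something like this minimal Node.js sketch (the endpoint and filename are placeholders):

const http = require("node:http");

// The browser POSTs the generated data and the server immediately echoes it
// back with a Content-Disposition: attachment header to trigger a download.
http.createServer((req, res) => {
  if (req.method === "POST" && req.url === "/echo-download") {
    res.writeHead(200, {
      "Content-Type": "application/octet-stream",
      "Content-Disposition": 'attachment; filename="generated.txt"',
    });
    req.pipe(res);
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);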

I have written a JavaScript library called FileSaver.js, which implements FileSaver in all modern browsers. Now that it’s possible to generate any type of file you want right in the browser, document editors can have an instant save button that doesn’t rely on an online connection. When paired with the standard HTML5 canvas.toBlob() method, FileSaver.js lets you save canvases instantly and give them filenames, which is very useful for HTML5 image editing webapps. For browsers that don’t yet support canvas.toBlob(), Devin Samarin and I wrote canvas-toBlob.js. Saving a canvas is as simple as running the following code:

canvas.toBlob(function(blob) {
    saveAs(blob, filename);
});

I have created a demo of FileSaver.js in action that demonstrates saving a canvas doodle, plain text, and rich text. Please note that saving with custom filenames is only supported in browsers that natively support FileSaver, or in browsers like Google Chrome 14 dev and Google Chrome Canary that support <a>.download or web filesystems via LocalFileSystem.

How to construct files for saving

First off, you want to instantiate a Blob. The Blob API isn’t supported in all current browsers, so I made Blob.js which implements it. The following example illustrates how to save an XHTML document with saveAs().

saveAs(
      new Blob(
          [(new XMLSerializer).serializeToString(document)]
        , {type: "application/xhtml+xml;charset=" + document.characterSet}
    )
    , "document.xhtml"
);

Not saving textual data? You can save multiple binary Blobs and ArrayBuffers to a Blob as well! The following is an example of generating some binary data and saving it.

var
      buffer = new ArrayBuffer(8) // allocates 8 bytes
    , data = new DataView(buffer)
;
// You can write uint8/16/32s and float32/64s to dataviews
data.setUint8 (0, 0x01);
data.setUint16(1, 0x2345);
data.setUint32(3, 0x6789ABCD);
data.setUint8 (7, 0xEF);
 
saveAs(new Blob([buffer], {type: "example/binary"}), "data.dat");
// The contents of data.dat are <01 23 45 67 89 AB CD EF>

If you’re generating large files, you can implement an abort button that aborts the FileSaver.

var filesaver = saveAs(blob, "video.webm");
abort_button.addEventListener("click", function() {
    filesaver.abort();
}, false);

Title image files in Opera

I recently discovered a method of titling image files in Opera. I was experimenting with CSS generated content in regards to the <title> element in various browsers, and discovered that as long as the <head> and <title> elements are not display: none, generated content applied before and after the <title> element is added to the page title itself in Opera. It was obvious to me that I should combine this with HTTP Link: headers containing stylesheets, so as to make it possible to modify the title of usually non-titleable media, such as images, plain text, audio, and video.

In this demo, the following CSS rules are applied in Opera.

head, title {
	display: block;
	width: 0;
	height: 0;
	visibility: hidden;
}
 
title::before {
	content: "Just an image — ";
}
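
The stylesheet can be attached to an otherwise bare image response with an HTTP Link header, along these lines (an illustrative Node.js sketch; the file paths are placeholders):

const http = require("node:http");
const { readFile } = require("node:fs/promises");

// Serve an image with an attached stylesheet via the HTTP Link header, so
// the title::before rule above can retitle the image page in Opera.
http.createServer(async (req, res) => {
  res.writeHead(200, {
    "Content-Type": "image/png",
    Link: '</image-title.css>; rel="stylesheet"',
  });
  res.end(await readFile("image.png"));
}).listen(8080);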

Voice Search Google Chrome extension

Voice Search is an open source Google Chrome extension I made that allows you to search by speaking. For example, just click on the microphone and say kittens to search for kittens. If you specifically want pictures of kittens, say google images kittens. Want to learn more about World War II? Say wikipedia world war two. The source code for Voice Search is on GitHub.

Voice Search comes pre-loaded with many popular search engines by default, and you can add your own user-defined search engines. It also integrates a speech input button for all websites using HTML5 search boxes, all of the default search engines’ websites, Facebook, Twitter, reddit, and GitHub.

In later versions, I plan to introduce the ability to import/export settings, create aliases (e.g. map to Google Maps and calculate to Wolfram|Alpha), detect OpenSearch descriptions, and support scripted terms that do more than open URLs.