backdooring your javascript using minifier bugs

In addition to unforgettable life experiences and personal growth, one thing I got out of DEF CON 23 was a copy of POC||GTFO 0x08 from Travis Goodspeed. The coolest article I’ve read so far in it is “Deniable Backdoors Using Compiler Bugs,” in which the authors abused a pre-existing bug in CLANG to create a backdoored version of sudo that allowed any user to gain root access. This is very sneaky, because nobody could prove that their patch to sudo was a backdoor by examining the source code; instead, the privilege escalation backdoor is inserted at compile-time by certain (buggy) versions of CLANG.

That got me thinking about whether you could use the same backdoor technique on javascript. JS runs pretty much everywhere these days (browsers, servers, arduinos and robots, maybe even cars someday) but it’s an interpreted language, not compiled. However, it’s quite common to minify and optimize JS to reduce file size and improve performance. Perhaps that gives us enough room to insert a backdoor by abusing a JS minifier.

Part I: Finding a good minifier bug

Question: Do popular JS minifiers really have bugs that could lead to security problems?

Answer: After about 10 minutes of searching, I found one in UglifyJS, a popular minifier used by jQuery to build a script that runs on something like 70% of the top websites on the Internet. The bug itself, fixed in the 2.4.24 release, is straightforward but not totally obvious, so let’s walk through it.

UglifyJS does a bunch of things to try to reduce file size. One of the compression flags that is on-by-default will compress expressions such as:

!a && !b && !c && !d

That expression is 20 characters. Luckily, if we apply De Morgan’s Law, we can rewrite it as:

!(a || b || c || d)

which is only 19 characters. Sweet! Except that De Morgan’s Law doesn’t necessarily work if any of the subexpressions has a non-Boolean return value. For instance,

!false && 1

will return the number 1. On the other hand,

!(false || !1)

simply returns true.

So if we can trick the minifier into erroneously applying De Morgan’s law, we can make the program behave differently before and after minification! Turns out it’s not too hard to trick UglifyJS 2.4.23 into doing this, since it will always use the rewritten expression if it is shorter than the original. (UglifyJS 2.4.24 patches this by making sure that subexpressions are boolean before attempting to rewrite.)
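You can see the mismatch without running the minifier at all by evaluating both forms of a similar expression by hand (the -5 below just stands in for any non-boolean operand, like a negative time delta):

console.log(!false && -5);            // -5
console.log(!(false || !(-5)));       // true
console.log((!false && -5) > 0);      // false
console.log((!(false || !(-5))) > 0); // true, because true coerces to 1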

Part II: Building a backdoor in some hypothetical auth code

Cool, we’ve found the minifier bug of our dreams. Now let’s try to abuse it!

Let’s say that you are working for some company, and you want to deliberately create vulnerabilities in their Node.js website. You are tasked with writing some server-side javascript that validates whether user auth tokens are expired. First you make sure that the Node package uses uglify-js@2.4.23, which has the bug that we care about.

Next you write the token validation function, inserting a bunch of plausible-looking config and user validation checks to force the minifier to compress the long (not-)boolean expression:

function isTokenValid(user) {
    var timeLeft =
        !!config && // config object exists
        !!user.token && // user object has a token
        !user.token.invalidated && // token is not explicitly invalidated
        !config.uninitialized && // config is initialized
        !config.ignoreTimestamps && // don't ignore timestamps
        getTimeLeft(user.token.expiry); // > 0 if expiration is in the future

    // The token must not be expired
    return timeLeft > 0;
}

function getTimeLeft(expiry) {
  return expiry - getSystemTime();
}

Running uglifyjs -c on the snippet above produces the following:

function isTokenValid(user){var timeLeft=!(!config||!user.token||user.token.invalidated||config.uninitialized||config.ignoreTimestamps||!getTimeLeft(user.token.expiry));return timeLeft>0}function getTimeLeft(expiry){return expiry-getSystemTime()}

In the original form, if the config and user checks pass, timeLeft is a negative number when the token is expired. In the minified form, timeLeft must be a boolean (since “!” in JS does type-coercion to booleans). In fact, if the config and user checks pass, timeLeft is always true unless getTimeLeft(user.token.expiry) happens to be exactly 0.

Voila! Since true > 0 in javascript (yay for type coercion!), auth tokens that are past their expiration time will still be valid forever.
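If you want to sanity-check this without setting up a real server, a few stubs are enough. The two functions below are just the original isTokenValid and the uglify-js@2.4.23 output from above, renamed so they can coexist; the config and clock values are made up:

var config = {};                                   // exists, initialized, timestamps honored
function getSystemTime() { return 1000; }
function getTimeLeft(expiry) { return expiry - getSystemTime(); }

function isTokenValidOriginal(user) {
    var timeLeft =
        !!config &&
        !!user.token &&
        !user.token.invalidated &&
        !config.uninitialized &&
        !config.ignoreTimestamps &&
        getTimeLeft(user.token.expiry);
    return timeLeft > 0;
}

function isTokenValidMinified(user){var timeLeft=!(!config||!user.token||user.token.invalidated||config.uninitialized||config.ignoreTimestamps||!getTimeLeft(user.token.expiry));return timeLeft>0}

var expiredUser = { token: { expiry: 900 } };      // expired 100 ticks ago

console.log(isTokenValidOriginal(expiredUser));    // false: timeLeft is -100
console.log(isTokenValidMinified(expiredUser));    // true: timeLeft is the boolean true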

Part III: Backdooring jQuery

Next let’s abuse our favorite minifier bug to write some patches to jQuery itself that could lead to backdoors. We’ll work with jQuery 1.11.3, which is the current jQuery 1 stable release as of this writing.

jQuery 1.11.3 uses grunt-contrib-uglify 0.3.2 for minification, which in turn depends on uglify-js ~2.4.0. So uglify-js@2.4.23 satisfies the dependency, and we can manually edit package.json in grunt-contrib-uglify to force it to use this version.

There are only a handful of places in jQuery where the De Morgan’s Law rewrite optimization is triggered. None of these cause bugs, so we’ll have to add some ourselves.

Backdoor Patch #1:

First let’s add a potential backdoor in jQuery’s .html() method. The patch looks weird and superfluous, but we can convince anyone that it shouldn’t actually change what the method does. Indeed, pre-minification, the unit tests pass.

After minification with uglify-js@2.4.23, jQuery’s .html() method will set the inner HTML to “true” instead of the provided value, so a bunch of tests fail.

[screenshot: jQuery unit tests failing after minification with uglify-js@2.4.23]

However, the jQuery maintainers are probably using the patched version of uglifyjs. Indeed, tests pass with uglify-js@2.4.24, so this patch might not seem too suspicious.

[screenshot: jQuery unit tests passing when minified with uglify-js@2.4.24]

Cool. Now let’s run grunt to build jQuery with this patch and write some silly code that triggers the backdoor:

<html>
    <script src="../dist/jquery.min.js"></script>
    <button>click me to see if this site is safe</button>
    <script>
        $('button').click(function(e) {
            $('#result').html('<b>false!!</b>');
        });
    </script>
    <div id='result'></div>
</html>

Here’s the result of clicking that button when we run the pre-minified jQuery build:

[screenshot: with the unminified build, the page shows the bolded “false!!” warning]

As expected, the user is warned that the site is not safe. Which is ironic, because it doesn’t use our minifier-triggered backdoor.

Here’s what happens when we instead use the minified jQuery build:

[screenshot: with the minified build, the page shows “true” instead of the warning]

Now users will totally think that this site is safe even when the site authors are trying to warn them otherwise.

Backdoor Patch #2:

The first backdoor might be too easy to detect, since anyone using it will probably notice that a bunch of HTML is being set to the string “true” instead of the HTML that they want to set. So our second backdoor patch is one that only gets triggered in unusual cases.

[screenshot: backdoor patch #2, modifying jQuery.event.remove]

Basically, we’ve modified jQuery.event.remove (used in the .off() method) so that the code path that calls special event removal hooks never gets reached after minification. (Since spliced is always boolean, its length is always undefined, which is not > 0.) This doesn’t necessarily change the behavior of a site unless the developer has defined such a hook.
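The patch itself is only shown in the screenshot above, but the mechanism is easy to demo in isolation. In the sketch below, a and b are just stand-ins for whatever negated checks the patch adds around the splice() call:

var a = false, b = false;
var handlers = ['myHandler'], j = 0;

// pre-minification: spliced is the array returned by splice(), so spliced.length is 1
var spliced = !a && !b && handlers.splice(j, 1);
console.log(spliced.length > 0);      // true, so the special removal hook would run

// what uglify-js@2.4.23 turns that into: spliced can only ever be a boolean
handlers = ['myHandler'];
spliced = !(a || b || !handlers.splice(j, 1));
console.log(spliced.length > 0);      // false: true.length is undefined, and undefined > 0 is false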

Say that the site we want to backdoor has the following HTML:

<html>
    <script src="../dist/jquery.min.js"></script>
    <button>click me to see if special event handlers are called!</button>
    <div>FAIL</div>
    <script>
        // Add a special event hook for onclick removal
        jQuery.event.special.click.remove = function(handleObj) {
            $('div').text('SUCCESS');
        };
        $('button').click(function myHandler(e) {
            // Trigger the special event hook
            $('button').off('click');
        });
    </script>
</html>

If we run it with unminified jQuery, the removal hook gets called as expected:

[screenshot: with the unminified build, the div shows SUCCESS]

But the removal hook never gets called if we use the minified build:

[screenshot: with the minified build, the div still shows FAIL]

Obviously this is bad news if the event removal hook performs some security-critical function, like checking whether an origin is whitelisted before passing a user’s auth token to it.

Conclusion

The backdoor examples that I’ve illustrated are pretty contrived, but the fact that they can exist at all should probably worry JS developers. Although JS minifiers are not nearly as complex or important as C++ compilers, they have power over a lot of the code that ends up running on the web.

It’s good that UglifyJS has added test cases for known bugs, but I would still advise anyone who uses a minifier that isn’t formally verified to be wary. Don’t minify/compress server-side code unless you have to, and make sure you run browser tests/scans against code post-minification. [Addendum: Don’t forget that even if you aren’t using a minifier, your CDN might minify files in production for you. For instance, Cloudflare’s collapsify uses uglifyjs.]

Now, back to reading the rest of POC||GTFO.

PS: If you have thoughts or ideas for future PoC, please leave a comment or find me on Twitter (@bcrypt). The code from this blog post is up on github.

[Update 1: Thanks @joshssharp for posting this to Hacker News. I’m flattered to have been on the front page allllll night long (cue 70’s soul music). Bonus points – the thread taught me something surprising about why it would make sense to minify server-side.]

[Update 2: There is now a long thread about minifiers on debian-devel which spawned this wiki page and another HN thread. It’s cool that JS developers are paying attention to this class of potential security vulnerabilities, but I hope that people complaining about minification also consider transpilers and other JS pseudo-compilers. I’ll talk more about that in a future blog post.]

23 hours of DEF CON 23

James Kasten, Peter Eckersley and I gave a talk at DEF CON this year about the Let’s Encrypt project. There is no recording yet, but you can get off the edge of your seat now, because here are the slides [pdf] that the world has been waiting for with bated breath.

Given that we practiced for a total of 30 minutes and worked on slides until we were whisked onstage, the talk went pretttttty smoothly. In particular, James’ live demo of a certificate issuance and rollback on a parody enterprise website ~stole the show. My one-take documentary about innocent people who can’t figure out how to get an SSL certificate was also met with great acclaim, especially for the phenomenal cinematography (“A cross between The Blair Witch Project, Spinal Tap, and a Windows 95 home setup instruction video.”).

Unfortunately, we were in one of the smaller DEF CON rooms, so the majority of people who waited in line for the talk didn’t get to see it, and the ones who did get to see it became very close to each other (emotionally as well as physically, I hope).

[photo: the people who didn’t want to encrypt were forcibly removed from the room]

45 minutes later, we were glad to be done and finally free to enjoy the rest of the conference!

[photo: peter, me, and james looking pretty psyched]

. . . which we did by scrambling over to Dan Kaminsky’s talk on clickjacking prevention. Afterwards, we rescued Dan from his hordes of manic fans by inviting him to dinner.

[photo: peter and dan sure are happy to be done with their talks!]

After dinner, I walked around a bunch with my favorite DEF CON 23 car hacker Samy (no offense to Charlie Miller, Chris Valasek, Marc Rogers, Kevin Mahaffey, and all of Car Hacking Village tho!). All the villages were closed, but luckily the Silent Circle booth in the vendor room was poppin’.

[photo: we made a silent Silent Circle circle]

I was supposed to head to the airport shortly after, but I was having such an unexpectedly great time at DEF CON that I changed my flight.

After 3.5 energy drinks and an all-nighter, I ended up in a cigarette-smoke-infested $2 hot-dog stand on the far side of dawn. Then I hailed a cab to the airport before collapsing in a heap of exhaustion.


I’m pretty darn sad that DEF CON is over – it was a fantastic time, I met lots of cool people, and all 3 talks I attended inspired me to hack on something new. Too bad talk recordings aren’t online yet, but fortunately Travis Goodspeed left me with some good ol’ fashioned bedtime reading.


PS – working on some new hacks. Hopefully more blog posts soon after catching up on sleep.

this blog uses Content Security Policy

Having recently given some talks about Content Security Policy (CSP), I decided just now to enable it on my own blog to prevent cross-site scripting.

This lil’ blog is hosted by the MIT Student Information Processing Board and runs on a fairly-uncustomized WordPress 4.x installation. Although I could have enabled CSP by modifying my .htaccess file, I chose to use HTML <meta> tags instead so that these instructions would work for people who don’t have shell access to their WordPress host. Unfortunately, CSP using <meta> hasn’t landed in Firefox yet (tracking bug) so I should probably do the .htaccess thing anyway.

It’s pretty easy to turn on CSP in WordPress from the dashboard:

  1. Go to <your_blog_path>/wp-admin/theme-editor.php. Note that this isn’t available for WordPress-hosted blogs (*.wordpress.com).
  2. Click on Header (header.php) in the sidebar to edit the header HTML.
  3. At the start of the HTML <head> element, add a CSP meta tag with your CSP policy. This blog uses <meta http-equiv="Content-Security-Policy" content="script-src 'self'"> which disallows all scripts that aren’t loaded from its own origin (inline scripts are blocked too). As far as I can tell, this blocks a few inline scripts but doesn’t impede any functionality on a vanilla WordPress instance. You might want a more permissive policy if you use fancy widgets and plugins.
  4. [Bonus points] You can also show a friendly message to users who disable javascript by adding a <noscript> element to your header. (A combined sketch of steps 3 and 4 follows below.)
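Putting steps 3 and 4 together, the top of header.php ends up looking something like this (the policy and the message are just examples, so adjust to taste):

<head>
    <!-- step 3: CSP via meta tag; allow scripts only from this blog's own origin -->
    <meta http-equiv="Content-Security-Policy" content="script-src 'self'">
    <!-- ... the rest of the theme's original <head> markup ... -->
</head>
<body>
    <!-- step 4: a friendly note for visitors who have javascript turned off -->
    <noscript>You have javascript disabled. Everything here should still work fine.</noscript>
    <!-- ... the rest of the original header.php ... -->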

A fun fact I discovered during this process is that embedding a SoundCloud iframe will include tracking scripts from Google Analytics and scorecardresearch.com. Unfortunately CSP on the embedding page (my blog) doesn’t extend to embedded contexts (soundcloud.com iframe), so those scripts will still run unless you’ve disabled JS.

[screenshot: a scorecardresearch.com tracking script loaded by the SoundCloud embed]

That’s all. I might do more posts on WordPress hardening later or even write a WP plugin (*shudders at the thought of writing PHP*). More tips are welcome too.

UPDATE (8/24/15): CSP is temporarily disabled on this blog because Google Analytics uses an inline script. I’ll nonce-whitelist it later and turn CSP back on.

lessons from the ad blocker trenches

Greetings from the beautiful museum district of Berlin, where I’ve been trapped in a small conference room all week for the quarterly meeting of the W3C Technical Architecture group. So far we’ve produced two documents this week that I think are pretty good:

I just realized I have a few more things to say about the latter, based on my experience building and maintaining a semi-popular ad blocker (Privacy Badger Firefox).

  1. Beware of ad blockers that don’t actually block requests to tracking domains. For instance, an ad blocker that simply hides ads using CSS rules is not really useful for preventing tracking. Many users can’t tell the difference.
  2. Third-party cookies are not the only way to track users anymore, which means that browser features and extensions that only block/delete third-party cookies are not as useful as they once were. This 2012 survey paper [PDF] by Jonathan Mayer et al. has a table of non-cookie browser tracking methods, which is probably out of date by now.
  3. Detecting whether a domain is performing third-party tracking is not straightforward. Naively, you could do this by counting the number of first-party domains that a domain reads high-entropy cookies from in a third-party context. However, this doesn’t encompass reading non-cookie browser state that could be used to uniquely identify users in aggregate (see table above). A more general but probably impractical approach is to try to tag every piece of site-readable browser state with an entropy estimate so that you can score sites by the total entropy that is readable by them in a third-party context. (We assume that while a site is being accessed as a first-party, the user implicitly consents to letting it read information about them. This is a gross simplification, since first parties can read lots of information that users don’t consent to by invisible fingerprinting. Also, I am recklessly using the term “entropy” here in a way that would probably cause undergrad thermodynamics professors to have aneurysms.)
  4. The browser definition of “third-party” only roughly approximates the real-life definition. For instance, dropbox.com and dropboxusercontent.com are the same party from a business and privacy perspective but not from the cookie-scoping or DNS or same-origin-policy perspective.
  5. The hardest-to-block tracking domains are the ones who cause collateral damage when blocked. A good example of this is Disqus, commonly embedded as a third-party widget on blogs and forums; if we block requests to Disqus (which include cookies for logged-in users), we severely impede the functionality of many websites. So Disqus is too usability-expensive to block, even though they can track your behavior from site to site.
  6. The hardest-to-block tracking methods are the ones that cause collateral damage when disabled. For instance, HSTS and HPKP both store user-specific persistent data that can be abused to probe users’ browsing histories and/or mark users so that you can re-identify them after the first time they visit your site. However, clearing HSTS/HPKP state between browser sessions dilutes their security value, so browsers/extensions are reluctant to do so.
  7. Specifiers and implementers sometimes argue that Feature X, which adds some fingerprinting/tracking surface, is okay because it’s no worse than cookies. I am skeptical of this argument for the following reasons:
    a. Unless explicitly required, there is no guarantee that browsers will treat Feature X the same as cookies in privacy-paranoid edge cases. For instance, if Safari blocks 3rd party cookies by default, will it block 3rd party media stream captures (which will store a unique deviceid) by default too?
    b. Ad blockers and anti-tracking tools like Disconnect, Privacy Badger, and Torbutton were mostly written to block and detect tracking on the basis of cookies, not arbitrary persistent data. It’s arguable that they should be blocking these other things as soon as they are shipped in browsers, but that requires developer time.
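To make point 3 above a bit more concrete, the naive cookie-counting heuristic is only a few lines. This is a sketch with made-up names, not Privacy Badger’s actual implementation:

// Flag a third-party domain as a tracker once it reads high-entropy cookies
// on enough different first-party sites.
var TRACKING_THRESHOLD = 3;
var sightings = {};                         // trackerDomain -> set of first-party domains

function looksHighEntropy(cookieValue) {
  return cookieValue.length >= 8;           // crude stand-in for a real entropy estimate
}

function observeRequest(firstParty, requestDomain, cookies) {
  // real code would compare eTLD+1, not exact hostnames (see point 4)
  if (requestDomain === firstParty) return;
  var readsId = cookies.some(function (c) { return looksHighEntropy(c.value); });
  if (!readsId) return;
  sightings[requestDomain] = sightings[requestDomain] || {};
  sightings[requestDomain][firstParty] = true;
}

function isLikelyTracker(domain) {
  var seen = sightings[domain] ? Object.keys(sightings[domain]).length : 0;
  return seen >= TRACKING_THRESHOLD;
}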

That’s all. And here’s some photos I took while walking around Berlin in a jetlagged haze for hours last night:

[photos of Berlin]

Update (7/18/15): Artur Janc of Google pointed out this document by folks at Chromium analyzing various client identification methods, including several I hadn’t thought about.

pseudorandom podcast series, episode 1

The combination of my roommate starting a Rust podcast and a long, animated conversation with a (drunk) storyteller last night caused me to become suddenly enamored with the idea of starting my own lil’ podcast. Lately I keep thinking about how many spontaneous, insightful conversations are never remembered, much less entombed in a publicly-accessible server for posterity. So a podcast seemed like an excellent way to share these moments without spending a lot of time writing (I’m a regrettably slow writer). I’d simply bring folks into my warehouse living room, give them a beverage of their choice, and spend a leisurely hour chatting about whatever miscellaneous topics came to mind.

And so, wasting no time, today I asked my ex-ex-colleague Peter Eckersley if he would like to be my first podcast guest. Peter runs the technology projects team at the Electronic Frontier Foundation and, more importantly, lives 3 blocks away from me. Fortuitously, Peter agreed to have me over for a chat later this afternoon.

When I arrived, it turned out that one of Peter’s housemates was having friends over for dinner, so finding a quiet spot became a challenge. We ended up in a tiny room at the back of his house where every flat surface was covered in sewing equipment and sundry household items. As Peter grabbed a hammer to reconstruct the only available chair in the room, I set up my laptop and fancy (borrowed) podcast microphone. We gathered around as close as we could and hit the record button.

Except for one hiccup where Audacity decided to stop recording abruptly, the interview went smoothly and didn’t need much editing. Next time I’ll plan to put myself closer to the mic, do a longer intro, and maybe cut the length down to 15 minutes.

Overall, I had a fun time recording this podcast and am unduly excited about future episodes. Turns out a podcast takes ~10% of the time that writing a blog post with the same content would. :)

For this and future episodes in the Pseudorandom Podcast Series, here’s an RSS feed. I’m going to reach SoundCloud’s limit of 180 minutes real quick at this rate, so I will probably host these somewhere else in the future or start a microfunding campaign to pay $15/month.

life update

i’ve finally recovered enough from a multi-week bout of sickness to say some things and put up some photos. lately i’ve felt exhausted and lethargic and unproductive to be honest. being sick probably had something to do with it; i sure hope next week gets better.

yesterday, someone told me they had a theory that everyone who sleeps at night (with rare exceptions) can only manage ~3 significant life events at a time. that sounds about right, but it feels like a lot has been going on. a partial, unordered list:

1. talked yesterday at the Yahoo Trust Unconference about the future of email security

[photo credit Bill Childers]

2. working on graceful degradation of hopes and feelings

3. writing software for Let’s Encrypt as an EFF Technology fellow

4. trying to make sane w3c standards with these fine folks from the W3C Technical Architecture group

[photo credit Tantek Celik]

5. packing bag(s) and moving to a new neighborhood (twice)

6. finding balance on a skateboard and otherwise

“I think emotional and crypto intelligence are severely underrated” – spectator at the Yahoo Trust Unconference.

rate-limiting anonymous accounts

Yesterday TechCrunch reported that Twitter now seems to be requiring SMS validation from new accounts registered over Tor. Though this might be effective for rate-limiting registration of abusive/spammy accounts, sometimes actual people use Twitter over Tor because anonymity is a prerequisite to free speech and circumventing information barriers imposed by oppressive governments. These users might not want to link their telco-sanctioned identity with their Twitter account, hence why they’re using Tor in the first place.

What are services like Twitter to do, then? I thought of one simple solution that borrows a popular idea from anonymous e-cash systems.

In a 1983 paper, cryptographer David Chaum introduced the concept of blind signatures [1]. A blind signature is a cryptographic signature in which the signer can’t see the content of the message that she’s signing. So if Bob wants Alice to sign the message “Bob is great” without her knowing, he first “blinds” the message using a random factor that is unknown to her and gives Alice the blinded message to sign. When he unblinds her signed message by removing the blinding factor, the original message “Bob is great” also has a valid signature from Alice!

This may seem weird and magical, but blinded signatures are actually possible using the familiar RSA signature scheme. The proof is straightforward and on Wikipedia so I’ll skip it here [2]. Basically, since RSA signatures are just modulo’d exponentiation of some message M to a secret exponent d, when you create a signature over a blinded message M’ = M*r^e (where r is the blinding factor and e is the public exponent), you also create a valid signature over M thanks to the distributive property of exponentiation over multiplication.
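Okay, fine, here it is in one line anyway (all arithmetic mod N, using the fact that e*d ≡ 1 mod φ(N)):

Sig(M') = (M * r^e)^d = M^d * r^(e*d) = M^d * r = Sig(M) * r

so dividing Sig(M') by r (really, multiplying by r^-1 mod N) leaves exactly Sig(M).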

Given the existence of blind signature schemes, Twitter can do something like the following to rate-limit Tor accounts without deanonymizing them:

  1. Say that @bob is an existing Twitter user who would like to make an anonymous account over Tor, which we’ll call @notbob. He computes T = H(notbob) * r^e mod N, where H is a hash function, r is a random number that Bob chooses, and {e,N} is the public part of an Identity Provider’s RSA keypair (defined in step 2).
  2. Bob sends T to an identity provider. This could be Twitter itself, or any service like Google, Identica, Facebook, LinkedIn, Keybase, etc. as long as it can check that Bob is probably a real person via SMS verification or a reputation-based algorithm. If Bob seems real enough, the Identity Provider sends him Sig(T) = T^d mod N = H(notbob)^d * r mod N, where d is the private part of the Identity Provider’s RSA keypair. [3]
  3. Bob divides Sig(T) by r to get Sig(H(notbob)), AKA his Identity Provider’s signature over the hash of his desired anonymous username.
  4. Bob opens up Tor browser and goes to register @notbob. In the registration form, he sends Sig(H(notbob)). Twitter can then verify the Identity Provider’s signature over ‘notbob’ and only accept @notbob’s account registration if verification is successful!
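To convince yourself that the arithmetic in steps 1-4 works out, here’s a toy end-to-end run in javascript. The primes are comically small and the “hash” is fake, so treat this as a sketch of the math rather than anything resembling real crypto:

function modpow(base, exp, mod) {
    var result = 1;
    base = base % mod;
    while (exp > 0) {
        if (exp % 2 === 1) {
            result = (result * base) % mod;
        }
        base = (base * base) % mod;
        exp = Math.floor(exp / 2);
    }
    return result;
}

// extended Euclidean algorithm: returns a^-1 mod m (assumes gcd(a, m) === 1)
function modinv(a, m) {
    var r0 = a % m, r1 = m, s0 = 1, s1 = 0, quot, tmp;
    while (r1 !== 0) {
        quot = Math.floor(r0 / r1);
        tmp = r0 - quot * r1; r0 = r1; r1 = tmp;
        tmp = s0 - quot * s1; s0 = s1; s1 = tmp;
    }
    return ((s0 % m) + m) % m;
}

// the Identity Provider's toy RSA keypair: {e, N} is public, d is private
var p = 1009, q = 1013, N = p * q, phi = (p - 1) * (q - 1);
var e = 17, d = modinv(e, phi);

// stand-in for a real hash function H
function H(s) {
    var h = 0;
    for (var i = 0; i < s.length; i++) {
        h = (h * 131 + s.charCodeAt(i)) % N;
    }
    return h;
}

// Step 1: Bob blinds H('notbob') with a random factor r (coprime to N)
var r = 123457;
var T = (H('notbob') * modpow(r, e, N)) % N;

// Step 2: the Identity Provider signs T without ever seeing 'notbob'
var sigT = modpow(T, d, N);

// Step 3: Bob "divides by r", i.e. multiplies by r^-1 mod N
var sig = (sigT * modinv(r, N)) % N;

// Step 4: Twitter verifies the Identity Provider's signature over H('notbob')
console.log(modpow(sig, e, N) === H('notbob'));    // true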

It seems to me that this achieves some nice properties.

  • Every anonymous account is transitively validated via SMS or reputation.
  • Ignoring traffic analysis (admittedly a big thing to ignore), anonymous accounts and the actual identities or phone numbers used to validate them are unlinkable.

Thoughts? I’d bet that someone has thought of this use case before but I couldn’t find any references on the Internet.

[1] http://www.hit.bme.hu/~buttyan/courses/BMEVIHIM219/2009/Chaum.BlindSigForPayment.1982.PDF

[2] https://en.wikipedia.org/wiki/Blind_signature#Blind_RSA_signatures.5B2.5D:235

canvas #1


that could have been us, 2015
Oil pastels, lipstick, eyeliner, cold medicine, and ballpoint pen on canvas.

i painted this while standing in my bathroom on valentine’s day’s night, unable to sleep and grotesquely feeling the weight of the oncoming dawn. it was my first time drawing on canvas.

as i worked, i kept thinking about all these people passing to and from doomed relationships, that feeling of being stupidly and everlastingly perched on the brink between hope and mutilation. that’s more or less what this is about.

solving boolean satisfiability on human circuits

I remember quite clearly sitting in Scott Aaronson’s computability and complexity theory course at MIT in 2011. I was a 19 year-old physics major back then, so Scott’s class was mostly new and fascinating.

One spring day, Scott was at the chalkboard delightedly introducing the concept of time complexity classes to us, with the same delight he used when introducing most abstract constructs. He said that you could categorize algorithms into time complexity classes based on the amount of time they take to run as a function of the input length. For instance, you could prove that certain decision problems couldn’t be solved by a deterministic Turing machine in polynomial time. I raised my hand.

“Yes?”
“But time is reference-frame dependent! What if you ran the deterministic Turing machine on earth while you yourself were on a rocket going at relativistic speeds?”

Scott’s eyes lit up. “Aha!” he said, without pause. “Suppose you traveled faster as the input length increased, so from your perspective, a problem in EXP is decidable in polynomial time. However you would be using more and more energy to propel your spaceship. So there is necessarily a tradeoff in the resources needed to solve the problem.”

In retrospect, this was pretty characteristic of why I liked the class so much. Scott didn’t give the easy and useless answer, which would have been that *by our definition* all running times are measured in a fixed inertial reference frame. Instead he reminds us that we, as humans, ultimately care about the totality of resources needed to solve a problem. Time complexity analysis is just one step toward grasping at how hard, how expensive, how painful something really is; mired as we may be in mathematical formalism, the reality of our dying planet and unpaid bills stays within sight when Scott lectures.

All this came to mind when I read Scott’s now-infamous blog comment about growing up as a shy, self-proclaimed and self-hating male nerd; followed by the much-cited response from journalist Laurie Penny about growing up as a shy, self-proclaimed and self-hating female nerd; followed by Scott’s latest blog post clarifying what he believes about feminism and the plight of shy nerdy people anguished by sexual frustration. What surprised me about the latter was that Scott went so far as to write:

“How to help all the young male nerds I meet who suffer from this problem, in a way that passes feminist muster, and that triggers the world’s sympathy rather than outrage, is a problem that interests me as much as P vs. NP, and right now that seems about equally hard.”

(“As much as P vs NP”?! Remember that Scott once bet his house on the invalidity of a paper claiming to prove P != NP, cf. http://www.scottaaronson.com/blog/?p=456.)

Sometimes I think that the obvious step towards solving the problem Scott mentions is for the frustrated person to politely and non-expectantly inform the other person of his/her desires. In an ideal world, they would then discuss them until reaching an amicable resolution, at which point they can return to platonically multiplying tensors or whatever.

But I suppose part of the definition of shy is the fear of exposing yourself to untrusted parties, for which they can reject you, humiliate you, and otherwise destroy that which you value or at least begrudgingly tolerate. Sadly, the shyness of analytical minds seems justified, because pretty much nobody has worked out how to communicate rejection without passing unfair judgement or otherwise patterning poisonous behavior. There is an art to divulging hidden feelings, an art to giving rejection, an art to handling sadness graciously, and an art to growing friendships from tenuous beginnings. None of these are taught to adolescent humans. Instead, we learn to hide ourselves and shame others.

I feel unprepared to write anything resembling a guide on how to do this, having recoiled from human contact for most of my life thanks to shyness, but I think it’s well worth some human brain cycles. Here’s hoping to live in a culture of rejection-positivity.

tls everything

Yesterday the W3C Technical Architecture Group published a new finding titled, “The Web and Encryption.” In it, they conclude:

“. . . the Web platform should be designed to actively prefer secure origins — typically, by encouraging use of HTTPS URLs instead of HTTP ones. Furthermore, the end-to-end nature of TLS encryption must not be compromised on the Web, in order to preserve this trust.”

To many HTTPS Everywhere users like myself, this seemed a decade or so beyond self-evident. So I was surprised to see a flurry of objections appear on the public mailing list thread discussing the TAG findings.

It seems bizarre to me that security-minded web developers are spending so much effort hardening the web platform by designing and implementing standards like CSP Level 2, WebCrypto, HTTP Public Key Pinning, and Subresource Integrity, while others are still debating whether requiring the bare minimum security guarantee on the web is a good thing. While some sites are preventing any javascript from running on their page unless it’s been whitelisted, other sites can’t even promise that any user will ever visit a page that hasn’t been tampered with.

[screenshot: small consolation, the second one has more downloads]

Obviously we shouldn’t ignore arguments for a plaintext-permissive web; they’re statistically useful as indicators of misconceptions about HTTPS and sometimes also as indicators of real friction that website operators face. What can we learn?

Here’s some of my observations and responses to common anti-HTTPS points (as someone who lurks on standards mailing lists and often pokes website operators to deploy HTTPS, both professionally and recreationally):

  1.  “HTTPS is expensive and hard to set up.” This is objectively getting better. Cloudflare offers automatic free SSL to their CDN customers, and SSLMate lets you get a cert for $10 using the command line. In the near future, the Let’s Encrypt certificate authority will offer free certificates, deployed and managed using a nifty new protocol called ACME that makes the entire process take <30 seconds.
  2. “There is no value in using HTTPS for data that is, by nature, public (such as news articles).” This misses the point that aggregated browsing patterns, even for only public sites, can reveal a lot of private information about a person. If that weren’t the case, advertisers wouldn’t bother using third-party tracking beacons. QED.
  3. “TLS is slow.” Chris Palmer thought you would ask this and gave an excellent presentation explaining why not. tl;dr: TLS is usually not noticeably slower, but if it is, chances are that you can optimize away the difference (warning: the previous link is extremely well-written and may cause you to become convinced that TLS is not slow).
  4. “HTTPS breaks feature X.” This is something I’m intimately familiar with, since most bug reports in HTTPS Everywhere (which I used to maintain) were caused by the extension switching a site to HTTPS and suddenly breaking some feature. Mixed content blocking was the biggest culprit, but there were also cases where CORS stopped working because the header whitelisted the HTTP site but not the HTTPS one. (I also expected some “features” to break because HTTPS sites don’t leak referer to HTTP ones, but surprisingly this never happened.) Luckily if you’re using HTTPS Everywhere in Chrome, there is a panel in the developer console that helps you detect and fix mixed content on websites. Setting the CSP report-only header to report non-HTTPS subresources is similarly useful but doesn’t tell you which resources can be rewritten. (An example report-only header is sketched after this list.)
  5. “HTTPS gives users a false sense of security.” This comes up surprisingly often from various angles. Some people frame this as, “The CA system isn’t trustworthy and is breakable by every government,” while others say, “Even with HTTPS, you leak DNS lookups and valuable metadata,” and others say, “But many site certificates are managed by the CDN, not the site the user thinks they’re visiting securely.” The baseline counterargument to all of these is that encryption, even encryption that is theoretically breakable by some people, is better than no encryption, which doesn’t need to be broken by anyone. CA trustworthiness in particular is getting better with the implementation of certificate transparency and key pinning in browsers; let’s hope that we solve DNSSEC someday too. Also, regardless of whether HTTPS gives people a false sense of security, HTTP almost certainly gives the average person a false sense of security; otherwise, why would anyone submit their Quora password in plaintext?
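For the report-only trick mentioned in point 4, the header looks something like this. It’s sketched here for a Node server, but the directive works the same anywhere you can set response headers, and /csp-report is whatever endpoint you set up to collect the JSON violation reports:

var http = require('http');

http.createServer(function (req, res) {
    // report (but don't block) any subresource that isn't loaded over HTTPS
    res.setHeader('Content-Security-Policy-Report-Only',
        "default-src https: 'unsafe-inline' 'unsafe-eval'; report-uri /csp-report");
    // ... serve the page as usual; browsers will POST violation reports to /csp-report
    res.end('<html>...</html>');
}).listen(8080);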

In summary, it’s very encouraging to see the TAG expressing support for ubiquitous transit encryption on the web (someday), but from the resulting discussion, it’s clear that developers still need to be convinced that HTTPS is efficient, reliable, affordable, and worthwhile. I think the TAG has a clear path forward here: separate the overgrown anti-HTTPS mythology from the actual measurable obstacles to HTTPS deployment, and encourage standards that fix real problems that developers and implementers have when transitioning to HTTPS. ACME, HPKP, Certificate Transparency, and especially requiring minimum security standards for powerful new web platform features are good examples of work that motivates website operators to turn on HTTPS by lowering the cost and/or raising the benefits.