Every so often, I get sick of basically everything. Walls become suffocating, routine is insufferable, and the city I live in wraps itself against the sky like a cage. So inevitably I duck away and find something to chase (warm faces, the light in autumn, half-formed schemes, etc.), run until I’m dizzy and lost and can’t remember whose couch I’m waking up on or why I crashed there. Weeks later, the sky bruises into swollen dusk, some familiar voice yells for me to come home so I run back into my bed once again, wondering if home is this place more than it is the feeling of staring at an unfamiliar timetable and noticing your heartbeat quicken.

This kinda happened last month so I took a 4 week leave (2 paid, 2 unpaid) from my job to read books, work on open source projects, and couchsurf the East Coast. I spent a lot of rainy days curled up on a friend’s bed in Somerville, MA poking at my laptop, idle afternoons hiding in a corner of the MIT library poking at my laptop, and long electric evenings walking around New York City looking for a place to sit and poke at my laptop. A lot of laptop-poking happened while on “vacation” because I had promised some people that I would give two talks in October, one at SecretCon and one at ToorCon.

Predictably, I put off the ToorCon talk until 2 weeks ago. Also predictably, I started panicking and not sleeping anymore because I said I would show people a new browser fingerprinting technique which did not exist yet. Somehow, after a lot of head-banging-against-desk, I came up with one that sort of worked about a week before ToorCon and actually finished the code right before the conference. I named it Sniffly because it sniffs browser history, and also because I was coming down with a cold.

Here’s how Sniffly works:

  1. A user visits the Sniffly page.
  2. Their browser attempts to load images from various HSTS domains over HTTP. These domains were harvested from a scrape of HSTS domains in the Alexa Top 1M. It was really fun to write this scraper; I finally had a chance to use Python’s Twisted!
  3. Sniffly sets a CSP policy that restricts images to HTTP, so image sources are blocked before they are redirected to HTTPS. This is crucial, because if the browser completes a request to the HTTPS site, then it will receive the HSTS pin, and the attack will no longer work when the user visits Sniffly.
  4. When an image gets blocked by CSP, its onerror handler is called. In this case, the onerror handler does some fancy tricks to semi-reliably time how long it took for the image to be redirected from HTTP to HTTPS. If this time is on the order of a millisecond, it was an HSTS redirect (no network request was made), which means the user has visited the image's domain before. If it's on the order of 100 milliseconds, then a network request probably occurred, meaning that the user hasn't visited the image's domain. (A rough sketch of this timing check follows the list.)
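
Here's a rough sketch of the timing check in step 4 (not Sniffly's actual code, which does fancier calibration; the probe domain, path, and 10ms threshold are just illustrative):

    // the page's CSP restricts images to HTTP, e.g.:
    // <meta http-equiv="Content-Security-Policy" content="img-src http:">
    function probe(domain, callback) {
      var img = new Image();
      var start = performance.now();
      img.onerror = function() {
        var elapsed = performance.now() - start;
        // ~1ms: the redirect to HTTPS happened internally (HSTS), so the domain was visited before.
        // ~100ms: a real network round trip happened first, so the domain probably wasn't visited.
        callback(domain, elapsed < 10);
      };
      img.src = 'http://' + domain + '/';
    }

    probe('torproject.org', function(domain, visited) {
      console.log(domain + ': ' + (visited ? 'probably visited' : 'probably not visited'));
    });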

Here’s a quick demo. It only works in recent Chrome/Firefox versions when HTTPS Everywhere is disabled. The results also turn up a lot of false positives if you are running an adblocker, since ad-blocked domains are indistinguishable from HSTS-blocked domains from a timing perspective. (However, since HTTPS Everywhere domains and ad-blocked domains are mostly the same for every user, they can simply be subtracted out to get more accurate results for users who run these browser extensions.) I didn’t collect analytics on the site, but random testing with several friends showed a ~80% accuracy rate in the demo once browser extensions were accounted for.

For more info, check out the source code, ToorCon slides (pdf), and talk recording. Someone submitted the demo to Hacker News and, to my horror, it was the #1 link for 6+ hours yesterday (!). I feel bewildered that this kind of attention is being granted (again) to random side projects that I do alone in my spare time, but I guess I should take whatever validation I can get right now. It would be sweet if people looked at my work and paid me to hack on interesting stuff for the public so I never had to work a real job again. Maybe someday it’ll happen; until then I’ll prolly hold down a day job and take more fake vacations.

PS: I think I fixed the anti-SEO settings on this blog. Maybe now it will show up in at least one search engine. That was real dumb.


you know things are getting better when you walk away from the hotel where you just gave two presentations wearing your best pretense of holding-it-togetherness while inside you felt shaky, hungover, and insane. remember how long you stood there, smiling and rationing weak handshakes while pretending you believed that you had a future? promise yourself you’re never doing that again. you walk away from the volatile company of people who made you feel shitty about yourself without trying to and into the car of someone who looks like they could be your new friend. you drive down the street and pack your bags, take off your stupid clothes and pull on a grey tshirt. now you’re driving down Highway 5 towards LA, the hills are honey-colored, the mountains crashing into sunset sky with symphonic grace, your insecurities start to crack like chipped gold paint. you pick at your wrecked fingernails and start to feel like you might have the vaguest idea of what to do with yourself tomorrow morning. then your new friend turns on the stereo and says, “do you want to hear a song i wrote? it’s about how my mom was a cunt.” you say, sure.

backdooring your javascript using minifier bugs

In addition to unforgettable life experiences and personal growth, one thing I got out of DEF CON 23 was a copy of POC||GTFO 0x08 from Travis Goodspeed. The coolest article I’ve read so far in it is “Deniable Backdoors Using Compiler Bugs,” in which the authors abused a pre-existing bug in CLANG to create a backdoored version of sudo that allowed any user to gain root access. This is very sneaky, because nobody could prove that their patch to sudo was a backdoor by examining the source code; instead, the privilege escalation backdoor is inserted at compile-time by certain (buggy) versions of CLANG.

That got me thinking about whether you could use the same backdoor technique on javascript. JS runs pretty much everywhere these days (browsers, servers, arduinos and robots, maybe even cars someday) but it’s an interpreted language, not compiled. However, it’s quite common to minify and optimize JS to reduce file size and improve performance. Perhaps that gives us enough room to insert a backdoor by abusing a JS minifier.

Part I: Finding a good minifier bug

Question: Do popular JS minifiers really have bugs that could lead to security problems?

Answer: After about 10 minutes of searching, I found one in UglifyJS, a popular minifier used by jQuery to build a script that runs on something like 70% of the top websites on the Internet. The bug itself, fixed in the 2.4.24 release, is straightforward but not totally obvious, so let’s walk through it.

UglifyJS does a bunch of things to try to reduce file size. One of the compression flags that is on-by-default will compress expressions such as:

!a && !b && !c && !d

That expression is 20 characters. Luckily, if we apply De Morgan’s Law, we can rewrite it as:

!(a || b || c || d)

which is only 19 characters. Sweet! Except that De Morgan’s Law doesn’t necessarily work if any of the subexpressions has a non-Boolean return value. For instance,

!false && 1

will return the number 1. On the other hand,

!(false || !1)

simply returns true.
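
You can paste these into a JS console to see the difference:

    var original  = !false && 1;     // the number 1
    var rewritten = !(false || !1);  // the boolean true
    console.log(original === rewritten); // false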

So if we can trick the minifier into erroneously applying De Morgan’s law, we can make the program behave differently before and after minification! Turns out it’s not too hard to trick UglifyJS 2.4.23 into doing this, since it will always use the rewritten expression if it is shorter than the original. (UglifyJS 2.4.24 patches this by making sure that subexpressions are boolean before attempting to rewrite.)

Part II: Building a backdoor in some hypothetical auth code

Cool, we’ve found the minifier bug of our dreams. Now let’s try to abuse it!

Let’s say that you are working for some company, and you want to deliberately create vulnerabilities in their Node.js website. You are tasked with writing some server-side javascript that validates whether user auth tokens are expired. First you make sure that the Node package uses uglify-js@2.4.23, which has the bug that we care about.

Next you write the token validation function, inserting a bunch of plausible-looking config and user validation checks to force the minifier to compress the long (not-)boolean expression (the rewrite only fires when it makes the expression shorter, so you need enough already-negated checks that the dropped "!" characters pay for the added "!( )" wrapper):

function isTokenValid(user) {
    var timeLeft =
        !!config && // config object exists
        !!user.token && // user object has a token
        !user.token.invalidated && // token is not explicitly invalidated
        !config.uninitialized && // config is initialized
        !config.ignoreTimestamps && // don't ignore timestamps
        getTimeLeft(user.token.expiry); // > 0 if expiration is in the future

    // The token must not be expired
    return timeLeft > 0;
}

function getTimeLeft(expiry) {
  return expiry - getSystemTime();
}
Running uglifyjs -c on the snippet above produces the following:

function isTokenValid(user){var timeLeft=!(!config||!user.token||user.token.invalidated||config.uninitialized||config.ignoreTimestamps||!getTimeLeft(user.token.expiry));return timeLeft>0}function getTimeLeft(expiry){return expiry-getSystemTime()}

In the original form, if the config and user checks pass, timeLeft is a negative integer if the token is expired. In the minified form, timeLeft must be a boolean (since “!” in JS does type coercion to booleans). In fact, if the config and user checks pass, the value of timeLeft is always true unless getTimeLeft(user.token.expiry) coincidentally returns 0.

Voila! Since true > 0 in javascript (yay for type coercion!), auth tokens that are past their expiration time will still be valid forever.
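
To sanity-check this, here's the minified code run standalone with some stubs I made up for config and the clock (not from any real app):

    // stubs so the minified functions can run on their own
    var config = { uninitialized: false, ignoreTimestamps: false };
    function getSystemTime() { return 2000; }

    function isTokenValid(user){var timeLeft=!(!config||!user.token||user.token.invalidated||config.uninitialized||config.ignoreTimestamps||!getTimeLeft(user.token.expiry));return timeLeft>0}function getTimeLeft(expiry){return expiry-getSystemTime()}

    // this token expired 1000 "ticks" ago, yet:
    console.log(isTokenValid({ token: { expiry: 1000 } })); // true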

Part III: Backdooring jQuery

Next let’s abuse our favorite minifier bug to write some patches to jQuery itself that could lead to backdoors. We’ll work with jQuery 1.11.3, which is the current jQuery 1 stable release as of this writing.

jQuery 1.11.3 uses grunt-contrib-uglify 0.3.2 for minification, which in turn depends on uglify-js ~2.4.0. So uglify-js@2.4.23 satisfies the dependency, and we can manually edit package.json in grunt-contrib-uglify to force it to use this version.
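
The relevant bit of node_modules/grunt-contrib-uglify/package.json ends up looking something like this (a sketch; the rest of the file is unchanged):

    "dependencies": {
      "uglify-js": "2.4.23"
    }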

There are only a handful of places in jQuery where the De Morgan's Law rewrite optimization is triggered. None of these cause bugs, so we'll have to add some ourselves.

Backdoor Patch #1:

First let’s add a potential backdoor in jQuery’s .html() method. The patch looks weird and superfluous, but we can convince anyone that it shouldn’t actually change what the method does. Indeed, pre-minification, the unit tests pass.

After minification with uglify-js@2.4.23, jQuery’s .html() method will set the inner HTML to “true” instead of the provided value, so a bunch of tests fail.

[screenshot: jQuery unit tests failing after minification with uglify-js@2.4.23]

However, the jQuery maintainers are probably using the patched version of uglifyjs. Indeed, tests pass with uglify-js@2.4.24, so this patch might not seem too suspicious.

[screenshot: jQuery unit tests passing with uglify-js@2.4.24]

Cool. Now let’s run grunt to build jQuery with this patch and write some silly code that triggers the backdoor:

    <script src="../dist/jquery.min.js"></script>
    <button>click me to see if this site is safe</button>
    <div id='result'></div>
    <script>
        $('button').click(function(e) {
            // illustrative warning text; the demo's exact wording isn't reproduced here
            $('#result').html('<b>NOT SAFE.</b> Leave now!');
        });
    </script>

Here’s the result of clicking that button when we run the pre-minified jQuery build:

[screenshot: with the unminified build, clicking the button displays the "not safe" warning]

As expected, the user is warned that the site is not safe. Which is ironic, because this is the build without our minifier-triggered backdoor.

Here’s what happens when we instead use the minified jQuery build:

[screenshot: with the minified build, the warning is replaced by the string "true"]

Now users will totally think that this site is safe even when the site authors are trying to warn them otherwise.

Backdoor Patch #2:

The first backdoor might be too easy to detect, since anyone using it will probably notice that a bunch of HTML is being set to the string “true” instead of the HTML that they want to set. So our second backdoor patch is one that only gets triggered in unusual cases.

[screenshot: patch to jQuery.event.remove, which looks like a no-op but skips the special removal hooks after minification]

Basically, we’ve modified jQuery.event.remove (used in the .off() method) so that the code path that calls special event removal hooks never gets reached after minification. (Since spliced is always boolean, its length is always undefined, which is not > 0.) This doesn’t necessarily change the behavior of a site unless the developer has defined such a hook.
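
In case the arithmetic isn't obvious:

    // after the buggy rewrite, `spliced` is a boolean instead of an array
    console.log(true.length);   // undefined (booleans have no length property)
    console.log(undefined > 0); // false, so the hook-calling branch is never reached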

Say that the site we want to backdoor has the following HTML:

    <script src="../dist/jquery.min.js"></script>
    <button>click me to see if special event handlers are called!</button>
    <script>
        // Add a special event hook for onclick removal
        jQuery.event.special.click.remove = function(handleObj) {
            alert('special event removal hook was called!'); // illustrative message
        };
        $('button').click(function myHandler(e) {
            // Trigger the special event hook by removing this handler
            $('button').off('click', myHandler);
        });
    </script>
If we run it with unminified jQuery, the removal hook gets called as expected:

[screenshot: with the unminified build, the removal hook fires]

But the removal hook never gets called if we use the minified build:

[screenshot: with the minified build, the removal hook never fires]

Obviously this is bad news if the event removal hook does some security-critical function, like checking if an origin is whitelisted before passing a user’s auth token to it.


The backdoor examples that I’ve illustrated are pretty contrived, but the fact that they can exist at all should probably worry JS developers. Although JS minifiers are not nearly as complex or important as C++ compilers, they have power over a lot of the code that ends up running on the web.

It’s good that UglifyJS has added test cases for known bugs, but I would still advise anyone who uses a non-formally verified minifier to be wary. Don’t minify/compress server-side code unless you have to, and make sure you run browser tests/scans against code post-minification. [Addendum: Don’t forget that even if you aren’t using a minifier, your CDN might minify files in production for you. For instance, Cloudflare’s collapsify uses uglifyjs.]

Now, back to reading the rest of POC||GTFO.

PS: If you have thoughts or ideas for future PoC, please leave a comment or find me on Twitter (@bcrypt). The code from this blog post is up on github.

[Update 1: Thanks @joshssharp for posting this to Hacker News. I’m flattered to have been on the front page allllll night long (cue 70’s soul music). Bonus points – the thread taught me something surprising about why it would make sense to minify server-side.]

[Update 2: There is now a long thread about minifiers on debian-devel which spawned this wiki page and another HN thread. It’s cool that JS developers are paying attention to this class of potential security vulnerabilities, but I hope that people complaining about minification also consider transpilers and other JS pseudo-compilers. I’ll talk more about that in a future blog post.]

23 hours of DEF CON 23

James Kasten, Peter Eckersley and I gave a talk at DEF CON this year about the Let’s Encrypt project. There is no recording yet, but you can get off the edge of your seat now, because here are the slides [pdf] that the world has been waiting for with bated breath.

Given that we practiced for a total of 30 minutes and worked on slides until we were whisked onstage, the talk went pretttttty smoothly. In particular, James’ live demo of a certificate issuance and rollback on a parody enterprise website ~stole the show. My one-take documentary about innocent people who can’t figure out how to get an SSL certificate was also met with great acclaim, especially for the phenomenal cinematography (“A cross between The Blair Witch Project, Spinal Tap, and a Windows 95 home setup instruction video.”):

Unfortunately, we were in one of the smaller DEF CON rooms, so the majority of people who waited in line for the talk didn’t get to see it, and the ones who did get to see it became very close to each other (emotionally as well as physically, I hope).


the people who didn’t want to encrypt were forcibly removed from the room

45 minutes later, we were glad to be done and finally free to enjoy the rest of the conference!


peter, me, and james looking pretty psyched

. . . which we did by scrambling over to Dan Kaminsky’s talk on clickjacking prevention. Afterwards, we rescued Dan from his hordes of manic fans by inviting him to dinner.


peter and dan sure are happy to be done with their talks!

After dinner, I walked around a bunch with my favorite DEF CON 23 car hacker Samy (no offense to Charlie Miller, Chris Valasek, Marc Rogers, Kevin Mahaffey, and all of Car Hacking Village tho!). All the villages were closed, but luckily the Silent Circle booth in the vendor room was poppin’.


we made a silent Silent Circle circle

I was supposed to head to the airport shortly after, but I was having such an unexpectedly great time at DEF CON that I changed my flight.

After 3.5 energy drinks and an all-nighter, I ended up in a cigarette-smoke-infested $2 hot-dog stand on the far side of dawn. Then I hailed a cab to the airport before collapsing in a heap of exhaustion.


I’m pretty darn sad that DEF CON is over – it was a fantastic time, I met lots of cool people, and all 3 talks I attended inspired me to hack on something new. Too bad talk recordings aren’t online yet, but fortunately Travis Goodspeed left me with some good ol’ fashioned bedtime reading.



PS – working on some new hacks. Hopefully more blog posts soon after catching up on sleep.

this blog uses Content Security Policy

Having recently given some talks about Content Security Policy (CSP), I decided just now to enable it on my own blog to prevent cross-site scripting.

This lil’ blog is hosted by the MIT Student Information Processing Board and runs on a fairly uncustomized WordPress 4.x installation. Although I could have enabled CSP by modifying my .htaccess file, I chose to use HTML <meta> tags instead so that these instructions would work for people who don’t have shell access to their WordPress host. Unfortunately, CSP using <meta> hasn’t landed in Firefox yet (tracking bug) so I should probably do the .htaccess thing anyway.
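
For reference, the .htaccess version is basically a one-liner (a minimal sketch, assuming Apache with mod_headers enabled):

    <IfModule mod_headers.c>
      Header set Content-Security-Policy "script-src 'self'"
    </IfModule>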

It’s pretty easy to turn on CSP in WordPress from the dashboard:

  1. Go to <your_blog_path>/wp-admin/theme-editor.php. Note that this isn't available for blogs hosted on WordPress.com.
  2. Click on Header (header.php) in the sidebar to edit the header HTML.
  3. At the start of the HTML <head> element, add a CSP meta tag with your CSP policy. This blog uses <meta http-equiv="Content-Security-Policy" content="script-src 'self'"> which disallows all scripts except from its own origin (including inline scripts). As far as I can tell, this blocks a few inline scripts but doesn’t impede any functionality on a vanilla WordPress instance. You might want a more permissive policy if you use fancy widgets and plugins.
  4. [Bonus points] You can also show a friendly message to users who disable javascript by adding a <noscript> element to your header (example below).
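
Something like this works (placeholder text, not necessarily the message this blog uses):

    <noscript>
      <p>You have javascript disabled. This blog should still mostly work without it.</p>
    </noscript>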

A fun fact I discovered during this process is that embedding a SoundCloud iframe pulls in tracking scripts from Google Analytics, among others. Unfortunately, CSP on the embedding page (my blog) doesn’t extend to embedded contexts (the SoundCloud iframe), so those scripts will still run unless you’ve disabled JS.


That’s all. I might do more posts on WordPress hardening later or even write a WP plugin (*shudders at the thought of writing PHP*). More tips are welcome too.

UPDATE (8/24/15): CSP is temporarily disabled on this blog because Google Analytics uses an inline script. I’ll nonce-whitelist it later and turn CSP back on.

lessons from the ad blocker trenches

Greetings from the beautiful museum district of Berlin, where I’ve been trapped in a small conference room all week for the quarterly meeting of the W3C Technical Architecture group. So far we’ve produced two documents this week that I think are pretty good.

I just realized I have a few more things to say about the latter, based on my experience building and maintaining a semi-popular ad blocker (Privacy Badger Firefox).

  1. Beware of ad blockers that don’t actually block requests to tracking domains. For instance, an ad blocker that simply hides ads using CSS rules is not really useful for preventing tracking. Many users can’t tell the difference.
  2. Third-party cookies are not the only way to track users anymore, which means that browser features and extensions that only block/delete third-party cookies are not as useful as they once were. This 2012 survey paper [PDF] by Jonathan Mayer et al. has a table of non-cookie browser tracking methods, which is probably out of date by now: [table from the paper]
  3. Detecting whether a domain is performing third-party tracking is not straightforward. Naively, you could do this by counting the number of first-party domains that a domain reads high-entropy cookies from in a third-party context. However, this doesn’t encompass reading non-cookie browser state that could be used to uniquely identify users in aggregate (see table above). A more general but probably impractical approach is to try to tag every piece of site-readable browser state with an entropy estimate so that you can score sites by the total entropy that is readable by them in a third-party context. (We assume that while a site is being accessed as a first-party, the user implicitly consents to letting it read information about them. This is a gross simplification, since first parties can read lots of information that users don’t consent to by invisible fingerprinting. Also, I am recklessly using the term “entropy” here in a way that would probably cause undergrad thermodynamics professors to have aneurysms.)
  4. The browser definition of “third-party” only roughly approximates the real-life definition. For instance, a company’s main domain and its separate CDN or analytics domain are the same party from a business and privacy perspective but not from the cookie-scoping or DNS or same-origin-policy perspective.
  5. The hardest-to-block tracking domains are the ones who cause collateral damage when blocked. A good example of this is Disqus, commonly embedded as a third-party widget on blogs and forums; if we block requests to Disqus (which include cookies for logged-in users), we severely impede the functionality of many websites. So Disqus is too usability-expensive to block, even though they can track your behavior from site to site.
  6. The hardest-to-block tracking methods are the ones that cause collateral damage when disabled. For instance, HSTS and HPKP both store user-specific persistent data that can be abused to probe users’ browsing histories and/or mark users so that you can re-identify them after the first time they visit your site. However, clearing HSTS/HPKP state between browser sessions dilutes their security value, so browsers/extensions are reluctant to do so.
  7. Specifiers and implementers sometimes argue that Feature X, which adds some fingerprinting/tracking surface, is okay because it’s no worse than cookies. I am skeptical of this argument for the following reasons:
    a. Unless explicitly required, there is no guarantee that browsers will treat Feature X the same as cookies in privacy-paranoid edge cases. For instance, if Safari blocks 3rd party cookies by default, will it block 3rd party media stream captures (which will store a unique deviceid) by default too?
    b. Ad blockers and anti-tracking tools like Disconnect, Privacy Badger, and Torbutton were mostly written to block and detect tracking on the basis of cookies, not arbitrary persistent data. It’s arguable that they should be blocking these other things as soon as they are shipped in browsers, but that requires developer time.

That’s all. And here’s some photos I took while walking around Berlin in a jetlagged haze for hours last night:



Update (7/18/15): Artur Janc of Google pointed out this document by folks at Chromium analyzing various client identification methods, including several I hadn’t thought about.

pseudorandom podcast series, episode 1

The combination of my roommate starting a Rust podcast and a long, animated conversation with a (drunk) storyteller last night caused me to become suddenly enamored with the idea of starting my own lil’ podcast. Lately I keep thinking about how many spontaneous, insightful conversations are never remembered, much less entombed in a publicly-accessible server for posterity. So a podcast seemed like an excellent way to share these moments without spending a lot of time writing (I’m a regrettably slow writer). I’d simply bring folks into my warehouse living room, give them a beverage of their choice, and spend a leisurely hour chatting about whatever miscellaneous topics came to mind.

And so, wasting no time, today I asked my ex-ex-colleague Peter Eckersley if he would like to be my first podcast guest. Peter runs the technology projects team at the Electronic Frontier Foundation and, more importantly, lives 3 blocks away from me. Fortuitously, Peter agreed to have me over for a chat later this afternoon.

When I arrived, it turned out that one of Peter’s housemates was having friends over for dinner, so finding a quiet spot became a challenge. We ended up in a tiny room at the back of his house where every flat surface was covered in sewing equipment and sundry household items. As Peter grabbed a hammer to reconstruct the only available chair in the room, I set up my laptop and fancy (borrowed) podcast microphone. We gathered around as close as we could and hit the record button.

Except for one hiccup where Audacity decided to stop recording abruptly, the interview went smoothly and didn’t need much editing. Next time I’ll plan to put myself closer to the mic, do a longer intro, and maybe cut the length down to 15 minutes.

Overall, I had a fun time recording this podcast and am unduly excited about future episodes. Turns out a podcast takes ~10% of the time it would take to write a blog post with the same content. :)

For this and future episodes in the Pseudorandom Podcast Series, here’s an RSS feed. I’m going to reach SoundCloud’s limit of 180 minutes real quick at this rate, so I will probably host these somewhere else in the future or start a microfunding campaign to pay $15/month.

life update

i’ve finally recovered enough from a multi-week bout of sickness to say some things and put up some photos. lately i’ve felt exhausted and lethargic and unproductive to be honest. being sick probably had something to do with it; i sure hope next week gets better.

yesterday, someone told me they had a theory that everyone who sleeps at night (with rare exceptions) can only manage ~3 significant life events at a time. that sounds about right, but it feels like a lot has been going on. a partial, unordered list:

1. talked yesterday at the Yahoo Trust Unconference about the future of email security


photo credit Bill Childers

2. working on graceful degradation of hopes and feelings

3. writing software for Let’s Encrypt as an EFF Technology fellow

4. trying to make sane w3c standards with these fine folks from the W3C Technical Architecture group


photo credit Tantek Celik

5. packing bag(s) and moving to a new neighborhood (twice)

6. finding balance on a skateboard and otherwise

“I think emotional and crypto intelligence are severely underrated” – spectator at the Yahoo Trust Unconference.

rate-limiting anonymous accounts

Yesterday TechCrunch reported that Twitter now seems to be requiring SMS validation from new accounts registered over Tor. Though this might be effective for rate-limiting registration of abusive/spammy accounts, sometimes actual people use Twitter over Tor because anonymity is a prerequisite to free speech and circumventing information barriers imposed by oppressive governments. These users might not want to link their telco-sanctioned identity with their Twitter account, hence why they’re using Tor in the first place.

What are services like Twitter to do, then? I thought of one simple solution that borrows a popular idea from anonymous e-cash systems.

In a 1983 paper, cryptographer David Chaum introduced the concept of blind signatures [1]. A blind signature is a cryptographic signature in which the signer can’t see the content of the message that she’s signing. So if Bob wants Alice to sign the message “Bob is great” without her knowing, he first “blinds” the message using a random factor that is unknown to her and gives Alice the blinded message to sign. When he unblinds her signed message by removing the blinding factor, the original message “Bob is great” also has a valid signature from Alice!

This may seem weird and magical, but blinded signatures are actually possible using the familiar RSA signature scheme. The proof is straightforward and on Wikipedia so I’ll skip it here [2]. Basically, since RSA signatures are just modulo’d exponentiation of some message M to a secret exponent d, when you create a signature over a blinded message M’ = M*r^e (where r is the blinding factor and e is the public exponent), you also create a valid signature over M thanks to the distributive property of exponentiation over multiplication.

Given the existence of blind signature schemes, Twitter can do something like the following to rate-limit Tor accounts without deanonymizing them:

  1. Say that @bob is an existing Twitter user who would like to make an anonymous account over Tor, which we’ll call @notbob. He computes T = H(notbob) * r^e mod N, where H is a hash function, r is a random number that Bob chooses, and {e,N} is the public part of an Identity Provider’s RSA keypair (defined in step 2).
  2. Bob sends T to an identity provider. This could be Twitter itself, or any service like Google, Identica, Facebook, LinkedIn, Keybase, etc. as long as it can check that Bob is probably a real person via SMS verification or a reputation-based algorithm. If Bob seems real enough, the Identity Provider sends him Sig(T) = T^d mod N = H(notbob)^d * r mod N, where d is the private part of the Identity Provider’s RSA keypair.
  3. Bob divides Sig(T) by r to get Sig(H(notbob)), AKA his Identity Provider’s signature over the hash of his desired anonymous username.
  4. Bob opens up Tor browser and goes to register @notbob. In the registration form, he sends Sig(H(notbob)). Twitter can then verify the Identity Provider’s signature over ‘notbob’ and only accept @notbob’s account registration if verification is successful!
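
To make the algebra concrete, here's a toy run of steps 1-4 with textbook-sized RSA numbers and a made-up hash value (completely insecure parameters, just to show that the unblinded signature checks out):

    // toy modular arithmetic helpers using BigInt
    function modPow(base, exp, mod) {
      var result = 1n;
      base %= mod;
      while (exp > 0n) {
        if (exp & 1n) result = result * base % mod;
        base = base * base % mod;
        exp >>= 1n;
      }
      return result;
    }
    function modInverse(a, m) { // extended Euclid; assumes gcd(a, m) = 1
      var old_r = a % m, r = m, old_s = 1n, s = 0n, q, tmp;
      while (r !== 0n) {
        q = old_r / r;
        tmp = old_r - q * r; old_r = r; r = tmp;
        tmp = old_s - q * s; old_s = s; s = tmp;
      }
      return ((old_s % m) + m) % m;
    }

    // the Identity Provider's toy RSA keypair (p=61, q=53)
    var N = 3233n, e = 17n, d = 2753n;

    var H_notbob = 1234n; // stand-in for H("notbob") reduced mod N
    var r = 99n;          // Bob's random blinding factor

    var T = (H_notbob * modPow(r, e, N)) % N;    // step 1: blind the hash
    var sigT = modPow(T, d, N);                  // step 2: IdP signs the blinded value
    var sig = (sigT * modInverse(r, N)) % N;     // step 3: unblind (divide by r)
    // step 4: Twitter verifies with the IdP's public key that sig signs H(notbob)
    console.log(modPow(sig, e, N) === H_notbob); // true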

It seems to me that this achieves some nice properties.

  • Every anonymous account is transitively validated via SMS or reputation.
  • Ignoring traffic analysis (admittedly a big thing to ignore), anonymous accounts and the actual identities or phone numbers used to validate them are unlinkable.

Thoughts? I’d bet that someone has thought of this use case before but I couldn’t find any references on the Internet.



canvas #1


that could have been us, 2015
Oil pastels, lipstick, eyeliner, cold medicine, and ballpoint pen on canvas.

i painted this while standing in my bathroom on valentine’s day’s night, unable to sleep and grotesquely feeling the weight of the oncoming dawn. it was my first time drawing on canvas.

as i worked, i kept thinking about all these people passing to and from doomed relationships, that feeling of being stupidly and everlastingly perched on the brink between hope and mutilation. that’s more or less what this is about.