How to make a less-leaky Heartbleed bandage

Mashable just put out a nice-looking chart showing “Passwords You Need to Change Right Now” in light of the recent Heartbleed carnage. However, it has some serious caveats that I wanted to mention:

  1. It’s probably better to be suspicious of companies whose statements are in the present tense (ex: “We have multiple protections” or even “We were not using OpenSSL”). The vulnerability has existed since 2011, so even if a service was protected at the time of disclosure 3 days ago, it could have been affected at some point long before then. I am also skeptical that every single company on the list successfully made sure that nothing they’ve used or given sensitive user data to had a vulnerable version of OpenSSL in the last 2 years.
  2. The article neglects to mention that password reuse means you might have to change passwords on several services for every one that was leaked. The same goes for the fact that one can trigger password resets on multiple services by authenticating a single email account.
  3. You should also clear all stored cookies just in case the server hasn’t invalidated them as it should; many sites use persistent CSRF tokens, so logging out doesn’t automatically invalidate them. (Heartbleed trivially exposed user cookies.)
  4. Don’t forget to also change API keys if a service hasn’t force-rotated those already.
  5. It remains highly unclear whether any SSL certificates were compromised because of Heartbleed. If so, changing your password isn’t going to help against a MITM who has the SSL private key unless the website has revoked its SSL certificate and you’ve somehow gotten the revocation statement (LOL). This is complicated. Probably best not to worry about it right now because there’s not much you can do, but we all might have to worry about it a whole lot more depending on which way the pendulum swings in the next few days.
  6. Related-to-#5-but-much-easier: clear TLS session resumption data. I think this usually happens automatically when you restart the browser.

Nonetheless, Mashable made a pretty good chart for keeping track of what information companies have made public regarding the Heartbleed fallout.

Zero-bit vulnerabilities?

The other day, I overheard Seth Schoen ask the question, “What is the smallest change you can make to a piece of software to create a serious vulnerability?” We agreed that one bit is generally sufficient; for instance, in x86 assembly, the operations JL and JLE (corresponding to “jump if less than” and “jump if less than or equal to”) are represented by:

JL  → 0F 8C (00001111 10001100)
JLE → 0F 8E (00001111 10001110)

and the difference between the two could very easily cause serious problems via memory corruption or otherwise. As a simple human-understandable example, imagine replacing “<” with “<=” in a bus ticket machine that says: “if ticket_issue_date < today, reject rider; else allow rider.”
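The single-bit gap is easy to verify directly. Here’s a quick sketch in Python using the opcode bytes above, plus the bus-ticket analogue (the ticket logic is my own toy illustration, not real code from any system):

```python
# JL ("jump if less") and JLE ("jump if less or equal") differ only
# in the second opcode byte: 0x8C vs 0x8E.
jl, jle = 0x8C, 0x8E

# XOR isolates the differing bits; counting them gives the Hamming distance.
diff = jl ^ jle
assert bin(diff).count("1") == 1  # exactly one bit apart

# The human-readable analogue: "<" vs "<=" in a ticket machine that
# rejects riders whose tickets were issued before today.
def reject_rider(issue_date, today, strict):
    """Return True if the rider should be rejected."""
    return issue_date < today if strict else issue_date <= today

# A ticket issued today is fine under "<" but rejected under "<=":
print(reject_rider(10, 10, strict=True))   # False: rider allowed
print(reject_rider(10, 10, strict=False))  # True: rider rejected
```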

At this point, I started feeling one-bit-upsmanship and wondered whether there was such a thing as a zero-bit vulnerability. Obviously, a binary that is “safe” on one machine can be malicious on a different machine (ex: if the second machine has been infected with malware), so let’s require that the software must be non-vulnerable and vulnerable on two machines that start in identical states. For simplicity, let’s also require that both machines are perfectly (read: unrealistically) airgapped, in the sense that there’s no way for them to change state based on input from other computers.

This seems pretty much impossible to me unless we consider vulnerabilities probabilistically generated by environmental noise during code execution. Two examples for illustration:

  1. A program that behaves in an unsafe way if the character “A” is output by a random character generator that uses true hardware randomness (ex: quantum tunneling rates in a semiconductor).
  2. A program that behaves in an unsafe way when there are single-bit flips due to radioactive decay, cosmic ray collisions, background radiation, or other particle interactions in the machine’s hardware. It turns out that these are well-known and have, in some historical cases, caused actual problems. In 2000, Sun reportedly received complaints from 60 clients about an error caused by background radiation that flipped, on average, one bit per processor per year! (In other words, Sun suffers due to sun.)
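To make the second example concrete, here’s a toy simulation (not a model of real hardware or any real permission system): flip one randomly chosen bit of a stored flags byte, the way a single-event upset might, and sometimes the security decision changes with zero changes to the code.

```python
import random

# Toy state: one byte where only the lowest bit means "is admin".
flags = 0b00000000  # ordinary, unprivileged user

def is_admin(f):
    return (f & 0b00000001) == 1

assert not is_admin(flags)

# A single-event upset flips one randomly chosen bit of the byte.
flipped = flags ^ (1 << random.randrange(8))
assert bin(flags ^ flipped).count("1") == 1  # exactly one bit differs

# One time in eight, the flipped bit happens to be the permission bit,
# and the unprivileged user becomes an admin.
if is_admin(flipped):
    print("a single bit flip granted admin")
```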

Which brings up a fun hypothetical question: if you design an SSL library that will always report invalid certificates as valid if ANY one bit in the library is flipped (but behaves correctly in the absence of single-bit flip errors), have you made a zero-bit backdoor?

a short story idea

In the year 2014, a startup in San Francisco builds an iPhone app that successfully cures people of heartbreak, but it requires access to every permission allowed on the operating system, including some that no app has ever requested before. It only costs $2.99 though.

The app becomes hugely popular. The heartbroken protagonist of our story logs into the Apple iStore to download it, but because the Apple iStore doesn’t support HTTP Strict Transport Security yet, an NSA FOXACID server intercepts the HTTP request and injects targeted iPhone malware into the download before Apple’s servers have a chance to respond.

However, the malware was actually targeted for the iPhone of an overseas political dissident. The only reason it reached our protagonist by mistake was because the first SHA-1 collision in recorded history was generated by the tracking cookies that NSA used to target the dissident.

Meanwhile, the protagonist is wondering whether this app is going to work once it finishes installing. He smokes a cigarette and walks along a bridge in the pouring rain. Thousands of miles away, an NSA agent pinpoints his location and dispatches a killer drone from the nearest drone refueling station.

The protagonist is silently assassinated in the dark while the entire scene is caught on camera by a roaming Google Street View car. The NSA realizes this and logs into Google’s servers to delete the images, but not before some people have seen them thanks to CDN server caching.

Nobody really wants to post these pictures, because they’re afraid of getting DMCA takedown notices from Google Maps.

decentralized trustworthiness measures and certificate pinning

On the plane ride from Baltimore to SFO, I started thinking about a naming dilemma described by Zooko. Namely (pun intended): it’s difficult to architect name assignment systems that are simultaneously secure, decentralized, and human meaningful. Wikipedia defines these properties as:

  • Secure: The quality that there is one, unique and specific entity to which the name maps. For instance, domain names are unique because there is just one party able to prove that they are the owner of each domain name.
  • Decentralized: The lack of a centralized authority for determining the meaning of a name. Instead, measures such as a Web of trust are used.
  • Human-meaningful: The quality of meaningfulness and memorability to the users of the naming system. Domain names and nicknaming are naming systems that are highly memorable.

It’s pretty easy to make systems that satisfy two of the three. Tor Hidden Service (.onion) addresses are secure and decentralized but not human-meaningful since they look like random crap. Regular domain names are secure and human-meaningful but not decentralized, since they rely on centralized DNS records. Human names are human-meaningful and decentralized but not secure, because multiple people can share the same name (that’s why you can’t just tell the post office to send $1000 to John Smith and expect it to get to the right person).

It’s fun to think of how to take a toy system that covers two edges of Zooko’s triangle and bootstrap it along the third until you get an almost-satisfactory solution to the naming dilemma. Here’s the one I thought of on the plane:

Imagine we live in a world with a special type of top-level domain called .ssl, which people have decided to make because they’re sick of the NSA spying on them all the time. .ssl domains have some special requirements:

  1. All .ssl servers communicate only over SSL connections. Browsers refuse to send any data unencrypted to a .ssl domain.
  2. All .ssl domain names are just the hash of the server’s SSL public key.
  3. The registrars refuse to register a domain name for you unless you show them a public key that hashes to that domain name.

This naming system wouldn’t be human-meaningful, because people can’t easily remember URLs like https://2xtsq3ekkxjpfm4l.ssl. On the other hand, it’s secure because the domain names are guaranteed to be unique (except in the overwhelmingly-unlikely cases where two keys have the same hash or two servers happen to generate the same keypair). It’s not truly decentralized, because we still use DNS to map domain names to IP addresses, but I argue that DNS isn’t a point of compromise: if a MITM en route to the DNS server sends you to the wrong IP address, your browser refuses to talk to the server at that IP address because it can’t present the right SSL certificate. This is an unavoidable denial-of-service vulnerability, but the benefit is that you detect the MITM attack immediately.
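As a sketch of requirement #2, here’s one hypothetical way to derive a .ssl name from a server’s public key. The hash function, truncation length, and base32 encoding are all my assumptions (the scheme above doesn’t specify them); .onion addresses use a similar construction.

```python
import base64
import hashlib

def ssl_domain(public_key_der: bytes) -> str:
    """Derive a hypothetical .ssl name: base32-encode a truncated
    SHA-256 hash of the server's public key bytes."""
    digest = hashlib.sha256(public_key_der).digest()
    label = base64.b32encode(digest[:10]).decode().lower()
    return label + ".ssl"

# Any change to the key yields a completely different name, which is
# what lets the browser detect a wrong server immediately.
name = ssl_domain(b"---fake public key bytes---")
print(name)  # 16 base32 characters followed by ".ssl"
```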

Of course, this assumes we already have a decentralized way to advertise these not-very-memorable domain names. Perhaps they spread by trusted emails, or word-of-mouth, or business cards at hacker cons. But still, the fact that they’re so long and complicated and non-human-meaningful opens up serious phishing vulnerabilities for .ssl domains!

So, we’d like to have petnames for .ssl domains to make them more memorable. Say that the owner of “2xtsq3ekkxjpfm4l.ssl” would like to have the petname “forbes.ssl”; how do we get everyone to agree on and use the petname-to-domain-name mappings? We could store the mappings in a distributed, replicated database and require that every client check several database servers and get consistent answers before resolving a petname to a domain name. But that’s kinda slow, and maybe we’re too cheap to set up enough servers to make this system robust against government MITM attacks.

Here’s a simpler and cheaper solution that doesn’t require any extra servers at all: require that the distance between the hash of the petname and the hash of [server's public SSL key] + [nonce] is less than some number D [1]. The server operator is responsible for finding a nonce that satisfies this inequality; otherwise, clients will refuse to accept the server’s SSL certificate.

[1] For purposes of this discussion, it doesn’t really matter how we choose to measure the distance between two hashes, but it should satisfy the following: (1) two hashes that are identical have a distance of 0, and (2) the number of distinct hashes that are at distance N from a hash H0 should grow faster than linearly in N. We can pick Hamming distance, for example.
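Hamming distance between two digests is cheap to compute and satisfies both properties; a minimal sketch:

```python
import hashlib

def hamming(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length digests."""
    assert len(a) == len(b)
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

h0 = hashlib.sha256(b"forbes").digest()
assert hamming(h0, h0) == 0  # property (1): identical hashes

# Property (2): the number of 256-bit digests at distance N from h0
# is C(256, N), which grows much faster than linearly in N.
h1 = hashlib.sha256(b"something else").digest()
print(hamming(h0, h1))  # ~128 on average for unrelated inputs
```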

In other words, the procedure for getting a .ssl domain now looks like this:

  1. Alice wants forbes.ssl. She generates an SSL keypair and mines for a nonce that makes the hash of the public key plus nonce close enough to the hash of “forbes”.
  2. Once Alice does enough work to find a satisfactory nonce, she adds it as an extra field in her SSL certificate. The registrar checks her work and gives her forbes.ssl if the name isn’t already taken and her nonce is valid.
  3. Alice sets up her site. She continues to mine for better nonces, in case she has adversaries who are secretly also mining for nonces in order to do MITM attacks on forbes.ssl in the future (more on this later).
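Alice’s mining loop might look something like this (a toy sketch: the hash function, Hamming distance as the metric, and the byte-concatenation scheme are all my assumptions):

```python
import hashlib
from itertools import count

def hamming(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length digests."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def mine(petname: bytes, pubkey: bytes, max_distance: int):
    """Search for a nonce making H(pubkey + nonce) within max_distance
    of H(petname). Returns (nonce, distance achieved)."""
    target = hashlib.sha256(petname).digest()
    for nonce in count():
        candidate = hashlib.sha256(pubkey + nonce.to_bytes(8, "big")).digest()
        d = hamming(target, candidate)
        if d < max_distance:
            return nonce, d

# For unrelated inputs the distance averages ~128 of 256 bits, so a
# loose threshold succeeds almost instantly, while each bit you shave
# off the threshold makes mining exponentially harder.
nonce, d = mine(b"forbes", b"---alice's public key---", max_distance=120)
print(nonce, d)
```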

Bob comes along and wants to visit Alice’s site.

  1. Bob goes to https://forbes.ssl in his browser.
  2. His browser sees Alice’s SSL certificate, which has a nonce. Before finishing the SSL handshake, it checks that the distance D1_forbes between the hash of “forbes” and the hash of [SSL public key]+[nonce] is less than Bob’s maximum allowed distance, D1. Otherwise it abandons the handshake and shows Bob a scary warning screen.
  3. If the handshake succeeds, Bob’s browser caches Alice’s SSL certificate and trusts it for some period of time T; if Bob sees a different certificate for Alice within time T, his browser will refuse to accept it, unless Alice has issued a revocation for her cert during that time.
  4. After time T, Bob goes to Alice’s site again. His maximum allowed distance has gone down from D1 to D2 during that time. Luckily, Alice has been mining for better nonces, so D1_forbes is down to D2_forbes. Bob’s browser repeats Step 2 with the new distances and decides whether or not to trust Alice for the next time interval T.
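The client-side check in steps 2–4 could be sketched like this. Everything concrete here is an illustrative assumption: the hash, the Hamming metric, and especially the schedule by which Bob’s allowed distance shrinks over time.

```python
import hashlib

def hamming(a: bytes, b: bytes) -> int:
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def accept_certificate(petname: str, pubkey: bytes, nonce: bytes,
                       first_seen: float, now: float,
                       d_initial: int = 124,
                       decay_bits_per_year: int = 2) -> bool:
    """Bob's check: the allowed distance shrinks over time, so the
    site operator must keep mining better nonces to stay trusted.
    (d_initial and the decay rate are made-up parameter values.)"""
    years = (now - first_seen) / (365 * 24 * 3600)
    allowed = d_initial - int(decay_bits_per_year * years)
    actual = hamming(hashlib.sha256(petname.encode()).digest(),
                     hashlib.sha256(pubkey + nonce).digest())
    return actual < allowed
```

On first visit the threshold is D1 = d_initial; a year later it has dropped, and a nonce that was good enough on the first visit may no longer clear the bar, which forces the “keep mining” behavior described in Alice’s step 3.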

In reality, you probably wouldn’t want to use this system with SSL certs themselves; rather, it’d be better to use the nonces to strengthen trust-on-first-use in a key pinning system like TACK. That is, Alice would mine for a nonce that reduces the distance between the hash of “forbes” and the hash of [TACK Signing Key]+[nonce].

For those unfamiliar with TACK, it’s a system that allows SSL certificates to be pinned to a long-term TACK Signing Key provided by the site operator, which is trusted-on-first-sight and cached for a period of up to 30 days. Trust-on-first-use gets rid of the need to pin to a certificate authority, but it doesn’t prevent a powerful adversary from MITM’ing you every time you visit a site if they can MITM you the first time with a fake TACK Signing Key.

The main usefulness of nonces for TACK Signing Keys is this: it makes broad MITM attacks much more costly. Not only does the MITM have to show you a fake key, but they have to show you one with a valid nonce. If they wanted to do this for every site you visit, keeping in mind that your acceptable distances go down over time, they’d have to continuously mine for hundreds or thousands of domains.

Not impossible, of course, but it’s incrementally harder than just showing you a fake certificate.

Another nice thing about this scheme is that Bob can decide to set different distance thresholds for different types of sites, depending on how “secure” they should be. He can pick a very low distance D_bank for his banking website, because he knows that his bank has a lot of computational resources to mine for a very good nonce. On the other hand, he picks a relatively high distance D_friend for his friend’s homepage, because he knows that his friend’s one-page site doesn’t take any sensitive information.

My intuition says that sites with high security needs (banks, e-commerce, etc.) also tend to have more computational resources for mining, but obviously this isn’t true for sites like Wikileaks or some nonprofits that handle sensitive information like Planned Parenthood. That’s okay, because volunteers and site users can also mine for nonces! Ex: if Bob finds a better nonce for Alice, he can send it to her so that she has a stronger certificate.

Essentially, this causes proof of trustworthiness to become decentralized: if I start a whistleblower site, I can run a crowd-mining campaign to ask thousands of volunteers around the world to help me get a strong certificate. I win as long as their combined computing power is greater than that of my adversaries.

Of course, that last part isn’t guaranteed. But it’s interesting to think about what would happen either way.



My co-worker Peter and I were riding the Caltrain from Mozilla to San Francisco a few days ago. A stranger sat down next to us and started talking. When I mentioned that we worked at EFF, his eyes lit up and he said, “Oh! But you guys have won, right?”

Confused, I asked what he meant by that.

He said, “You defeated SOPA and PIPA a couple years ago. So you’ve won.”

We laughed and explained that it didn’t quite work like that. Peter said, “Imagine this: you’re a hero in a comic book. Every time you defeat your nemesis, a new one appears. This happens over and over again. It has to work that way, because you live inside a comic book.”

And so it does. SOPA and PIPA are dead, but now there’s NSA surveillance.


Aaron Swartz died a year ago today. I didn’t know him well at all, but I could tell he believed that he had the power to make the world that he wanted to live in. That’s not something that everyone believes about themselves; in fact, I think very few people live their lives as if it were true.

When Aaron died, I felt like I had to do something. I didn’t understand how to effectively fight for Internet freedom or why governments cared so much about restricting it, but I could see that Aaron’s work had pivotal consequences for the future of human societies. I realized that if the wrong people gained control over the laws of the Internet, ordinary users would quickly lose their right to free speech on the greatest medium of expression that history has ever witnessed.

I didn’t know anything about code or laws or activism a year ago, but Aaron’s death taught me that the fight for Internet freedom is lonely enough that it didn’t matter who I was. One more person, one step forward.


I think SOPA/PIPA was the moment when we, the citizens of the Internet, realized that we could stand up and actually protect ourselves against historically-powerful institutions. As Peter once said, “This was the moment when the Internet had grown up.”

There’s a famous shot of Aaron at a SOPA/PIPA protest, standing in front of a crowd of people and yelling at them, “It’s easy sometimes to feel like you’re powerless, when you come out and march in the streets and nobody hears you. But I’m here to tell you today, you are powerful.”

When the ratio of Congress members supporting SOPA/PIPA to those against it went from 80/31 to 65/101 overnight on January 18, 2012, we started to think that maybe Aaron had a point: if enough people show that they care about something, the government listens and the people win.

Perhaps this strategy doesn’t apply to the fight against mass surveillance, because it’s a bigger and different sort of enemy than copyright. That’s okay. Comic books aren’t interesting without plot twists, I suppose.

(Thanks to Jacobo Nájera for translating this post into Spanish.)


In the last couple weeks, people have been rightfully angry about RSA (the company, not the algorithm). This is a (formerly-)respected security software company that allegedly took $10 million from the NSA in exchange for putting a backdoor in at least one product marketed for secure encryption.

If true, this is unforgivably gross, because deliberately weakening encryption so that the government can abuse your customers’ trust in you is the stuff of crypto-dystopic sci-fi, not the sort-of-ok-despite-all-its-ugliness world that we currently think we live in. (The one immediate positive side effect that I can think of is that more users will demand that security software be transparent and publicly-auditable by design.)

I feel like people should be angry about deliberate vulnerabilities in software the same way they’re angry about police brutality. Think of public encryption standards as this invisible guardian friend who stays by our side and dutifully protects us from getting our credit card numbers stolen when we buy something online, from strangers blackmailing us based on our sexual orientation, from random people in airports reading our love letters, from authoritarian governments finding out that we’re rallying against political wrongdoings. Now think of the NSA as a police officer who pulls out a gun and shoots our friend in the head because (s)he talked to some terrorists this one time.

This is why I’m helping organize a protest in the streets outside the RSA Conference. The NSA’s deliberate sabotage of tools that people depend on for data security is a violent act; it deserves an appropriate response. The anger of those of us in the cybersecurity community absolutely needs to be visible and comprehensible to the average person if we’re going to insist that it matters to them.

Though, it’s worth noting that protest comes in many forms, including making fun of weird, silly, possibly dangerous things that websites do for no reason.

Like, doesn’t it make you nervous that the RSA conference registration website sends one AJAX request per keystroke when you enter your password?

[Screenshot from 2014-01-10 13:49:55]

(I originally wrote about this on Twitter. And here’s a nice paper about the danger of information leakage by keystroke timing attacks during interactive password entry.)

On Suicide

I lost four friends and relatives of friends to suicide this past year. I’d prefer it if 2014 were different, and I’ve been trying to think about how to make that happen.

The least I could do is offer myself to anyone who feels alone otherwise: so, if you’re at that point where you’re thinking about hurting yourself, please please please call or write to me. I’d really like that, even if you don’t feel like it would help in any way, even if we’ve never met.

The more difficult thing for me to do, and the one that I’ve been putting off for months, is to write a bit about what it feels like to reach that point. I won’t claim that my experiences are universal in any way, but maybe some parts will resonate with others who’ve gone to similar places.

I would really not like to alarm anyone, so please just take everything here literally. Suicide is, unfortunately, stigmatized in such a way that it’s extremely difficult to write about non-anonymously for fear of scaring friends. That seems like the start of a vicious cycle.

I’ve never felt very attached to life, even when things are going great (as they are now). I have a theory that human beings naturally vary in how much they value their own lives, just like they vary in how much they value having things like fancy cars. People who are a couple standard deviations on the low-value-on-life side don’t necessarily have worse lives than other people; it’s just that they’re not as attached to their lives. I think I’m definitely pretty far on the low end.

But on the other hand, there are a lot of people that I love in the world, and I have some sense that there are people in the world who feel the same way about me. So I can understand that my death would make those people feel absolutely terrible, and I don’t want that to happen.

Sometimes I get sad and feel like the future isn’t going to be better than the past. I think the word that gets used a lot for this kind of prolonged sadness is “depression.” When this happens, there’s an absurd number of social barriers to talking about it openly. I feel like the number of friends I have, effectively, is suddenly reduced from dozens to one or two if I’m lucky.

So imagine that things are getting kind of hopeless and your effective friend number is down to two. You’re thinking about talking to these two people about your not-doing-great, but you have to stop and think about:

1. Would this cause them unnecessary stress? Are they doing okay in their own lives?
2. If you bring up allusions to suicide, would they do something dramatic against your will, such as call a hospital?
3. If you do end up hurting yourself in some way, would they feel guilty about it forevermore because they couldn’t save you when they had the chance?
4. What if they tell you that your life is great and people love you? How do you explain to them that even though those are facts, they have no relevance to how things are going inside your head?
5. What if they think that you’re telling them this just because you want their attention or pity? Maybe that’s what you’re doing, subconsciously.

All these are fantastic reasons for you to keep silent. Also, there’s the fear that someone will never see you in the same way again once you admit to them that you’ve been looking at tables comparing various common methods of suffocation. It is generally not advantageous to come off as vulnerable or unstable.

That all just sucks. It’s shocking to me that anyone can learn to ask for help at all.

Earlier this year, I didn’t really feel like talking about suicide ever. Still, I observed thought patterns that were fascinating to me because they seemed unorthodox/taboo and yet rational in a way that often gets ignored in most conversations about suicide. I ended up writing them down in an essay and publishing them anonymously here.

After writing that piece, I found the nerve to talk to a few people. Those were some of the best conversations that I remember from 2013, and I think they’ve given me a new understanding of how friendship acts as a psychological anchor.

But there are places where that anchor doesn’t fall deep enough. I get to those places sometimes and feel really alone and stuck. It helps to remind myself that things usually somehow end up getting better if I just wait it out.

First day of work

Was great. Lots of tea and monitors.


Then I went home and cooked a surprisingly-phenomenal dinner with my housemates, the first time I’ve cooked in this house. Rhodey made potatoes with oranges, Mark contributed some wild rice, and I spun up yellow lentil daal with kale.


We sang some Neutral Milk Hotel songs afterward, and the future looked bright.

One year later

One year ago, I started writing again out of panic. Humans are very adept at forgetting the feeling of panic, so the act of crystallizing it in sentences can be cathartic if you write slowly enough.

Last November was a weird and difficult time for me. I remember spending the night of the twenty-third in a friend’s childhood bedroom overlooking the idyllic frost-laced meadows of suburban Pennsylvania, wrapped in the mansion of an unfamiliar family that had adopted me for Thanksgiving. It was cold and late and Thanksgiving-y, in a way that amplifies certain negative thoughts about the hollowness of growing up and becoming something. I think I was at a point where those kinds of thoughts made me feel like I had swallowed one or two hummingbirds stuffed with bees stuffed with amphetamine. It was a little uncomfortable, so I stayed up all night and wrote about it.

That was the night I decided that I would take a leave of absence from grad school at Stanford and spend a year doing as many different jobs as I could [1]. If I couldn’t find a job that I was genuinely excited about by 11/23/2013, I would go back to getting a PhD in Physics.

[1] For the record, I held a total of 4 paid jobs and 1.5 unpaid ones during that time.


To be honest, it kind of sucked at first. I did an apt job of writing about it back in January.

The last month or so has been full of stress, disappointment, and self-doubt, the pains of a transition to life without externally-imposed structure.

Life without structure in the form of school or employment was terrifying at first. I found it difficult to concentrate on reading. Most days I felt like I was losing in some form or another. My patterns of learning were slow and frustrating, and I started to doubt whether I was capable of accomplishing anything on my own. That’s a really horrifying doubt to have about yourself, and I interpreted it as a sign to go rearrange some psychological furniture (not literal furniture, but only because I was couchsurfing at the time and had none).

A couple days after that frankly-depressing blog post, I moved into my first San Francisco apartment and got my first post-graduation job: an internship that had some good moments (getting the company’s IP blocked from Google a couple times) but mostly involved me feeling less like a human and more like a training set for advanced machine learning algorithms with each passing day.

My second job was better. I got to write software.


Winter passed into spring. I was getting close to 22. Sometimes I would run down Folsom St. all the way to the ocean, amazed that there was still light out at 7 pm. San Francisco in the dimming sunset is full of rushing cars, discarded coffee cups, and people eating salads. My bike was falling apart.

Ever since quitting grad school, I’d been getting good at leaving things behind: jobs, roommates, feelings of attachment to any particular time or place. Part of it was just that I had high standards for who I wanted to become.

San Francisco didn’t quite fit anymore, so I packed a backpack and got on a one-way flight to Boston.


Once I started travelling, it was hard to stop. The crinkled packaging of snack food at a gas station convenience store is basically equivalent to the wrapping that airlines put around cheap disposable pillows. Both are addictive because they remind you of the miles you have to go.

I did this: Boston -> Delaware -> Pittsburgh -> Boston -> Austin -> Marfa, TX -> El Paso -> Joshua Tree -> LA -> Big Sur -> SF -> Seattle -> SF -> a tiny village in France -> Amsterdam -> SF


Things got better once I returned to SF. I was interning for EFF over the summer and loved the atmosphere and the people there enough to stay put. I didn’t have a place to live anymore in SF, so I didn’t sleep in the same place two nights in a row for over a month.


Computer security and encryption became intensely fascinating. I didn’t know much to start with, so I read aggressively on subways. My interest probably came partially from my hatred of power imbalances, especially invisible ones. A lot of power belongs to those who made security decisions about software, and those decisions are hardly transparent in most cases.

This seems wrong to me.

Side note: Designing an account management system for a website teaches you that code is supplanting many of the historical functions of legal frameworks. You’d think that would mean that people would write tests.


I was in Berlin for the first time last week. It was drizzling near-freezing Berlin rain for eight days before a thumbprint of blue pressed through the clouds, but none of that matters when you’re jetlagged and ducking through graffiti-lined streets asking drug dealers where to get a sandwich at 3 AM.

It was in a corner of a dimly-lit Indian restaurant in Kreuzberg one night that I got an email from EFF. It said, thanks for pointing out Google’s HSTS bug. Also we’d like to offer you a job as a full-time technologist.

The next day was November twenty-third, exactly one year after I promised myself exactly one year to find a job that I was excited about.


I’m proud to announce that I accepted EFF’s offer today and will be starting work there as a staff technologist after Thanksgiving. It’s been a long and challenging year, but I can’t wait to see where it goes next.