RSA public exponent size (imperialviolet.org)
80 points by taylorbuley on March 16, 2012 | 22 comments


Does anyone know what the "Bleichenbacher bug" was that scared people away from using 3 for the exponent?


RSA signature verification code had been written (in many places, most notably in Mozilla's NSS library) to "parse" the RSA signature message block that resulted from the public key verification operation. RSA signature verification is a little like RSA encryption "inverted", and the element being encrypted is a hash of the content being signed. What vulnerable implementations would do was, they'd perform the public key transform, extract the hash, and then just check the hash against the content.

The problem is that a SHA2 hash of a message is much smaller than an RSA message block. The remainder of the content of the block is supposed to be "padding". "RSA padding" is one of the worst names in cryptography, because it isn't so much "padding" as it is "armor". If you don't pad in a very particular way, and check the padding scrupulously, you end up with multiple different vulnerabilities.

In this case, the vulnerability was that libraries like NSS weren't checking the padding at all (they naively assumed that if an RSA public key operation produced a SHA2 hash of a complicated message, whoever generated the signature must have the private key). The Bleichenbacher attack was that for E=3 RSA, the unchecked padding created a huge amount of "head room" in which garbage data would be ignored. Because of the low exponent, the cubing performed during verification might never wrap the modulus; in other words, the modulus was irrelevant, because the math worked out to a simple cubing/cube root without ever hitting it. The bits produced by this "key oblivious" E=3 signing operation were gibberish, of course... but the RSA implementations weren't checking them, because they were "just padding". As long as the message bits included a hash in a predictable place, they looked valid.

The correct way to perform this operation was not to extract anything from the RSA message block produced by verification. Instead, the verifier should have simply created their own properly-padded message block from the original content and verified the whole block byte-for-byte. So the bug was, basically, trying to parse RSA signature blocks at all.
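That encode-then-compare approach might look roughly like this in Python, for SHA-256 and PKCS#1 v1.5 (a minimal sketch; the helper names are mine, the DigestInfo prefix is the standard PKCS#1 constant):

```python
import hashlib
import hmac

# ASN.1 DigestInfo prefix for SHA-256, a fixed constant from PKCS#1
SHA256_PREFIX = bytes.fromhex("3031300d060960864801650304020105000420")

def pkcs1_block(message, mod_bytes):
    # Rebuild the entire EMSA-PKCS1-v1_5 block for this message:
    # 00 01 FF..FF 00 <DigestInfo> <hash>
    digest = hashlib.sha256(message).digest()
    pad_len = mod_bytes - 3 - len(SHA256_PREFIX) - len(digest)
    return b"\x00\x01" + b"\xff" * pad_len + b"\x00" + SHA256_PREFIX + digest

def verify(sig, e, n, message):
    # Compare the whole block byte-for-byte; never parse what came
    # out of the public-key operation.
    mod_bytes = (n.bit_length() + 7) // 8
    actual = pow(sig, e, n).to_bytes(mod_bytes, "big")
    return hmac.compare_digest(actual, pkcs1_block(message, mod_bytes))
```

There is nothing to parse and so nothing to parse wrongly: either all the bytes, padding included, are exactly what the verifier itself computed, or the signature is rejected.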

The end result was, because RSA verifiers were effectively ignoring most of the signature bits in the message, attackers could use trivial (as in, type- it- into- a- python- repl) math to generate valid-looking signatures for any SHA2 hash.
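The "type it into a Python REPL" math can be sketched roughly like this, assuming a hypothetical 3072-bit modulus and a broken verifier that checks only the header-plus-hash bytes at the front of the block and ignores the rest:

```python
import hashlib

def icbrt(n):
    # floor integer cube root via binary search (floats are too imprecise here)
    lo, hi = 0, 1 << (n.bit_length() // 3 + 2)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

MOD_BYTES = 384  # hypothetical 3072-bit modulus
msg = b"attacker-chosen message"
prefix = bytes.fromhex("3031300d060960864801650304020105000420")  # SHA-256 DigestInfo
header = b"\x00\x01\xff\x00" + prefix + hashlib.sha256(msg).digest()

# Put the hash up front, and fill the rest of the block with zero "garbage"
# that the broken verifier never looks at.
target = int.from_bytes(header.ljust(MOD_BYTES, b"\x00"), "big")

forged_sig = icbrt(target) + 1   # plain integer cube root: no private key needed
cube = forged_sig ** 3           # smaller than any 3072-bit modulus, so the
                                 # mod-n reduction during verification is a no-op
out = cube.to_bytes(MOD_BYTES, "big")
assert out[: len(header)] == header  # naive verifier sees valid padding + hash
```

Rounding the cube root up perturbs only the low "garbage" bits; the padding header and the hash at the top of the block come back untouched, which is all the vulnerable code ever checked.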

This bug was in RSA code for years and years and years.

Don't build crypto.


There was a nice 5-part series you and I wrote on this topic back in 2006. However, parts of it are no longer available. Could you rescue it from your archives, with diagrams? Thanks.

http://chargen.matasano.com/chargen/2006/9/18/rsa-signature-...


Nate thinks that by asking me publicly, I'm more likely to get this done than the last 10 times he's asked me privately. :)



Bleichenbacher often comes out of hiding and posts some terrible crypto bug, usually based on a slight implementation deviation from best practices.

The above bug is in RSA encryption, not signing, and is much more interesting technically than the e=3 padding verification bug. Just by revealing a different error message, attackers can use your server as a decryption oracle. It's different than the POET/BEAST attacks though, which are on block cipher modes.

I tried to write a clear review of the paper here: http://rdist.root.org/2008/01/07/ssl-pkcs-padding-attack/

The amusing thing is that he usually picks a random toolkit you've never heard of to attack, and then someone else (Hal Finney in the case of the e=3 padding bug) realizes it's a widespread issue in much more important systems.


This was actually a different attack on RSA by the same person! tptacek described the padding attack well in a sibling comment.


One reason to use E>3: Side channel attacks which reveal random bits of exponents (e.g., the hyperthreading attack) are much easier with a small public exponent.


Had no idea Bleichenbacher was at Google. Good get.


In case you are not familiar with dnssec-keygen:

  -e If generating an RSAMD5/RSASHA1 key, 
     use a large exponent.


Wait a minute. Is it just me or is the author talking about fixing the public key at a default value? I hope it is meant to be read as the default maximum bit SIZE of the public key instead.


An RSA public key is made up of two parts -- a 'modulus' and an 'exponent'. The modulus is the part you're thinking about -- very large, computed from your private key, hard to factor, etc. He is talking about the exponent part, which can indeed be very small (and the same everywhere, even).


I was taught in a crypto class a few years ago that an algorithm commonly used to encrypt messages runs in time proportional to the number of 1 bits in the binary representation of e, thus a smaller e is better and so 3 (binary 11) is a very good candidate efficiency-wise. Is this true in the real world?


Raising something to the 3rd power in modular math is done by computing:

    x² = x · x
    x³ = x · x²
In other words, two modular multiplications. I have grouped them like this because raising something to the 17th power in modular math is done by computing:

    x² = x · x
    x⁴ = x² · x²
    x⁸ = x⁴ · x⁴
    x¹⁶ = x⁸ · x⁸
    x¹⁷ = x · x¹⁶
So that's 5 multiplications. If the exponent is 2^m + 1, we can generalize to say that it requires m + 1 multiplications; 17 is 2^4 + 1 and thus requires 4 + 1.
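The square-and-multiply pattern above can be written as a short sketch (the function name is mine) that also counts the modular multiplications:

```python
def modexp_count(x, e, n):
    # left-to-right square-and-multiply; returns (result, multiplication count)
    result = x % n
    mults = 0
    for bit in bin(e)[3:]:           # binary digits of e after the leading 1
        result = result * result % n  # always square
        mults += 1
        if bit == "1":                # multiply in x only on set bits
            result = result * x % n
            mults += 1
    return result, mults

assert modexp_count(7, 3, 1009) == (pow(7, 3, 1009), 2)    # e = 3: 2 mults
assert modexp_count(7, 17, 1009)[1] == 5                   # e = 17: 5 mults
assert modexp_count(7, 65537, 1009)[1] == 17               # e = 2^16 + 1: 17 mults
```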

Of course, we're a long way from that, even. To raise something to the 2¹⁶ + 1 = 65537 power requires 17 multiplications. When you see times of 10.6 μs and 23.9 μs for e = 3 and e = 65537 respectively, this means that the multiplication isn't really the bottleneck yet. Let us imagine that each exponentiation also incurs a constant setup time; then we could say that:

    10.6 = 2 t + C
    23.9 = 17 t + C
Solving, I get that the constant overhead is 8.8 μs while the time t for a multiplication is just 0.9 μs or so. So the constant time is about 10 multiplications worth! We can then test this for the next case, e = 2^32 + 1, which is listed as 42.7 μs: the linear fit above suggests 0.9*33 + 8.8 = 38.5, which is not too bad of an estimate. (I suspect that it starts to use the general exponentiation algorithm, which requires some bitwise operations to decide whether to multiply into an accumulating variable; that control logic might make up the missing 4 μs.)
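The two-point fit is easy to reproduce (exact coefficients predict about 38.1 μs; the 38.5 above comes from using the rounded 0.9 and 8.8):

```python
# two-point fit of time = mults * t + C, using the 10.6 us (e=3, 2 mults)
# and 23.9 us (e=65537, 17 mults) measurements quoted above
t = (23.9 - 10.6) / (17 - 2)   # per-multiplication time, ~0.89 us
C = 10.6 - 2 * t               # constant overhead, ~8.8 us
prediction = 33 * t + C        # for e = 2**32 + 1 (33 multiplications)
```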

All of the exponents above have the same number of 1 bits (two). The algorithm runs in time proportional to the total number of bits in the binary representation of e, not just the 1 bits. Other than that you're mostly right: in the real world there are evidently constants and perhaps some curvature when nonstandard exponents are used.


Not quite: the cost of RSA encryption is O(k^3), where k is the total number of bits, not the number of 1's. But it is cheaper to have fewer 1 bits, as (abusing big-O notation) the cost is more like O(3k^3 - z^3), where z is the number of 0 bits. That's why some people use 2^32+1 instead of 2^32-1.

In theory the size of e doesn't matter, but a larger e does make it harder to brute force the message text.


You're mixing up the number of bits in the RSA modulus n, and the number of bits in the exponent e.

Integer modular multiplication is O(n^2)[1]. Binary exponentiation, the most common such algorithm, requires O(log e) modular multiplications. It is therefore trivial to get to the O(n^3) figure of general modular exponentiation.

RSA encryption with e = 3 is much simpler, consisting of exactly 2 modular multiplications. That makes it O(n^2).

Note that an exponent with very low hamming weight can still be O(n^3). For example e = 2^(n-1) + 1 requires n multiplications, therefore O(n^3).

[1] If you want to go all theoretical, it can be as low as O(n log n log log n) using Schoenhage-Strassen. But that is not practical for 1024--4096 bit integers.
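That counting argument can be stated as a one-liner (a sketch; the helper name is mine): square-and-multiply costs one squaring per bit after the first, plus one multiplication per additional set bit.

```python
def mult_count(e):
    # modular multiplications in left-to-right square-and-multiply:
    # (bit length - 1) squarings + (popcount - 1) multiplies
    return (e.bit_length() - 1) + (bin(e).count("1") - 1)

assert mult_count(3) == 2            # e = 3
assert mult_count(65537) == 17       # e = 2^16 + 1
assert mult_count(2**1023 + 1) == 1024  # low Hamming weight, still O(n) multiplies
```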


Those are values. 2^16+1 bits would be a very big number.


Not anywhere near big enough to be cryptographically secure. RSA uses 2 numbers as the public key, e and n. n should be more than 1024 bits, and it really doesn't matter how big e is. The default for e for a while (and cryptographically secure in theory) was 3 because computing y=x^3 mod n (aka RSA encryption) was cheap. What matters in RSA though is n being large and hard to factor, not in e being large.

EDIT: my bad, I misread the "2^16+1 bits" as 2^16+1 every time I read it.


While you are correct, it looks to me like tedunangst thought the numbers should have a bit length of 65,537 bits. That's frakken huge. And not at all what this article says.

He said "2^16+1 bits would be a very big number." That's 65,537 bits.


Actually, I'm saying it is not the bit length. But that seems to be the interpretation of the original comment.


2^16 is significantly larger than 1024.


To clarify, I meant to write "default maximum size" (and thus the ceiling of log_2 of that value as the max bit size). That aside, downvoting me for voicing a reasonable question that spurred a subthread with useful dialogue seems a bit immature.



