Hacker News

if timing is so critical to these attacks, it seems adding a tiny random delay (on the order of a millisecond) to response times would completely prevent this


No. Adding noise to a signal increases the amount of filtering and the number of measurements you need to take. It doesn't eliminate the signals. The vulnerability is the signal; the fix --- a constant-time HMAC comparison function --- is to eliminate the signal.
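The constant-time comparison being described can be sketched like this (a minimal Python illustration under the assumption that both inputs are byte strings, not any particular library's implementation):

```python
# Sketch of a constant-time byte comparison: XOR each pair of bytes and
# OR the differences together, so every byte is always examined and the
# running time does not depend on where (or whether) the inputs differ.
def constant_time_eq(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False  # length is usually public for fixed-size MACs
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0
```

The accumulator pattern is the key design choice: there is no data-dependent branch inside the loop, so there is no early exit for an attacker to time.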


So then is the fix as simple as removing the early "break;" statement from a strcmp?


Depends on the rest of the strcmp implementation. You might still leak information such as the number of correct characters if responding to a correct character takes a different amount of time than responding to an incorrect character. Ever played Mastermind? Same theory here.
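The leak being described is the classic early-exit compare, sketched here in Python (a C strcmp has the same shape):

```python
# Early-exit comparison: it returns as soon as a byte differs, so the
# running time grows with the length of the matching prefix. That is
# exactly the per-character signal a Mastermind-style attacker wants.
def leaky_eq(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False  # the early "break" that leaks
    return True
```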

Your best bet is to generate a huge body of inputs (including the relevant special cases), and tweak the code until it takes the same amount of time for all of them.
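That testing approach might look roughly like this (a sketch with made-up names, using only the standard library; real measurement needs statistical analysis, not just raw totals):

```python
import timeit

# Time a comparison function over a body of inputs, including the
# special cases that tend to expose early exits.
def profile_compare(fn, secret, candidates, n=20000):
    # Returns total seconds for n comparisons of each candidate.
    return {c: timeit.timeit(lambda: fn(secret, c), number=n)
            for c in candidates}

secret = b"\x00" * 20
candidates = [
    b"\xff" + b"\x00" * 19,   # differs at the first byte
    b"\x00" * 19 + b"\xff",   # differs only at the last byte
    b"\x00" * 20,             # exact match
]
timings = profile_compare(lambda a, b: a == b, secret, candidates)
```

The point is just to exercise the same inputs repeatedly and look for input-dependent variation; in practice you would compare the distributions statistically rather than eyeball the totals.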


The number of characters would matter for plaintext passwords but not for an HMAC. You gain no practical advantage from knowing whether I used a SHA1 or SHA256 HMAC. Yet another reason not to use plaintext passwords.


Perhaps the fix is to comment it out and explain why. ;)


  "For every problem there is always a solution
   that is simple, obvious, and wrong." -- Mark Twain
I'm not an expert, but I know several people who are. Apparently the literature explains clearly why this most obvious of fixes is, as Twain predicts, wrong. The simple jitter that you can add is dealt with by statistical techniques.

As I say, I'm not an expert, but if you google this it should give you references to papers that discuss the issues.


On the second page of the article:

>Program the system to take the same amount of time to return both correct and incorrect passwords. This can be done in about six lines of code, Lawson said.


Only by introducing a delay where none is needed.

This makes things slower for the rest of us. 1 extra millisecond per user * 8 billion users * 10 logins a day == Lots Of Lost Man Hours, probably enough to rebuild the great pyramids of Egypt by hand every year.

Security researchers are the reason we can't have nice things. :)


The funniest part about these discussions is that we're discussing an optimization that exclusively helps attackers. Virtually all HMAC candidate hashes are correct all the way through the final byte, meaning that even in a classic short-circuited compare, you still have to read everything. In virtually all traffic, you never get to take that short circuit. The only time short-circuited comparisons ever make things faster is when an attacker is waiting for a rejection.


However, in many high-level languages == is written in C, and reimplementing it in the high-level language can be quite slow in comparison.


You know, it'd be handy if such high-level languages implemented a separate =$= operator that worked just like ==, but was timing-independent.
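In Python at least, such a primitive already exists in the standard library: hmac.compare_digest behaves like == for bytes but runs in time independent of where the inputs differ.

```python
import hmac

# hmac.compare_digest is Python's timing-independent equality check,
# intended for comparing MACs and digests.
print(hmac.compare_digest(b"deadbeef", b"deadbeef"))  # True
print(hmac.compare_digest(b"deadbeef", b"deadbeee"))  # False
```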


That effectively already exists. The internet isn't instant or consistent, and this is about remote timing attacks.


The internet is statistically consistent, which is all the attack needs.


Even if you added a random pause of up to one second, it would still be "statistically consistent"; it would just require far more samples to detect millisecond variations.


(delayed, but) whups, my mistake. Read that wrong.


Nate Lawson debunked this and other arguments in a blog entry linked in the thread:

http://rdist.root.org/2010/01/07/timing-independent-array-co...



