Thursday, June 19, 2014

World's Nerdiest Glasses

Here I am modeling our "idealized head tracker" glasses for the Fire phone announcement:


We built these in the early days of developing our head tracker. You can see three small IR lights mounted on the frame, which made it easy to track the head's location in space. We used them to emulate a near-perfect head tracker, which allowed us to achieve two important goals:
  • Unblock teams that needed a working head tracker immediately.
  • Learn how close to perfection we had to be in order to reach shippable quality (our exit criteria). Because we started with a near-perfect signal we could insert synthetic noise and latency in a controlled manner to measure how it would impact the user experience.
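
As a rough illustration of that second goal, degrading an ideal signal in a controlled way might look something like the sketch below. This is not our production code; HeadSample, the noise level, and the delay are all illustrative stand-ins.

// Hypothetical sketch: degrade an "ideal" head-tracking sample so we can
// measure how synthetic noise and latency affect the experience.
// HeadSample, the noise level and the delay are illustrative stand-ins.
using System;
using System.Collections.Generic;

struct HeadSample { public double X, Y, Z; public double TimestampMs; }

class SignalDegrader
{
    private readonly Random _rng = new Random();
    private readonly Queue<HeadSample> _delayLine = new Queue<HeadSample>();

    public double NoiseStdDev = 2.0;     // synthetic positional noise
    public double AddedLatencyMs = 30.0; // synthetic latency

    // Feed in a near-perfect sample; get back a noisier, delayed one
    // (or null if the delayed sample isn't "due" yet).
    public HeadSample? Process(HeadSample ideal, double nowMs)
    {
        ideal.X += Gaussian() * NoiseStdDev;
        ideal.Y += Gaussian() * NoiseStdDev;
        ideal.Z += Gaussian() * NoiseStdDev;
        _delayLine.Enqueue(ideal);

        if (_delayLine.Peek().TimestampMs + AddedLatencyMs <= nowMs)
            return _delayLine.Dequeue();
        return null;
    }

    // Box-Muller transform: turn two uniform values into one Gaussian value.
    private double Gaussian()
    {
        double u1 = 1.0 - _rng.NextDouble(), u2 = _rng.NextDouble();
        return Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Sin(2.0 * Math.PI * u2);
    }
}
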
I prefer to think of this photo as me keeping a close eye on Jeff but it was really the other way around!

Amazon Fire Phone

I can finally reveal the reason for letting my blog lie dormant for so long. I've been fully occupied (and then some) working on a secret project at Amazon: the Fire phone. This is the message I sent to my team, just a few moments before Jeff took the stage to make the announcement:
This is a proud day for the company and a proud day for the Emerging Technologies team.
You have attained two genuinely great industry and scientific milestones. You built the first reliable, usable head tracker to ever ship in a phone. You built the first robust, usable gesture recognizer to ever ship in a phone. You did these while meeting quality and power goals that once seemed impossible. You have – without the slightest exaggeration – significantly advanced the state-of-the-art in both technologies.
You will carry the accolades and lessons learned for the rest of your careers. You have made your mark and no one can ever take this away from you.
You are a world-class team and I am extremely proud of what you have accomplished.
It has been a privilege to lead such an amazing team.

You can pre-order the Fire phone here: http://www.amazon.com/dp/B00EOE0WKQ/

The video of the product announcement is embedded below.


Tuesday, January 10, 2012

LINQ sorting is also vulnerable

This is a follow-up post to .Net's Sort Is Not Secure. Don't Use It. Here's a Better One.

A reader asked if LINQ's sort has the same vulnerability. I added tests for sorting an array using LINQ's 'orderby' clause. Another reader wondered if the Sort() method for generic lists had the same problem so I added that test as well.
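
For reference, the added tests look roughly like this. This is a simplified sketch of the timing harness, not the actual benchmark code, and it assumes the adversarial input has already been generated (for example by the McIlroy-style adversary described in the original post).

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

static class SortTimingSketch
{
    static double Seconds(Action sort)
    {
        var sw = Stopwatch.StartNew();
        sort();
        return sw.Elapsed.TotalSeconds;
    }

    public static void Run(int[] adversarialInput)
    {
        // Array.Sort
        var a = (int[])adversarialInput.Clone();
        Console.WriteLine("Array.Sort:     {0:F1}s", Seconds(() => Array.Sort(a)));

        // LINQ 'orderby' (ToArray forces the sort to actually run)
        var b = (int[])adversarialInput.Clone();
        Console.WriteLine("LINQ orderby:   {0:F1}s",
            Seconds(() => (from x in b orderby x select x).ToArray()));

        // List<T>.Sort
        var c = new List<int>(adversarialInput);
        Console.WriteLine("List<int>.Sort: {0:F1}s", Seconds(() => c.Sort()));
    }
}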

Time required for sorting one million items provided by an adversary:

Algorithm           Sorting Time
Zimbry Introsort    one-half second
.Net CLR (Array)    1,588 seconds
.Net CLR (LINQ)     1,916 seconds
.Net CLR (List)     1,588 seconds

Detailed benchmark results are here: Sorting up to 1 million items (with an adversary)

Notes on the test:

All tests were run under .Net framework version 4.0

The times reported in the table are an average of the times taken for one ascending sort and one descending sort.

Friday, January 6, 2012

Quicksort: Is picking a pivot at random secure?

Eric offered this suggestion:
Another way to harden quicksort is to select the elements used in the pivot calculation randomly. This makes it nearly impossible for an attacker to cause worst case performance, and doesn't require you to also implement multiple sort algorithms.
This is well worth considering. A single sort algorithm is simpler than a combination of several, so combining them has to deliver a compelling benefit to justify the extra complexity.

I've done some testing and benchmarking, but before we jump to the conclusion, take a few minutes to ponder this with me and let's see just how far down this rabbit hole goes.

Hardening quicksort is a thorny problem and for an interesting reason. Quicksort's soft spot is in the method used to estimate the median. The risk that a sequence of partitionings will be unbalanced and lead to degenerate performance depends directly on the quality of the estimate. So why bother with a questionable estimate when it's easy to pick the actual median and have a perfectly secure quicksort? Ah, because there's a catch. Finding the actual median requires almost as much effort as it would to perform the sort itself. So, the only way that quicksort is viable is by making estimates and, as such, most quicksort algorithms are defined by their estimating technique: first item, random item, median-of-three, median-of-N, median-of-random-three, median-of-medians-of-three... you name it and it's probably been tried.
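
To make that concrete, the most common of those estimators, median-of-three, is only a few comparisons. Here is a minimal sketch (on an int array, without the rest of quicksort around it):

static class PivotSketch
{
    // Minimal sketch of median-of-three pivot selection: estimate the median
    // of a[lo..hi] from its first, middle and last elements.
    public static int MedianOfThreeIndex(int[] a, int lo, int hi)
    {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] < a[lo]) Swap(a, lo, mid);
        if (a[hi]  < a[lo]) Swap(a, lo, hi);
        if (a[hi]  < a[mid]) Swap(a, mid, hi);
        return mid;  // a[mid] now holds the median of the three samples
    }

    static void Swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }
}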

As long as we are estimating the median we have to accept that we are, in effect, rolling the dice and hoping a rare series of bad choices does not happen. And if an attacker is present they are only too happy to hand us loaded dice to ensure the worst possible outcome.

But wait... For an attack to work the attacker must be able to anticipate our method of estimating a median. What happens if we choose it at random? Does this defeat the attacker?

First, what exactly do we mean by 'random'? The pseudo-random number generator provided by most frameworks (the Random class in .Net is a good example) is highly predictable and won't slow down an attacker. Alternatively, you could use a cryptographically secure random number generator, but that carries a performance penalty that almost certainly makes it unworkable. That cost will come down soon, though: Intel has new CPUs coming this year that offer fast, high-quality random number generation in hardware.
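
To make the trade-off concrete, here is roughly what the two choices look like in .Net today. This is a sketch only; the crypto version pays for a GetBytes call on every single pivot selection, and it ignores the small modulo bias for brevity.

using System;
using System.Security.Cryptography;

static class PivotIndexSketch
{
    // Predictable: statistically fine, but an attacker who can reconstruct
    // the generator's state can reproduce every pivot choice.
    static readonly Random Fast = new Random();
    public static int FastPivot(int lo, int hi)
    {
        return Fast.Next(lo, hi + 1);
    }

    // Unpredictable: cryptographically secure, but each call is far more
    // expensive. (Modulo bias is ignored here for brevity.)
    static readonly RNGCryptoServiceProvider Secure = new RNGCryptoServiceProvider();
    public static int SecurePivot(int lo, int hi)
    {
        var bytes = new byte[4];
        Secure.GetBytes(bytes);
        uint r = BitConverter.ToUInt32(bytes, 0);
        return lo + (int)(r % (uint)(hi - lo + 1));
    }
}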

Let's assume we have that CPU today and it delivers fast and cryptographically secure random values. We are probably(1) not at risk from an attacker.

Not so fast. There are still two problems we can't erase:

1. It doesn't eliminate the risk of a series of low-probability but catastrophic choices, however unlikely that may be. I have seen quicksorts go quadratic on natural data, without an attacker present, even when using the median-of-three method. Choosing at random should be more resilient. Whether this risk is acceptable depends on whether the sort is part of tax preparation software or whether it's running the auto-pilot of a jumbo jet.

2. It makes low-quality estimates of the median. Partitions will be less balanced and performance will suffer for it.

There is only one way I know of to reliably harden quicksort against both an attacker and an unlucky series of choices: Set an upper limit on how much effort quicksort is allowed to spend before abandoning it entirely.

Introsort is secure because it limits how many bad partitions it can accept before dropping out of quicksort and switching to heapsort to finish the job.

Of course, we could throw away all the complexity and just use heapsort. It has no degenerate cases and is secure against an attacker. But there's a catch: heapsort tends to be significantly slower than quicksort in the average case.(2)

Let's look at the benchmarks: Sorting up to 100 million items (with no adversary)

The final algorithm, Zimbry_PureQuicksortFatPivotR1, is a pure quicksort (it does not fall back to an insertion sort on tiny partitions) that uses a pseudo-random number generator to choose the pivot. It's what Eric suggests: a single quicksort algorithm that chooses its pivot at random. Introsort is 19% faster on average, and it is neither vulnerable to an attacker nor at risk of degenerating on a bad series of random pivot choices.

So, here we are, at the bottom of the rabbit hole, and we find ourselves surrounded by trade-offs:

1. We can have a pure quicksort if we're willing to accept slower performance and the highly unlikely chance of bad behavior.

2. We can be invulnerable to both attackers and bad luck by choosing heapsort but that armor comes at a severe average performance cost compared to quicksort.

3. We can "have our performance cake and eat it too" by building a quicksort that falls back to heapsort in an emergency. We get good performance on average and no bad behavior to worry about but it comes at the cost of additional engineering complexity.

Engineering is all about trade-offs.


(1) A motivated attacker won't give up so easily. If there is a flaw in Intel's random number generator it will be found and exploited. It's best to give the community (both the good guys and bad guys) time to stress-test any new security software or hardware before adopting it for your own use.

(2) Ball-park comparison: A well-implemented heapsort is typically three to five times slower than a well-implemented quicksort.

Thursday, January 5, 2012

Vote for fixing .Net's sorting security vulnerability


Several people suggested reporting this to Microsoft. Good idea. If you feel it is important to fix this you can vote for it at the following link:

Bug: .Net's sort is not secure and is vulnerable to an attacker who can use it to create a DOS attack


Tuesday, January 3, 2012

.Net's Sort Is Not Secure. Don't Use It. Here's a Better One.

.Net's Array.Sort (up to at least version 4.0) has serious weaknesses:

1. It is insecure and using it makes you vulnerable to a malicious attacker. .Net uses an ordinary quicksort with the pivot selected by the median-of-three method. It is easy to provoke quicksort's worst-case (quadratic) behavior and increase running times by multiple orders of magnitude. An attacker will be happy to exploit this as an effective denial-of-service attack.

2. It is inflexible. It does not let you provide a delegate for the swap operation, so you cannot sort data structures that need to keep other data synchronized as items move. And because it can only sort items on the .Net heap, sorting unmanaged memory is impossible.

3. It is slower than it should be even in the absence of an attacker.

Zimbry.Introsort addresses each of these problems.

1. It is secure. It is based on David Musser's Introsort algorithm. Introsort is essentially a quicksort that, should it fail, falls back to a secure heapsort.

2. It is flexible. Both the compare and swap operations are provided by the user. You can use it to sort anything (a short sketch follows below).

3. It is faster. This wasn't an explicit objective but it's nice that we don't have to trade away performance to get a secure and flexible sort.
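
Point 2 is easiest to see with a sketch. The exact signatures in the released source may differ, but the idea is that the sort never touches your data directly; it only calls back into a compare delegate and a swap delegate you supply, which makes it trivial to keep a parallel payload array in step with the keys:

// Hypothetical delegate-based sort interface (the real Zimbry.Introsort
// signatures may differ): the caller owns the data; the sort only compares
// and swaps by index.
delegate int CompareItems(int i, int j);
delegate void SwapItems(int i, int j);

static class DelegateSortSketch
{
    // Sort 'keys' while keeping 'payload' aligned with it.
    public static void SortKeysWithPayload(int[] keys, string[] payload)
    {
        CompareItems cmp = (i, j) => keys[i].CompareTo(keys[j]);
        SwapItems swap = (i, j) =>
        {
            int k = keys[i]; keys[i] = keys[j]; keys[j] = k;
            string p = payload[i]; payload[i] = payload[j]; payload[j] = p;
        };
        IndexedSort(0, keys.Length - 1, cmp, swap);
    }

    // A tiny insertion sort written against the delegates, just to show the
    // algorithm never needs to know what it is sorting or where it lives.
    static void IndexedSort(int lo, int hi, CompareItems cmp, SwapItems swap)
    {
        for (int i = lo + 1; i <= hi; i++)
            for (int j = i; j > lo && cmp(j - 1, j) > 0; j--)
                swap(j - 1, j);
    }
}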

Click the links to see the benchmarks:


Let's look at the worst case: dealing with an adversary.

It takes .Net over 26 minutes to sort one million integers when they are provided by an adversary. Zimbry.Introsort does it in half a second.

Those are the worst-case results. We can disable the adversary and benchmark it again:


Zimbry.Introsort is twice as fast in the average case and rarely less than 13% faster in any case.

(Each test was run only once so the timings for small arrays contain noticeable sampling noise. A more robust benchmark would filter multiple samples.)

I am releasing the source under the MIT license: Click here for the source

Some notes on the source:

You'll find many alternative sort algorithms in the Zimbry.Sort.OtherSorts project. I experimented with these along the way. You can enable them in the benchmark if you have a great deal of patience.

The class in QuicksortAdversary.cs was derived from Doug McIlroy's paper, A Killer Adversary for Quicksort. Be careful. It will beat up quicksort and steal its lunch money.

Zimbry.Introsort contains four sort algorithms layered together:

1. Quicksort with pivot selected by median-of-nine: For large partitions.

2. Quicksort with pivot selected by median-of-five: For small partitions.

3. Heapsort as a fall-back when quicksort recurses too deep: Heapsort is slower than quicksort in the best case but it has no quadratic behavior to exploit so it provides effective protection against an adversary.

4. Insertion sort: For tiny partitions where quicksort is inefficient.

Using these four algorithms lets us enjoy the performance advantage of quicksort for the typical case with protection against a malicious attacker in the worst case.
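
To make the layering concrete, here is a stripped-down sketch of how the pieces fit together. It uses a plain median-of-three pivot and illustrative cut-offs rather than the median-of-nine/median-of-five selection and tuned thresholds in the released source, but the shape is the same: insertion sort for tiny ranges, quicksort in the common case, and heapsort once the recursion depth budget is spent.

using System;

static class IntroSortSketch
{
    // Entry point: allow roughly 2*log2(n) quicksort levels before giving up.
    public static void Sort(int[] a)
    {
        if (a.Length < 2) return;
        int depth = 2 * (int)Math.Floor(Math.Log(a.Length, 2.0));
        Sort(a, 0, a.Length - 1, depth);
    }

    static void Sort(int[] a, int lo, int hi, int depthRemaining)
    {
        int size = hi - lo + 1;
        if (size <= 16) { InsertionSort(a, lo, hi); return; }     // tiny: insertion sort
        if (depthRemaining == 0) { HeapSort(a, lo, hi); return; }  // too deep: heapsort

        int p = Partition(a, lo, hi);                              // otherwise: quicksort
        Sort(a, lo, p - 1, depthRemaining - 1);
        Sort(a, p + 1, hi, depthRemaining - 1);
    }

    // Median-of-three pivot plus a standard partition (the released code uses
    // median-of-five/nine and fat-pivot partitioning instead).
    static int Partition(int[] a, int lo, int hi)
    {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] < a[lo]) Swap(a, lo, mid);
        if (a[hi] < a[lo]) Swap(a, lo, hi);
        if (a[hi] < a[mid]) Swap(a, mid, hi);
        Swap(a, mid, hi - 1);
        int pivot = a[hi - 1], i = lo;
        for (int j = lo; j < hi - 1; j++)
            if (a[j] < pivot) Swap(a, i++, j);
        Swap(a, i, hi - 1);
        return i;
    }

    static void InsertionSort(int[] a, int lo, int hi)
    {
        for (int i = lo + 1; i <= hi; i++)
            for (int j = i; j > lo && a[j] < a[j - 1]; j--)
                Swap(a, j - 1, j);
    }

    // Heapsort restricted to a[lo..hi]: no quadratic case to exploit.
    static void HeapSort(int[] a, int lo, int hi)
    {
        int n = hi - lo + 1;
        for (int i = n / 2 - 1; i >= 0; i--) SiftDown(a, lo, i, n);
        for (int end = n - 1; end > 0; end--)
        {
            Swap(a, lo, lo + end);
            SiftDown(a, lo, 0, end);
        }
    }

    static void SiftDown(int[] a, int lo, int root, int count)
    {
        while (true)
        {
            int child = 2 * root + 1;
            if (child >= count) return;
            if (child + 1 < count && a[lo + child] < a[lo + child + 1]) child++;
            if (a[lo + root] >= a[lo + child]) return;
            Swap(a, lo + root, lo + child);
            root = child;
        }
    }

    static void Swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }
}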

Both quicksorts use Bentley & McIlroy's "fat-pivot" partitioning method from their paper, Engineering a Sort Function, for better performance. This is a big part of why it performs better than .Net's quicksort in many tests.
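
The essence of a "fat pivot" is a three-way partition: everything equal to the pivot is gathered in the middle and never looked at again, which is a big win on inputs with many duplicate keys. Bentley & McIlroy park the equal keys at the two ends and swap them into the middle afterwards; the simpler Dijkstra-style version sketched below produces the same end result.

static class FatPivotSketch
{
    // Three-way partition: on return, a[lo..lt-1] < pivot, a[lt..gt] == pivot,
    // and a[gt+1..hi] > pivot. The equal block is excluded from recursion.
    public static void Partition3(int[] a, int lo, int hi, int pivot,
                                  out int lt, out int gt)
    {
        lt = lo; gt = hi;
        int i = lo;
        while (i <= gt)
        {
            if (a[i] < pivot)      Swap(a, lt++, i++);
            else if (a[i] > pivot) Swap(a, i, gt--);
            else                   i++;
        }
    }

    static void Swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }
}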

While this is an improvement it is far from the last word in sorting. Some ideas to consider:

Better performance may be found with Vladimir Yaroslavskiy's dual-pivot quicksort.

It really needs special versions for handling known data types (avoiding the requirement for using compare and swap delegates in all cases). This would give a significant speed improvement.

There's more room for performance tuning. I tried to leave the code in a fairly readable state and some sacrifices could be made to buy a little more performance.

It would be nice to add support for stable sorting.

Wednesday, December 21, 2011

SPRUCE - A Way of Thinking About Software


Analyzing and comparing software can be a complex task and I needed a way to break it up into components to avoid being overwhelmed by the details. These six top-level categories help keep me organized: Security, Performance, Reliability, Usability, Community and Economy. I call it Spruce to make it easy to remember. It works equally well when thinking about operating systems, languages, frameworks and individual applications.

A brief summary of Spruce:

Security - Protection of sensitive data through passwords and encryption is the visible part. The invisible part that is hard to measure is how much exploitable surface area is exposed to an attacker. That may not be initially obvious and it generally takes experience to develop a sense for the size of the risk. There is overlap with reliability with regards to attack-resistance.

Performance - We're concerned with the resources it requires relative to its alternatives. How well does it scale as the problem size increases and what trade-offs are unavoidable to achieve scale (ex: consistency vs. availability)? There can be overlap here with economy if it requires expensive hardware to achieve reasonable performance.

Reliability - This is about attack-resistance, fault-tolerance, error-correction and recovery. How gracefully does it deal with hardware/power failures, incorrect input and outright data corruption? There is overlap with security with regards to dealing with attacks. Can it keep running even under adverse conditions or does it go down every time the wind shifts direction? Has it been battle-tested or are you the brave pioneer? If redundancy is required there is overlap with economy.

Usability - This is considered from the point-of-view of the user or programmer as appropriate. I'm concerned with documentation, user-experience and API design. How well does it adapt to problems the original developer did not anticipate? Is it a pleasure to use or does it make you regret your career path?

Community - This is anyone who can provide you with help and enhance the usefulness of the product. It ranges from support from the original developer to a vibrant third-party community pushing the tech forward. Is it easy to get answers to questions and solve problems? How often is it mentioned on Stack Overflow and GitHub? Can you find developers who are eager to work with it or do they consistently forget to return your calls when you tell them the name of the underlying tech?

Economy - We're interested in the total cost of ownership relative to its alternatives. The visible parts are licensing fees, support contracts and hardware requirements. The invisible parts are the impact it has on other decisions. If it turns out you made the wrong decision how expensive is it to correct the mistake?

Engineering is all about trade-offs so it's rare that any tool excels in all of these areas. Reliability may be emphasized over performance or economy. Community might trump everything else. The key thing is to simply be aware of what the trade-offs are and be conscientious about them.