Hi. While it is true that Mergesort’s average-case comparison count beats Quicksort’s best case (and, indeed, one can prove that even in the worst case, Mergesort makes only about as many comparisons as Quicksort does in its best case!), it is important to remember that other things factor into the discussion.

The reason Quicksort is so often used, for example, is mostly cache behavior and the real-world performance that follows from it.
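To make the comparison-count claim concrete, here is a rough Python sketch of my own (not from any of the charted implementations) that counts key comparisons for a plain top-down mergesort and a plain quicksort on the same random input; mergesort usually performs fewer comparisons even when quicksort wins on wall-clock time:

    import random

    def merge_comparisons(a):
        # Top-down mergesort; returns the number of key comparisons made.
        count = [0]
        def sort(xs):
            if len(xs) <= 1:
                return xs
            mid = len(xs) // 2
            left, right = sort(xs[:mid]), sort(xs[mid:])
            out, i, j = [], 0, 0
            while i < len(left) and j < len(right):
                count[0] += 1
                if left[i] <= right[j]:
                    out.append(left[i]); i += 1
                else:
                    out.append(right[j]); j += 1
            return out + left[i:] + right[j:]
        sort(list(a))
        return count[0]

    def quick_comparisons(a):
        # Plain quicksort, last element as pivot; standard accounting
        # counts one comparison per non-pivot element per partition.
        count = [0]
        def sort(xs):
            if len(xs) <= 1:
                return xs
            pivot = xs[-1]
            count[0] += len(xs) - 1
            less = [x for x in xs[:-1] if x < pivot]
            rest = [x for x in xs[:-1] if x >= pivot]
            return sort(less) + [pivot] + sort(rest)
        sort(list(a))
        return count[0]

    data = random.sample(range(1_000_000), 4096)
    print("mergesort:", merge_comparisons(data))  # roughly n*lg(n) - n
    print("quicksort:", quick_comparisons(data))  # roughly 1.39*n*lg(n) on average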

Likewise, it’s important to remember the assumptions one makes (in terms of the computational model) when saying that Radix sort is “faster” than Quicksort: namely, that the keys are bounded from above (say, fixed-width integers), so the number of digit passes is a constant. This is not always a reasonable assumption to make.
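To make that assumption explicit, here is a minimal LSD radix sort sketch in Python (the parameter names are my own); it is “linear” only because key_bits is treated as a constant:

    def lsd_radix_sort(xs, key_bits=32, radix_bits=8):
        # Sorts non-negative integers, assuming every key fits in key_bits.
        # Total work is O(n * key_bits / radix_bits) = O(n) for fixed key width.
        mask = (1 << radix_bits) - 1
        for shift in range(0, key_bits, radix_bits):
            buckets = [[] for _ in range(1 << radix_bits)]
            for x in xs:
                buckets[(x >> shift) & mask].append(x)  # stable bucketing pass
            xs = [x for bucket in buckets for x in bucket]
        return xs

    print(lsd_radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))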

In practice, implementations often choose “intelligent quicksorts” or “intelligent mergesorts”; see Timsort or Introsort.
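For a rough idea of what the “intelligent quicksort” in Introsort does, here is my own simplified sketch (not any library’s actual code): quicksort that bails out to heapsort once the recursion gets suspiciously deep, capping the worst case at O(n log n):

    import heapq, math

    def introsort(a):
        def heap_sorted(xs):
            h = list(xs)
            heapq.heapify(h)
            return [heapq.heappop(h) for _ in range(len(h))]
        def sort(xs, depth):
            if len(xs) <= 16:
                return sorted(xs)       # tiny range: hand off to a simple sort
            if depth == 0:
                return heap_sorted(xs)  # depth limit hit: guaranteed O(n log n)
            pivot = xs[len(xs) // 2]
            less = [x for x in xs if x < pivot]
            equal = [x for x in xs if x == pivot]
            greater = [x for x in xs if x > pivot]
            return sort(less, depth - 1) + equal + sort(greater, depth - 1)
        if not a:
            return []
        return sort(list(a), 2 * max(1, int(math.log2(len(a)))))

Real introsort partitions in place; this version copies sublists purely for brevity.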

That said, cool chart 🙂

Quicksort is nice, but with a really minor modification, namely selecting a uniformly random pivot, the expected running time is O(n log n) on every input, worst case included. I definitely think you should check out http://en.wikipedia.org/wiki/Quicksort#Randomized_quicksort_expected_complexity
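A minimal sketch of that modification in Python (not in place, purely to show the pivot choice):

    import random

    def randomized_quicksort(xs):
        # Uniformly random pivot: expected O(n log n) comparisons on every
        # input, because no fixed input can reliably force bad splits.
        if len(xs) <= 1:
            return list(xs)
        pivot = random.choice(xs)
        less = [x for x in xs if x < pivot]
        equal = [x for x in xs if x == pivot]
        greater = [x for x in xs if x > pivot]
        return randomized_quicksort(less) + equal + randomized_quicksort(greater)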

Nice! You are going to do this for merge and radix sorts, aren’t you?

Also, recursively partitioning below eight elements or so causes the bookkeeping/stack-frame overhead to eat up any advantage; I think most implementations switch to another algorithm (typically insertion sort) once a recursively divided subproblem gets this small.
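A minimal Python sketch of that cutoff (the constant 8 echoes the threshold above and would be tuned per platform):

    CUTOFF = 8  # below this size, recursion overhead outweighs the benefit

    def insertion_sort(a, lo, hi):
        # Sorts a[lo..hi] in place; very cheap on tiny ranges.
        for i in range(lo + 1, hi + 1):
            x, j = a[i], i - 1
            while j >= lo and a[j] > x:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = x

    def quicksort(a, lo=0, hi=None):
        if hi is None:
            hi = len(a) - 1
        if hi - lo + 1 <= CUTOFF:
            insertion_sort(a, lo, hi)  # small range: no further recursion
            return
        pivot = a[hi]                  # Lomuto partition, last-element pivot
        i = lo
        for j in range(lo, hi):
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        quicksort(a, lo, i - 1)
        quicksort(a, i + 1, hi)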

Also, quicksort is not stable in its most general form; adding comparisons to induce stability may eat up the speed advantage.
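The usual trick, sketched in Python (the function name is mine): decorate each element with its original index so ties break by position, which makes even an unstable comparison sort behave stably, at the cost of the extra tuple comparisons and O(n) decoration memory just described:

    def stabilized_sort(xs, key=lambda x: x):
        # (key, original index) pairs: equal keys keep their input order.
        decorated = [(key(x), i) for i, x in enumerate(xs)]
        decorated.sort()  # stand-in for any comparison sort, stable or not
        return [xs[i] for _, i in decorated]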

I noticed that no one has compared these sorts to the PS9110 sort.

The above sorts are basically O(N log N) sorts.

The PS9110 sort is an O(N) sort.

The second advantage concerns searching: data kept sorted by the above methods is searched with a binary search.

Binary searches are O(log N) searches.

The PS9110 search is an O(1) search.
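For reference, here is the classic O(log N) binary search in Python; an O(1) search ordinarily implies hashing or direct addressing, which again assumes keys map into a table of bounded size:

    import bisect

    def contains(sorted_xs, x):
        # Classic O(log N) membership test on a sorted array.
        i = bisect.bisect_left(sorted_xs, x)
        return i < len(sorted_xs) and sorted_xs[i] == x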

Linked lists have the problem of the resources needed to create and destroy the links that keep the data set in sorted order.

Arrays have the problem of needing N/2 moves on average to add or delete an element from a sorted data set (O(N) time).
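A two-line Python illustration of that cost, using the standard bisect module:

    import bisect

    def sorted_insert(a, x):
        i = bisect.bisect_left(a, x)  # O(log N) to find the slot
        a.insert(i, x)                # O(N): everything in a[i:] shifts right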

The PS9110 algorithms can insert and delete data from an array in O(1) time.

@Marvon – it would be great if you posted a link to an article that explains PS9110 in detail.

@Marvon: It is a relatively easy fact to prove that no comparison sorting algorithm can do better than Omega(n log n) comparisons in the worst case (specifically, none can beat ceil(log_2(n!))). If you assume things about the data, better bounds are possible. Comparing algorithms that assume different things is a tricky business.
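A quick numeric check of that decision-tree bound in Python (a comparison sort must distinguish all n! input orders, and a binary tree of depth d has at most 2**d leaves, so depth is at least ceil(log2(n!))):

    import math

    for n in (8, 64, 1024):
        bound = math.ceil(math.log2(math.factorial(n)))  # ceil(log2(n!))
        print(n, bound, round(n * math.log2(n)))         # vs. n*lg(n)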