Quiz on Sorting q10

This is the question: You have an array of n elements. Suppose you implement quicksort by always choosing the central element of the array as the pivot. Then the tightest upper bound for the worst-case performance is

O(n^2)

O(n log n)

O(n)

Why is option 2 incorrect? Since the question says we choose the central element as the pivot in every case, shouldn't the tightest upper bound be O(n log n)? Please explain the correct answer.

O(n^2) is the answer. When we choose the first element as the pivot, the worst case of quicksort occurs when the input is sorted, either in ascending or descending order. When we choose the middle element as the pivot instead, sorted input no longer gives worst-case behavior. But there will still be some permutation of the input numbers that gives the same worst-case behavior. For example,

1 2 3 4 5 6 7

This array gives worst-case behavior for quicksort when the first element is the pivot.
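To see this concretely, here is a minimal Python sketch (my own illustration, not code from the quiz) of quicksort instrumented with a comparison counter. The `pivot_index` parameter is a hypothetical hook that fixes which position supplies the pivot:

```python
comparisons = 0  # counts element-vs-pivot comparisons in the partition step

def quicksort(a, pivot_index):
    """Sort list a, taking the pivot from position pivot_index(len(a))."""
    global comparisons
    if len(a) <= 1:
        return a
    k = pivot_index(len(a))
    pivot, rest = a[k], a[:k] + a[k + 1:]
    comparisons += len(rest)  # each remaining element is compared to the pivot
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left, pivot_index) + [pivot] + quicksort(right, pivot_index)

first = lambda n: 0        # always use the first element as pivot
middle = lambda n: n // 2  # always use the central element as pivot

quicksort(list(range(100)), first)  # already-sorted input, first-element pivot
print(comparisons)  # 4950 = 100*99/2, i.e. quadratic behavior
```

On already-sorted input with the first element as pivot, every split leaves all the remaining elements on one side, so the partition comparisons sum to (n−1) + (n−2) + … + 1.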

6 4 2 1 3 5 7

This array gives the worst-case behavior of O(n^2) if we take the middle element as the pivot: each split will be 1 element on one side and n−1 elements on the other side. Similarly, for any input we can find a permutation that behaves like this. So whichever element we take as the pivot, the worst-case complexity is O(n^2), as long as the pivot comes from a fixed position (not a random position, as in randomized quicksort).
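The exact worst-case ordering depends on the partition scheme, so 6 4 2 1 3 5 7 need not be the worst case for every implementation. For the sketch above, whose partition keeps the non-pivot elements in their original relative order, we can build a worst-case permutation mechanically: insert the smallest remaining value at the middle position at every step. This constructor is my own illustration, not part of the original answer:

```python
def adversarial(values):
    """Arrange sorted values so that middle-pivot quicksort always picks
    the minimum as pivot, forcing maximally unbalanced splits."""
    if not values:
        return []
    rest = adversarial(values[1:])
    mid = (len(rest) + 1) // 2  # middle index of the final, longer array
    return rest[:mid] + [values[0]] + rest[mid:]

comparisons = 0
quicksort(adversarial(list(range(100))), middle)
print(comparisons)  # again 4950 = 100*99/2: quadratic despite the middle pivot
```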

Is randomised quicksort something different from the quicksort we do by choosing the last element as the pivot? I don't know about that. Please explain; I am not quite getting the point you are trying to make.

The two standard models of algorithm analysis are average-case and worst-case performance, but in this case we need a third way to quantify performance: the expected run time. Imagine that there is some unknown probability distribution that governs the order of the input array. For each possible ordering, quicksort achieves a different run time, and we want to analyse the expected value.
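Written out (my notation, not the original answer's): if π ranges over the possible input orderings, Pr[π] is the unknown probability of seeing ordering π, and T(π) is quicksort's run time on that ordering, the quantity of interest is

```latex
\mathbb{E}[T] = \sum_{\pi} \Pr[\pi] \cdot T(\pi)
```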

The problem is that, since we usually do not know the probability distribution on the inputs, this is impossible to analyse for deterministic quicksort. All we can say is that the expected run time can be no worse than the worst case and no better than the best case; depending on the distribution, it can be anywhere in between. That means that, depending on your application, your expected run time may be pretty horrible. This is something you should worry about even if you don't care about being safe in the worst case.
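A small experiment makes the dependence on the distribution visible. This sketch (reusing quicksort, middle, and adversarial from above; the names expected_comparisons and make_input are my own) estimates the expected comparison count of deterministic middle-pivot quicksort under two different input distributions:

```python
import random

def expected_comparisons(make_input, trials=200):
    """Estimate the expected comparison count of middle-pivot quicksort
    when inputs are drawn by calling make_input()."""
    global comparisons
    total = 0
    for _ in range(trials):
        comparisons = 0
        quicksort(make_input(), middle)
        total += comparisons
    return total / trials

n = 200
print(expected_comparisons(lambda: random.sample(range(n), n)))   # roughly 2*n*ln(n)
print(expected_comparisons(lambda: adversarial(list(range(n)))))  # n(n-1)/2 = 19900
```

Same deterministic algorithm, two very different expected run times: uniformly random orderings give roughly n log n, while a distribution concentrated on the adversarial permutation pins the expectation at the quadratic worst case.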

The benefit of randomized quicksort is that suddenly the distribution on input order does not matter anymore: by adding our own randomness we ensure that, regardless of the input distribution, we obtain an expected run time of O(n log n). That is why it can be a good idea to use.
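A minimal sketch of that idea, again reusing the instrumented quicksort above: instead of a fixed position, draw the pivot position uniformly at random on every call.

```python
import random

random_pivot = lambda n: random.randrange(n)  # uniform random pivot position

comparisons = 0
quicksort(adversarial(list(range(200))), random_pivot)
print(comparisons)  # about 2*n*ln(n) on average, even on the adversarial input
```

No fixed permutation can force this variant into quadratic behavior; a run is slow only if the random pivot choices happen to be unlucky, and the O(n log n) bound on the expected run time holds for every input.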

If you don't want to understand it from the documentation, learn about randomised quicksort from here.