The two standard models of algorithm analysis are average-case and worst-case performance, but here we need a third way to quantify performance: the expected run time. Imagine that there is some unknown probability distribution governing the order of the input array. For each possible ordering, quicksort achieves a different run time, and we want to analyse the expected value of that run time over the distribution.
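Concretely, the quantity in question can be written as a weighted sum over orderings. A sketch of the definition, where π ranges over the possible orderings of the input, Pr[π] is the (unknown) probability that the input arrives in ordering π, and T(π) denotes quicksort's run time on that ordering:

```latex
% Expected run time over the unknown input distribution:
% pi ranges over the n! orderings, Pr[pi] is the probability of
% ordering pi, and T(pi) is quicksort's run time on it.
E[T] = \sum_{\pi \in S_n} \Pr[\pi] \, T(\pi)
```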
The problem is that, since we usually do not know the probability distribution on the inputs, this expectation is impossible to analyse for deterministic quicksort. All we can say is that the expected run time is no worse than the worst case and no better than the best case; depending on the distribution, it can be anywhere in between. That means that, depending on your application, your expected run time may be pretty horrible. This is something you should worry about, even if you don’t care about being safe in the worst case.
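To see how badly the input distribution can hurt a deterministic pivot rule, here is a small sketch (the function name and comparison counter are mine, purely for illustration): quicksort that always picks the first element as pivot, run on an already-sorted input versus a random permutation. If your application mostly feeds in nearly-sorted data, the first case is your expected behaviour.

```python
import random

def quicksort_first_pivot(a, counter):
    """Deterministic quicksort: always uses the first element as the pivot.
    `counter` is a one-element list used to tally comparisons."""
    if len(a) <= 1:
        return a
    pivot, rest = a[0], a[1:]
    counter[0] += len(rest)  # one comparison per remaining element
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return (quicksort_first_pivot(left, counter) + [pivot] +
            quicksort_first_pivot(right, counter))

n = 400
sorted_input = list(range(n))
shuffled_input = random.sample(range(n), n)

for name, data in [("sorted", sorted_input), ("shuffled", shuffled_input)]:
    c = [0]
    quicksort_first_pivot(data, c)
    print(f"{name:8s} input: {c[0]:6d} comparisons")
# Sorted input costs ~n^2/2 comparisons; a random permutation
# costs ~2n ln n, so the "expected" cost depends entirely on
# which distribution your inputs are drawn from.
```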
The benefit of randomized quicksort is that suddenly, the distribution on input order no longer matters: by adding our own randomness we ensure that, regardless of the input distribution, we obtain an expected run time of O(n log n). That is why randomized quicksort can be a good idea to use.
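A minimal sketch of the randomized variant (again, the function name is mine): the only change from the deterministic version is that the pivot is drawn uniformly at random, which makes the expectation hold for every fixed input rather than only under some assumed input distribution.

```python
import random

def randomized_quicksort(a):
    """Quicksort with a uniformly random pivot: expected O(n log n)
    comparisons on every input, regardless of its initial order."""
    if len(a) <= 1:
        return a
    pivot = a[random.randrange(len(a))]  # our own randomness
    left = [x for x in a if x < pivot]
    middle = [x for x in a if x == pivot]
    right = [x for x in a if x > pivot]
    return randomized_quicksort(left) + middle + randomized_quicksort(right)

print(randomized_quicksort([5, 3, 8, 1, 9, 2, 7]))  # [1, 2, 3, 5, 7, 8, 9]
```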