CS2020 Learning Sorting

Sorting was one of the first things I learnt when I started programming, and inevitably we had to go through it as part of this algorithms module.

The truth is I never really paid much attention to sorting until now, since I knew I could just use the built-in qsort function in most cases. However, that is certainly not what we learnt during this module; we are dealing with more problematic sorting, involving analysis of different types of sorting algorithms in terms of running time and space usage (and I realised that what we have learnt over the past few weeks is very similar to the first two chapters of Introduction to Algorithms).
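
For reference, this is roughly how the built-in qsort from <stdlib.h> gets called in C -- a minimal sketch, not part of the module material; the comparator name cmp_int is just an illustrative choice:

```c
#include <stdio.h>
#include <stdlib.h>

/* Comparator for qsort: negative/zero/positive depending on order. */
static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a;
    int y = *(const int *)b;
    return (x > y) - (x < y);   /* avoids the overflow risk of x - y */
}

int main(void)
{
    int a[] = {5, 2, 9, 1, 7};
    size_t n = sizeof a / sizeof a[0];

    qsort(a, n, sizeof a[0], cmp_int);

    for (size_t i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}
```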

The different sorting methods covered so far are:

  • Bubble sort (Stable)

There are a total of n iterations for n inputs; in each iteration, walk through the array and swap any two neighbouring elements that are out of order.

Bubble sort has a loop invariant – at the end of the jth iteration, the biggest j elements are correctly sorted in the final j positions of the array.

Running time is O(n²); the best case is O(n) if the array is already sorted (with the early-exit check that stops once a pass makes no swaps). Space usage is n + O(1), since the sort is done in place.
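
A rough C sketch of bubble sort with the early-exit check mentioned above (the name bubble_sort and the use of int arrays are illustrative assumptions, not the module's code):

```c
#include <stddef.h>

/* Bubble sort with early exit: stop as soon as a full pass makes no
   swap, which gives the O(n) best case on an already-sorted array. */
void bubble_sort(int a[], size_t n)
{
    for (size_t j = 0; j + 1 < n; j++) {
        int swapped = 0;
        /* After this pass, the largest remaining element has
           "bubbled" up to position n - 1 - j. */
        for (size_t i = 0; i + 1 < n - j; i++) {
            if (a[i] > a[i + 1]) {
                int tmp = a[i];
                a[i] = a[i + 1];
                a[i + 1] = tmp;
                swapped = 1;
            }
        }
        if (!swapped)
            break;   /* no swaps: the array is already sorted */
    }
}
```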

  • Selection sort (Unstable)

There are a total of n iterations for n inputs; in each iteration, find the smallest element in the unsorted part and put it (by swapping) at the end of the sorted part.

Selection sort also has a loop invariant – at the end of the jth iteration, the smallest j elements are correctly sorted in the first j positions of the array.

Running time is O(n²) even when the array is already sorted, since the scan for the minimum always goes through the whole unsorted part. Space usage is n + O(1).
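
A possible C sketch of selection sort along these lines (selection_sort is an illustrative name; the long-distance swap at the end of each pass is what breaks stability):

```c
#include <stddef.h>

/* Selection sort: after the j-th iteration the smallest j elements
   sit, in order, in the first j positions of the array. */
void selection_sort(int a[], size_t n)
{
    for (size_t j = 0; j + 1 < n; j++) {
        size_t min = j;
        for (size_t i = j + 1; i < n; i++)   /* scan the unsorted part */
            if (a[i] < a[min])
                min = i;
        if (min != j) {                      /* swap the minimum into place */
            int tmp = a[j];
            a[j] = a[min];
            a[min] = tmp;
        }
    }
}
```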

  • Insertion sort (Stable)

There are a total of n iterations for n inputs; in each iteration, put the first unsorted element into its correct position in the sorted part.

Insertion sort has a loop invariant, which is slightly different from selection sort's – at the end of the jth iteration, the first j positions of the array are sorted; however, the elements there may still be displaced by insertions in subsequent iterations.

Running time is O(n²); the best case is O(n) if the array is already sorted, since each element then needs only one comparison. Space usage is n + O(1).
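
A possible C sketch of insertion sort (insertion_sort is an illustrative name); rather than swapping repeatedly, it shifts the larger elements of the sorted prefix to the right and drops the current element into the gap:

```c
#include <stddef.h>

/* Insertion sort: keep the prefix a[0..j) sorted, and insert a[j]
   into its correct position within that prefix. */
void insertion_sort(int a[], size_t n)
{
    for (size_t j = 1; j < n; j++) {
        int key = a[j];
        size_t i = j;
        while (i > 0 && a[i - 1] > key) {
            a[i] = a[i - 1];     /* shift larger elements right */
            i--;
        }
        a[i] = key;              /* insert into the gap */
    }
}
```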

  • Merge sort (Stable)

It uses the idea of divide and conquer: divide the array into two halves at each level of recursion and sort them during the merging process. Sorting during the merge is relatively easy because the two halves being merged are already sorted.

Invariant – the arrays to be merged are sorted before the start of merging.

Running time is O(n log n), and the best case is still O(n log n) even if the array is already sorted. Space usage is O(n log n) for a simple approach that allocates new subarrays at every level of recursion, which can be improved to 2n + O(1) by reusing a single temporary buffer.
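
A possible C sketch of merge sort that reuses one temporary buffer, in the spirit of the 2n + O(1) space bound above (the names merge, merge_sort_rec and merge_sort are illustrative):

```c
#include <stdlib.h>
#include <string.h>

/* Merge the sorted ranges a[lo..mid) and a[mid..hi) via the shared
   buffer tmp; reusing one buffer keeps extra space at n + O(1). */
static void merge(int a[], int tmp[], size_t lo, size_t mid, size_t hi)
{
    size_t i = lo, j = mid, k = lo;
    while (i < mid && j < hi)
        /* take from the left half on ties to keep the sort stable */
        tmp[k++] = (a[j] < a[i]) ? a[j++] : a[i++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (hi - lo) * sizeof a[0]);
}

static void merge_sort_rec(int a[], int tmp[], size_t lo, size_t hi)
{
    if (hi - lo < 2)
        return;                          /* 0 or 1 element: already sorted */
    size_t mid = lo + (hi - lo) / 2;
    merge_sort_rec(a, tmp, lo, mid);     /* sort left half */
    merge_sort_rec(a, tmp, mid, hi);     /* sort right half */
    merge(a, tmp, lo, mid, hi);          /* merge the two sorted halves */
}

void merge_sort(int a[], size_t n)
{
    int *tmp = malloc(n * sizeof a[0]);  /* single reusable buffer */
    if (!tmp)
        return;                          /* allocation failed: leave input untouched */
    merge_sort_rec(a, tmp, 0, n);
    free(tmp);
}
```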

  • Quick sort (Unstable, classic version)

At each step of the recursion, a pivot is chosen and the array is partitioned into two parts: one part with all elements smaller than (or equal to) the pivot, the other part with all elements bigger than the pivot.

The invariant follows directly from the partitioning step – after each partition, the pivot sits in its final sorted position, with everything to its left no larger than it and everything to its right larger than it.

For the partition step, running time is O(n) and space usage is n + O(1), where the O(1) accounts for the temporary storage used while swapping. With a good pivot the whole sort runs in O(n log n) expected time, and the running time can be improved further by optimisations (dealing with duplicates via a three-way partition, etc.). The worst case is O(n²).

The pivot can be chosen deterministically or randomly.
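
A possible C sketch of the classic partition-based quick sort with a randomly chosen pivot; a Lomuto-style partition is used here, and the function names are illustrative, not the module's code:

```c
#include <stddef.h>
#include <stdlib.h>

static void swap(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Partition a[lo..hi] (inclusive) around a randomly chosen pivot:
   elements <= pivot end up to its left, larger elements to its right.
   Returns the pivot's final index. */
static size_t partition(int a[], size_t lo, size_t hi)
{
    size_t p = lo + (size_t)rand() % (hi - lo + 1);  /* random pivot */
    swap(&a[p], &a[hi]);                             /* park pivot at the end */
    int pivot = a[hi];
    size_t i = lo;
    for (size_t j = lo; j < hi; j++)
        if (a[j] <= pivot)
            swap(&a[i++], &a[j]);
    swap(&a[i], &a[hi]);                             /* pivot into its final place */
    return i;
}

/* Sorts a[lo..hi] inclusive. */
void quick_sort(int a[], size_t lo, size_t hi)
{
    if (lo >= hi)
        return;
    size_t p = partition(a, lo, hi);
    if (p > lo)
        quick_sort(a, lo, p - 1);   /* guard avoids size_t underflow */
    quick_sort(a, p + 1, hi);
}
```

With these inclusive bounds the call would be quick_sort(a, 0, n - 1) for n ≥ 1; that is just a design choice in this sketch.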

  • BogoSort (Unstable)

This sorting method involves repeatedly choosing a random permutation of the array A and returning it once it happens to be sorted. A related joke (or maybe not) is QuantumBogoSort, which uses quantum computing concepts and the many-worlds interpretation.
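
For completeness, a C sketch of bogosort (the names bogo_sort, is_sorted and shuffle are illustrative; a Fisher–Yates shuffle stands in for "choose a random permutation"):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

static bool is_sorted(const int a[], size_t n)
{
    for (size_t i = 1; i < n; i++)
        if (a[i - 1] > a[i])
            return false;
    return true;
}

/* Fisher–Yates shuffle: each permutation equally likely
   (up to the bias of rand()). */
static void shuffle(int a[], size_t n)
{
    for (size_t i = n; i > 1; i--) {
        size_t j = (size_t)rand() % i;
        int tmp = a[i - 1];
        a[i - 1] = a[j];
        a[j] = tmp;
    }
}

/* Bogosort: keep shuffling until the array happens to be sorted.
   Expected running time is O(n * n!) -- strictly a joke. */
void bogo_sort(int a[], size_t n)
{
    while (!is_sorted(a, n))
        shuffle(a, n);
}
```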
