Imagine trying to find a word in a dictionary that isn’t in alphabetical order.
You’d have to scan every page, which would take a huge amount of effort and time. Enter sorting algorithms. These algorithms are designed to scan data and sort items quickly, which tremendously helps us search for them.
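The payoff of sorted data is exactly this kind of fast lookup. As a quick illustration (a hypothetical example, not from the tutorial; the class and method names are invented), a binary search finds an element in a sorted array in O(log n) comparisons instead of scanning every entry:

```java
public class BinarySearchDemo {
    // Binary search: only valid because the array is sorted.
    static int binarySearch(int[] arr, int target) {
        int lo = 0, hi = arr.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2; // written this way to avoid int overflow
            if (arr[mid] == target) return mid;
            if (arr[mid] < target) lo = mid + 1; // target is in the right half
            else hi = mid - 1;                   // target is in the left half
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        int[] sorted = {2, 3, 5, 8, 13, 21};
        System.out.println(binarySearch(sorted, 13)); // 4
        System.out.println(binarySearch(sorted, 7));  // -1
    }
}
```

Each step halves the search range, which is why keeping data sorted is worth the up-front cost of a sorting algorithm.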
Sorting Algorithms Ranked From Slowest to Fastest

There are various ways in which we can sort items. Let’s examine each of them along with their time and space complexities.
A tutorial breaking down the fastest sorting algorithms. | Video: Creel

1. Bubble Sort

Bubble sort is the simplest sorting algorithm. We compare each adjacent pair and check whether the elements are in order. If they aren’t, we swap them. We keep doing this until all elements are sorted.
for (int i = 0; i < n; i++) {
    // each pass "bubbles" the largest unsorted element to the end
    for (int j = 0; j < n - 1; j++) {
        if (arr[j] > arr[j + 1]) {
            int temp = arr[j];
            arr[j] = arr[j + 1];
            arr[j + 1] = temp;
        }
    }
}
Bubble Sort Performance Analysis

Time Complexity: The nested loops compare every adjacent pair on every pass, so the time complexity is O(n²) in the average and worst cases.

Space Complexity: Since we aren’t using any extra data structures apart from the input array, the space complexity will be O(1).
More on Software Engineering: Tackling Jump Game Problems on LeetCode
2. Revised Bubble Sort

In the above sorting algorithm, we find that even if our array is already sorted, the time complexity will be the same, i.e. O(n²).
We’ll come up with a revised algorithm to overcome this. To identify if the array has been sorted, we’ll create a flag that will check if a swap has occurred between any adjacent pairs. If there is no swap while traversing the entire array, we know that the array is completely sorted and we can break out of the loop. This improves the best-case time complexity to O(n), but the worst-case remains O(n²).
for (int i = 0; i < n; i++) {
    boolean isSwapped = false;
    for (int j = 0; j < n - 1; j++) {
        if (arr[j] > arr[j + 1]) {
            int temp = arr[j];
            arr[j] = arr[j + 1];
            arr[j + 1] = temp;
            isSwapped = true;
        }
    }
    // a full pass with no swaps means the array is already sorted
    if (!isSwapped) {
        break;
    }
}
Revised Bubble Sort Performance Analysis

Time Complexity: The best case improves to O(n), since a single pass with no swaps ends the loop early, but the average and worst cases remain O(n²). As before, the space complexity is O(1).

3. Selection Sort

In this sorting algorithm, we assume that the first element is the minimum element. Then we check whether an element lower than the assumed minimum is present in the rest of the array. If there is, we swap the assumed minimum and the actual minimum. Otherwise, we move on to the next element.
for (int i = 0; i < arr.length; i++) {
    int minIndex = i; // assume the current element is the minimum
    for (int j = i + 1; j < arr.length; j++) {
        if (arr[j] < arr[minIndex]) {
            minIndex = j; // found a smaller element
        }
    }
    // move the true minimum into position i
    int temp = arr[i];
    arr[i] = arr[minIndex];
    arr[minIndex] = temp;
}
Selection Sort Performance Analysis

Time Complexity: Finding the minimum of the remaining elements takes a linear scan for every position, so the time complexity is O(n²) in all cases.

Space Complexity: Similar to the previous algorithms, we aren’t making use of any extra data structure apart from the input array, so the space complexity will be O(1).
More on Software Engineering: Sliding Window Algorithm Explained
4. Insertion Sort

In this sorting algorithm, we check whether each element is in order relative to the elements before it. Since the first element is trivially in order on its own, we start from the second element. On any given element, we check whether the current element is greater than the previous element. If it isn’t, we keep swapping it backward until it is, which inserts it into its correct place among the already-sorted elements.
for (int i = 1; i < n; i++) {
    int j = i;
    // swap the current element backward until it is in order
    while (j > 0 && arr[j] < arr[j - 1]) {
        int temp = arr[j];
        arr[j] = arr[j - 1];
        arr[j - 1] = temp;
        j--;
    }
}
Insertion Sort Performance Analysis

Time Complexity: A reverse-sorted array forces every element to be shifted all the way back, giving a worst case of O(n²); a nearly sorted array needs few shifts, approaching O(n).

Space Complexity: Since we are not making use of any extra data structure apart from the input array, the space complexity will be O(1).
5. Quicksort

The Quicksort algorithm is faster on average than the previous algorithms because it uses the concept of divide and conquer.
First, we decide on a pivot element. We then find the correct index for this pivot, which divides the array into two subarrays: one containing elements less than the pivot, and the other containing elements greater than it. We then recursively sort these two subarrays, and the process continues until we can’t divide the array any further.
public static void quicksort(int[] arr, int low, int high) {
    if (low >= high) return;
    int pivotPosition = partition(arr, low, high);
    quicksort(arr, low, pivotPosition - 1);  // sort elements left of the pivot
    quicksort(arr, pivotPosition + 1, high); // sort elements right of the pivot
}
But how do we divide the sub-array?
We assume the last element of our array to be our pivot. We then traverse the array using the two-pointer technique: the left pointer passes over elements lower than the pivot, and the right pointer passes over elements greater than the pivot. When both pointers stop, we swap the elements at the left and right pointers, so that everything to the left of the meeting point ends up lower than the pivot and everything to the right ends up higher.
After partitioning, we swap the pivot into its correct sorted position.
public static int partition(int[] arr, int low, int high) {
    int pivot = arr[high]; // last element is the pivot
    int left = low, right = high - 1;
    while (left <= right) {
        // advance left past elements already smaller than the pivot
        while (left <= right && arr[left] < pivot) {
            left++;
        }
        // move right past elements already greater than the pivot
        while (left <= right && arr[right] > pivot) {
            right--;
        }
        if (left >= right) {
            break;
        }
        // out-of-place pair found: swap and step both pointers,
        // so equal elements can't cause an infinite loop
        int temp = arr[left];
        arr[left] = arr[right];
        arr[right] = temp;
        left++;
        right--;
    }
    // place the pivot at its final sorted position
    int temp = arr[left];
    arr[left] = arr[high];
    arr[high] = temp;
    return left;
}
For the Lomuto partition scheme in Quicksort, choose the last element as the pivot and maintain an index i such that all elements before i are less than or equal to the pivot. While scanning with a second index j, swap arr[i] and arr[j] whenever arr[j] <= pivot, then advance i. After the loop, place the pivot in its correct position by swapping it with arr[i].
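The Lomuto scheme just described can be sketched as follows. This is a minimal illustration, not the article’s implementation, and the names (`LomutoDemo`, `lomutoPartition`, `quicksortLomuto`) are invented here:

```java
import java.util.Arrays;

public class LomutoDemo {
    // Lomuto partition: last element is the pivot.
    static int lomutoPartition(int[] arr, int low, int high) {
        int pivot = arr[high];
        int i = low; // boundary: everything before index i is <= pivot
        for (int j = low; j < high; j++) {
            if (arr[j] <= pivot) {
                int t = arr[i]; arr[i] = arr[j]; arr[j] = t;
                i++;
            }
        }
        // move the pivot into its final sorted position
        int t = arr[i]; arr[i] = arr[high]; arr[high] = t;
        return i;
    }

    static void quicksortLomuto(int[] arr, int low, int high) {
        if (low >= high) return;
        int p = lomutoPartition(arr, low, high);
        quicksortLomuto(arr, low, p - 1);
        quicksortLomuto(arr, p + 1, high);
    }

    public static void main(String[] args) {
        int[] data = {5, 2, 9, 1, 5, 6};
        quicksortLomuto(data, 0, data.length - 1);
        System.out.println(Arrays.toString(data)); // [1, 2, 5, 5, 6, 9]
    }
}
```

Because the pivot lands at its exact sorted index, the recursion excludes it on both sides: (low, p - 1) and (p + 1, high).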
For the Hoare partition scheme in Quicksort, choose the first element as the pivot. Move left forward while arr[left] < pivot, and move right backward while arr[right] > pivot. When left >= right, return right. Otherwise, swap arr[left] and arr[right] and continue.
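The Hoare scheme might look like the sketch below. Again, the names (`HoareDemo`, `hoarePartition`, `quicksortHoare`) are invented for illustration. One subtlety worth noting: unlike Lomuto, Hoare returns a split point, not the pivot’s final index, so the recursion bounds differ.

```java
import java.util.Arrays;

public class HoareDemo {
    // Hoare partition: first element is the pivot.
    // Returns a split index j; everything in arr[low..j] is <= everything
    // in arr[j+1..high], but the pivot need not end up at index j.
    static int hoarePartition(int[] arr, int low, int high) {
        int pivot = arr[low];
        int left = low - 1, right = high + 1;
        while (true) {
            do { left++; } while (arr[left] < pivot);
            do { right--; } while (arr[right] > pivot);
            if (left >= right) return right;
            int t = arr[left]; arr[left] = arr[right]; arr[right] = t;
        }
    }

    // Note the bounds: (low, p) and (p + 1, high), not (low, p - 1).
    static void quicksortHoare(int[] arr, int low, int high) {
        if (low >= high) return;
        int p = hoarePartition(arr, low, high);
        quicksortHoare(arr, low, p);
        quicksortHoare(arr, p + 1, high);
    }

    public static void main(String[] args) {
        int[] data = {5, 2, 9, 1, 5, 6};
        quicksortHoare(data, 0, data.length - 1);
        System.out.println(Arrays.toString(data)); // [1, 2, 5, 5, 6, 9]
    }
}
```

Hoare’s version typically does fewer swaps than Lomuto’s, which is one reason many library implementations build on it.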
Quicksort Performance Analysis

Time Complexity: On average, each partition splits the array roughly in half, giving O(n log n); in the worst case (e.g. an already-sorted array with a last-element pivot), every partition is maximally unbalanced, giving O(n²).

Space Complexity: Since we are recursively calling the quicksort function, an internal stack is used to store these function calls. Quicksort sorts in place, so this recursion is the only extra space: O(log n) deep on average, and O(n) in the worst case.
6. Merge Sort

Like Quicksort, merge sort also uses the divide-and-conquer technique. Except in merge sort, most of the work is done while merging the subarrays, whereas in Quicksort the majority of the work is done while partitioning the array. This is why Quicksort is also known as a partition sort.
The function below recursively breaks the array into two subarrays until each subarray has only one element.
public void mergeSort(int arr[], int l, int r) {
    if (l < r) {
        int m = l + (r - l) / 2; // midpoint, written to avoid int overflow
        mergeSort(arr, l, m);     // sort the left half
        mergeSort(arr, m + 1, r); // sort the right half
        merge(arr, l, m, r);      // merge the two sorted halves
    }
}
After that, we store these sub-arrays in two new arrays. Now, we can start merging them based on their order, and store them into our input array. After all these subarrays are merged, our input array will be sorted.
public void merge(int arr[], int l, int m, int r) {
    int n1 = m - l + 1;
    int n2 = r - m;
    // copy the two halves into temporary arrays
    int[] L = new int[n1];
    int[] R = new int[n2];
    for (int i = 0; i < n1; i++) {
        L[i] = arr[l + i];
    }
    for (int i = 0; i < n2; i++) {
        R[i] = arr[m + 1 + i];
    }
    int i = 0, j = 0, k = l;
    // merge back into arr, always taking the smaller head element
    while (i < n1 && j < n2) {
        if (L[i] <= R[j]) {
            arr[k++] = L[i++];
        } else {
            arr[k++] = R[j++];
        }
    }
    // copy any leftover elements
    while (i < n1) {
        arr[k++] = L[i++];
    }
    while (j < n2) {
        arr[k++] = R[j++];
    }
}
Merge Sort Performance Analysis

Time Complexity: The array is halved O(log n) times, and each level of merging touches all n elements, so the time complexity is O(n log n) in the best, average and worst cases.

Space Complexity: Since we are recursively calling the mergeSort function, an internal stack is used to store these function calls, and the merge step copies the subarrays into temporary arrays. The temporary arrays dominate, so the space complexity will be O(n).
More on Software Engineering: How to Use Pass, Continue and Break in Python
Advantages of Each Sorting Algorithm

Since we sort elements by comparing them with each other, all of the above algorithms are comparison-based. There are also non-comparison-based sorting algorithms, such as counting sort, radix sort and bucket sort. These are called linear sorting algorithms because, under certain assumptions about the input (for example, keys drawn from a limited range), their time complexity can be O(n).
Finally, each algorithm has its own pros and cons, and the right choice depends on your priorities. If efficiency is not an issue, we can use bubble sort because it’s easy to implement. Or we can use insertion sort when the array is nearly sorted, since its running time is then close to linear.
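To give a taste of the non-comparison-based family, here is a minimal counting sort sketch. It assumes the keys are small non-negative integers (the class and method names are invented for this illustration):

```java
import java.util.Arrays;

public class CountingSortDemo {
    // Counting sort: O(n + k) time where k is the key range.
    // No element is ever compared with another element.
    static int[] countingSort(int[] arr, int maxValue) {
        int[] count = new int[maxValue + 1];
        for (int x : arr) {
            count[x]++; // tally how many times each key occurs
        }
        int[] out = new int[arr.length];
        int idx = 0;
        for (int v = 0; v <= maxValue; v++) { // emit keys in increasing order
            for (int c = 0; c < count[v]; c++) {
                out[idx++] = v;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[] data = {4, 2, 2, 8, 3, 3, 1};
        System.out.println(Arrays.toString(countingSort(data, 8)));
        // [1, 2, 2, 3, 3, 4, 8]
    }
}
```

The trade-off is the O(k) counting array: counting sort only beats O(n log n) comparison sorts when the key range is comparable to the number of elements.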
What is a sorting algorithm?

A sorting algorithm is a method used to arrange elements in a particular order (typically numerical or alphabetical) to make data easier to search and manage. Sorting algorithms vary in speed and efficiency based on time and space complexity.
Which sorting algorithm is fastest overall?

Merge sort and Quicksort are among the fastest sorting algorithms, each having an average-case time complexity of O(n log n).