Which sorting algorithm is best when the list is already sorted?

What is the best sorting algorithm for an almost sorted array?

How to sort the almost sorted data of an array optimally is a common problem. Many sorting algorithms are available, but the one best suited for an almost sorted array is insertion sort.

Analysis of different sorting techniques

In this article, we will discuss important properties of different sorting techniques, including their complexity, stability and memory constraints. Before reading this article, you should understand the basics of the different sorting techniques (see: Sorting Techniques).

Time Complexity Analysis –
We have discussed the best, average and worst case complexity of different sorting techniques with possible scenarios.

Comparison based sorting –
In comparison based sorting, elements of an array are compared with each other to determine the sorted order.

  • Bubble sort and Insertion sort –
    Average and worst case time complexity: n^2
    Best case time complexity: n, when the array is already sorted.
    Worst case: when the array is reverse sorted.
  • Selection sort –
    Best, average and worst case time complexity: n^2, which is independent of the distribution of the data.
  • Merge sort –
    Best, average and worst case time complexity: nlogn, which is independent of the distribution of the data.
  • Heap sort –
    Best, average and worst case time complexity: nlogn, which is independent of the distribution of the data.
  • Quick sort –
    It is a divide and conquer approach with the recurrence relation:
    T(n) = T(k) + T(n-k-1) + cn
    Worst case: when the array is sorted or reverse sorted, the partition algorithm divides the array into two subarrays with 0 and n-1 elements. Therefore,
    T(n) = T(0) + T(n-1) + cn. Solving this we get T(n) = O(n^2).
    Best case and average case: on average, the partition algorithm divides the array into two subarrays of equal size. Therefore,
    T(n) = 2T(n/2) + cn. Solving this we get T(n) = O(nlogn).
    (A code sketch of this worst-case behaviour follows this list.)
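
To make the recurrence concrete, here is a minimal Python sketch of quicksort, assuming a Lomuto-style partition with the last element as pivot (a common textbook choice; other pivot rules change which inputs are worst-case):

def quicksort(a, lo=0, hi=None):
    # Lomuto partition with the last element as pivot.  On sorted or
    # reverse-sorted input this pivot yields the worst split (0 and
    # n-1 elements), i.e. T(n) = T(0) + T(n-1) + cn = O(n^2).
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    pivot = a[hi]
    i = lo                           # boundary of the "<= pivot" region
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]        # place the pivot between the parts
    quicksort(a, lo, i - 1)          # T(k)
    quicksort(a, i + 1, hi)          # T(n-k-1)
    return a

print(quicksort([9, 3, 7, 1, 8, 2]))  # [1, 2, 3, 7, 8, 9]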

Non-comparison based sorting –
In non-comparison based sorting, elements of the array are not compared with each other to determine the sorted order.

  • Radix sort –
    Best, average and worst case time complexity: nk, where k is the maximum number of digits in the elements of the array.
  • Count sort –
    Best, average and worst case time complexity: n+k, where k is the size of the count array. (A sketch follows this list.)
  • Bucket sort –
    Best and average time complexity: n+k, where k is the number of buckets.
    Worst case time complexity: n^2, if all elements belong to the same bucket.
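
To illustrate the n+k bound (and the stability discussed below), here is a minimal counting sort sketch in Python; it assumes non-negative integer keys smaller than k, and the names are illustrative:

def counting_sort(a, k):
    # Counting sort for integers in [0, k); runs in O(n + k).
    count = [0] * k
    for x in a:
        count[x] += 1
    for v in range(1, k):        # prefix sums: count[v] = #elements <= v
        count[v] += count[v - 1]
    out = [0] * len(a)
    for x in reversed(a):        # right-to-left scan keeps the sort stable
        count[x] -= 1
        out[count[x]] = x
    return out

print(counting_sort([4, 1, 3, 4, 3], k=5))  # [1, 3, 3, 4, 4]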

In-place/Out-of-place technique –
A sorting technique is in-place if it does not use any extra memory to sort the array.
Among the comparison based techniques discussed, only merge sort is an out-of-place technique, as it requires an extra array to merge the sorted subarrays.
Among the non-comparison based techniques discussed, all are out-of-place: counting sort uses a count array, and bucket sort uses an auxiliary array of buckets to sort the input.

Online/Offline technique –
A sorting technique is considered online if it can accept new data while the procedure is ongoing, i.e. the complete input is not required to start the sorting operation.
Among the comparison based techniques discussed, only insertion sort qualifies, because of the way it works: it processes the array from left to right, so if new elements are appended on the right, the work already done is not affected. (A sketch of this property follows.)
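
A minimal sketch of the online property, assuming the elements arrive one at a time (bisect.insort is used for brevity; insertion sort proper would find the position with a linear scan instead of a binary search):

import bisect

stream = [5, 2, 8, 1]          # data arriving one element at a time
prefix = []                    # the sorted output grown so far
for x in stream:
    bisect.insort(prefix, x)   # insert x into the already-sorted prefix
    print(prefix)              # [5], [2, 5], [2, 5, 8], [1, 2, 5, 8]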

Stable/Unstable technique –
A sorting technique is stable if it does not change the relative order of elements with the same value.
Of the comparison based techniques, bubble sort, insertion sort and merge sort are stable. Selection sort is unstable, as it may change the relative order of equal elements. For example, consider the array 4, 4, 1.

In the first iteration, the minimum element found is 1 and it is swapped with the 4 at the 0th position. That 4 therefore ends up after the 4 originally at the 1st position, and no later swap restores their order, as the sketch below shows. Similarly, quick sort and heap sort are also unstable.
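
A minimal sketch that tags the equal keys so the reordering is visible (the tagging scheme is only for illustration):

def selection_sort(a, key=lambda x: x):
    # Textbook selection sort: swap the minimum of the unsorted
    # suffix to the front.  The long-distance swap breaks stability.
    for i in range(len(a)):
        m = min(range(i, len(a)), key=lambda j: key(a[j]))
        a[i], a[m] = a[m], a[i]
    return a

data = [(4, 'a'), (4, 'b'), (1, '')]
print(selection_sort(data, key=lambda t: t[0]))
# [(1, ''), (4, 'b'), (4, 'a')] -- the two 4s have swapped order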

Of the non-comparison based techniques, counting sort and bucket sort are stable, whereas the stability of radix sort depends on the underlying algorithm used to sort the digits.

Analysis of sorting techniques –

  • When the array is almost sorted, insertion sort is preferred.
  • When the order of the input is not known, merge sort is preferred, as it has a worst case time complexity of nlogn and it is stable as well.
  • When the array is already sorted, insertion and bubble sort give complexity of n, but quick sort (with a first- or last-element pivot) gives complexity of n^2.

Que – 1. Which sorting algorithm will take the least time when all elements of the input array are identical? Consider typical implementations of sorting algorithms.
(A) Insertion Sort
(B) Heap Sort
(C) Merge Sort
(D) Selection Sort

Solution: (A). As discussed, insertion sort runs in n time when the input array is already sorted, and an array of identical elements is a special case of a sorted array.

Que – 2. Consider the Quicksort algorithm. Suppose there is a procedure for finding a pivot element which splits the list into two sub-lists each of which contains at least one-fifth of the elements. Let T(n) be the number of comparisons required to sort n elements. Then, (GATE-CS-2012)

(A) T(n) <= 2T(n/5) + n

(B) T(n) <= T(n/5) + T(4n/5) + n

(C) T(n) <= 2T(4n/5) + n

(D) T(n) <= 2T(n/2) + n

Solution: The complexity of quick sort can be written as:

T(n) = T(k) + T(n-k-1) + cn

As given in the question, one sub-list contains at least 1/5th of the total elements, so the other sub-list contains at most 4/5th. Substituting these values, we get:

T(n) = T(n/5) + T(4n/5) + cn, which matches option (B).
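
As a quick sanity check (not part of the original solution), a recursion-tree argument shows why this split still gives O(nlogn): every level of the tree does at most cn work, and the deepest path follows the 4n/5 branch, so there are at most about log base 5/4 of n levels. In LaTeX notation:

T(n) \le T\!\left(\tfrac{n}{5}\right) + T\!\left(\tfrac{4n}{5}\right) + cn
     \le cn \cdot \log_{5/4} n = O(n \log n)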

Time and Space Complexity Comparison Table –

Sorting Algorithm    Best Case      Average Case    Worst Case     Space (Worst Case)
Bubble Sort          Ω(N)           Θ(N^2)          O(N^2)         O(1)
Selection Sort       Ω(N^2)         Θ(N^2)          O(N^2)         O(1)
Insertion Sort       Ω(N)           Θ(N^2)          O(N^2)         O(1)
Merge Sort           Ω(N log N)     Θ(N log N)      O(N log N)     O(N)
Heap Sort            Ω(N log N)     Θ(N log N)      O(N log N)     O(1)
Quick Sort           Ω(N log N)     Θ(N log N)      O(N^2)         O(log N)
Radix Sort           Ω(Nk)          Θ(Nk)           O(Nk)          O(N + k)
Count Sort           Ω(N + k)       Θ(N + k)        O(N + k)       O(k)
Bucket Sort          Ω(N + k)       Θ(N + k)        O(N^2)         O(N)

When to use each Sorting Algorithm

A sorting algorithm is an algorithm that arranges items in a certain order. The fundamental task is to put the items in the desired order so that the records are re-arranged to make searching easier.
Below is a description, algorithm by algorithm, of when to use each sorting technique for better performance –

  1. Selection Sort –
    This algorithm sorts an array by repeatedly finding the minimum element (considering ascending order) from the unsorted part and putting it at the beginning. It maintains two subarrays in the given array: the subarray which is already sorted, and the remaining subarray which is unsorted. In every iteration, the minimum element from the unsorted subarray is picked and moved to the sorted subarray.

    We can use Selection Sort under the following constraints:

    • When the list is small, as its time complexity of O(N^2) makes it inefficient for large lists.
    • When memory space is limited, because it makes the minimum possible number of swaps during sorting.
  2. Bubble Sort –
    This is the simplest sorting algorithm; it works by repeatedly swapping adjacent elements if they are in the wrong order. If we have N elements in total, we need to repeat this process N-1 times.

    We can use Bubble Sort under the following constraints:

    • When the items are almost sorted, since it then takes only one pass to detect that the list is already sorted. If the list is largely unsorted, however, the algorithm is only suitable for small datasets.
    • It is fastest on an extremely small or nearly sorted set of data.
  3. Insertion Sort –
    This is a simple sorting algorithm that works the way we sort playing cards in our hands. It places an unsorted element at its suitable position in each iteration.

    We can use Insertion Sort under the following constraints:

    • When the data is nearly sorted or the list is small: it has a complexity of O(N^2), but if the list is nearly sorted only a small number of elements need to slide over to insert each element at its correct location.
    • It is stable and runs fast when the list is nearly sorted.
    • When memory usage is a constraint, as it has a space complexity of O(1).
  4. Merge Sort –
    This sorting algorithm is based on the divide and conquer paradigm. It divides the input array into two halves, calls itself on the two halves, and then merges the two sorted halves.
    The merge() function is used for merging the two halves. The call merge(arr, l, m, r) is the key step: it assumes that arr[l..m] and arr[m+1..r] are sorted and merges the two sorted sub-arrays into one.

    We can use Merge Sort under the following constraints:

    • When the data structure doesn't support random access, since merge sort works with purely sequential access (forward iterators rather than random access iterators).
    • It is widely used for external sorting, where random access can be very expensive compared to sequential access.
    • It is useful when the data contains many equal elements, since its stability preserves their original order.
    • Merge sort is fast in the case of a linked list: accessing data at some index in a linked list requires traversing from the head, but merge sort accesses data sequentially and its need for random access is low (see the sketch after this list).
    • The main advantage of merge sort is its stability: elements that compare equal retain their original order.
  5. Quick Sort –
    This sorting algorithm is also based on divide and conquer. It picks an element as the pivot and partitions the given list around it. After partitioning, Quick Sort is applied recursively to the two sublists, i.e. the sublist to the left of the pivot element and the sublist to its right.

    We can use Quick Sort under the following constraints:

    • Quick sort is fast, but not always O(N log N): there are worst cases where it becomes O(N^2).
    • Quick sort is typically more effective for datasets that fit in memory. For larger data sets it proves inefficient, so algorithms like merge sort are preferred in that case.
    • Quick Sort is an in-place sort (it doesn't require any extra storage beyond the recursion stack), so it is appropriate for arrays.
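
The merge sort sketch referenced above: a minimal merge sort over a singly linked list in Python (the Node class and names are illustrative). Note that it only ever follows next pointers and never indexes into the data:

class Node:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def merge_sort_list(head):
    # Merge sort on a singly linked list: purely sequential access,
    # and the merge relinks nodes instead of copying them.
    if head is None or head.next is None:
        return head
    slow, fast = head, head.next      # find the middle: fast advances
    while fast and fast.next:         # two nodes per step of slow
        slow, fast = slow.next, fast.next.next
    mid, slow.next = slow.next, None  # cut the list in two
    left, right = merge_sort_list(head), merge_sort_list(mid)
    dummy = tail = Node(None)
    while left and right:             # <= takes equal elements from the
        if left.val <= right.val:     # left half first, so it is stable
            tail.next, left = left, left.next
        else:
            tail.next, right = right, right.next
        tail = tail.next
    tail.next = left or right
    return dummy.next

head = merge_sort_list(Node(3, Node(1, Node(2))))   # 3 -> 1 -> 2
while head:
    print(head.val, end=' ')                        # prints: 1 2 3
    head = head.next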

Which is the fastest sorting algorithm when the list is nearly sorted?

Bubble sort is fast, but insertion sort has lower overhead. Shell sort is fast because it is based on insertion sort. Merge sort, heap sort, and quick sort do not adapt to nearly sorted data.

Which is the best sorting algorithm to use when the list is almost ordered in ascending sequence?

Many sorting algorithms are available, but the one best suited for an almost sorted array is insertion sort.

Which sorting algorithm gives the best performance when the array elements are already sorted: quick, heap, merge or insertion?

Explanation: Insertion sort is at its best if the input data is already sorted (bubble sort with an early-exit check shares this property).

Which algorithm gives better performance in sorting (MCQ)?

Explanation: Quick sort is often the fastest sorting algorithm in practice because of its highly optimized inner loop.

Which of the following sorting algorithms is best if the elements are already sorted?

Explanation: Insertion sort's best case running time is O(n); the best case occurs when the input list is already sorted.

Which algorithm is more efficient?

Quicksort is one of the most efficient sorting algorithms, and this makes it one of the most used as well. The first step is to select a pivot; this element separates the data, with the numbers smaller than it on its left and the greater numbers on its right.

Time Complexities of Sorting Algorithms:

Algorithm        Best          Worst
Selection Sort   Ω(n^2)        O(n^2)
Heap Sort        Ω(n log n)    O(n log n)
Radix Sort       Ω(nk)         O(nk)
Bucket Sort      Ω(n+k)        O(n^2)

Algorithm

A graphical example of insertion sort. The partially sorted list (black) initially contains only the first element in the list. With each iteration one element (red) is removed from the "not yet checked for order" input data and inserted in-place into the sorted list.

Insertion sort iterates, consuming one input element each repetition, and grows a sorted output list. At each iteration, insertion sort removes one element from the input data, finds the location it belongs within the sorted list, and inserts it there. It repeats until no input elements remain.

Sorting is typically done in-place, by iterating up the array, growing the sorted list behind it. At each array-position, it checks the value there against the largest value in the sorted list (which happens to be next to it, in the previous array-position checked). If larger, it leaves the element in place and moves to the next. If smaller, it finds the correct position within the sorted list, shifts all the larger values up to make a space, and inserts into that correct position.

The resulting array after k iterations has the property that the first k + 1 entries are sorted ("+1" because the first entry is skipped). In each iteration the first remaining entry of the input is removed and inserted into the result at the correct position, thus extending the result:

    [ sorted prefix ] x [ unsorted rest ]   becomes   [ sorted prefix with x inserted ] [ unsorted rest ]

with each element greater than x copied to the right as it is compared against x.

The most common variant of insertion sort, which operates on arrays, can be described as follows:

  1. Suppose there exists a function called Insert designed to insert a value into a sorted sequence at the beginning of an array. It operates by beginning at the end of the sequence and shifting each element one place to the right until a suitable position is found for the new element. The function has the side effect of overwriting the value stored immediately after the sorted sequence in the array.
  2. To perform an insertion sort, begin at the left-most element of the array and invoke Insert to insert each element encountered into its correct position. The ordered sequence into which the element is inserted is stored at the beginning of the array in the set of indices already examined. Each insertion overwrites a single value: the value being inserted.

Pseudocode of the complete algorithm follows, where the arrays are zero-based:[1]

i ← 1
while i < length(A)
    j ← i
    while j > 0 and A[j-1] > A[j]
        swap A[j] and A[j-1]
        j ← j - 1
    end while
    i ← i + 1
end while

The outer loop runs over all the elements except the first one, because the single-element prefix A[0:1] is trivially sorted, so the invariant that the first i entries are sorted is true from the start. The inner loop moves element A[i] to its correct place so that after the loop, the first i+1 elements are sorted. Note that the and-operator in the test must use short-circuit evaluation, otherwise the test might result in an array bounds error, when j=0 and it tries to evaluate A[j-1] > A[j] (i.e. accessing A[-1] fails).

After expanding the swap operation in-place as x ← A[j]; A[j] ← A[j-1]; A[j-1] ← x (where x is a temporary variable), a slightly faster version can be produced that moves A[i] to its position in one go and only performs one assignment in the inner loop body:[1]

i ← 1
while i < length(A)
    x ← A[i]
    j ← i - 1
    while j >= 0 and A[j] > x
        A[j+1] ← A[j]
        j ← j - 1
    end while
    A[j+1] ← x
    i ← i + 1
end while

The new inner loop shifts elements to the right to clear a spot for x = A[i].
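
For readers who prefer runnable code, here is a direct Python transcription of this shifting variant (a sketch; the function name is ours):

def insertion_sort(A):
    for i in range(1, len(A)):
        x = A[i]
        j = i - 1
        while j >= 0 and A[j] > x:   # j >= 0 is checked first, so A[j]
            A[j + 1] = A[j]          # is never read out of range
            j -= 1
        A[j + 1] = x                 # one final write drops x into the gap
    return A

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]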

The algorithm can also be implemented in a recursive way. The recursion just replaces the outer loop, calling itself and storing successively smaller values of n on the stack until n equals 0, where the function then returns up the call chain to execute the code after each recursive call starting with n equal to 1, with n increasing by 1 as each instance of the function returns to the prior instance. The initial call would be insertionSortR(A, length(A)-1).

function insertionSortR(array A, int n)
    if n > 0
        insertionSortR(A, n-1)
        x ← A[n]
        j ← n - 1
        while j >= 0 and A[j] > x
            A[j+1] ← A[j]
            j ← j - 1
        end while
        A[j+1] ← x
    end if
end function

It does not make the code any shorter and does not reduce the execution time, but it increases the additional memory consumption from O(1) to O(N) (at the deepest level of recursion the stack contains N references to the A array, each with an accompanying value of the variable n from N down to 1).
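
A Python transcription of the recursive variant, for comparison (again a sketch; note the O(N) recursion depth described in the paragraph above):

def insertion_sort_r(A, n):
    # Sort A[0..n] by first sorting A[0..n-1] recursively, then
    # inserting A[n] into place; the recursion replaces the outer loop.
    if n > 0:
        insertion_sort_r(A, n - 1)
        x = A[n]
        j = n - 1
        while j >= 0 and A[j] > x:
            A[j + 1] = A[j]
            j -= 1
        A[j + 1] = x

A = [5, 2, 4, 6, 1, 3]
insertion_sort_r(A, len(A) - 1)   # initial call as given in the text
print(A)                          # [1, 2, 3, 4, 5, 6]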