Priority queues are a generalisation of stacks and queues.
Rather than inserting and deleting elements in a fixed order, each element is assigned a priority represented by an integer.
We always remove an element with the highest priority, which by convention is the one with the minimal integer key. In practice, priority queues are often given a fixed capacity.
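As a concrete illustration (using Java's standard library, whose PriorityQueue is a min-priority queue by default), the element removed first is always the one with the smallest key:

```java
import java.util.PriorityQueue;

public class MinFirstDemo {
    public static void main(String[] args) {
        // Java's PriorityQueue removes the element with the smallest
        // (i.e. highest-priority) key first.
        PriorityQueue<Integer> pq = new PriorityQueue<>();
        pq.add(5);
        pq.add(1);
        pq.add(3);
        System.out.println(pq.poll()); // 1
        System.out.println(pq.poll()); // 3
        System.out.println(pq.poll()); // 5
    }
}
```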
Advantages of Priority Queues
- Simple to implement
- Processes with different priorities can be handled efficiently
- Useful in applications with differing scheduling requirements
- Nodes can be weighted, allowing those with greater precedence to move towards the head of the queue, ahead of those with lesser priority, rather than always being appended to the tail as in a normal queue
- A priority queue typically does not permit null elements (Java's PriorityQueue, for example, rejects them)
Disadvantages of Priority Queues
- Insertions are no longer performed in constant time: in a list-based implementation, a new node must be placed behind all nodes with greater or equal priority, which in the worst case means traversing the entire list, so insertion is O(n). However, if the set of possible weights is finite, maintaining a pointer for each weight in a static array restores constant-time insertion.
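That last point can be sketched with a minimal list-based implementation (the class and method names here are my own, for illustration): each insert walks past all nodes with smaller or equal keys, so it is O(n) in the worst case, while removing the minimum just pops the head in O(1).

```java
// Hypothetical sketch of a sorted-linked-list min-priority queue:
// insert is O(n) in the worst case, removeMin is O(1).
public class SortedListPQ {
    private static class Node {
        int key;
        Node next;
        Node(int key, Node next) { this.key = key; this.next = next; }
    }

    private Node head;

    public void insert(int key) {
        if (head == null || key < head.key) {  // new minimum goes in front
            head = new Node(key, head);
            return;
        }
        Node cur = head;                       // otherwise walk past smaller keys: O(n)
        while (cur.next != null && cur.next.key <= key) cur = cur.next;
        cur.next = new Node(key, cur.next);
    }

    public int removeMin() {                   // head always holds the minimum
        int k = head.key;
        head = head.next;
        return k;
    }
}
```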
Implementation of Priority Queues
A priority queue can be implemented using an array, a linked list, a binary search tree, or a heap, the heap being the most efficient choice. (A stack, by contrast, is the natural structure for operations such as undo.)
Before we come to heaps, it is worth considering the other implementation choices and the complexity of their operations.
The first idea is to use an unordered array of some fixed size limit, together with a current index n. Inserting into such an array is a constant-time operation: we store the new element at index n and increment n. Finding the minimum, however, takes O(n), since we have to scan the whole portion of the array that is in use. Consequently, deleting the minimal element also takes O(n): first we find the minimal element, then we swap it with the last element in the array and decrement n.
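A minimal sketch of this unordered-array strategy (the naming is my own) might look as follows:

```java
// Sketch of an unordered-array min-priority queue: insert is O(1);
// removeMin scans for the minimum (O(n)), swaps it with the last
// used slot, and decrements n.
public class UnorderedArrayPQ {
    private final int[] data;
    private int n = 0; // current number of elements

    public UnorderedArrayPQ(int limit) { data = new int[limit]; }

    public void insert(int key) {     // O(1): append at index n
        data[n++] = key;
    }

    public int removeMin() {          // O(n): scan, fill the hole with the last element
        int min = 0;
        for (int i = 1; i < n; i++)
            if (data[i] < data[min]) min = i;
        int key = data[min];
        data[min] = data[--n];
        return key;
    }
}
```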
The second idea is to keep the array sorted. In this case, inserting an element is O(n): we can quickly (in O(log n) steps) find the place i where it belongs using binary search, but we then need to shift elements to make room for the insertion, which takes O(n) copy operations. Finding the minimum is O(1), since it is stored at index 0 in the array. We can also make deleting it O(1), either by keeping the array sorted in descending order (so the minimum sits at the last used index) or by keeping two array indices: one for the smallest current element and one for the largest.
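The descending-order variant can be sketched like this (again with hypothetical naming): binary search locates the slot, shifting still costs O(n), and the minimum is removed from the end in O(1).

```java
// Sketch of a sorted-array min-priority queue kept in DESCENDING order,
// so the minimum sits at index n-1: insert is O(n) (binary search plus
// shift), peekMin/removeMin are O(1).
public class SortedArrayPQ {
    private final int[] data;
    private int n = 0;

    public SortedArrayPQ(int limit) { data = new int[limit]; }

    public void insert(int key) {
        int lo = 0, hi = n;           // binary search for the insertion point
        while (lo < hi) {             // data[0..n) is non-increasing
            int mid = (lo + hi) / 2;
            if (data[mid] > key) lo = mid + 1; else hi = mid;
        }
        System.arraycopy(data, lo, data, lo + 1, n - lo); // O(n) shift
        data[lo] = key;
        n++;
    }

    public int peekMin() { return data[n - 1]; } // O(1)

    public int removeMin() { return data[--n]; } // O(1)
}
```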
Heaps, finally, achieve logarithmic time for both inserting and deleting the minimal element.
Usually, the numbers of inserts and deletes roughly balance. So neither the unordered nor the sorted array provides a good data structure, since a sequence of n inserts and deletes has worst-case complexity O(n²).
The idea of the heap is to use something cleverly situated in between. A min-heap is like an array that is ordered to some extent: enough that the least element can be found in O(1), but not so rigidly that inserting would take O(n) time. A min-heap is a binary tree whose invariant guarantees that the least element is at the root. For this to be the case we just require that the key of a node is less than or equal to the keys of its children; equivalently, each node except the root is greater than or equal to its parent.
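A minimal array-based min-heap along these lines (the naming is my own) could look as follows: insert bubbles the new key up past larger parents, and removing the minimum moves the last key to the root and sifts it down, both in O(log n).

```java
// Array-based min-heap sketch: the parent of index i is (i - 1) / 2,
// its children are 2i + 1 and 2i + 2. The invariant is that every
// node's key is <= the keys of its children.
public class MinHeap {
    private final int[] heap;
    private int n = 0;

    public MinHeap(int limit) { heap = new int[limit]; }

    public void insert(int key) {     // O(log n)
        int i = n++;
        heap[i] = key;
        while (i > 0 && heap[i] < heap[(i - 1) / 2]) { // sift up past larger parents
            swap(i, (i - 1) / 2);
            i = (i - 1) / 2;
        }
    }

    public int removeMin() {          // O(log n)
        int min = heap[0];
        heap[0] = heap[--n];          // move the last element to the root
        int i = 0;
        while (true) {                // sift down: swap with the smaller child
            int l = 2 * i + 1, r = 2 * i + 2, smallest = i;
            if (l < n && heap[l] < heap[smallest]) smallest = l;
            if (r < n && heap[r] < heap[smallest]) smallest = r;
            if (smallest == i) break;
            swap(i, smallest);
            i = smallest;
        }
        return min;
    }

    private void swap(int a, int b) { int t = heap[a]; heap[a] = heap[b]; heap[b] = t; }
}
```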
In Java's PriorityQueue, for example, the elements are ordered according to their natural ordering, or by a Comparator provided at queue construction time.
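For instance, supplying Comparator.reverseOrder() at construction turns Java's default min-queue into a max-queue:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class ComparatorDemo {
    public static void main(String[] args) {
        // A Comparator given at construction time overrides natural ordering;
        // reverseOrder() makes poll() return the LARGEST element first.
        PriorityQueue<Integer> maxQueue =
                new PriorityQueue<>(Comparator.reverseOrder());
        maxQueue.add(5);
        maxQueue.add(1);
        maxQueue.add(3);
        System.out.println(maxQueue.poll()); // 5
    }
}
```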