Basics of Algorithm Analysis
An algorithm is efficient if, when implemented, it runs quickly on real input instances.
An algorithm is efficient if it achieves qualitatively better worst-case performance, at an analytical level, than brute-force search.
An algorithm is efficient if it has a polynomial running time: the number of primitive computational steps it performs is bounded by a polynomial function of the input size n.
Computational tractability is based on bounding an algorithm's worst-case running time on inputs of size n by a rate of growth that is at most proportional to some function f(n).
Let T(n) be a function of the worst-case running time of a certain algorithm on an input of size n.
Given another function f(n), we say that T(n) is O(f(n)) if T(n) is bounded above by a constant multiple of f(n). (T(n) ≤ c · f(n) for all n ≥ n₀)
Example: T(n) = pn² + qn + r ≤ (p + q + r)n² for all n ≥ 1, thus T(n) is O(n²). It is also O(n³).
Let T(n) be a function of the worst-case running time of a certain algorithm on an input of size n.
Given another function f(n), we say that T(n) is Ω(f(n)) if T(n) is bounded below by a constant multiple of f(n). (T(n) ≥ ε · f(n) for all n ≥ n₀)
Example: T(n) = pn² + qn + r ≥ pn², thus T(n) is Ω(n²). It is also Ω(n).
FreeMan list: The linked list of free men.
Next[m] array: Indicates for each man m the position of the next woman he will propose to.
Current[w] array: Indicates for each woman w her current partner. The default value is null.
Ranking[w, m] array: Contains the rank of man m in the sorted order of w's preferences.
Identify a free man: Take the first man from FreeMan.
Identify the highest-ranking woman to whom the man m has not yet proposed: Find the woman with Next[m].
Decide if woman w is engaged and who her current partner is: Find the partner with Current[w].
Decide which of m or m' is preferred by w in constant time: Compare Ranking[w, m] and Ranking[w, m'].
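As a sketch, these four constant-time operations can be combined into a Python implementation of the Gale–Shapley algorithm. The function name and the preference-list input format are illustrative assumptions, not part of the notes:

```python
from collections import deque

def gale_shapley(man_pref, woman_pref):
    """Stable matching for n men and n women. man_pref[m] and woman_pref[w]
    are preference lists of indices, most preferred first."""
    n = len(man_pref)
    free_men = deque(range(n))       # FreeMan: the list of free men
    next_prop = [0] * n              # Next[m]: next woman m will propose to
    current = [None] * n             # Current[w]: w's current partner
    # Ranking[w, m]: rank of man m in woman w's preference order
    ranking = [[0] * n for _ in range(n)]
    for w in range(n):
        for rank, m in enumerate(woman_pref[w]):
            ranking[w][m] = rank

    while free_men:
        m = free_men[0]                        # identify a free man
        w = man_pref[m][next_prop[m]]          # highest-ranked woman not yet proposed to
        next_prop[m] += 1
        if current[w] is None:                 # w is free: they become engaged
            current[w] = m
            free_men.popleft()
        elif ranking[w][m] < ranking[w][current[w]]:  # w prefers m to her partner
            free_men.popleft()
            free_men.append(current[w])        # her old partner becomes free
            current[w] = m
        # otherwise m stays free and proposes to his next choice
    return current                             # current[w] = man matched to w
```

Each iteration performs only the constant-time operations listed above, so the whole loop runs in O(n²) time.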
Finding the maximum: Process the numbers in a single pass, remembering the largest element seen so far.
Merging two sorted lists: Repeatedly compare the front elements of the two lists and append the smaller one to the output.
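The merging step can be sketched in Python; each element is touched a constant number of times, so the merge is linear in the combined length:

```python
def merge(a, b):
    """Merge two sorted lists in O(len(a) + len(b)) time."""
    out = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:            # take the smaller front element
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])               # at most one of these is non-empty
    out.extend(b[j:])
    return out
```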
Quadratic time arises naturally from a pair of nested loops, or from performing a search over all pairs of input items while spending constant time per pair.
Algorithms with sublinear runtime arise in a model of computation where the input can be "queried" indirectly rather than read completely, since it takes linear time just to read the input.
A priority queue maintains a set of elements S, where each element v ∈ S has an associated value key(v)
that denotes the priority of element v; smaller keys represent higher priorities.
A priority queue supports the addition and deletion of elements from the set, as well as the selection of the element with the smallest key.
Heap order: The key of any element is at least as large as the key of the element at its parent node in the tree.
The root node: Heap[1]
Left Child: leftChild(i) = 2i
Right Child: rightChild(i) = 2i + 1
Parent: parent(i) = ⌊i/2⌋
Size: length(H)
Append the new element to the end of the list with index i.
Use the Heapify-up procedure to fix our heap:
Let j = ⌊i/2⌋ be the parent of the node i.
If key[Heap[i]] < key[Heap[j]], swap the positions of Heap[i] and Heap[j].
Process recursively from the position j to continue fixing the heap.
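The insertion steps can be sketched in Python. This illustrative version uses a 1-indexed array (index 0 unused) so that the parent of position i is i // 2, matching the notes:

```python
def heapify_up(heap, i):
    """Fix the heap when the key at position i may be too small.
    heap is 1-indexed: heap[0] is unused, parent of i is i // 2."""
    while i > 1:
        j = i // 2                          # parent of node i
        if heap[i] < heap[j]:               # heap order violated
            heap[i], heap[j] = heap[j], heap[i]
            i = j                           # continue fixing from the parent
        else:
            break

def heap_insert(heap, v):
    heap.append(v)                          # append at the next free index
    heapify_up(heap, len(heap) - 1)
```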
When deleting the item at position i, move the element w in position n to position i.
Use the Heapify-up or Heapify-down procedure to fix our heap:
Let 2i and 2i + 1 be the children of the node i.
Let j be whichever of 2i and 2i + 1 holds the smaller key.
If key[Heap[i]] > key[Heap[j]], swap the positions of Heap[i] and Heap[j].
Process recursively from the position j to continue fixing the heap.
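The downward pass can be sketched in Python, again with a 1-indexed array; extracting the minimum is shown as one common use of it:

```python
def heapify_down(heap, i):
    """Fix the heap when the key at position i may be too big.
    heap is 1-indexed: heap[0] unused, n = len(heap) - 1 elements."""
    n = len(heap) - 1
    while 2 * i <= n:
        left, right = 2 * i, 2 * i + 1
        j = left                            # child with the smaller key
        if right <= n and heap[right] < heap[left]:
            j = right
        if heap[i] > heap[j]:               # heap order violated
            heap[i], heap[j] = heap[j], heap[i]
            i = j                           # continue fixing from position j
        else:
            break

def extract_min(heap):
    """Delete and return the root: move the last element into the root's
    position, then fix the heap downward."""
    root = heap[1]
    heap[1] = heap[-1]
    heap.pop()
    if len(heap) > 1:
        heapify_down(heap, 1)
    return root
```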
The heap data structure with the Heapify-down and Heapify-up operations can efficiently implement a priority queue that is constrained to hold at most N elements at any point in time.
To be able to access given elements of the priority queue efficiently, we simply maintain an additional array Position that stores the current position of each element in the heap.
To delete the element v, we apply Delete(H, Position[v]).
To change the key of element v, we identify its position with Position[v], change its key, and fix the heap with Heapify-up or Heapify-down.
If we can show that a running time T(n) is both O(f(n)) and Ω(f(n)), then T(n) grows exactly like f(n) to within a constant factor. Therefore, we could say that T(n) is Θ(f(n)).
Example: T(n) = pn² + qn + r is O(n²) and Ω(n²), thus T(n) is Θ(n²).
Upper bounds: If f = O(g) and g = O(h), then f = O(h).
Lower bounds: If f = Ω(g) and g = Ω(h), then f = Ω(h).
Tight bounds: If f = Θ(g) and g = Θ(h), then f = Θ(h).
Suppose that f and g are two functions and for some other function h, we have f = O(h) and g = O(h). Then f + g = O(h).
Let k be a fixed constant, and let f₁, f₂, …, fₖ and h be functions such that fᵢ = O(h) for all i. Then f₁ + f₂ + … + fₖ = O(h).
Suppose that f and g are two functions such that g = O(f). Then f + g = Θ(f); f is an asymptotically tight bound for the combined function.
Let f and g be two functions such that the limit of f(n)/g(n) as n → ∞ exists.
If the limit is equal to 0, then f = O(g).
If the limit is equal to some number c > 0, then f = Θ(g).
If the limit is equal to infinity, then f = Ω(g).
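A worked instance of the limit rule, using the polynomial example from above:

```latex
\lim_{n \to \infty} \frac{pn^2 + qn + r}{n^2}
  = \lim_{n \to \infty} \left( p + \frac{q}{n} + \frac{r}{n^2} \right)
  = p > 0
  \quad \Longrightarrow \quad
  pn^2 + qn + r = \Theta(n^2).
```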
Polynomials: Let f be a polynomial of degree d, in which the coefficient of the highest-order term is positive. Then f = O(n^d).
Logarithms: For every b > 1 and every x > 0, log_b n = O(n^x).
Exponentials: For every r > 1 and every d > 0, n^d = O(r^n).
The algorithm terminates in at most n² iterations. If we implement each iteration in constant time, the upper bound of this algorithm is O(n²).
ManPref[m, i] array: The ith woman on man m's preference list.
WomanPref[w, i] array: The ith man on woman w's preference list.
The basic way to get an algorithm with linear time (O(n)) is to process the input in a single pass, spending a constant amount of time on each item of input encountered.
O(n log n) time is common since it's the running time of any algorithm that splits its input into two equal-sized pieces, solves each piece recursively, and then combines the two solutions in linear time. Mergesort is a well-known example.
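This split-recurse-combine pattern can be sketched directly as mergesort in Python:

```python
def mergesort(a):
    """O(n log n): split into two equal-sized halves, sort each
    recursively, then merge the sorted halves in linear time."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = mergesort(a[:mid]), mergesort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # linear-time merge
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```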
More elaborate sets of nested loops often lead to algorithms that run in O(n³) time.
We obtain a running time of O(n^k) for any constant k when we search over all subsets of size k.
O(2^n) arises naturally as a running time for a search algorithm that must consider all subsets.
O(n!) arises in problems where the search space consists of all ways to arrange n items in order, or of all perfect matchings between two n-element sets.
The running time of binary search is O(log n), because of the successive shrinking of the search region. O(log n) arises as a time bound whenever an algorithm does a constant amount of work to throw away a constant fraction of the input.
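Binary search over a sorted array illustrates this: each comparison does constant work and discards half of the remaining region.

```python
def binary_search(a, target):
    """Return the index of target in sorted list a, or -1 if absent.
    Each iteration halves the search region, so the time is O(log n)."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1        # discard the lower half
        else:
            hi = mid - 1        # discard the upper half
    return -1
```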
A sequence of priority queue operations can be used to sort a set of n numbers. If each operation takes O(log n) time, we can sort n numbers in O(n log n) time.
A heap is a balanced binary tree that satisfies the heap order. We could use an array indexed by i = 1, …, N to represent the tree.
The heap element with the smallest key is at the root, so it takes O(1) time to identify the minimal element.
The procedure Heapify-up(H, i) fixes the heap property in O(log i) time, assuming that the array H is almost a heap with the key of H[i] too small. Using Heapify-up we can insert a new element into a heap of n elements in O(log n) time.
The procedure Heapify-down(H, i) fixes the heap property in O(log n) time, assuming that H is almost a heap with the key value of H[i] too big. Using Heapify-up or Heapify-down we can delete an element from a heap of n elements in O(log n) time.
StartHeap(N): Returns an empty heap set up to hold at most N elements. (O(N))
Insert(H, v): Inserts the item v into heap H. (O(log n))
Delete(H, i): Deletes the item at position i of heap H. (O(log n))
FindMin(H): Identifies the minimum element in the heap H. (O(1))
ExtractMin(H): Identifies and deletes an element with minimum key value from the heap H. (O(log n))