Computer Science

Topics: Basic Definitions, Subnetting Concepts, Algorithm and Time, Set Theory Basics, DBMS Basics, Data Communications, Computer Graphics Basics, History Creations

Algorithm


1. General Approaches in Algorithm Design:

In a broad sense, many algorithms approach problems in the same way. Thus, it is often convenient to classify them based on the approach they employ. One reason to classify algorithms is to gain insight into how an algorithm works and to understand its general approach. This can also give us ideas about how to approach similar problems for which we do not yet know algorithms. Of course, some algorithms defy classification, whereas others are based on a combination of approaches. This section presents some common approaches.



1.1 Randomized Algorithms:

Randomized algorithms rely on the statistical properties of random numbers. One example of a randomized algorithm is quicksort. Imagine sorting an unsorted pile of cancelled checks by hand. In order to sort this pile, we place the checks numbered less than or equal to what we guess is the median value in one pile, and in the other pile we place the checks numbered greater than this. Once we have the two piles, we divide each of them in the same manner and repeat the process until we end up with one check in every pile. At this point the checks are sorted. The randomization lies in the choice of the dividing value: rather than searching for the true median, we pick a check at random and rely on the random choices producing reasonably balanced piles on average.
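As a minimal sketch of this idea (the function name and sample check numbers are illustrative, not from the text), here is quicksort in Python with a randomly chosen pivot:

```python
import random

def quicksort(items):
    """Quicksort with a randomly chosen pivot.

    Mirrors the check-sorting description: split around a guessed
    median, then sort each resulting pile the same way.
    """
    if len(items) <= 1:
        return items  # a pile of zero or one checks is already sorted
    pivot = random.choice(items)  # a random stand-in for the median
    lesser = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]   # keeping equal items apart avoids infinite recursion on duplicates
    greater = [x for x in items if x > pivot]
    return quicksort(lesser) + equal + quicksort(greater)

print(quicksort([1042, 97, 3301, 512, 2048, 7]))
# [7, 97, 512, 1042, 2048, 3301]
```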


1.2 Divide-and-conquer Algorithms:

Divide-and-conquer algorithms revolve around three steps: divide, conquer and combine. In the divide step, we divide the data into smaller, more manageable pieces. In the conquer step, we process each division by performing some operations on it. In the combine step, we recombine the processed divisions. An example of a divide-and-conquer algorithm is merge sort. As before, imagine sorting an unsorted pile of cancelled checks by hand. We begin by dividing the pile in half. Next, we divide each of the resulting two piles in half and continue this process until we end up with one check in every pile. Once all piles contain a single check, we merge the piles two by two so that each new pile is a sorted combination of the two that were merged. Merging continues until we end up with one big pile again, at which point the checks are sorted.
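A minimal sketch of merge sort following the three steps above (names and sample data are illustrative):

```python
def merge_sort(items):
    """Sort a list with merge sort: divide, conquer, combine."""
    if len(items) <= 1:
        return items                  # a single check is already sorted
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # divide, then conquer each half
    right = merge_sort(items[mid:])
    return merge(left, right)         # combine the two sorted piles

def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # one of these is empty; the other holds the tail
    merged.extend(right[j:])
    return merged

print(merge_sort([1042, 97, 3301, 512, 2048, 7]))
# [7, 97, 512, 1042, 2048, 3301]
```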


1.3 Dynamic-programming solutions:

Dynamic-programming solutions are similar to divide-and-conquer methods in that both solve problems by breaking larger problems into sub-problems whose results are later recombined. However, the approaches differ in how the sub-problems are related. In divide-and-conquer algorithms, each sub-problem is independent of the others, so we solve each one recursively and simply combine its result with the results of the other sub-problems. In dynamic-programming solutions, sub-problems are not independent of one another; they overlap. When sub-problems overlap, a dynamic-programming solution is better than a divide-and-conquer approach, because the divide-and-conquer approach does more work than necessary, solving the shared sub-problems more than once. However, if the sub-problems are independent and there is no repetition, a divide-and-conquer algorithm is much better than dynamic programming.
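Computing Fibonacci numbers is a standard illustration of overlapping sub-problems; it is not mentioned in the text above, but it shows the contrast concisely. The naive recursion behaves like divide-and-conquer and re-solves shared sub-problems; the memoized version solves each sub-problem only once:

```python
def fib_naive(n):
    """Divide-and-conquer style: shared sub-problems are recomputed many times."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_dp(n, memo=None):
    """Dynamic programming: each sub-problem is solved once and cached."""
    if memo is None:
        memo = {}
    if n < 2:
        return n
    if n not in memo:
        memo[n] = fib_dp(n - 1, memo) + fib_dp(n - 2, memo)
    return memo[n]

print(fib_dp(40))  # fast; fib_naive(40) repeats the same work millions of times
```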


1.4 Greedy Algorithms:

Greedy algorithms make the decision that looks best at the moment. In other words, they make decisions that are locally optimal in the hope that they will lead to globally optimal solutions. At each stage, the greedy method extends the current partial solution with the choice that looks best, given the local optimum and the decisions already made; it never reconsiders earlier choices. It is not exhaustive, and for many problems it does not give the optimal answer. But when it does work, it is usually the fastest method available.
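A classic small example is making change with coins (this example is illustrative and not from the text). The greedy choice, always taking the largest coin that still fits, happens to be optimal for canonical coin systems such as 25, 10, 5, 1, but not for arbitrary denominations:

```python
def make_change(amount, denominations=(25, 10, 5, 1)):
    """Greedy coin change: always take the largest coin that still fits.

    Locally optimal choices happen to be globally optimal for canonical
    coin systems like (25, 10, 5, 1); for arbitrary denominations the
    greedy answer can use more coins than necessary.
    """
    coins = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            amount -= coin
            coins.append(coin)
    return coins

print(make_change(63))  # [25, 25, 10, 1, 1, 1]
```

With denominations (4, 3, 1) and an amount of 6, this sketch returns [4, 1, 1] even though [3, 3] uses fewer coins, which illustrates how locally optimal choices can miss the global optimum.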


1.5 Approximation Algorithms:

Approximation algorithms are algorithms that do not compute optimal solutions; instead, they compute solutions that are “good enough”. Often we use approximation algorithms to solve problems that are computationally expensive but are too significant to give up altogether. The travelling salesman problem is one example of a problem usually solved using an approximation algorithm.
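As an illustrative sketch (not a prescribed method from the text), the nearest-neighbour heuristic builds a travelling-salesman tour by always visiting the closest unvisited city next. It runs in O(n^2) time, but the tour it produces is only approximately optimal; better-studied approximation algorithms, such as Christofides' algorithm for the metric case, come with provable quality guarantees.

```python
import math

def nearest_neighbour_tour(points):
    """Approximate TSP tour: always visit the closest unvisited city next.

    A heuristic, not an optimal solver: the tour can be noticeably
    longer than the best one, but it is often "good enough".
    """
    unvisited = list(points[1:])
    tour = [points[0]]                # start the tour at the first city
    while unvisited:
        last = tour[-1]
        nearest = min(unvisited, key=lambda p: math.dist(last, p))
        unvisited.remove(nearest)
        tour.append(nearest)
    return tour

cities = [(0, 0), (2, 3), (5, 1), (1, 6), (4, 4)]
print(nearest_neighbour_tour(cities))
```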


2. Analysis of Algorithms:

Analysis of algorithms is the theoretical study of computer program performance and resource usage, and is often practised abstractly, without the use of a specific programming language or implementation. The practical goal of algorithm analysis is to predict the performance of different algorithms in order to guide program design decisions. There are many reasons to understand the performance of an algorithm. For example, we often have a choice of several algorithms when solving a problem (such as sorting). Understanding how each performs helps us differentiate between them. Understanding the burden an algorithm places on an application also helps us plan how to use the algorithm more effectively.
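One practical complement to the theoretical analysis is simply measuring running times. The sketch below is illustrative (absolute timings depend on the machine and input); it compares an inline insertion sort against Python's built-in sort on the same random data:

```python
import timeit

# The same task, two algorithms, very different growth rates.
setup = "import random; data = [random.random() for _ in range(2000)]"

insertion = """
result = data[:]
for i in range(1, len(result)):
    key = result[i]
    j = i - 1
    while j >= 0 and result[j] > key:
        result[j + 1] = result[j]
        j -= 1
    result[j + 1] = key
"""

print("insertion sort:", timeit.timeit(insertion, setup, number=5))
print("built-in sort: ", timeit.timeit("sorted(data)", setup, number=5))
```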


2.1 Worst-Case Analysis:

Most algorithms do not perform the same in all cases; normally an algorithm’s performance varies with the data passed to it. Typically, three cases are recognized: the best case, the average case and the worst case. For any algorithm, understanding what constitutes each of these cases is an important part of analysis, because performance can vary significantly between them. Consider a simple algorithm such as linear search. Linear search is a natural but inefficient search technique in which we look for an element simply by traversing a set from one end to the other. In the best case, the element we are looking for is the first element we inspect, so we end up traversing only a single element. In the worst case, however, the desired element is the last element that we inspect, in which case we end up traversing all the elements. On average, we can expect to find the element somewhere in the middle.
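A minimal sketch of linear search, with the three cases noted in comments (the function name and sample data are illustrative):

```python
def linear_search(items, target):
    """Traverse from one end to the other; return the index of target, or -1.

    Best case:    target is items[0]            -> one comparison.
    Worst case:   target is last, or absent     -> n comparisons.
    Average case: about n/2 comparisons for a uniformly placed target.
    """
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1

data = [7, 512, 97, 2048, 1042]
print(linear_search(data, 7))     # best case: found immediately
print(linear_search(data, 1042))  # worst case: last element inspected
```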


2.2 O-notation:

O-notation, also known as Big O notation, is the most common notation used to express an algorithm’s performance in a formal manner. Formally, O-notation expresses the upper bound of a function to within a constant factor. Specifically, if g(n) is an upper bound of f(n), then for some constant c it is possible to find a value of n, call it n₀, such that any value of n ≥ n₀ will result in f(n) ≤ cg(n). Normally we express an algorithm’s performance as a function of the size of the data it processes. That is, for some data of size n, we describe its performance with some function f(n). Primarily, we are interested only in the growth rate of f, which describes how quickly the algorithm’s performance will degrade as the size of the data it processes becomes arbitrarily large. An algorithm’s growth rate, or order of growth, is significant because ultimately it describes how efficient the algorithm is for arbitrary inputs. O-notation reflects an algorithm’s order of growth.
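As a concrete illustration (the particular numbers here are chosen for this example, not taken from the text above): let f(n) = 3n^2 + 10n. Then f(n) is O(n^2), because choosing c = 4 and n₀ = 10 gives 3n^2 + 10n ≤ 4n^2 whenever n ≥ 10, since 10n ≤ n^2 for all n ≥ 10.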


2.3 Time Complexity of Sorting Algorithms:

Algorithm        Best            Average         Worst
Selection Sort   Ω(n^2)          Θ(n^2)          O(n^2)
Bubble Sort      Ω(n)            Θ(n^2)          O(n^2)
Insertion Sort   Ω(n)            Θ(n^2)          O(n^2)
Heap Sort        Ω(n log(n))     Θ(n log(n))     O(n log(n))
Quick Sort       Ω(n log(n))     Θ(n log(n))     O(n^2)
Merge Sort       Ω(n log(n))     Θ(n log(n))     O(n log(n))
Bucket Sort      Ω(n+k)          Θ(n+k)          O(n^2)
Radix Sort       Ω(nk)           Θ(nk)           O(nk)

Here n is the number of elements; k is the number of buckets for bucket sort, and the number of digits (key length) for radix sort.