Algorithm Design Techniques

 

The following is an overview of how algorithms are classified and of several popular design approaches.

What is an algorithm? 
An algorithm is a procedure for solving a particular problem in a finite number of steps for a finite-sized input. 
Algorithms can be classified in various ways. They are: 
 

  1. Implementation Method
  2. Design Method
  3. Design Approaches
  4. Other Classifications

In this article, the different algorithms in each classification method are discussed. 

The classification of algorithms is important for several reasons:

Organization: Algorithms can be very complex and by classifying them, it becomes easier to organize, understand, and compare different algorithms.

Problem Solving: Different problems require different algorithms, and a classification helps identify the best algorithm for a particular problem.

Performance Comparison: By classifying algorithms, it is possible to compare their performance in terms of time and space complexity, making it easier to choose the best algorithm for a particular use case.

Reusability: By classifying algorithms, it becomes easier to re-use existing algorithms for similar problems, thereby reducing development time and improving efficiency.

Research: Classifying algorithms is essential for research and development in computer science, as it helps to identify new algorithms and improve existing ones.

Overall, the classification of algorithms plays a crucial role in computer science and helps to improve the efficiency and effectiveness of solving problems.
Classification by Implementation Method: In this type of classification, algorithms fall primarily into three main categories. They are:

  1. Recursion or Iteration: A recursive algorithm is one that calls itself repeatedly until a base condition is reached, whereas an iterative algorithm uses loops and/or data structures such as stacks and queues to solve a problem. Every recursive solution can be implemented as an iterative solution and vice versa (see the sketch after this list). 
    Example: The Tower of Hanoi is implemented in a recursive fashion, while the Stock Span problem is implemented iteratively.
  2. Exact or Approximate: Algorithms that are capable of finding an optimal solution to a problem are known as exact algorithms. For problems where it is not feasible to find the optimal solution, an approximation algorithm is used: it produces a solution that is close to, but not guaranteed to be, the optimal one. 
    Example: For NP-Hard problems, approximation algorithms are used. Sorting algorithms are exact algorithms.
  3. Serial or Parallel or Distributed Algorithms: In serial algorithms, one instruction is executed at a time while parallel algorithms are those in which we divide the problem into subproblems and execute them on different processors. If parallel algorithms are distributed on different machines, then they are known as distributed algorithms.
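
To make the contrast concrete, here is a minimal sketch in Python (using factorial, an assumed example that is not one of those named above) of the same computation written recursively and iteratively:

def factorial_recursive(n):
    # the base condition n <= 1 stops the recursion
    if n <= 1:
        return 1
    return n * factorial_recursive(n - 1)

def factorial_iterative(n):
    # a simple loop replaces the chain of recursive calls
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result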

Classification by Design Method: In this type of classification, algorithms are grouped by the design technique they use. The main categories are: 
 

  1. Greedy Method: In the greedy method, at each step, a decision is made to choose the local optimum, without thinking about the future consequences. 
    Example: Fractional Knapsack, Activity Selection.
  2. Divide and Conquer: The Divide and Conquer strategy involves dividing the problem into sub-problems, solving them recursively, and then combining their solutions for the final answer. 
    Example: Merge sort, Quicksort.
  3. Dynamic Programming: The approach of Dynamic Programming is similar to divide and conquer. The difference is that whenever we have recursive calls that compute the same result, instead of calling them again we store the result in a table and retrieve it from the table when needed. Thus, the overall time complexity is reduced. “Dynamic” means we dynamically decide whether to call the function or retrieve the value from the table. 
    Example: 0-1 Knapsack, subset-sum problem.
  4. Linear Programming: In Linear Programming, the constraints are linear inequalities in the inputs, and the goal is to maximize or minimize some linear function of the inputs. 
    Example: Maximum flow in a directed graph.
  5. Reduction (Transform and Conquer): In this method, we solve a difficult problem by transforming it into a known problem for which we have an efficient solution. Basically, the goal is to find a reducing algorithm whose complexity is not dominated by that of the resulting reduced algorithm. 
    Example: A selection algorithm for finding the median in a list can first sort the list and then pick the middle element of the sorted list (see the sketch after this list). These techniques are also called transform and conquer.
  6. Backtracking: This technique is very useful for combinatorial problems in which we have to find the correct combination of steps that leads to fulfillment of the task. Such problems have multiple stages, with multiple options at each stage. The approach explores each available option at every stage one by one. While exploring an option, if a point is reached that does not seem to lead to the solution, the program control backtracks one step and starts exploring the next option. In this way, the program explores all possible courses of action and finds the route that leads to the solution.  
    Example: N-queen problem, maze problem.
  7. Branch and Bound: This technique is very useful for combinatorial optimization problems that have multiple feasible solutions, where we are interested in finding the most optimal one. In this approach, the entire solution space is represented in the form of a state space tree. As the program progresses, each candidate is explored, and the best solution found so far is replaced whenever a better one is found. 
    Example: Job sequencing, Travelling salesman problem.
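
As a small illustration of Reduction (Transform and Conquer), the median example from item 5 can be sketched in Python: the selection problem is transformed into a sorting problem, for which an efficient algorithm already exists.

def median_by_sorting(values):
    # transform: reduce "find the median" to "sort the list"
    ordered = sorted(values)
    # conquer: the median is the middle element of the sorted list
    return ordered[len(ordered) // 2]

For a list of even length this returns the upper of the two middle elements; the sorting step dominates the running time, as the reduction intends.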

Classification by Design Approaches: There are two approaches for designing an algorithm:

  1. Top-Down Approach: In the top-down approach, a large problem is divided into small sub-problems, and the process of decomposing problems is repeated until the complex problem is solved.
  2. Bottom-Up Approach: The bottom-up approach is the reverse of the top-down approach. In this approach, the different parts of a complex program are solved first and then combined into a complete program.

Top-Down Approach:

  • Breaking down a complex problem into smaller, more manageable sub-problems and solving each sub-problem individually.
  • Designing a system starting from the highest level of abstraction and moving towards the lower levels.

Bottom-Up Approach:

  • Building a system by starting with the individual components and gradually integrating them to form a larger system.
  • Solving sub-problems first and then using the solutions to build up to a solution of a larger problem.
Note: Both approaches have their own advantages and disadvantages and the choice between them often depends on the specific problem being solved.

Here are examples of the Top-Down and Bottom-Up approaches in code:

Top-Down Approach (in Python):

def break_down_complex_problem(problem):
    # illustrative stub: a "complex" problem splits into two "simple" sub-problems
    return ["simple", "simple"]

def combine_sub_solutions(sub_solutions):
    return " and ".join(sub_solutions)

def solve_problem(problem):
    if problem == "simple":
        return "solved"
    elif problem == "complex":
        # decompose, solve each sub-problem recursively, then combine
        sub_problems = break_down_complex_problem(problem)
        sub_solutions = [solve_problem(sub) for sub in sub_problems]
        return combine_sub_solutions(sub_solutions)

Bottom-Up Approach (in Python):

def break_down_complex_problem(problem):
    # illustrative stub: a "complex" problem splits into two "simple" sub-problems
    return ["simple", "simple"]

def solve_sub_problem(sub_problem):
    # solve each small piece directly
    return "solved"

def solve_sub_problems(sub_problems):
    return [solve_sub_problem(sub) for sub in sub_problems]

def combine_sub_solutions(sub_solutions):
    # combine the sub-solutions to solve the larger problem
    return " and ".join(sub_solutions)

def solve_problem(problem):
    if problem == "simple":
        return "solved"
    elif problem == "complex":
        sub_problems = break_down_complex_problem(problem)
        sub_solutions = solve_sub_problems(sub_problems)
        return combine_sub_solutions(sub_solutions)

Other Classifications: Apart from the broad categories above, algorithms can also be classified in other ways, such as: 
 

  1. Randomized Algorithms: Algorithms that make random choices for faster solutions are known as randomized algorithms. 
    Example: Randomized Quicksort Algorithm.
  2. Classification by Complexity: Algorithms can be classified on the basis of the time they take to produce a solution as a function of the input size. This analysis is known as time complexity analysis. 
    Example: Some algorithms take O(n) time, while others take exponential time.
  3. Classification by Research Area: In CS each field has its own problems and needs efficient algorithms. 
    Example: Sorting Algorithm, Searching Algorithm, Machine Learning etc.
  4. Branch and Bound, Enumeration and Backtracking: These are mostly used in Artificial Intelligence.

1. Divide and Conquer Approach: It is a top-down approach. Algorithms that follow the divide & conquer technique involve three steps (a Python sketch follows the list):

  • Divide the original problem into a set of subproblems.
  • Solve every subproblem individually, recursively.
  • Combine the solutions of the subproblems (at the top level) into a solution of the whole original problem.
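
A minimal merge sort sketch in Python illustrates the three steps above: divide the array, sort the halves recursively, and combine the sorted halves.

def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    # divide the original problem into two subproblems
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # combine the solutions of the subproblems into a solution of the whole problem
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]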

2. Greedy Technique: The greedy method is used to solve optimization problems. An optimization problem is one in which we are given a set of input values that must either be maximized or minimized (known as the objective), possibly subject to some constraints or conditions.

  • A greedy algorithm always makes the choice (the greedy criterion) that looks best at the moment, in order to optimize a given objective.
  • The greedy algorithm doesn't always guarantee the optimal solution; however, it generally produces a solution that is very close in value to the optimal (see the sketch below).
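
A minimal sketch of activity selection in Python (one of the greedy examples listed earlier): at each step, take the activity that finishes earliest among those that do not overlap with the activities already chosen.

def activity_selection(activities):
    # activities is a list of (start, finish) pairs
    selected = []
    last_finish = float("-inf")
    # greedy criterion: always consider the activity that finishes first
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:
            selected.append((start, finish))
            last_finish = finish
    return selected

For this particular problem the greedy choice is provably optimal, which is not true of every greedy algorithm.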

3. Dynamic Programming: Dynamic Programming is a bottom-up approach: we solve all possible small problems and then combine them to obtain solutions for bigger problems.

This is particularly helpful when the number of overlapping subproblems is exponentially large. Dynamic Programming is frequently related to Optimization Problems.
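
As a minimal bottom-up sketch in Python (using Fibonacci numbers as an assumed example), the smallest subproblems are solved first and stored in a table, and larger problems are built from the stored results:

def fibonacci(n):
    # table[i] will hold the i-th Fibonacci number
    table = [0] * (n + 1)
    if n >= 1:
        table[1] = 1
    for i in range(2, n + 1):
        # each bigger problem is combined from two smaller, already-solved ones
        table[i] = table[i - 1] + table[i - 2]
    return table[n]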

4. Branch and Bound: In a Branch & Bound algorithm, a given subproblem that cannot be bounded has to be divided into at least two new restricted subproblems. Branch and Bound algorithms are methods for global optimization in non-convex problems. Branch and Bound algorithms can be slow; in the worst case they require effort that grows exponentially with problem size, but in some cases we are lucky and the method converges with much less effort.
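
A minimal branch and bound sketch in Python, using the 0-1 knapsack problem as an assumed example: the state space tree branches on taking or skipping each item, and a simple optimistic bound prunes subproblems that cannot beat the best solution found so far.

def knapsack_branch_and_bound(weights, values, capacity):
    best = [0]  # best total value found so far

    def explore(i, value, remaining):
        best[0] = max(best[0], value)
        if i == len(weights):
            return
        # optimistic bound: pretend every remaining item still fits
        if value + sum(values[i:]) <= best[0]:
            return  # prune: this subtree cannot improve on the current best
        if weights[i] <= remaining:
            explore(i + 1, value + values[i], remaining - weights[i])  # branch: take item i
        explore(i + 1, value, remaining)  # branch: skip item i

    explore(0, 0, capacity)
    return best[0]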

5. Randomized Algorithms: A randomized algorithm is defined as an algorithm that is allowed to access a source of independent, unbiased random bits, and it is then allowed to use these random bits to influence its computation.

6. Backtracking Algorithm: A backtracking algorithm tries each possibility until it finds the right one. It is a depth-first search of the set of possible solutions. During the search, if an alternative doesn't work, the algorithm backtracks to the choice point, the place which presented different alternatives, and tries the next alternative.
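
A minimal backtracking sketch in Python for the N-queens problem: queens are placed row by row, and when no safe column remains in the current row, the algorithm backtracks to the previous row and tries the next alternative.

def solve_n_queens(n):
    cols = []  # cols[r] is the column of the queen placed in row r

    def safe(col):
        row = len(cols)
        # no queen already placed may share a column or a diagonal
        return all(c != col and abs(c - col) != row - r for r, c in enumerate(cols))

    def place(row):
        if row == n:
            return True  # all queens placed
        for col in range(n):
            if safe(col):
                cols.append(col)
                if place(row + 1):
                    return True
                cols.pop()  # dead end: backtrack and try the next column
        return False

    return cols if place(0) else None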

7. Randomized Algorithm: A randomized algorithm uses a random number at least once during the computation to make a decision.

Example 1: In Quick Sort, using a random number to choose a pivot.

Example 2: Trying to factor a large number by choosing random numbers as possible divisors.
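
A minimal sketch of Example 1 in Python, quicksort with a randomly chosen pivot (written with list comprehensions for brevity rather than in-place partitioning):

import random

def randomized_quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)  # the random choice makes consistently bad pivots unlikely
    smaller = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    larger = [x for x in arr if x > pivot]
    return randomized_quicksort(smaller) + equal + randomized_quicksort(larger)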


Loop invariants

This is a justification technique. We use a loop invariant to help us understand why an algorithm is correct. To prove that a statement S about a loop is correct, define S in terms of a series of smaller statements S0, S1, ..., Sk (a small example in Python follows this list), where:

  • The initial claim S0 is true before the loop begins.
  • If Si-1 is true before iteration i begins, then one can show that Si will be true after iteration i is over.
  • The final statement Sk implies the statement S that we wish to justify as being true.
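
As a small illustration (using a running-sum loop as an assumed example), the invariant Si can even be checked mechanically with an assertion: before iteration i, total equals the sum of the first i elements.

def array_sum(a):
    total = 0
    for i in range(len(a)):
        # invariant Si: before iteration i, total == a[0] + ... + a[i-1]
        assert total == sum(a[:i])
        total += a[i]
    # final claim Sk: total is the sum of the whole array, the statement S we wished to justify
    assert total == sum(a)
    return total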
