diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/All_Pair_Shortest_path_problems/README.md b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/All_Pair_Shortest_path_problems/README.md new file mode 100644 index 0000000000..ba60d4377c --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/All_Pair_Shortest_path_problems/README.md @@ -0,0 +1,95 @@ +# All-Pairs Shortest Path (APSP) + +## What is All-Pairs Shortest Path? + +The **All-Pairs Shortest Path (APSP)** problem is a classic problem in graph theory. It involves finding the shortest paths between all pairs of nodes in a weighted graph. For every pair of nodes \(u\) and \(v\), the algorithm determines the shortest distance (or path) from \(u\) to \(v\). + +### Problem Statement: + +Given a graph \(G\) with \(n\) vertices and weighted edges, find the shortest paths between every pair of vertices. The weights on the edges may be positive or negative, but the graph should not contain negative-weight cycles. + +### Key Algorithms: + +There are several efficient algorithms designed to solve the APSP problem, two of the most well-known being: + +1. **Floyd-Warshall Algorithm** +2. **Johnson’s Algorithm** + +--- + +## 1. Floyd-Warshall Algorithm + +### Overview: + +The **Floyd-Warshall Algorithm** is a dynamic programming-based algorithm used to solve the APSP problem in \(O(n^3)\) time, where \(n\) is the number of vertices in the graph. It works by iteratively improving the shortest path estimates between all pairs of nodes, considering each node as an intermediate point along potential paths. + +### Steps: + +1. **Initialization**: Start with a distance matrix where the direct edge weight between vertices is given. If no edge exists, the distance is set to infinity. +2. **Update**: For each pair of nodes, check whether including an intermediate node results in a shorter path. Update the distance matrix accordingly. +3. 
**Result**: The final matrix contains the shortest paths between all pairs of nodes.

### Time Complexity:

- **Time Complexity**: \(O(n^3)\)
- **Space Complexity**: \(O(n^2)\)

### Applications:

- **Routing Algorithms**: Used in network routing to determine the most efficient paths for data packets between nodes.
- **Game Development**: Helps in finding the shortest paths for characters or elements to travel in a virtual world.
- **Geographic Mapping Systems**: Identifying the quickest travel routes between cities or locations on a map.

---

## 2. Johnson's Algorithm

### Overview:

**Johnson’s Algorithm** is an advanced approach for solving the APSP problem, especially when the graph is sparse (i.e., it has far fewer edges than a complete graph). The algorithm reweights the edges so that none are negative, then applies **Dijkstra’s Algorithm** from each node.

### Steps:

1. **Graph Reweighting**: Run the **Bellman-Ford Algorithm** from an added virtual source to compute a potential for each vertex, then adjust the edge weights so that all weights are non-negative.
2. **Dijkstra’s Algorithm**: Apply Dijkstra’s Algorithm from each vertex to determine the shortest path to all other vertices in the reweighted graph.
3. **Result**: Convert the reweighted distances back to the original weights; all pairwise shortest paths are obtained in \(O(n^2 \log n + nm)\) time.

### Time Complexity:

- **Time Complexity**: \(O(n^2 \log n + nm)\) (where \(m\) is the number of edges)
- **Space Complexity**: \(O(n^2)\)

### Applications:

- **Large Sparse Graphs**: Johnson’s algorithm is preferred when the graph is sparse (far fewer edges than the maximum possible), such as road networks or network topologies.
- **Telecommunication Networks**: Used to optimize routing paths in large-scale communication systems.
- **Social Networks**: Helps in identifying the shortest relationships or interactions between individuals in a social network.
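The reweighting at the heart of Johnson’s algorithm can be sketched in a few lines. This is a minimal illustration with a made-up edge list; `bellman_ford_potentials` and `reweight` are illustrative names, not part of any library:

```python
def bellman_ford_potentials(n, edges):
    """Potential h[v]: shortest distance from a virtual source to every vertex.

    Initializing h to all zeros plays the role of the virtual source's
    0-weight edges; n - 1 further relaxation passes then suffice.
    """
    h = [0] * n
    for _ in range(n - 1):
        for u, v, w in edges:
            if h[u] + w < h[v]:
                h[v] = h[u] + w
    return h

def reweight(n, edges):
    # Reweighted edge: w'(u, v) = w(u, v) + h(u) - h(v), guaranteed >= 0
    # when the graph has no negative cycles.
    h = bellman_ford_potentials(n, edges)
    return [(u, v, w + h[u] - h[v]) for u, v, w in edges], h

# Tiny example with one negative edge (no negative cycle)
edges = [(0, 1, -2), (1, 2, 3), (0, 2, 4)]
new_edges, h = reweight(3, edges)
assert all(w >= 0 for _, _, w in new_edges)  # safe for Dijkstra now
```

After running Dijkstra on the reweighted graph, a true distance is recovered as \(d(u, v) = d'(u, v) - h(u) + h(v)\), since every \(u \to v\) path shifts by the same amount \(h(u) - h(v)\).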
+ +--- + +## Key Differences Between Floyd-Warshall and Johnson's Algorithm: + +| Algorithm | Time Complexity | Space Complexity | Suitable For | +|-------------------|--------------------------|------------------|---------------------------------------| +| **Floyd-Warshall**| \(O(n^3)\) | \(O(n^2)\) | Dense graphs, simpler implementation | +| **Johnson’s** | \(O(n^2 \log n + nm)\) | \(O(n^2)\) | Sparse graphs, more complex, scalable | + +--- + +## Applications of APSP Algorithms + +1. **Network Design**: Efficient pathfinding in telecommunication and transportation networks to ensure minimum-cost routing. +2. **Web Mapping Services**: Shortest path algorithms are used in GPS and web services like Google Maps to find optimal routes between locations. +3. **Social Network Analysis**: Determine centrality, influence, and shortest interactions in social graphs. +4. **Robotics and Pathfinding**: Autonomous systems use APSP algorithms to navigate efficiently in environments by planning the shortest routes between points. +5. **Traffic Management Systems**: In urban settings, APSP helps model and manage traffic flow by finding the least congested routes between intersections. + +--- + +## Conclusion + +The **All-Pairs Shortest Path (APSP)** problem is fundamental in graph theory and computer science. Understanding the Floyd-Warshall and Johnson’s algorithms, along with their applications, is critical for solving pathfinding problems in diverse areas like networks, robotics, and optimization. + +By mastering these algorithms, you can optimize real-world systems and solve complex graph-related challenges efficiently. 
+

--- diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/All_Pair_Shortest_path_problems/floyd_warshall.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/All_Pair_Shortest_path_problems/floyd_warshall.py new file mode 100644 index 0000000000..ea37f7197c --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/All_Pair_Shortest_path_problems/floyd_warshall.py @@ -0,0 +1,45 @@

INF = float('inf')  # Module-level infinity so every function sees the same sentinel

def floydWarshall(graph):
    # Copy the input matrix so the caller's graph is not mutated
    dist = [row[:] for row in graph]

    # Try every vertex k as an intermediate point on the path from i to j
    for k in range(len(graph)):
        for i in range(len(graph)):
            for j in range(len(graph)):
                # Keep the shorter of the current path and the path through k
                dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])

    # Print the final solution showing shortest distances
    printSolution(dist)

def printSolution(dist):
    print("Following matrix shows the shortest distances between every pair of vertices:")
    for i in range(len(dist)):
        for j in range(len(dist)):
            # Print 'INF' where no path exists, the numeric distance otherwise
            if dist[i][j] == INF:
                print("%7s" % "INF", end=" ")
            else:
                print("%7d" % dist[i][j], end=" ")
        print()  # Newline at the end of each row

if __name__ == "__main__":
    V = int(input("Enter the number of vertices: "))  # Get the number of vertices from the user

    graph = []  # Adjacency matrix read from the user
    print("Enter the graph as an adjacency matrix (use 'INF' for no
connection):") + # Read the adjacency matrix from user input + for i in range(V): + row = list(map(lambda x: float('inf') if x == 'INF' else int(x), input().split())) + graph.append(row) # Append each row to the graph + + # Call the Floyd-Warshall function with the constructed graph + floydWarshall(graph) diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/All_Pair_Shortest_path_problems/johnsons.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/All_Pair_Shortest_path_problems/johnsons.py new file mode 100644 index 0000000000..779e678d6b --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/All_Pair_Shortest_path_problems/johnsons.py @@ -0,0 +1,92 @@ +from collections import defaultdict + +# Define a constant for infinity +INT_MAX = float('Inf') + +def Min_Distance(dist, visit): + # Initialize minimum distance and the corresponding vertex + minimum, minVertex = INT_MAX, -1 + # Iterate through all vertices to find the vertex with the minimum distance + for vertex in range(len(dist)): + if minimum > dist[vertex] and not visit[vertex]: # Check if the vertex is not visited + minimum, minVertex = dist[vertex], vertex # Update minimum distance and vertex + return minVertex # Return the vertex with the minimum distance + +def Dijkstra_Algorithm(graph, Altered_Graph, source): + tot_vertices = len(graph) # Total number of vertices in the graph + sptSet = defaultdict(lambda: False) # Set to track the shortest path tree + dist = [INT_MAX] * tot_vertices # Initialize distances to infinity + dist[source] = 0 # Distance from source to itself is 0 + + # Loop through all vertices + for _ in range(tot_vertices): + curVertex = Min_Distance(dist, sptSet) # Find the vertex with the minimum distance + sptSet[curVertex] = True # Mark the vertex as visited + + # Update distances to adjacent vertices + for vertex in range(tot_vertices): + # Check for an edge and if the current distance can be improved + if (not 
sptSet[vertex] and + dist[vertex] > dist[curVertex] + Altered_Graph[curVertex][vertex] and + graph[curVertex][vertex] != 0): + dist[vertex] = dist[curVertex] + Altered_Graph[curVertex][vertex] # Update the distance + + # Print the final distances from the source vertex + for vertex in range(tot_vertices): + print(f'Vertex {vertex}: {dist[vertex]}') # Output the distance for each vertex + +def BellmanFord_Algorithm(edges, graph, tot_vertices): + # Initialize distances from source to all vertices as infinity + dist = [INT_MAX] * (tot_vertices + 1) + dist[tot_vertices] = 0 # Set the distance to the new vertex (source) as 0 + + # Add edges from the new source vertex to all other vertices + for i in range(tot_vertices): + edges.append([tot_vertices, i, 0]) + + # Relax edges repeatedly for the total number of vertices + for _ in range(tot_vertices): + for (source, destn, weight) in edges: + # Update distance if a shorter path is found + if dist[source] != INT_MAX and dist[source] + weight < dist[destn]: + dist[destn] = dist[source] + weight + + return dist[0:tot_vertices] # Return distances to original vertices + +def JohnsonAlgorithm(graph): + edges = [] # Initialize an empty list to store edges + # Create edges list from the graph + for i in range(len(graph)): + for j in range(len(graph[i])): + if graph[i][j] != 0: # Check for existing edges + edges.append([i, j, graph[i][j]]) # Append edge to edges list + + # Get modified weights using the Bellman-Ford algorithm + Alter_weights = BellmanFord_Algorithm(edges, graph, len(graph)) + # Initialize altered graph with zero weights + Altered_Graph = [[0 for _ in range(len(graph))] for _ in range(len(graph))] + + # Update the altered graph with modified weights + for i in range(len(graph)): + for j in range(len(graph[i])): + if graph[i][j] != 0: # Check for existing edges + Altered_Graph[i][j] = graph[i][j] + Alter_weights[i] - Alter_weights[j] + + print('Modified Graph:', Altered_Graph) # Output the modified graph + + # Run 
Dijkstra's algorithm for each vertex as the source + for source in range(len(graph)): + print(f'\nShortest Distance with vertex {source} as the source:\n') + Dijkstra_Algorithm(graph, Altered_Graph, source) # Call Dijkstra's algorithm + +if __name__ == "__main__": + V = int(input("Enter the number of vertices: ")) # Get number of vertices from user + graph = [] # Initialize an empty list for the graph + print("Enter the graph as an adjacency matrix (use 0 for no connection):") + # Read the adjacency matrix from user input + for _ in range(V): + row = list(map(int, input().split())) # Read a row of the adjacency matrix + graph.append(row) # Append the row to the graph + + # Call the Johnson's algorithm with the input graph + JohnsonAlgorithm(graph) diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Backtracking/README.md b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Backtracking/README.md new file mode 100644 index 0000000000..73bf708e05 --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Backtracking/README.md @@ -0,0 +1,95 @@ +# Backtracking + +## What is Backtracking? + +**Backtracking** is an algorithmic technique used to solve problems incrementally by building possible solutions and discarding those that fail to satisfy the constraints of the problem. It’s a depth-first search approach where we explore all possible paths to find a solution and backtrack whenever we hit a dead-end, i.e., when the current solution cannot be extended further without violating the problem’s constraints. + +### Steps of Backtracking: + +1. **Choose**: Start with an initial state and make a choice that seems feasible. +2. **Explore**: Recursively explore each choice to extend the current solution. +3. **Backtrack**: If the current choice leads to a dead-end, discard it and backtrack to try another option. 
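The choose/explore/backtrack steps above can be sketched as a generic recursive template. The example below enumerates subsets for concreteness; a real solver would add a constraint check before each choice to prune the search:

```python
def backtrack(candidate, choices, results):
    # Choose/Explore/Backtrack template: record complete candidates,
    # otherwise extend the partial solution one decision at a time.
    if not choices:                      # nothing left to decide: full solution
        results.append(candidate[:])     # store a copy of the candidate
        return
    head, rest = choices[0], choices[1:]
    candidate.append(head)               # Choose: include the next element
    backtrack(candidate, rest, results)  # Explore with the element included
    candidate.pop()                      # Backtrack: undo the choice
    backtrack(candidate, rest, results)  # Explore with the element excluded

results = []
backtrack([], [1, 2, 3], results)
assert len(results) == 8  # all 2^3 subsets of {1, 2, 3}
```

Constraint-driven problems (graph coloring, N-Queens, mazes) follow the same shape: the only change is rejecting a choice up front when it violates a constraint, which is what prunes the search space.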
+ +Backtracking efficiently prunes the search space by eliminating paths that do not lead to feasible solutions, making it an ideal approach for solving combinatorial problems. + +### Key Characteristics: + +- **Recursive Approach**: Backtracking often involves recursion to explore all possible solutions. +- **Exhaustive Search**: It tries out all possible solutions until it finds the correct one or determines none exists. +- **Constraint Satisfaction**: Backtracking is well-suited for problems with constraints, where solutions must satisfy certain rules. + +--- + +## Applications of Backtracking + +### 1. **Graph Coloring** + +**Graph Coloring** is the problem of assigning colors to the vertices of a graph such that no two adjacent vertices share the same color. The challenge is to do this using the minimum number of colors. + +- **Backtracking Approach**: Starting with the first vertex, assign a color and move to the next vertex. If no valid color is available for the next vertex, backtrack and try a different color for the previous vertex. + +- **Time Complexity**: \(O(m^n)\), where \(m\) is the number of colors and \(n\) is the number of vertices. + +- **Use Case**: Scheduling problems, where tasks need to be scheduled without conflicts (e.g., class timetabling). + +### 2. **Hamiltonian Cycle** + +The **Hamiltonian Cycle** problem seeks a cycle in a graph that visits each vertex exactly once and returns to the starting point. + +- **Backtracking Approach**: Start from a vertex and add other vertices to the path one by one, ensuring that each added vertex is not already in the path and has an edge connecting it to the previous vertex. If a vertex leads to a dead-end, backtrack and try another path. + +- **Time Complexity**: Exponential, typically \(O(n!)\), where \(n\) is the number of vertices. + +- **Use Case**: Circuit design and optimization, where paths or tours need to be found efficiently. + +### 3. 
**Knight's Tour**

The **Knight's Tour** problem involves moving a knight on a chessboard so that it visits every square exactly once.

- **Backtracking Approach**: Starting from a given position, the knight moves to an unvisited square. If a move leads to a dead-end (i.e., no further valid moves), backtrack and try a different move.

- **Time Complexity**: \(O(8^n)\), where \(n\) is the number of squares on the board (typically \(n = 64\) for a standard chessboard), since each square offers up to eight onward moves.

- **Use Case**: Chess puzzle solvers and pathfinding problems on a grid.

### 4. **Maze Solving**

The **Maze Solving** problem involves finding a path from the entrance to the exit of a maze, moving only through valid paths.

- **Backtracking Approach**: Starting from the entrance, attempt to move in one direction. If the path leads to a dead-end, backtrack and try another direction until the exit is reached.

- **Time Complexity**: Depends on the size of the maze; in the worst case \(O(4^{n^2})\) for an \(n \times n\) maze, since each of the \(n^2\) cells offers up to four directions.

- **Use Case**: Robotics and AI navigation systems, where the goal is to find the optimal route through a complex environment.

### 5. **N-Queens Problem**

The **N-Queens Problem** is a classic puzzle where the goal is to place \(N\) queens on an \(N \times N\) chessboard so that no two queens threaten each other. This means no two queens can share the same row, column, or diagonal.

- **Backtracking Approach**: Place queens one row at a time. If no column in the current row is safe, backtrack to the previous row and move that queen to another column.

- **Time Complexity**: \(O(N!)\), where \(N\) is the number of queens (or the size of the chessboard).

- **Use Case**: Resource allocation and optimization problems, where multiple entities must be placed in non-conflicting positions (e.g., server load balancing).
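The row-by-row placement described above can be sketched compactly by counting solutions instead of printing boards. Tracking attacked columns and diagonals in sets is one common implementation choice (the function name is illustrative):

```python
def count_n_queens(n):
    # Place one queen per row; sets record attacked columns and the two
    # diagonal families so each candidate square is checked in O(1).
    cols, diag1, diag2 = set(), set(), set()

    def place(row):
        if row == n:          # every row holds a queen: one full solution
            return 1
        total = 0
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue      # square is attacked: prune this choice
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            total += place(row + 1)   # Explore the next row
            cols.discard(col); diag1.discard(row - col); diag2.discard(row + col)
        return total

    return place(0)

assert count_n_queens(4) == 2   # the two classic 4-queens solutions
assert count_n_queens(8) == 92  # well-known count for the 8x8 board
```

The pruning sets are what keep this tractable: without them every placement would require rescanning the board, and without pruning at all the search would degenerate to trying all \(N^N\) placements.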
+

---

## Key Differences Between Backtracking Applications:

| Problem | Time Complexity | Use Case |
|---------------------|------------------|-------------------------------------------|
| **Graph Coloring** | \(O(m^n)\) | Scheduling, Timetabling |
| **Hamiltonian Cycle**| \(O(n!)\) | Circuit design, Optimization |
| **Knight's Tour** | \(O(8^n)\) | Chess puzzle solvers, Pathfinding |
| **Maze Solving** | \(O(4^{n^2})\) | Robotics, Navigation Systems |
| **N-Queens** | \(O(N!)\) | Resource allocation, Server optimization |

---

## Conclusion

**Backtracking** is a versatile and powerful technique for solving constraint-based problems. By exploring all possibilities and pruning invalid paths, it enables the efficient solving of complex combinatorial problems. Applications like **Graph Coloring**, **Hamiltonian Cycle**, **Knight's Tour**, **Maze Solving**, and the **N-Queens Problem** showcase the wide applicability of backtracking, from puzzle-solving to real-world optimization tasks.

Mastering backtracking is essential for understanding and solving a range of computational problems, making it a critical tool in algorithmic design.
+ +--- diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Backtracking/graph_coloring.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Backtracking/graph_coloring.py new file mode 100644 index 0000000000..ed73670a37 --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Backtracking/graph_coloring.py @@ -0,0 +1,64 @@ +# Number of vertices in the graph +V = 4 + +def print_solution(color): + # Print the solution if it exists + print("Solution Exists: Following are the assigned colors") + print(" ".join(map(str, color))) # Print colors assigned to each vertex + +def is_safe(v, graph, color, c): + # Check if it is safe to assign color c to vertex v + for i in range(V): + # If there is an edge between v and i, and i has the same color, return False + if graph[v][i] and c == color[i]: + return False + return True # Color assignment is safe + +def graph_coloring_util(graph, m, color, v): + # Base case: If all vertices are assigned a color + if v == V: + return True + + # Try different colors for vertex v + for c in range(1, m + 1): + # Check if assigning color c to vertex v is safe + if is_safe(v, graph, color, c): + color[v] = c # Assign color c to vertex v + + # Recur to assign colors to the next vertex + if graph_coloring_util(graph, m, color, v + 1): + return True # If successful, return True + + color[v] = 0 # Backtrack: remove color c from vertex v + + return False # If no color can be assigned, return False + +def graph_coloring(graph, m): + # Initialize color assignment for vertices + color = [0] * V + + # Start graph coloring utility function + if not graph_coloring_util(graph, m, color, 0): + print("Solution does not exist") # If no solution exists + return False + + print_solution(color) # Print the colors assigned to vertices + return True # Solution found + +def main(): + print("Enter the number of vertices:") + global V # Declare V as global to modify it + V = int(input()) # Read 
the number of vertices from user + + graph = [] # Initialize an empty list for the adjacency matrix + print("Enter the adjacency matrix (0 for no edge, 1 for edge):") + for _ in range(V): + row = list(map(int, input().split())) # Read each row of the adjacency matrix + graph.append(row) # Append the row to the graph + + m = int(input("Enter the number of colors: ")) # Read the number of colors from user + + graph_coloring(graph, m) # Call the graph coloring function + +if __name__ == "__main__": + main() # Run the main function diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Backtracking/hamiltonian_cycle.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Backtracking/hamiltonian_cycle.py new file mode 100644 index 0000000000..b09dea0fdc --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Backtracking/hamiltonian_cycle.py @@ -0,0 +1,63 @@ +class Graph(): + def __init__(self, vertices): + # Initialize the adjacency matrix for the graph with all zeros + self.graph = [[0 for _ in range(vertices)] for _ in range(vertices)] + self.V = vertices # Store the number of vertices + + def is_safe(self, v, pos, path): + # Check if the vertex v can be added to the Hamiltonian Cycle + # It should be adjacent to the last vertex in the path and not already in the path + if self.graph[path[pos-1]][v] == 0: + return False # Not adjacent + return v not in path[:pos] # Check if v is already in the path + + def ham_cycle_util(self, path, pos): + # Base case: if all vertices are included in the path + if pos == self.V: + # Check if there is an edge from the last vertex to the first vertex + return self.graph[path[pos-1]][path[0]] == 1 + + # Try different vertices as the next candidate in the Hamiltonian Cycle + for v in range(1, self.V): + if self.is_safe(v, pos, path): # Check if adding vertex v is safe + path[pos] = v # Add vertex v to the path + + # Recur to construct the rest of the path + if 
self.ham_cycle_util(path, pos + 1): + return True # If successful, return True + + path[pos] = -1 # Backtrack: remove vertex v from the path + + return False # No Hamiltonian Cycle found + + def ham_cycle(self): + path = [-1] * self.V # Initialize path array + path[0] = 0 # Start at the first vertex (0) + + # Start the utility function to find the Hamiltonian Cycle + if not self.ham_cycle_util(path, 1): + print("Solution does not exist\n") # If no cycle exists + return False + + self.print_solution(path) # Print the solution if found + return True + + def print_solution(self, path): + # Print the Hamiltonian Cycle + print("Solution Exists: Following is one Hamiltonian Cycle") + print(" -> ".join(map(str, path + [path[0]]))) # Include the start point to complete the cycle + +def main(): + vertices = int(input("Enter the number of vertices: ")) # Get number of vertices from user + g = Graph(vertices) # Create a graph object + print("Enter the adjacency matrix (0 for no edge, 1 for edge):") + + # Read the adjacency matrix from user input + for i in range(vertices): + row = list(map(int, input().split())) + g.graph[i] = row # Assign each row to the graph's adjacency matrix + + g.ham_cycle() # Start the Hamiltonian Cycle function + +if __name__ == "__main__": + main() # Run the main function diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Backtracking/knights_tour.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Backtracking/knights_tour.py new file mode 100644 index 0000000000..1d84a47224 --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Backtracking/knights_tour.py @@ -0,0 +1,58 @@ +# Size of the chessboard +n = 8 + +def is_safe(x, y, board): + # Check if the knight's move is within bounds and the position is not visited + return 0 <= x < n and 0 <= y < n and board[x][y] == -1 + +def print_solution(board): + # Print the chessboard with the knight's tour path + for row in board: + 
print(' '.join(map(str, row))) # Print each row of the board + +def solve_knight_tour(n): + # Create a chessboard initialized with -1 (indicating unvisited) + board = [[-1 for _ in range(n)] for _ in range(n)] + + # Possible moves for a knight (x, y offsets) + move_x = [2, 1, -1, -2, -2, -1, 1, 2] + move_y = [1, 2, 2, 1, -1, -2, -2, -1] + + board[0][0] = 0 # Starting position of the knight + pos = 1 # Starting position index for the knight's tour + + # Start solving the knight's tour problem + if not solve_knight_tour_util(n, board, 0, 0, move_x, move_y, pos): + print("Solution does not exist") # If no solution is found + else: + print_solution(board) # Print the found solution + +def solve_knight_tour_util(n, board, curr_x, curr_y, move_x, move_y, pos): + # Base case: If all squares are visited + if pos == n**2: + return True + + # Try all possible knight moves + for i in range(8): + new_x = curr_x + move_x[i] # New x coordinate after the move + new_y = curr_y + move_y[i] # New y coordinate after the move + + # Check if the new position is safe to move + if is_safe(new_x, new_y, board): + board[new_x][new_y] = pos # Mark the new position with the move count + + # Recur to continue the tour + if solve_knight_tour_util(n, board, new_x, new_y, move_x, move_y, pos + 1): + return True # If successful, return True + + board[new_x][new_y] = -1 # Backtrack: unmark the position + + return False # No valid moves found, return False + +def main(): + global n # Declare n as global to modify it + n = int(input("Enter the size of the chessboard (e.g., 8 for an 8x8 board): ")) # Get chessboard size from user + solve_knight_tour(n) # Start the knight's tour solution process + +if __name__ == "__main__": + main() # Run the main function diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Backtracking/maze_solving.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Backtracking/maze_solving.py new file mode 100644 index 
0000000000..8cc76b63a5 --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Backtracking/maze_solving.py @@ -0,0 +1,52 @@ +# Directions: D (down), L (left), R (right), U (up) +direction = "DLRU" +# Direction vectors for moving in the maze (down, left, right, up) +dr = [1, 0, 0, -1] +dc = [0, -1, 1, 0] + +def is_valid(row, col, n, maze): + # Check if the position (row, col) is within bounds and is a valid path (1) + return 0 <= row < n and 0 <= col < n and maze[row][col] == 1 + +def find_path(row, col, maze, n, ans, current_path): + # If the bottom-right corner of the maze is reached, append the current path to the answer list + if row == n - 1 and col == n - 1: + ans.append(current_path) + return + + # Mark the current cell as visited + maze[row][col] = 0 + + # Explore all possible directions (down, left, right, up) + for i in range(4): + next_row = row + dr[i] # Calculate the new row index + next_col = col + dc[i] # Calculate the new column index + + # If the next position is valid, continue the search + if is_valid(next_row, next_col, n, maze): + find_path(next_row, next_col, maze, n, ans, current_path + direction[i]) # Append direction to the path + + # Backtrack: Unmark the current cell + maze[row][col] = 1 + +def main(): + n = int(input("Enter the size of the maze (n x n): ")) # Get the size of the maze from the user + print("Enter the maze row by row (1 for path, 0 for block):") + # Read the maze input as a list of lists + maze = [list(map(int, input().split())) for _ in range(n)] + + result = [] # List to store all valid paths + current_path = "" # String to store the current path + + # Check if the starting and ending points are valid paths + if maze[0][0] != 0 and maze[n - 1][n - 1] != 0: + find_path(0, 0, maze, n, result, current_path) # Start finding paths from (0, 0) + + # Print the result: either valid paths or -1 if no paths found + if not result: + print(-1) # No valid paths found + else: + print("Valid paths:", " 
".join(result)) # Print all valid paths + +if __name__ == "__main__": + main() # Run the main function diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Backtracking/n_queens.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Backtracking/n_queens.py new file mode 100644 index 0000000000..cd31f97e51 --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Backtracking/n_queens.py @@ -0,0 +1,66 @@ +def printSolution(board, N): + # Print the board with queens represented by "Q" and empty spaces by "." + for i in range(N): + for j in range(N): + print("Q" if board[i][j] == 1 else ".", end=" ") + print() # Newline for the next row + + +def isSafe(board, row, col, N): + # Check if it's safe to place a queen at board[row][col] + + # Check this row on the left side for any queens + for i in range(col): + if board[row][i] == 1: + return False + + # Check upper diagonal on the left side for any queens + for i, j in zip(range(row, -1, -1), range(col, -1, -1)): + if board[i][j] == 1: + return False + + # Check lower diagonal on the left side for any queens + for i, j in zip(range(row, N, 1), range(col, -1, -1)): + if board[i][j] == 1: + return False + + return True # If all checks passed, it's safe + + +def solveNQUtil(board, col, N): + # Base case: If all queens are placed successfully, return true + if col >= N: + return True + + # Consider this column and try placing a queen in all rows + for i in range(N): + if isSafe(board, i, col, N): # Check if placing a queen is safe + board[i][col] = 1 # Place the queen + + # Recur to place the rest of the queens + if solveNQUtil(board, col + 1, N): + return True # If successful, return true + + # If placing the queen doesn't lead to a solution, backtrack + board[i][col] = 0 # Remove the queen + + return False # If no position is found, return false + + +def solveNQ(N): + # Initialize the board as a 2D array with 0s + board = [[0] * N for _ in range(N)] + + # 
Start solving the N Queens problem + if not solveNQUtil(board, 0, N): + print("Solution does not exist") # If no solution is found + return False + + printSolution(board, N) # Print the solution + return True + + +# Driver Code +if __name__ == '__main__': + N = int(input("Enter the number of queens (N): ")) # Get the number of queens from the user + solveNQ(N) # Solve the N Queens problem diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Branch_and_Bound/8_puzzle.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Branch_and_Bound/8_puzzle.py new file mode 100644 index 0000000000..ae5e675a83 --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Branch_and_Bound/8_puzzle.py @@ -0,0 +1,88 @@ +import numpy as np +from queue import PriorityQueue + +class Node: + def __init__(self, state, path_cost, level, parent=None): + self.state = state # Current state of the puzzle + self.path_cost = path_cost # Cost to reach this node + self.level = level # Depth of the node in the search tree + self.parent = parent # Parent node for path reconstruction + + def __lt__(self, other): + # Comparison method for priority queue + return (self.path_cost + self.level) < (other.path_cost + other.level) + +def get_blank_position(state): + # Returns the position of the blank (0) in the puzzle + return np.argwhere(state == 0)[0] + +def is_goal(state): + # Checks if the current state is the goal state + goal_state = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 0]]) + return np.array_equal(state, goal_state) + +def get_successors(state): + # Generates all possible successor states by moving the blank tile + successors = [] + blank_pos = get_blank_position(state) + x, y = blank_pos + # Define possible moves (up, down, left, right) + directions = [(-1, 0), (1, 0), (0, -1), (0, 1)] + + for dx, dy in directions: + new_x, new_y = x + dx, y + dy + # Check if the new position is within bounds + if 0 <= new_x < 3 and 0 <= new_y < 3: 
+                new_state = state.copy()  # Create a copy of the current state
+                # Swap the blank with the adjacent tile
+                new_state[x, y], new_state[new_x, new_y] = new_state[new_x, new_y], new_state[x, y]
+                successors.append(new_state)  # Add new state to successors
+    return successors
+
+def manhattan_distance(state):
+    # Calculates the Manhattan distance heuristic for the given state
+    distance = 0
+    for i in range(3):
+        for j in range(3):
+            value = state[i, j]
+            if value != 0:  # Ignore the blank tile
+                # Calculate goal position for the current tile
+                goal_x, goal_y = divmod(value - 1, 3)
+                distance += abs(i - goal_x) + abs(j - goal_y)  # Add distances
+    return distance
+
+def branch_and_bound(initial_state):
+    # Main function to perform the Branch and Bound search
+    if is_goal(initial_state):
+        return [initial_state]  # The initial state is already the goal
+    pq = PriorityQueue()  # Priority queue ordered by the bound f = g + h
+    pq.put((manhattan_distance(initial_state), Node(initial_state, 0, 0)))
+    visited = {initial_state.tobytes()}  # Track enqueued states to avoid revisiting
+
+    while not pq.empty():
+        _, current_node = pq.get()  # Get the node with the lowest bound
+        if is_goal(current_node.state):
+            path = []
+            while current_node:  # Backtrack to get the path
+                path.append(current_node.state)
+                current_node = current_node.parent
+            return path[::-1]  # Return the path in correct order
+
+        # Generate successors for the current state
+        for successor in get_successors(current_node.state):
+            key = successor.tobytes()
+            if key in visited:
+                continue  # Prune states that have already been enqueued
+            visited.add(key)
+            path_cost = current_node.path_cost + 1  # g: cost to reach the successor
+            heuristic_cost = manhattan_distance(successor)  # h: estimated cost to the goal
+            # The bound f = g + h determines the expansion order
+            pq.put((path_cost + heuristic_cost, Node(successor, path_cost, current_node.level + 1, current_node)))
+
+    return None  # Return None if no solution is found
+
+if __name__ == "__main__":
+    print("Enter the initial state of the puzzle as 9 space-separated integers (0 for blank):")
+    input_state = list(map(int, input().split()))
+    initial_state = np.array(input_state).reshape(3, 3)  # Reshape input into 3x3 matrix
+
+    solution_path = branch_and_bound(initial_state)  # Find solution
path + if solution_path: + for step in solution_path: # Print each step in the solution + print(step) + else: + print("No solution found.") # Indicate if no solution exists diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Branch_and_Bound/README.md b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Branch_and_Bound/README.md new file mode 100644 index 0000000000..5982b0d49c --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Branch_and_Bound/README.md @@ -0,0 +1,61 @@ +# Branch and Bound + +## What is Branch and Bound? + +**Branch and Bound (B&B)** is an algorithmic method used to solve optimization problems by systematically exploring the solution space. It works by breaking down a problem into smaller subproblems (branching) and calculating bounds (upper and lower) to prune paths that do not lead to the optimal solution. This reduces the number of possible solutions the algorithm needs to explore, making it highly efficient for NP-hard problems. + +### Key Characteristics: + +- **Branching**: Divide the problem into smaller subproblems. +- **Bounding**: Use cost functions to calculate bounds for each subproblem. +- **Pruning**: Discard paths whose bounds exceed the current best solution. + +--- + +## Application: 8-Puzzle Problem + +The **8-Puzzle Problem** is a sliding puzzle consisting of a 3x3 grid with eight numbered tiles and one empty space. The objective is to move the tiles around to reach a specific goal configuration (usually in numerical order, with the blank space in the bottom-right corner). + +### How Branch and Bound Solves the 8-Puzzle Problem: + +1. **Initial State**: Start from the given initial configuration of tiles. +2. **Branching**: From the current configuration, generate all possible moves by sliding the blank space up, down, left, or right. +3. 
**Bounding**: For each new configuration (branch), calculate a cost using a heuristic like: + - **Number of misplaced tiles**: Counts the tiles not in their goal positions. + - **Manhattan distance**: The sum of the horizontal and vertical distances of each tile from its goal position. +4. **Pruning**: Discard paths with a higher cost than the current best solution to avoid unnecessary exploration. +5. **Optimal Solution**: Continue exploring and pruning until the goal configuration is reached. + +### Example: + +If the initial configuration is: + +``` +1 2 3 +4 5 6 +7 _ 8 +``` + +The goal is to rearrange the tiles into: + +``` +1 2 3 +4 5 6 +7 8 _ +``` + +By branching from each move and using a bounding heuristic (like Manhattan distance), the algorithm efficiently prunes paths that don’t lead to the goal state. + +### Time Complexity: + +The time complexity of the 8-Puzzle problem using Branch and Bound is generally exponential, \(O(b^d)\), where: +- \(b\) is the branching factor (number of possible moves from a given configuration), +- \(d\) is the depth of the solution (number of moves to reach the goal). + +--- + +## Conclusion + +The **8-Puzzle Problem** is a classic example of how **Branch and Bound** can be used to solve optimization problems by pruning non-optimal paths and reducing the solution space. By using heuristics like Manhattan distance, Branch and Bound finds the optimal solution efficiently, making it a powerful tool for solving sliding puzzles and other combinatorial problems. 
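The Manhattan-distance bound used above can be computed directly. A minimal standalone sketch (plain nested lists rather than the NumPy arrays used in the accompanying script), applied to the example configuration:

```python
def manhattan_distance(state):
    # Sum of each tile's horizontal + vertical distance from its goal cell
    distance = 0
    for i in range(3):
        for j in range(3):
            value = state[i][j]
            if value != 0:  # The blank tile is ignored
                goal_i, goal_j = divmod(value - 1, 3)
                distance += abs(i - goal_i) + abs(j - goal_j)
    return distance

# The initial configuration from the example (0 stands for the blank):
initial = [[1, 2, 3], [4, 5, 6], [7, 0, 8]]
print(manhattan_distance(initial))  # 1: only tile 8 is one move from its goal cell
```

A bound of 1 tells the search that at least one more move is required, so any branch whose cost-so-far plus bound exceeds the best known solution can be pruned immediately.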
+
+---
diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Divide_and_Conquer/README.md b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Divide_and_Conquer/README.md
new file mode 100644
index 0000000000..0b8e002891
--- /dev/null
+++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Divide_and_Conquer/README.md
@@ -0,0 +1,81 @@
+# Divide and Conquer
+
+## What is Divide and Conquer?
+
+**Divide and Conquer** is an algorithmic paradigm used to solve complex problems by breaking them down into smaller, more manageable sub-problems. The basic idea is to:
+
+1. **Divide**: Split the problem into smaller sub-problems of the same type.
+2. **Conquer**: Solve each sub-problem independently. In many cases, the sub-problems are small enough to be solved directly.
+3. **Combine**: Once the sub-problems are solved, merge the solutions to get the solution to the original problem.
+
+This method is highly efficient for problems that can be recursively divided, reducing the time complexity significantly.
+
+### Steps of Divide and Conquer
+
+1. **Break Down**: The problem is divided into smaller sub-problems.
+2. **Recursive Approach**: Solve these sub-problems recursively.
+3. **Merge**: The sub-solutions are combined to form the final solution.
+
+### Key Characteristics
+
+- **Recursion**: Divide and Conquer algorithms are often implemented using recursion.
+- **Efficiency**: It improves the efficiency of problem-solving, especially with sorting and searching algorithms.
+- **Reduced Time Complexity**: By breaking the problem, many Divide and Conquer algorithms achieve logarithmic or linearithmic time complexity, such as \(O(n \log n)\).
+
+---
+
+## Applications of Divide and Conquer
+
+### 1. **Binary Search**
+
+Binary Search is a classic application of the divide and conquer technique.
It works on sorted arrays by repeatedly dividing the search interval in half. If the value of the search key is less than the item in the middle of the interval, the search continues in the lower half; otherwise, it continues in the upper half. + +- **Time Complexity**: \(O(\log n)\) +- **Use Case**: Efficiently find an element in a sorted array or list. + +### 2. **Merge Sort** + +Merge Sort is a sorting algorithm that uses the divide and conquer approach. It splits an array into halves, recursively sorts each half, and then merges the two halves back together. + +- **Time Complexity**: \(O(n \log n)\) +- **Use Case**: Efficiently sorting large datasets. + +### 3. **Quick Sort** + +Quick Sort is another efficient sorting algorithm that follows the divide and conquer principle. It selects a 'pivot' element, partitions the array around the pivot, and recursively sorts the sub-arrays. + +- **Time Complexity**: \(O(n \log n)\) (on average), \(O(n^2)\) (worst case) +- **Use Case**: Sorting arrays with random or unsorted elements. + +### 4. **Tower of Hanoi** + +The Tower of Hanoi problem is a classic example of recursive problem-solving using divide and conquer. The goal is to move disks from one rod to another while following specific rules. + +- **Time Complexity**: \(O(2^n)\) +- **Use Case**: Demonstrates recursion and the power of divide and conquer in breaking down a complex problem. + +### 5. **Maximum and Minimum Problem** + +This problem finds the maximum and minimum elements in an array using the divide and conquer approach. The array is divided into two parts, and the maximum and minimum values of the two halves are combined to find the overall maximum and minimum. + +- **Time Complexity**: \(O(n)\) +- **Use Case**: Efficiently determining extreme values in a dataset. 
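The max-min description above splits the array in half, solves each half, and combines the two results. A minimal recursive sketch (the `max_min_dc` name is illustrative; the accompanying `min_max.py` uses a simple linear scan instead):

```python
def max_min_dc(arr, low, high):
    # Base case: a single element is both the maximum and the minimum
    if low == high:
        return arr[low], arr[low]
    # Base case: two elements need only one comparison
    if high == low + 1:
        return (arr[high], arr[low]) if arr[low] < arr[high] else (arr[low], arr[high])
    mid = (low + high) // 2
    max1, min1 = max_min_dc(arr, low, mid)       # Conquer the left half
    max2, min2 = max_min_dc(arr, mid + 1, high)  # Conquer the right half
    return max(max1, max2), min(min1, min2)      # Combine the two halves

print(max_min_dc([1000, 11, 445, 1, 330, 3000], 0, 5))  # (3000, 1)
```

The two-element base case is what gives this formulation its advantage: it resolves a pair with a single comparison, bringing the total below the 2(n-1) comparisons of the naive scan.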
+ +--- + +## Conclusion + +**Divide and Conquer** is a powerful problem-solving technique that enhances the efficiency of algorithms, particularly when dealing with large datasets or complex recursive problems. By mastering this approach, developers can tackle challenges such as sorting, searching, and optimization more effectively. + +**Common Applications** include: +- Binary Search +- Merge Sort +- Quick Sort +- Tower of Hanoi +- Max-Min Problem + +Understanding how to implement divide and conquer can significantly improve algorithmic thinking and performance in various domains of computer science. + +--- diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Divide_and_Conquer/binary_search.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Divide_and_Conquer/binary_search.py new file mode 100644 index 0000000000..4492722483 --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Divide_and_Conquer/binary_search.py @@ -0,0 +1,75 @@ +def binarySearch(arr, low, high, x): + """ + Performs binary search on a sorted array to find the index of element x. + + Parameters: + arr (list): The sorted array to search. + low (int): The starting index of the array segment. + high (int): The ending index of the array segment. + x (int): The element to search for. + + Returns: + int: The index of element x in arr, or -1 if not found. + """ + while low <= high: + mid = low + (high - low) // 2 # Calculate mid index + + if arr[mid] == x: + return mid # Element found + elif arr[mid] < x: + low = mid + 1 # Search in the right half + else: + high = mid - 1 # Search in the left half + + return -1 # Element not found + +def getArrayInput(): + """ + Prompts the user to input an array of integers. + + Returns: + list: A list of integers input by the user. + """ + while True: + try: + arr = list(map(int, input("Enter array elements (separated by spaces): ").split())) + if not arr: + print("Array cannot be empty. 
Please try again.") + else: + return arr # Return the input array + except ValueError: + print("Invalid input! Please enter integers only.") + +def getElementInput(): + """ + Prompts the user to input an integer to search in the array. + + Returns: + int: The integer input by the user. + """ + while True: + try: + x = int(input("Enter the element to search: ")) + return x # Return the input element + except ValueError: + print("Invalid input! Please enter an integer.") + +if __name__ == '__main__': + # Get user input for the array + arr = getArrayInput() + + # Optionally sort the array if not sorted + sort_choice = input("Do you want to sort the array before searching? (y/n): ").strip().lower() + if sort_choice == 'y': + arr.sort() # Sort the array + print("Sorted array:", arr) + + # Get user input for the element to search + x = getElementInput() + + # Perform binary search + result = binarySearch(arr, 0, len(arr) - 1, x) + if result != -1: + print(f"Element {x} is present at index {result}") + else: + print(f"Element {x} is not present in the array") diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Divide_and_Conquer/merge_sort.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Divide_and_Conquer/merge_sort.py new file mode 100644 index 0000000000..a9f9acd628 --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Divide_and_Conquer/merge_sort.py @@ -0,0 +1,74 @@ +def merge(arr, left, mid, right): + # Merge two subarrays + n1 = mid - left + 1 # Size of left subarray + n2 = right - mid # Size of right subarray + + L = [0] * n1 # Temp array for left + R = [0] * n2 # Temp array for right + + # Copy data to temp arrays + for i in range(n1): + L[i] = arr[left + i] + for j in range(n2): + R[j] = arr[mid + 1 + j] + + i = j = 0 # Initial indexes for L[] and R[] + k = left # Initial index for merged array + + # Merge temp arrays back to arr + while i < n1 and j < n2: + if L[i] <= R[j]: + arr[k] = 
L[i] # Copy from L[] + i += 1 + else: + arr[k] = R[j] # Copy from R[] + j += 1 + k += 1 + + # Copy remaining elements of L[] + while i < n1: + arr[k] = L[i] + i += 1 + k += 1 + + # Copy remaining elements of R[] + while j < n2: + arr[k] = R[j] + j += 1 + k += 1 + +def merge_sort(arr, left, right): + # Sort array using merge sort + if left < right: # Check if more than one element + mid = (left + right) // 2 # Find mid point + merge_sort(arr, left, mid) # Sort first half + merge_sort(arr, mid + 1, right) # Sort second half + merge(arr, left, mid, right) # Merge halves + +def getArrayInput(): + # Get array input from user + while True: + try: + arr = list(map(int, input("Enter array elements (separated by spaces): ").split())) + if not arr: + print("Array cannot be empty.") + else: + return arr # Return array + except ValueError: + print("Invalid input! Enter integers only.") + +def print_list(arr): + # Print array elements + for i in arr: + print(i, end=" ") + print() # New line + +if __name__ == "__main__": + arr = getArrayInput() # Get user input + print("Given array is:") + print_list(arr) # Print original array + + merge_sort(arr, 0, len(arr) - 1) # Sort the array + + print("\nSorted array is:") + print_list(arr) # Print sorted array diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Divide_and_Conquer/min_max.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Divide_and_Conquer/min_max.py new file mode 100644 index 0000000000..c209689401 --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Divide_and_Conquer/min_max.py @@ -0,0 +1,40 @@ +def max_min_naive(arr): + # Initialize max_val and min_val with the first element of the array + max_val = arr[0] + min_val = arr[0] + + # Iterate through the array starting from the second element + for i in range(1, len(arr)): + # Update max_val if the current element is greater + if arr[i] > max_val: + max_val = arr[i] + # Update min_val if the 
current element is smaller + if arr[i] < min_val: + min_val = arr[i] + + # Return the maximum and minimum values + return max_val, min_val + +def getArrayInput(): + # Continuously prompt the user for input until valid data is provided + while True: + try: + # Split the input string into integers and store them in an array + arr = list(map(int, input("Enter array elements (separated by spaces): ").split())) + if not arr: + print("Array cannot be empty. Please try again.") + else: + return arr # Return the valid array + except ValueError: + print("Invalid input! Please enter integers only.") + +if __name__ == '__main__': + # Get array input from the user + arr = getArrayInput() + + # Find the maximum and minimum values using max_min_naive function + max_val, min_val = max_min_naive(arr) + + # Print the maximum and minimum values + print("Maximum element is:", max_val) + print("Minimum element is:", min_val) diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Divide_and_Conquer/quick_sort.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Divide_and_Conquer/quick_sort.py new file mode 100644 index 0000000000..99b3782794 --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Divide_and_Conquer/quick_sort.py @@ -0,0 +1,37 @@ +def partition(arr, low, high): + # Set the last element as the pivot + pivot = arr[high] + # Initialize i to be one position before the low index + i = low - 1 + + # Traverse the array from low to high-1 + for j in range(low, high): + # If the current element is smaller than the pivot + if arr[j] < pivot: + i += 1 # Increment the index for the smaller element + swap(arr, i, j) # Swap the current element with the element at index i + + # Swap the pivot with the element at i+1 to place it in the correct position + swap(arr, i + 1, high) + return i + 1 # Return the partition index + +def swap(arr, i, j): + # Swap the elements at index i and j + arr[i], arr[j] = arr[j], arr[i] + 
+def quickSort(arr, low, high): + # Perform QuickSort only if the low index is less than the high index + if low < high: + # Get the partition index + pi = partition(arr, low, high) + # Recursively sort elements before and after the partition index + quickSort(arr, low, pi - 1) # Sort elements on the left of pivot + quickSort(arr, pi + 1, high) # Sort elements on the right of pivot + +if __name__ == "__main__": + # Take input from the user as a list of integers + arr = list(map(int, input("Enter the numbers to sort, separated by spaces: ").split())) + # Call QuickSort on the entire array + quickSort(arr, 0, len(arr) - 1) + # Print the sorted array + print("Sorted array:", *arr) diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Divide_and_Conquer/tower_of_hanoi.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Divide_and_Conquer/tower_of_hanoi.py new file mode 100644 index 0000000000..a5e87718ae --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Divide_and_Conquer/tower_of_hanoi.py @@ -0,0 +1,35 @@ +def TowerOfHanoi(n, from_rod, to_rod, aux_rod): + # Base case: No moves needed if no disks + if n == 0: + return + + # Move n-1 disks from the source rod to the auxiliary rod + TowerOfHanoi(n - 1, from_rod, aux_rod, to_rod) + + # Move the nth disk from the source rod to the target rod + print(f"Move disk {n} from rod {from_rod} to rod {to_rod}") + + # Move the n-1 disks from the auxiliary rod to the target rod + TowerOfHanoi(n - 1, aux_rod, to_rod, from_rod) + +def getNumberOfDisks(): + # Input validation loop to ensure the user enters a valid number of disks + while True: + try: + n = int(input("Enter the number of disks: ")) + if n <= 0: + print("Number of disks must be greater than zero. Please try again.") + else: + return n + except ValueError: + print("Invalid input! 
Please enter a valid integer.") + +if __name__ == "__main__": + # Get the number of disks from the user + N = getNumberOfDisks() + + # Print the solution for Tower of Hanoi with N disks + print(f"\nSolving Tower of Hanoi for {N} disks:") + + # Call the TowerOfHanoi function with rods labeled A, B, and C + TowerOfHanoi(N, 'A', 'C', 'B') diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Dynammic_Programming/01_knapsack.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Dynammic_Programming/01_knapsack.py new file mode 100644 index 0000000000..ea5114c37e --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Dynammic_Programming/01_knapsack.py @@ -0,0 +1,44 @@ +def knapSack(W, wt, val, n, memo=None): + # Initialize memo dictionary if it's not provided + if memo is None: + memo = {} + + # Check if the solution for (n, W) is already memoized + if (n, W) in memo: + return memo[(n, W)] + + # Base case: No items left or knapsack capacity is 0 + if n == 0 or W == 0: + return 0 + + # If the weight of the nth item is more than the remaining capacity, skip it + if wt[n-1] > W: + memo[(n, W)] = knapSack(W, wt, val, n-1, memo) + else: + # Find the maximum of including or excluding the current item + memo[(n, W)] = max( + val[n-1] + knapSack(W - wt[n-1], wt, val, n-1, memo), # Include the item + knapSack(W, wt, val, n-1, memo) # Exclude the item + ) + + # Return the memoized value + return memo[(n, W)] + +if __name__ == '__main__': + # Input profits (values of items) + profit = list(map(int, input("Enter profits separated by spaces: ").split())) + + # Input weights of the items + weight = list(map(int, input("Enter weights separated by spaces: ").split())) + + # Input the total capacity of the knapsack + W = int(input("Enter the capacity of the knapsack: ")) + + # Number of items + n = len(profit) + + # Get the maximum value for the knapsack + result = knapSack(W, weight, profit, n) + + # Output the 
result + print(f"The maximum value in the knapsack is: {result}") diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Dynammic_Programming/README.md b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Dynammic_Programming/README.md new file mode 100644 index 0000000000..f9c5b38acd --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Dynammic_Programming/README.md @@ -0,0 +1,95 @@ +# Dynamic Programming + +## What is Dynamic Programming? + +**Dynamic Programming (DP)** is an algorithmic technique used to solve complex problems by breaking them down into simpler overlapping subproblems. Instead of solving the same subproblem multiple times (as in a naive recursive approach), DP stores the results of subproblems in a table (usually an array or matrix) and reuses these results to avoid redundant computation. + +### Key Concepts: + +1. **Overlapping Subproblems**: A problem can be broken down into smaller subproblems that are solved multiple times. DP optimizes by solving each subproblem only once and storing the solution. +2. **Optimal Substructure**: The solution to a problem can be constructed from the solutions to its subproblems. This property allows DP to build solutions step-by-step. + +### Approaches: + +- **Top-Down Approach (Memoization)**: Solve the problem recursively and store the results of subproblems in a table. +- **Bottom-Up Approach (Tabulation)**: Solve the problem iteratively by solving subproblems first and storing their results in a table to build up the solution. + +--- + +## Applications of Dynamic Programming + +### 1. **0/1 Knapsack Problem** + +The **0/1 Knapsack Problem** is a combinatorial optimization problem where a set of items, each with a weight and a value, are given. The objective is to maximize the total value of the items that can be placed in a knapsack of a given weight capacity. 
+ +- **Dynamic Programming Approach**: The problem is solved by considering the value of including or excluding each item and storing the maximum value that can be achieved for each subproblem (based on current capacity and items considered). +- **Steps**: + 1. Create a 2D table where rows represent items and columns represent capacities. + 2. Fill the table by deciding whether to include an item or not. + 3. Use the filled table to determine the maximum value. + +- **Time Complexity**: \(O(nW)\), where \(n\) is the number of items and \(W\) is the capacity of the knapsack. + +- **Use Case**: Resource allocation, budgeting, inventory management. + +### 2. **Longest Common Subsequence (LCS)** + +The **Longest Common Subsequence (LCS)** problem is about finding the longest sequence that appears in the same order in two given sequences. The elements of the LCS don’t need to be contiguous but must appear in the same relative order in both sequences. + +- **Dynamic Programming Approach**: The problem is solved by comparing characters from both sequences and building the LCS for each prefix of the sequences, storing results in a 2D table. + +- **Steps**: + 1. Create a table where rows represent characters of the first sequence and columns represent characters of the second sequence. + 2. Fill the table using the recurrence relation: if characters match, add 1 to the diagonal value; otherwise, take the maximum of the left or top value. + +- **Time Complexity**: \(O(mn)\), where \(m\) and \(n\) are the lengths of the two sequences. + +- **Use Case**: DNA sequence analysis, file comparison, version control systems. + +### 3. **Matrix Chain Multiplication** + +In the **Matrix Chain Multiplication** problem, the goal is to determine the optimal way to multiply a given sequence of matrices. The problem is to find the minimum number of scalar multiplications needed to multiply the sequence of matrices together. 
+ +- **Dynamic Programming Approach**: This problem can be solved by recursively breaking it down into smaller problems and storing the results of each subproblem. The idea is to find the best place to split the chain of matrices to minimize the cost of multiplication. + +- **Steps**: + 1. Create a 2D table where each cell represents the minimum cost to multiply matrices from \(i\) to \(j\). + 2. Fill the table using the recurrence relation that minimizes the number of operations for every possible matrix split. + +- **Time Complexity**: \(O(n^3)\), where \(n\) is the number of matrices. + +- **Use Case**: Optimization of computer graphics, scientific computing, chain operations in algorithms. + +### 4. **Fibonacci Sequence** + +The **Fibonacci Sequence** is a classic problem where each number in the sequence is the sum of the two preceding numbers. It can be solved efficiently using dynamic programming by storing the previously computed Fibonacci numbers. + +- **Dynamic Programming Approach**: Instead of using the naive recursive method, the problem can be solved iteratively by building up the solution from the base cases. + +- **Steps**: + 1. Start from the base cases \(F(0) = 0\) and \(F(1) = 1\). + 2. Store the result of each Fibonacci number in an array. + 3. Use the stored values to compute larger Fibonacci numbers. + +- **Time Complexity**: \(O(n)\), where \(n\) is the position in the Fibonacci sequence. + +- **Use Case**: Algorithms that require the Fibonacci sequence, financial modeling, biological systems modeling. 
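The bottom-up steps above can be sketched in a few lines (`fib_bottom_up` is an illustrative helper; the accompanying `nth_fibonacci.py` uses top-down memoization instead):

```python
def fib_bottom_up(n):
    if n <= 1:
        return n  # Base cases: F(0) = 0, F(1) = 1
    table = [0] * (n + 1)  # table[i] holds F(i)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]  # Reuse stored subproblem results
    return table[n]

print(fib_bottom_up(10))  # 55
```

Each value is computed exactly once from the two stored predecessors, which is what reduces the naive exponential recursion to \(O(n)\).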
+ +--- + +## Key Differences Between Applications: + +| Problem | Time Complexity | Use Case | +|-----------------------------|----------------------|-----------------------------------------------------| +| **0/1 Knapsack** | \(O(nW)\) | Resource allocation, budgeting | +| **LCS** | \(O(mn)\) | DNA sequence analysis, file comparison | +| **Matrix Chain Multiplication** | \(O(n^3)\) | Computer graphics, scientific computing | +| **Fibonacci** | \(O(n)\) | Financial modeling, biological systems | + +--- + +## Conclusion + +**Dynamic Programming** is a powerful technique for solving optimization problems that have overlapping subproblems and optimal substructure. It significantly reduces the time complexity of recursive solutions by storing intermediate results and avoiding redundant computations. Key applications include the **0/1 Knapsack Problem**, **Longest Common Subsequence (LCS)**, **Matrix Chain Multiplication**, and the **Fibonacci Sequence**. By mastering dynamic programming, developers can tackle complex computational problems more efficiently and effectively. 
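As a concrete instance of the tabulation approach, the 0/1 knapsack table described earlier can be filled bottom-up. This is a sketch with an illustrative `knapsack_bottom_up` helper (the accompanying `01_knapsack.py` implements the top-down memoized variant):

```python
def knapsack_bottom_up(values, weights, capacity):
    n = len(values)
    # dp[i][w] = best value achievable with the first i items and capacity w
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]  # Option 1: exclude item i
            if weights[i - 1] <= w:  # Option 2: include item i if it fits
                dp[i][w] = max(dp[i][w], dp[i - 1][w - weights[i - 1]] + values[i - 1])
    return dp[n][capacity]

print(knapsack_bottom_up([60, 100, 120], [10, 20, 30], 50))  # 220
```

Row \(i\) depends only on row \(i-1\), which is the overlapping-subproblem structure that makes the \(O(nW)\) bound possible (and lets the table be compressed to a single row if memory matters).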
+ +--- diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Dynammic_Programming/lcs.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Dynammic_Programming/lcs.py new file mode 100644 index 0000000000..b4570809c9 --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Dynammic_Programming/lcs.py @@ -0,0 +1,30 @@ +def get_lcs_length(S1, S2): + m = len(S1) # Length of the first string + n = len(S2) # Length of the second string + + # Create a 2D list (dp table) to store the length of LCS for substrings + dp = [[0] * (n + 1) for _ in range(m + 1)] + + # Fill the dp table + for i in range(1, m + 1): + for j in range(1, n + 1): + # If characters match, add 1 to the previous diagonal value + if S1[i - 1] == S2[j - 1]: + dp[i][j] = dp[i - 1][j - 1] + 1 + else: + # If not, take the maximum value from the left or above cell + dp[i][j] = max(dp[i - 1][j], dp[i][j - 1]) + + # The last cell contains the length of the longest common subsequence + return dp[m][n] + +if __name__ == "__main__": + # Take input strings from the user + S1 = input("Enter the first string: ") + S2 = input("Enter the second string: ") + + # Calculate the LCS length + result = get_lcs_length(S1, S2) + + # Output the result + print("Length of LCS is", result) diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Dynammic_Programming/matrix_multiplication.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Dynammic_Programming/matrix_multiplication.py new file mode 100644 index 0000000000..91e278bc39 --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Dynammic_Programming/matrix_multiplication.py @@ -0,0 +1,33 @@ +import sys + +def minMult(arr): + n = len(arr) # Length of the array + + # Initialize a 2D table dp to store the minimum cost of matrix multiplication + dp = [[0] * n for _ in range(n)] + + # l is the chain length (number of matrices involved) + for 
l in range(2, n): # Start from chain length 2 since l=1 is trivial + for i in range(n - l): + j = i + l # End of the chain + dp[i][j] = sys.maxsize # Initialize with a large value + + # Try placing parentheses at different positions + for k in range(i + 1, j): + # Calculate the cost of multiplying matrices from i to j with split at k + q = dp[i][k] + dp[k][j] + arr[i] * arr[k] * arr[j] + # Store the minimum cost in dp[i][j] + dp[i][j] = min(dp[i][j], q) + + # Return the minimum cost to multiply the entire chain of matrices + return dp[0][n - 1] + +if __name__ == "__main__": + # Input the matrix dimensions as space-separated integers + arr = list(map(int, input("Enter the dimensions of matrices separated by spaces: ").split())) + + # Calculate the minimum number of scalar multiplications + result = minMult(arr) + + # Output the result + print(f"The minimum number of multiplications is: {result}") diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Dynammic_Programming/nth_fibonacci.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Dynammic_Programming/nth_fibonacci.py new file mode 100644 index 0000000000..d1523ea767 --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Dynammic_Programming/nth_fibonacci.py @@ -0,0 +1,21 @@ +def nth_fibonacci(n, memo={}): + # Check if Fibonacci value is already calculated and stored in memo + if n in memo: + return memo[n] + + # Base cases: return n if n is 0 or 1 (Fibonacci(0) = 0, Fibonacci(1) = 1) + if n <= 1: + return n + + # Recursive case: calculate and store Fibonacci(n) in memo + memo[n] = nth_fibonacci(n - 1, memo) + nth_fibonacci(n - 2, memo) + return memo[n] + +# Get input from the user for the Fibonacci number position +n = int(input("Enter the position of the Fibonacci number to find: ")) + +# Calculate the nth Fibonacci number +result = nth_fibonacci(n) + +# Print the result +print(f"The {n}th Fibonacci number is: {result}") diff --git 
a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Graph_Traversing/BFS.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Graph_Traversing/BFS.py new file mode 100644 index 0000000000..c88ff677e4 --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Graph_Traversing/BFS.py @@ -0,0 +1,37 @@ +from collections import deque + +def bfs(adj, s): + q = deque() # Initialize a queue using deque + visited = [False] * len(adj) # Create a visited list to track visited nodes + visited[s] = True # Mark the starting node as visited + q.append(s) # Add the starting node to the queue + + while q: + curr = q.popleft() # Dequeue the front node + print(curr, end=" ") # Print the current node + for x in adj[curr]: # Iterate through all adjacent nodes + if not visited[x]: # If the node is not visited + visited[x] = True # Mark it as visited + q.append(x) # Enqueue the adjacent node + +def add_edge(adj, u, v): + adj[u].append(v) # Add edge from u to v + adj[v].append(u) # Since it's an undirected graph, add edge from v to u + +if __name__ == "__main__": + V = int(input("Enter the number of vertices: ")) # Input the number of vertices + adj = [[] for _ in range(V)] # Initialize an adjacency list for the graph + E = int(input("Enter the number of edges: ")) # Input the number of edges + print("Enter each edge in the format 'u v':") + + # Input edges and add them to the adjacency list + for _ in range(E): + u, v = map(int, input().split()) + add_edge(adj, u, v) + + # Input the starting vertex for BFS + start_vertex = int(input("Enter the starting vertex for BFS (0 to {}): ".format(V - 1))) + + # Perform BFS starting from the chosen vertex + print("BFS starting from vertex {}: ".format(start_vertex)) + bfs(adj, start_vertex) diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Graph_Traversing/DFS.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Graph_Traversing/DFS.py new file 
mode 100644 index 0000000000..d437bb3b2e --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Graph_Traversing/DFS.py @@ -0,0 +1,32 @@ +def dfs_rec(adj, visited, s): + visited[s] = True # Mark the current node as visited + print(s, end=" ") # Print the current node + for i in adj[s]: # Loop through all adjacent vertices + if not visited[i]: # If the adjacent vertex is not visited + dfs_rec(adj, visited, i) # Recursively perform DFS on the adjacent vertex + +def dfs(adj, s): + visited = [False] * len(adj) # Initialize a visited list + dfs_rec(adj, visited, s) # Start DFS from the source vertex + +def add_edge(adj, s, t): + adj[s].append(t) # Add an edge from s to t + adj[t].append(s) # Since it's an undirected graph, add an edge from t to s + +if __name__ == "__main__": + V = int(input("Enter the number of vertices: ")) # Input the number of vertices + E = int(input("Enter the number of edges: ")) # Input the number of edges + adj = [[] for _ in range(V)] # Initialize an adjacency list for the graph + + print("Enter each edge in the format 'u v':") + # Input edges and add them to the adjacency list + for _ in range(E): + u, v = map(int, input().split()) + add_edge(adj, u, v) + + # Input the starting vertex for DFS + source = int(input("Enter the starting vertex for DFS (0 to {}): ".format(V - 1))) + + # Perform DFS starting from the chosen vertex + print("DFS from source:", source) + dfs(adj, source) diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Graph_Traversing/README.md b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Graph_Traversing/README.md new file mode 100644 index 0000000000..f96208ae37 --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Graph_Traversing/README.md @@ -0,0 +1,73 @@ +# Graph Traversing + +## What is Graph Traversing? 
+ +**Graph Traversing** refers to the process of visiting all the vertices and edges in a graph in a systematic manner. Graphs, which consist of nodes (vertices) connected by edges, can represent a wide range of real-world problems such as networks, relationships, and processes. Traversing a graph helps in exploring the structure, finding specific nodes, and analyzing the connectivity of the graph. + +### Key Concepts: + +1. **Graph**: A collection of nodes connected by edges. Graphs can be directed (edges have a direction) or undirected (edges have no direction). +2. **Traversal**: Visiting the vertices of the graph either by depth-first or breadth-first approach. +3. **Visited Nodes**: During traversal, a node is marked as "visited" to avoid processing the same node more than once. + +### Two Main Types of Graph Traversal: + +- **Breadth-First Search (BFS)**: Explores all the vertices at the present depth before moving on to vertices at the next depth level. +- **Depth-First Search (DFS)**: Explores as far along a branch as possible before backtracking. + +--- + +## Applications of Graph Traversing + +### 1. **Breadth-First Search (BFS)** + +**Breadth-First Search (BFS)** is a graph traversal technique that explores nodes layer by layer. It starts from a given node (usually the root or source node) and explores all its neighbors before moving on to the next layer of neighbors. BFS uses a queue to keep track of the nodes that need to be explored. + +- **How It Works**: + 1. Begin at the source node and mark it as visited. + 2. Enqueue all its neighbors and continue exploring by dequeuing the next node. + 3. Repeat until all nodes are visited or the target node is found. + +- **Time Complexity**: \(O(V + E)\), where \(V\) is the number of vertices and \(E\) is the number of edges. + +- **Applications**: + - **Shortest Path in Unweighted Graphs**: BFS is ideal for finding the shortest path in an unweighted graph, as it explores the graph level by level. 
+ - **Web Crawlers**: BFS helps web crawlers in exploring the links on a website, starting from a given URL and moving outward. + - **Social Networks**: Finding the degree of separation between users. + - **Connected Components**: In an undirected graph, BFS helps find all the connected components. + +### 2. **Depth-First Search (DFS)** + +**Depth-First Search (DFS)** is a graph traversal technique that explores as far along a branch as possible before backtracking to the previous vertex. It uses a stack (either implicitly via recursion or explicitly) to keep track of the vertices being explored. + +- **How It Works**: + 1. Begin at the source node and mark it as visited. + 2. Move to an adjacent unvisited node, mark it as visited, and continue the process. + 3. If no adjacent unvisited node is available, backtrack to the previous vertex and explore other unvisited nodes. + +- **Time Complexity**: \(O(V + E)\), where \(V\) is the number of vertices and \(E\) is the number of edges. + +- **Applications**: + - **Cycle Detection**: DFS helps in detecting cycles in directed and undirected graphs. + - **Topological Sorting**: In directed acyclic graphs (DAGs), DFS can be used to perform a topological sort, which orders vertices based on dependencies. + - **Path Finding**: DFS is useful in finding paths between two nodes, though it may not always provide the shortest path. + - **Solving Mazes**: DFS can be used to explore all possible paths in mazes or puzzles until the exit is found. 
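
Cycle detection is listed above as a DFS application but is not implemented in this PR; here is a minimal sketch for an undirected graph given as an adjacency list (the helper name `has_cycle` is illustrative):

```python
def has_cycle(adj):
    """Detect a cycle in an undirected graph given as an adjacency list."""
    visited = [False] * len(adj)

    def dfs(u, parent):
        visited[u] = True
        for v in adj[u]:
            if not visited[v]:
                if dfs(v, u):
                    return True
            elif v != parent:
                # A visited neighbour that is not the DFS parent closes a cycle
                return True
        return False

    # The graph may be disconnected, so start a DFS in every component
    return any(not visited[u] and dfs(u, -1) for u in range(len(adj)))

# Triangle 0-1-2 contains a cycle; the simple path 0-1-2 does not
print(has_cycle([[1, 2], [0, 2], [0, 1]]))  # True
print(has_cycle([[1], [0, 2], [1]]))        # False
```

The `parent` argument is what distinguishes a back edge (cycle) from simply revisiting the vertex we came from.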
+ +--- + +## Key Differences Between BFS and DFS: + +| Algorithm | Approach | Data Structure | Use Case | Time Complexity | +|-----------|---------------------------|----------------|-------------------------------|-----------------| +| **BFS** | Layer-by-layer traversal | Queue | Shortest path in unweighted graph | \(O(V + E)\) | +| **DFS** | Depth exploration | Stack/Recursion| Path finding, cycle detection | \(O(V + E)\) | + +--- + +## Conclusion + +**Graph Traversing** is essential for exploring and analyzing the structure of graphs, whether it’s for finding paths, detecting cycles, or searching for specific nodes. **Breadth-First Search (BFS)** and **Depth-First Search (DFS)** are the two main approaches for graph traversal. BFS is particularly effective for shortest path problems in unweighted graphs, while DFS excels at exploring paths and detecting cycles. Both techniques are fundamental to many applications in computer science, including network analysis, AI, and game development. + +By mastering these traversal techniques, developers can efficiently solve a variety of problems in graph theory and real-world systems. + +--- diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Greedy_Techniques/README.md b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Greedy_Techniques/README.md new file mode 100644 index 0000000000..8095c10791 --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Greedy_Techniques/README.md @@ -0,0 +1,112 @@ +# Greedy Algorithms + +## What are Greedy Algorithms? + +**Greedy Algorithms** are a class of algorithms that make the locally optimal choice at each stage with the hope of finding a global optimum. The fundamental principle is to choose the best option available at the moment, without considering the larger consequences. This approach is typically used for optimization problems, where the goal is to maximize or minimize a particular quantity. 
+ +### Key Characteristics: + +1. **Local Optimum**: Greedy algorithms make a choice that looks best at that moment. +2. **Irrevocability**: Once a decision is made, it cannot be changed. +3. **Optimal Substructure**: The problem can be broken down into subproblems, where the optimal solution of the subproblems contributes to the optimal solution of the overall problem. +4. **Feasibility**: The chosen option must satisfy the problem’s constraints. + +--- + +## Applications of Greedy Algorithms + +### 1. **Activity Selection Problem** + +The **Activity Selection Problem** involves selecting the maximum number of activities that don’t overlap in time. Each activity has a start and finish time, and the goal is to maximize the number of non-conflicting activities. + +- **Greedy Approach**: + 1. Sort activities based on their finish times. + 2. Select the first activity and compare its finish time with the start times of the remaining activities. + 3. Select subsequent activities that start after the last selected activity finishes. + +- **Time Complexity**: \(O(n \log n)\) due to sorting. + +- **Use Case**: Scheduling tasks in resource management and maximizing resource utilization. + +### 2. **Job Scheduling Problem** + +The **Job Scheduling Problem** involves scheduling jobs with deadlines to maximize profit. Each job has a deadline and associated profit if completed by that deadline. + +- **Greedy Approach**: + 1. Sort jobs in descending order of profit. + 2. Assign jobs to the latest available time slot before their deadlines. + +- **Time Complexity**: \(O(n \log n)\) for sorting, plus \(O(n^2)\) for scheduling if using a naive approach, or \(O(n)\) with union-find data structures. + +- **Use Case**: Task scheduling in operating systems and maximizing profit in project management. + +### 3. **Fractional Knapsack Problem** + +In the **Fractional Knapsack Problem**, you are given weights and values of items and a maximum weight capacity. 
The goal is to maximize the total value in the knapsack, allowing fractions of items. + +- **Greedy Approach**: + 1. Calculate the value-to-weight ratio for each item. + 2. Sort items based on this ratio. + 3. Fill the knapsack with whole items first, then take a fraction of the last item to reach the maximum weight. + +- **Time Complexity**: \(O(n \log n)\) due to sorting. + +- **Use Case**: Resource allocation in logistics and investment portfolios. + +### 4. **Optimal Merge Pattern** + +The **Optimal Merge Pattern** is about combining files with the minimum cost of merging, typically used in file compression. + +- **Greedy Approach**: + 1. Use a min-heap to combine the two smallest files iteratively until one file remains. + 2. Each merge operation incurs a cost equal to the sum of the sizes of the files being merged. + +- **Time Complexity**: \(O(n \log n)\). + +- **Use Case**: File compression algorithms and optimal coding strategies. + +### 5. **Huffman Coding** + +**Huffman Coding** is a greedy algorithm used for lossless data compression, constructing optimal prefix codes based on character frequencies. + +- **Greedy Approach**: + 1. Create a priority queue of characters sorted by frequency. + 2. Combine the two least frequent characters until only one tree remains, assigning binary codes based on the tree structure. + +- **Time Complexity**: \(O(n \log n)\) for the priority queue operations. + +- **Use Case**: Data compression formats like JPEG and MP3. + +### 6. **Traveling Salesman Problem (TSP)** + +The **Traveling Salesman Problem (TSP)** involves finding the shortest possible route that visits a set of cities and returns to the origin city. While TSP is NP-hard, a greedy approach can provide approximate solutions. + +- **Greedy Approach**: + 1. Start from a city and repeatedly visit the nearest unvisited city until all cities are visited. + +- **Time Complexity**: \(O(n^2)\) using an adjacency matrix. 
+ +- **Use Case**: Route optimization in logistics, delivery services, and circuit board manufacturing. + +--- + +## Key Differences Between Applications: + +| Problem | Time Complexity | Use Case | +|-------------------------------|----------------------|------------------------------------------------| +| **Activity Selection** | \(O(n \log n)\) | Resource management | +| **Job Scheduling** | \(O(n \log n)\) | Maximizing profit in project management | +| **Fractional Knapsack** | \(O(n \log n)\) | Resource allocation in logistics | +| **Optimal Merge Pattern** | \(O(n \log n)\) | File compression algorithms | +| **Huffman Coding** | \(O(n \log n)\) | Data compression formats | +| **Traveling Salesman Problem**| \(O(n^2)\) | Route optimization | + +--- + +## Conclusion + +**Greedy Algorithms** are a powerful technique for solving optimization problems by making locally optimal choices. They provide efficient solutions to a variety of problems, including the **Activity Selection Problem**, **Job Scheduling**, **Fractional Knapsack**, **Optimal Merge Pattern**, **Huffman Coding**, and the **Traveling Salesman Problem**. By understanding the principles and applications of greedy algorithms, developers can tackle real-world problems effectively, maximizing efficiency and optimizing resource usage. + +Mastering greedy algorithms not only enhances problem-solving skills but also lays a solid foundation for further exploration of advanced algorithmic techniques. 
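
A toy sketch of the "locally optimal choice" principle described above is greedy coin change: always take the largest coin that fits. It is optimal for canonical coin systems but can fail for others, which illustrates why greedy choices must be justified per problem (the function name is illustrative):

```python
def greedy_coin_change(coins, amount):
    """At each step pick the largest coin that fits (the locally optimal choice)."""
    used = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            used.append(coin)
    return used if amount == 0 else None

# Canonical system: greedy is optimal (63 = 25 + 25 + 10 + 1 + 1 + 1)
print(greedy_coin_change([1, 5, 10, 25], 63))  # [25, 25, 10, 1, 1, 1]

# Non-canonical system: greedy uses 4 coins (9 + 1 + 1 + 1), but 4 + 4 + 4 needs only 3
print(greedy_coin_change([1, 4, 9], 12))       # [9, 1, 1, 1]
```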
+ +--- diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Greedy_Techniques/activity_selection.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Greedy_Techniques/activity_selection.py new file mode 100644 index 0000000000..cd72815530 --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Greedy_Techniques/activity_selection.py @@ -0,0 +1,30 @@ +def printMaxActivities(s, f): + n = len(f) # Get the number of activities + print("Following activities are selected:") + + i = 0 # The first activity is always selected + print(i, end=' ') + + # Loop through the remaining activities + for j in range(1, n): + # If the start time of the current activity is greater or equal to + # the finish time of the last selected activity + if s[j] >= f[i]: + print(j, end=' ') # Select this activity + i = j # Update the last selected activity index + +if __name__ == '__main__': + n = int(input("Enter the number of activities: ")) # Input the number of activities + s = [] + f = [] + + # Input the start times of the activities + print("Enter start times of activities separated by spaces:") + s = list(map(int, input().split())) + + # Input the finish times of the activities + print("Enter finish times of activities separated by spaces:") + f = list(map(int, input().split())) + + # Call the function to print the maximum set of activities + printMaxActivities(s, f) diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Greedy_Techniques/fractional_knapsack.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Greedy_Techniques/fractional_knapsack.py new file mode 100644 index 0000000000..619e8f29e2 --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Greedy_Techniques/fractional_knapsack.py @@ -0,0 +1,34 @@ +class Item: + def __init__(self, profit, weight): + self.profit = profit # Initialize profit of the item + self.weight = weight # Initialize 
weight of the item + +def fractionalKnapsack(W, arr): + # Sort the items based on their profit-to-weight ratio in descending order + arr.sort(key=lambda x: (x.profit / x.weight), reverse=True) + finalvalue = 0.0 # Initialize the final value of the knapsack + + for item in arr: + # If the item can fully fit in the knapsack + if item.weight <= W: + W -= item.weight # Reduce the remaining capacity + finalvalue += item.profit # Add the full profit of the item + else: + # If the item can't fully fit, take the fractional part + finalvalue += item.profit * W / item.weight + break # No more capacity left, break the loop + + return finalvalue # Return the maximum value + +if __name__ == "__main__": + W = float(input("Enter the capacity of the knapsack: ")) # Input the knapsack's capacity + n = int(input("Enter the number of items: ")) # Input the number of items + arr = [] + + for _ in range(n): + profit = float(input("Enter profit of item: ")) # Input profit for each item + weight = float(input("Enter weight of item: ")) # Input weight for each item + arr.append(Item(profit, weight)) # Create an item and add it to the list + + max_val = fractionalKnapsack(W, arr) # Calculate the maximum value using the fractional knapsack algorithm + print("Maximum value in the knapsack:", max_val) # Output the result diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Greedy_Techniques/huffman_code.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Greedy_Techniques/huffman_code.py new file mode 100644 index 0000000000..db1bbfb1df --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Greedy_Techniques/huffman_code.py @@ -0,0 +1,44 @@ +import heapq # Import the heapq module to use a priority queue + +class Node: + def __init__(self, freq, symbol, left=None, right=None): + self.freq = freq # Frequency of the symbol + self.symbol = symbol # Character symbol + self.left = left # Left child + self.right = right # Right 
child + self.huff = '' # Huffman code for the symbol + + def __lt__(self, nxt): + return self.freq < nxt.freq # Comparison based on frequency for priority queue + +def printNodes(node, val=''): + newVal = val + str(node.huff) # Append the Huffman code + if node.left: + printNodes(node.left, newVal) # Traverse left child + if node.right: + printNodes(node.right, newVal) # Traverse right child + if not node.left and not node.right: + print(f"{node.symbol} -> {newVal}") # Print the symbol and its corresponding Huffman code + +if __name__ == "__main__": + # Input characters and their corresponding frequencies + chars = input("Enter characters separated by spaces: ").split() + freq = list(map(int, input("Enter corresponding frequencies separated by spaces: ").split())) + + nodes = [] + # Create a priority queue with nodes for each character + for x in range(len(chars)): + heapq.heappush(nodes, Node(freq[x], chars[x])) + + # Build the Huffman tree + while len(nodes) > 1: + left = heapq.heappop(nodes) # Pop the two nodes with the smallest frequency + right = heapq.heappop(nodes) + left.huff = 0 # Assign 0 to the left child + right.huff = 1 # Assign 1 to the right child + # Create a new node with combined frequency + newNode = Node(left.freq + right.freq, left.symbol + right.symbol, left, right) + heapq.heappush(nodes, newNode) # Push the new node back into the priority queue + + print("Huffman Codes:") # Output the generated Huffman codes + printNodes(nodes[0]) # Print the Huffman codes starting from the root node diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Greedy_Techniques/job_scheduling.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Greedy_Techniques/job_scheduling.py new file mode 100644 index 0000000000..a5f12af41b --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Greedy_Techniques/job_scheduling.py @@ -0,0 +1,36 @@ +def printJobScheduling(arr, t): + n = len(arr) # Get the 
number of jobs + + # Sort jobs based on profit in descending order + arr.sort(key=lambda x: x[2], reverse=True) + + result = [False] * t # To track which time slots are filled + job = ['-1'] * t # To store the job names for the time slots + + # Iterate over each job + for i in range(len(arr)): + # Find a free slot for this job, starting from the last possible time slot + for j in range(min(t - 1, arr[i][1] - 1), -1, -1): + if not result[j]: # Check if the slot is free + result[j] = True # Mark the slot as filled + job[j] = arr[i][0] # Assign the job to this slot + break # Move to the next job + + print(job) # Print the scheduled jobs + +if __name__ == '__main__': + n = int(input("Enter the number of jobs: ")) # Input the number of jobs + arr = [] # List to hold job details + + print("Enter job details in the format: JobName Deadline Profit (space-separated):") + # Input job details + for _ in range(n): + job_detail = input().split() + job_name = job_detail[0] # Job name + deadline = int(job_detail[1]) # Deadline for the job + profit = int(job_detail[2]) # Profit of the job + arr.append([job_name, deadline, profit]) # Append job details to the list + + t = int(input("Enter the maximum number of time slots: ")) # Input the maximum number of time slots + print("Following is the maximum profit sequence of jobs:") + printJobScheduling(arr, t) # Schedule and print jobs diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Greedy_Techniques/optimal_merge_pattern.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Greedy_Techniques/optimal_merge_pattern.py new file mode 100644 index 0000000000..8089316e4a --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Greedy_Techniques/optimal_merge_pattern.py @@ -0,0 +1,89 @@ +class Heap: + def __init__(self): + self.h = [] # Initialize the heap as an empty list + + def parent(self, index): + # Return the index of the parent node + return (index - 1) // 2 if 
index > 0 else None + + def lchild(self, index): + # Return the index of the left child + return (2 * index) + 1 + + def rchild(self, index): + # Return the index of the right child + return (2 * index) + 2 + + def addItem(self, item): + # Add a new item to the heap + self.h.append(item) # Append item to the heap + index = len(self.h) - 1 # Get the index of the newly added item + parent = self.parent(index) # Get the parent index + + # Move the new item up the heap to maintain heap property + while index > 0 and item < self.h[parent]: + self.h[index], self.h[parent] = self.h[parent], self.h[index] # Swap + index = parent # Move up the heap + parent = self.parent(index) + + def deleteItem(self): + # Remove and return the minimum item (root) from the heap + length = len(self.h) + self.h[0], self.h[length - 1] = self.h[length - 1], self.h[0] # Swap root with the last item + deleted = self.h.pop() # Remove the last item (the root) + self.moveDownHeapify(0) # Restore heap property + return deleted + + def moveDownHeapify(self, index): + # Move down the heap to restore heap property + lc, rc = self.lchild(index), self.rchild(index) # Get left and right children + length, smallest = len(self.h), index # Initialize smallest as the current index + + # Check if left child is smaller than the current smallest + if lc < length and self.h[lc] < self.h[smallest]: + smallest = lc + # Check if right child is smaller than the current smallest + if rc < length and self.h[rc] < self.h[smallest]: + smallest = rc + # If the smallest is not the current index, swap and continue heapifying + if smallest != index: + self.h[smallest], self.h[index] = self.h[index], self.h[smallest] + self.moveDownHeapify(smallest) + + def increaseItem(self, index, value): + # Increase the value of the item at the given index + if value <= self.h[index]: + return # Do nothing if the new value is not greater + self.h[index] = value # Update the value + self.moveDownHeapify(index) # Restore heap property + 
+class OptimalMergePattern: + def __init__(self, items): + self.items = items # Store the items + self.heap = Heap() # Create a heap instance + + def optimalMerge(self): + # Calculate the optimal merge cost using a min-heap + if len(self.items) <= 1: + return sum(self.items) # If there's one or no item, return the sum + + # Add all items to the heap + for item in self.items: + self.heap.addItem(item) + + total_cost = 0 # Initialize total cost + # Merge items until one item is left + while len(self.heap.h) > 1: + first = self.heap.deleteItem() # Remove the smallest item + second = self.heap.h[0] # Get the next smallest item (root) + total_cost += (first + second) # Add their sum to the total cost + self.heap.increaseItem(0, first + second) # Merge them and add back to heap + + return total_cost # Return the total merge cost + +if __name__ == '__main__': + n = int(input("Enter the number of items: ")) # Input the number of items + items = list(map(int, input("Enter the item sizes separated by spaces: ").split())) # Input item sizes + omp = OptimalMergePattern(items) # Create an instance of the OptimalMergePattern class + result = omp.optimalMerge() # Calculate optimal merge cost + print("Optimal Merge Cost:", result) # Output the result diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Greedy_Techniques/travel_salesman.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Greedy_Techniques/travel_salesman.py new file mode 100644 index 0000000000..1fa419d94f --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Greedy_Techniques/travel_salesman.py @@ -0,0 +1,64 @@ +from typing import List +from collections import defaultdict + +# Define a constant for maximum integer value +INT_MAX = 2147483647 + +def findMinRoute(tsp: List[List[int]]): + # Initialize total cost, counter, and indices for the current city + total_cost = 0 + counter = 0 + j = 0 # Index for the next city to consider + i = 0 # 
Index for the current city + min_cost = INT_MAX # Initialize minimum cost to maximum integer + visited = defaultdict(int) # Track visited cities using a default dictionary + + visited[0] = 1 # Mark the starting city (city 0) as visited + route = [0] * len(tsp) # Initialize the route array to track the selected route + + # Main loop to find the minimum cost route + while i < len(tsp) and j < len(tsp[i]): + if counter >= len(tsp[i]) - 1: + break # Exit if all cities have been visited + + # Check if the city has not been visited and is not the current city + if j != i and (visited[j] == 0): + # Update minimum cost and corresponding city if a cheaper path is found + if tsp[i][j] < min_cost: + min_cost = tsp[i][j] # Update minimum cost + route[counter] = j + 1 # Record the city in the route + + j += 1 # Move to the next city + + # If we've reached the end of the cities list, update total cost + if j == len(tsp[i]): + total_cost += min_cost # Add the minimum cost found to total cost + min_cost = INT_MAX # Reset minimum cost for the next iteration + visited[route[counter] - 1] = 1 # Mark the selected city as visited + j = 0 # Reset index for the next iteration + i = route[counter] - 1 # Move to the next city in the route + counter += 1 # Increment counter for route + + # Check for the last city to return to the starting city + i = route[counter - 1] - 1 + + # Loop to find the minimum cost for returning to the starting city + for j in range(len(tsp)): + if (i != j) and tsp[i][j] < min_cost: + min_cost = tsp[i][j] # Update minimum cost if a cheaper path is found + route[counter] = j + 1 # Record the returning city in the route + + total_cost += min_cost # Add the cost to return to the total cost + print("Minimum Cost is:", total_cost) # Print the total minimum cost + +if __name__ == "__main__": + n = int(input("Enter the number of cities: ")) # Input number of cities + tsp = [] # Initialize the adjacency matrix for the cities + + # Input the adjacency matrix with distances 
+ print("Enter the adjacency matrix (use -1 for no direct path):") + for _ in range(n): + row = list(map(int, input().split())) # Input each row of the matrix + tsp.append(row) # Add the row to the adjacency matrix + + findMinRoute(tsp) # Call the function to find the minimum route diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Maximum_Flow/README.md b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Maximum_Flow/README.md new file mode 100644 index 0000000000..1fa8e69133 --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Maximum_Flow/README.md @@ -0,0 +1,44 @@ +# Maximum Flow + +## What is Maximum Flow? + +**Maximum Flow** is a concept in network flow theory that deals with finding the greatest possible flow in a flow network from a source node to a sink node while respecting the capacity constraints of the edges. A flow network consists of nodes and directed edges, where each edge has a capacity that indicates the maximum amount of flow that can pass through it. The goal is to determine the maximum flow that can be sent from the source to the sink without exceeding the capacities of the edges. + +### Key Concepts: + +1. **Flow Network**: A directed graph where each edge has a capacity and the flow must respect these capacities. +2. **Source and Sink**: The source node is where the flow originates, and the sink node is where the flow is intended to reach. +3. **Flow Conservation**: The amount of flow into a node must equal the amount of flow out, except for the source and sink. +4. **Capacity Constraint**: The flow along an edge cannot exceed its capacity. + +--- + +## Applications of Maximum Flow + +### **Ford-Fulkerson Method** + +The **Ford-Fulkerson Method** is a popular algorithm for computing the maximum flow in a flow network. It uses the concept of augmenting paths to iteratively increase the flow until no more augmenting paths can be found. 
The algorithm can be implemented using Depth-First Search (DFS) or Breadth-First Search (BFS) to identify these paths. + +- **How It Works**: + 1. Initialize the flow in all edges to zero. + 2. While there exists an augmenting path from the source to the sink, increase the flow along this path. + 3. Adjust the capacities of the edges along the path to account for the new flow. + 4. Repeat until no more augmenting paths can be found. + +- **Time Complexity**: The time complexity of the Ford-Fulkerson method depends on the method used to find augmenting paths: + - With DFS: \(O(max\_flow \cdot E)\) in the worst case. + - With BFS (Edmonds-Karp): \(O(V \cdot E^2)\). + +- **Use Case**: The Ford-Fulkerson method is widely used in various applications, including: + - **Network Routing**: Optimizing data flow in telecommunications and computer networks. + - **Transportation Networks**: Managing and optimizing transportation logistics and traffic flow. + - **Bipartite Matching**: Solving problems like job assignment and matching students to schools. + - **Circulation Problems**: Ensuring the flow of goods in supply chain management while respecting capacities and demands. + +--- + +## Conclusion + +**Maximum Flow** is a critical concept in graph theory with numerous real-world applications. The **Ford-Fulkerson Method** stands out as a powerful technique for finding the maximum flow in a flow network. By leveraging this method, developers can solve complex problems related to network routing, transportation logistics, bipartite matching, and circulation. + +Understanding the principles of maximum flow can significantly enhance algorithmic skills, enabling practitioners to tackle optimization challenges across various domains effectively. Mastering these concepts lays a strong foundation for advanced topics in network flows and graph theory. 
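
The bipartite-matching use case mentioned above reduces to max flow on a unit-capacity network; on such networks Ford-Fulkerson specializes to the classic augmenting-path matching algorithm (Kuhn's algorithm), sketched below with illustrative names. `adj[u]` lists the right-side vertices reachable from left vertex `u`:

```python
def max_bipartite_matching(adj, n_right):
    """Maximum bipartite matching via augmenting paths (Kuhn's algorithm)."""
    match = [-1] * n_right  # match[v] = left vertex currently matched to right vertex v

    def try_augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                # Use v if it is free, or if its current partner can be re-matched
                if match[v] == -1 or try_augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    return sum(try_augment(u, [False] * n_right) for u in range(len(adj)))

# 3 workers, 3 jobs: worker 0 can do jobs 0/1, worker 1 only job 0, worker 2 only job 2
print(max_bipartite_matching([[0, 1], [0], [2]], 3))  # 3
```

Each successful `try_augment` call is one augmenting path, exactly as in the Ford-Fulkerson loop above, so the matching size equals the max flow of the corresponding network.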
diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Maximum_Flow/ford_fulkenson.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Maximum_Flow/ford_fulkenson.py new file mode 100644 index 0000000000..e3140f4131 --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Maximum_Flow/ford_fulkenson.py @@ -0,0 +1,65 @@ +from collections import deque + +def bfs(rGraph, s, t, parent): + # Initialize visited list to track visited vertices + visited = [False] * len(rGraph) + q = deque() # Create a queue for BFS + q.append(s) # Start BFS from the source vertex + visited[s] = True # Mark source as visited + parent[s] = -1 # Source has no parent + + # Perform BFS to find an augmenting path + while q: + u = q.popleft() # Get the front vertex + for v in range(len(rGraph)): # Check all vertices + # If not visited and there's remaining capacity + if not visited[v] and rGraph[u][v] > 0: + q.append(v) # Add vertex to queue + parent[v] = u # Update parent + visited[v] = True # Mark as visited + + # Return whether we reached the sink + return visited[t] + +def fordFulkerson(graph, s, t): + # Create a residual graph + rGraph = [row[:] for row in graph] + parent = [-1] * len(graph) # Array to store the path + max_flow = 0 # Initialize max flow + + # While there's an augmenting path in the residual graph + while bfs(rGraph, s, t, parent): + path_flow = float('inf') # Initialize path flow + v = t + + # Find the maximum flow through the path found + while v != s: + u = parent[v] + path_flow = min(path_flow, rGraph[u][v]) # Find minimum capacity in the path + v = parent[v] + + # update residual capacities of the edges and reverse edges + v = t + while v != s: + u = parent[v] + rGraph[u][v] -= path_flow # Decrease forward edge capacity + rGraph[v][u] += path_flow # Increase reverse edge capacity + v = parent[v] + + max_flow += path_flow # Add path flow to overall flow + + return max_flow + +if __name__ == "__main__": + 
V = int(input("Enter the number of vertices: ")) # Input the number of vertices
+    print("Enter the adjacency matrix (use 0 for no connection):")
+    graph = []
+    for _ in range(V):
+        row = list(map(int, input().split())) # Input each row of the adjacency matrix
+        graph.append(row)
+
+    source = int(input("Enter the source vertex (0 to {}): ".format(V-1))) # Input source vertex
+    sink = int(input("Enter the sink vertex (0 to {}): ".format(V-1))) # Input sink vertex
+
+    max_flow = fordFulkerson(graph, source, sink) # Calculate max flow using Ford-Fulkerson
+    print("The maximum possible flow is", max_flow) # Output the result
diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Minimum_spanning_tree/README.md b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Minimum_spanning_tree/README.md
new file mode 100644
index 0000000000..962e243cc5
--- /dev/null
+++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Minimum_spanning_tree/README.md
@@ -0,0 +1,71 @@
+# Minimum Spanning Trees (MST)
+
+## What is a Minimum Spanning Tree?
+
+A **Minimum Spanning Tree (MST)** of a connected, undirected graph is a subset of its edges that connects all the vertices together, without any cycles and with the minimum possible total edge weight. In other words, an MST is a tree that spans all the vertices and minimizes the sum of the weights of the edges included in the tree.
+
+### Key Characteristics:
+
+1. **Spanning Tree**: A tree that includes all the vertices of the graph.
+2. **Minimum Weight**: The sum of the weights of the edges in the MST is the smallest possible compared to all other spanning trees.
+3. **Uniqueness**: If all edge weights are distinct, the MST is unique; otherwise, there may be multiple MSTs.
+
+---
+
+## Algorithms for Finding Minimum Spanning Trees
+
+### 1. **Kruskal's Algorithm**
+
+**Kruskal's Algorithm** is a greedy algorithm used to find the MST of a graph.
It works by sorting all the edges in non-decreasing order of their weights and adding edges one by one to the MST, ensuring that no cycles are formed. + +- **How It Works**: + 1. Sort all edges of the graph by their weight. + 2. Initialize a forest (a set of trees), where each vertex is a separate tree. + 3. For each edge in the sorted list, check if it forms a cycle with the spanning tree formed so far. + 4. If it doesn’t form a cycle, add it to the MST. + 5. Repeat until there are \(V - 1\) edges in the MST (where \(V\) is the number of vertices). + +- **Time Complexity**: \(O(E \log E)\) or \(O(E \log V)\), where \(E\) is the number of edges and \(V\) is the number of vertices. + +- **Use Case**: Kruskal’s algorithm is particularly useful in network design problems, such as: + - **Network Cabling**: Minimizing the cost of connecting different network nodes. + - **Transportation Networks**: Designing efficient road systems or pipelines. + - **Cluster Analysis**: Identifying groups in data sets. + +### 2. **Prim's Algorithm** + +**Prim's Algorithm** is another greedy method for finding the MST, starting from a single vertex and growing the tree by adding the least expensive edge from the tree to a vertex outside the tree. + +- **How It Works**: + 1. Initialize the MST with a single vertex (the starting point). + 2. While there are still vertices not included in the MST, select the edge with the minimum weight that connects a vertex in the MST to a vertex outside of it. + 3. Add this edge and the connected vertex to the MST. + 4. Repeat until all vertices are included. + +- **Time Complexity**: + - Using an adjacency matrix: \(O(V^2)\) + - Using a priority queue: \(O(E \log V)\) + +- **Use Case**: Prim’s algorithm is often used in scenarios like: + - **Network Design**: Similar to Kruskal’s, especially for dense graphs where it might be more efficient. + - **Minimum Cost Wiring**: Connecting multiple devices with the least amount of cable. 
+ - **Game Development**: Constructing terrains and networks in simulations. + +--- + +## Key Differences Between Algorithms: + +| Algorithm | Time Complexity | Use Case | +|-------------|-----------------------|-------------------------------------------------| +| **Kruskal's** | \(O(E \log E)\) | Efficient for sparse graphs, network design | +| **Prim's** | \(O(V^2)\) or \(O(E \log V)\) | Efficient for dense graphs, minimum cost wiring | + +--- + +## Conclusion + +**Minimum Spanning Trees (MST)** are fundamental structures in graph theory with a wide range of applications. **Kruskal's** and **Prim's Algorithms** are two prominent greedy algorithms used to find the MST of a graph efficiently. By mastering these algorithms, developers can address various optimization problems in networking, transportation, and resource allocation effectively. + +Understanding MST concepts and their implementations enhances algorithmic thinking and equips practitioners to tackle complex challenges across different domains. Mastering these algorithms lays a robust foundation for further exploration of advanced topics in graph theory and optimization techniques. 
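As a concrete illustration of the Kruskal steps described above, here is a minimal, self-contained sketch; the 4-vertex example graph and the function names are illustrative only, not part of the repository code:

```python
# Hypothetical sketch of Kruskal's algorithm: sort the edges, then use
# union-find with path compression to reject any edge that would form a cycle.
def kruskal(num_vertices, edges):
    parent = list(range(num_vertices))  # each vertex starts as its own tree

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    mst, total = [], 0
    for u, v, w in sorted(edges, key=lambda e: e[2]):  # non-decreasing weight
        root_u, root_v = find(u), find(v)
        if root_u != root_v:      # edge joins two different trees: no cycle
            parent[root_u] = root_v
            mst.append((u, v, w))
            total += w
    return mst, total

edges = [(0, 1, 10), (0, 2, 6), (0, 3, 5), (1, 3, 15), (2, 3, 4)]
mst, cost = kruskal(4, edges)
print(mst, cost)  # [(2, 3, 4), (0, 3, 5), (0, 1, 10)] 19
```

For a connected graph the loop always ends with exactly \(V - 1\) accepted edges, matching the stopping condition in the steps above.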
+
+---
diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Minimum_spanning_tree/kruskal.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Minimum_spanning_tree/kruskal.py
new file mode 100644
index 0000000000..6e32265acc
--- /dev/null
+++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Minimum_spanning_tree/kruskal.py
@@ -0,0 +1,64 @@
+class Graph:
+    def __init__(self, vertices):
+        self.V = vertices # Number of vertices in the graph
+        self.graph = [] # List to store the edges
+
+    def add_edge(self, u, v, w):
+        # Function to add an edge to the graph
+        self.graph.append([u, v, w]) # Append edge as a list [u, v, weight]
+
+    def find(self, parent, i):
+        # Function to find the parent of an element i
+        if parent[i] != i:
+            parent[i] = self.find(parent, parent[i]) # Path compression
+        return parent[i]
+
+    def union(self, parent, rank, x, y):
+        # Function to perform union of two subsets x and y
+        if rank[x] < rank[y]:
+            parent[x] = y # Make y the parent of x
+        elif rank[x] > rank[y]:
+            parent[y] = x # Make x the parent of y
+        else:
+            parent[y] = x # Make x the parent of y
+            rank[x] += 1 # Increase rank of x
+
+    def kruskal_mst(self):
+        result = [] # To store the resultant MST
+        i, e = 0, 0 # Initialize variables for the current edge and the number of edges in MST
+        self.graph.sort(key=lambda item: item[2]) # Sort edges based on their weight
+        parent = list(range(self.V)) # Create a parent list for union-find
+        rank = [0] * self.V # Create a rank list for union-find
+
+        # Loop until V-1 edges are in the MST; also stop if the edges run out
+        # (e.g. for a disconnected graph) to avoid indexing past the edge list
+        while e < self.V - 1 and i < len(self.graph):
+            u, v, w = self.graph[i] # Get the next edge
+            i += 1
+            x = self.find(parent, u) # Find the parent of u
+            y = self.find(parent, v) # Find the parent of v
+            if x != y: # If they belong to different sets
+                e += 1 # Increase the count of edges in MST
+                result.append([u, v, w]) # Add this edge to the result
+                self.union(parent, rank, x, y) # Union the sets
+
+        # Calculate the
total cost of the MST + minimum_cost = sum(weight for _, _, weight in result) + print("Edges in the constructed MST") + for u, v, weight in result: + print(f"{u} -- {v} == {weight}") # Print each edge + print("Minimum Spanning Tree Cost:", minimum_cost) # Print the total cost of MST + +def main(): + vertices = int(input("Enter the number of vertices: ")) # Input number of vertices + g = Graph(vertices) # Create a new graph instance + edges = int(input("Enter the number of edges: ")) # Input number of edges + print("Enter each edge in the format 'u v w' where u and v are vertices and w is the weight:") + + for _ in range(edges): + u, v, w = map(int, input().split()) # Input each edge + g.add_edge(u, v, w) # Add the edge to the graph + + g.kruskal_mst() # Find and print the MST + +if __name__ == '__main__': + main() # Run the main function diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Minimum_spanning_tree/prim.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Minimum_spanning_tree/prim.py new file mode 100644 index 0000000000..efe38604bf --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Minimum_spanning_tree/prim.py @@ -0,0 +1,58 @@ +import sys + +class Graph(): + def __init__(self, vertices): + # Initialize the graph with the given number of vertices + self.V = vertices + self.graph = [[0 for _ in range(vertices)] for _ in range(vertices)] # Adjacency matrix + + def printMST(self, parent): + # Function to print the constructed MST + print("Edge \tWeight") + for i in range(1, self.V): + # Print each edge and its weight + print(f"{parent[i]} - {i} \t{self.graph[i][parent[i]]}") + + def minKey(self, key, mstSet): + # Function to find the vertex with the minimum key value not included in the MST + min_value = sys.maxsize # Initialize minimum value to infinity + min_index = 0 # Initialize index of the minimum key + for v in range(self.V): + # Update min_value and min_index if a 
smaller key is found
+            if key[v] < min_value and not mstSet[v]:
+                min_value = key[v]
+                min_index = v
+        return min_index
+
+    def primMST(self):
+        # Function to construct the MST using Prim's algorithm
+        key = [sys.maxsize] * self.V # Initialize all keys to infinity
+        parent = [None] * self.V # Array to store the constructed MST
+        key[0] = 0 # Make the first vertex the root
+        mstSet = [False] * self.V # To keep track of vertices included in the MST
+        parent[0] = -1 # First node is the root of the MST
+
+        # The loop runs V times, adding one vertex to the MST in each iteration
+        for _ in range(self.V):
+            # Get the vertex with the minimum key value
+            u = self.minKey(key, mstSet)
+            mstSet[u] = True # Include the vertex in the MST
+
+            # Update the key value and parent index of the adjacent vertices
+            for v in range(self.V):
+                # Only update the key if the edge weight is less than the current key value
+                if self.graph[u][v] > 0 and not mstSet[v] and key[v] > self.graph[u][v]:
+                    key[v] = self.graph[u][v]
+                    parent[v] = u # Update parent to the current vertex
+
+        self.printMST(parent) # Print the resulting MST
+
+if __name__ == '__main__':
+    vertices = int(input("Enter the number of vertices: ")) # Input number of vertices
+    g = Graph(vertices) # Create a new graph instance
+    print("Enter the adjacency matrix:") # Prompt for the adjacency matrix
+    for i in range(vertices):
+        row = list(map(int, input().split())) # Input each row of the adjacency matrix
+        g.graph[i] = row # Set the row in the graph
+
+    g.primMST() # Call the Prim's algorithm function to find the MST
diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/README.md b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/README.md
new file mode 100644
index 0000000000..46300bf5a2
--- /dev/null
+++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/README.md
@@ -0,0 +1,75 @@
+# Design and Analysis of Algorithms 🧠
+
+## Overview
+
+Algorithm design is the process of creating efficient
procedures for solving problems, while analysis involves evaluating their performance regarding time and space complexity. + +### Key Characteristics + +- **Efficiency**: Time and space requirements. +- **Correctness**: Accuracy of output. +- **Scalability**: Performance with increased input size. + +## ⏲️📏 Time and Space Complexity + +**Time Complexity** refers to the amount of computational time an algorithm takes to complete as a function of the input size. It is expressed using asymptotic notations (Big O, Omega, Theta). + +**Space Complexity** refers to the amount of memory an algorithm uses relative to the input size, including both the auxiliary space (temporary space used by the algorithm) and space for the input data. + +### ⚖️ Trade-offs + +- **Time vs. Space**: Some algorithms can be optimized for faster execution at the expense of higher memory usage (e.g., caching results). + +- **Memory Efficiency**: Reducing space usage can lead to slower execution times, particularly in recursive algorithms or those using extensive data structures. + +Finding the right balance between time and space complexity is crucial for optimizing algorithms, especially in resource-constrained environments. + +## 📊 Asymptotic Notations + +Asymptotic notations describe the limiting behavior of algorithms, providing a way to express their time and space complexity as input size grows: + +- **Big O Notation (O)**: Represents the upper bound of an algorithm's complexity, indicating the worst-case scenario (e.g., O(n), O(n²)). + +- **Omega Notation (Ω)**: Represents the lower bound, indicating the best-case scenario (e.g., Ω(n), Ω(log n)). + +- **Theta Notation (Θ)**: Represents a tight bound, indicating that an algorithm’s complexity grows at a rate bounded both above and below (e.g., Θ(n), Θ(n log n)). + +## 🔍 Types of Algorithms + +- **Greedy Algorithms**: Make locally optimal choices for global solutions (e.g. Activity Selection). 
+
+- **Dynamic Programming**: Solve problems by breaking them into simpler subproblems (e.g., Fibonacci, 0/1 knapsack).
+
+- **Divide and Conquer**: Split problems into smaller parts, solve, and combine results (e.g., Merge Sort, Binary Search).
+
+- **Backtracking**: Build solutions incrementally and abandon if constraints are violated (e.g., N-Queens).
+
+- **Branch and Bound**: Systematic exploration of candidate solutions using a tree structure (e.g., the 8-puzzle).
+
+- **All-Pairs Shortest Path**: Find shortest paths between all vertex pairs (e.g., Floyd-Warshall).
+
+- **Single Source Shortest Path**: Find shortest paths from one source vertex to others (e.g., Dijkstra's, Bellman-Ford).
+
+- **Graph Traversal**: Visit all nodes systematically (e.g., DFS, BFS).
+
+- **Minimum Spanning Trees (MST)**: Connect all vertices with minimum edge weight (e.g., Prim's, Kruskal's).
+
+- **Maximum Flow**: Determine the maximum flow in a network (e.g., Ford-Fulkerson).
+
+## 📌 **Importance of Design and Analysis of Algorithms (DAA)**
+
+- **Enhances Efficiency**: Develops algorithms that save time and resources, crucial for managing large data sets.
+
+- **Facilitates Performance Analysis**: Provides tools to assess time and space complexity, enabling the selection of optimal algorithms for specific tasks.
+
+- **Lays the Foundation for Advanced Studies**: Essential for understanding advanced topics in computer science, including AI and data structures.
+
+- **Applicable Across Diverse Fields**: Algorithms play a vital role in various domains, from finance to healthcare, showcasing their versatility and significance.
+
+- **Boosts Career Opportunities**: Proficiency in DAA is often a key requirement for technical roles, enhancing career prospects in the tech industry.
+
+## Conclusion
+
+Understanding algorithm design, analysis, time and space complexity, and asymptotic notations is crucial for efficient problem-solving in computer science and software development.
🖥️💡
+
+---
diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Single_Source_Shortest_path_problems/README.md b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Single_Source_Shortest_path_problems/README.md
new file mode 100644
index 0000000000..ef179953d9
--- /dev/null
+++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Single_Source_Shortest_path_problems/README.md
@@ -0,0 +1,70 @@
+# Single Source Shortest Path
+
+## What is Single Source Shortest Path?
+
+The **Single Source Shortest Path (SSSP)** problem involves finding the shortest paths from a source vertex to all other vertices in a weighted graph. This problem is essential in various applications, such as routing and navigation, where determining the quickest route is necessary. The graph can be directed or undirected, and the weights on the edges can represent distances, costs, or any other metric.
+
+### Key Characteristics:
+
+1. **Source Vertex**: The starting point from which shortest paths are calculated.
+2. **Shortest Path**: The path with the least total weight from the source to any other vertex.
+3. **Weighted Graph**: A graph in which edges have weights that can represent costs or distances.
+
+---
+
+## Algorithms for Single Source Shortest Path
+
+### 1. **Dijkstra's Algorithm**
+
+**Dijkstra's Algorithm** is a widely used method for solving the SSSP problem in graphs with non-negative edge weights. It employs a greedy approach, continually selecting the vertex with the smallest known distance from the source to explore its neighbors.
+
+- **How It Works**:
+  1. Initialize the distance to the source as zero and to all other vertices as infinity.
+  2. Create a priority queue to keep track of vertices to explore.
+  3. While the queue is not empty:
+     - Extract the vertex with the minimum distance.
+     - Update the distances to its adjacent vertices if a shorter path is found.
+     - Repeat until all vertices have been processed.
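The steps above can be sketched with Python's `heapq` as the priority queue; the adjacency-list format and the small example graph are assumptions made for illustration only:

```python
import heapq

# Hypothetical sketch of Dijkstra's algorithm on an adjacency list of the
# form {vertex: [(neighbor, weight), ...]}; all weights must be non-negative.
def dijkstra(adj, source):
    dist = {v: float('inf') for v in adj}
    dist[source] = 0
    pq = [(0, source)]                        # (known distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)              # extract minimum-distance vertex
        if d > dist[u]:
            continue                          # stale entry, already improved
        for v, w in adj[u]:
            if d + w < dist[v]:               # shorter path found: relax the edge
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

adj = {0: [(1, 4), (2, 1)], 1: [(3, 1)], 2: [(1, 2), (3, 5)], 3: []}
print(dijkstra(adj, 0))  # {0: 0, 1: 3, 2: 1, 3: 4}
```

Instead of deleting outdated queue entries, this sketch simply skips any popped entry whose distance is stale, which keeps the code short at the cost of a slightly larger heap.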
+ +- **Time Complexity**: \(O((V + E) \log V)\) when using a priority queue with an adjacency list, where \(V\) is the number of vertices and \(E\) is the number of edges. + +- **Use Case**: Dijkstra's algorithm is particularly useful in: + - **GPS Navigation Systems**: Finding the shortest driving routes. + - **Network Routing Protocols**: Optimizing data transmission paths in networks. + - **Game Development**: Implementing AI for pathfinding in dynamic environments. + +### 2. **Bellman-Ford Algorithm** + +**Bellman-Ford Algorithm** is another algorithm for solving the SSSP problem, capable of handling graphs with negative edge weights. It iteratively relaxes all edges, ensuring that the shortest paths are correctly identified even when negative weights are involved. + +- **How It Works**: + 1. Initialize the distance to the source as zero and to all other vertices as infinity. + 2. For each vertex, iterate through all edges and update the distances. + 3. Repeat this process \(V - 1\) times (where \(V\) is the number of vertices). + 4. Optionally, check for negative weight cycles by trying to relax the edges one more time. + +- **Time Complexity**: \(O(V \cdot E)\) + +- **Use Case**: The Bellman-Ford algorithm is effective for: + - **Detecting Negative Cycles**: Identifying arbitrage opportunities in financial markets. + - **Routing Algorithms**: Applications in networks where negative weights may occur, such as adjusting routes based on penalties. + - **Social Network Analysis**: Analyzing connections with potential negative impacts. 
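The relaxation procedure described above can be sketched directly on an edge list; the example edges (including one negative weight) are illustrative only:

```python
# Hypothetical sketch of Bellman-Ford: relax every edge V-1 times, then make
# one extra pass; any further improvement signals a negative-weight cycle.
def bellman_ford(num_vertices, edges, source):
    dist = [float('inf')] * num_vertices
    dist[source] = 0
    for _ in range(num_vertices - 1):         # relax all edges V-1 times
        for u, v, w in edges:
            if dist[u] != float('inf') and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                     # detection pass
        if dist[u] != float('inf') and dist[u] + w < dist[v]:
            return None                       # negative cycle reachable from source
    return dist

edges = [(0, 1, 4), (0, 2, 5), (1, 2, -3), (2, 3, 4)]
print(bellman_ford(4, edges, 0))  # [0, 4, 1, 5]
```

Note how the negative edge (1, 2, -3) makes the path 0 → 1 → 2 cheaper than the direct edge 0 → 2, which Dijkstra's greedy selection would miss.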
+ +--- + +## Key Differences Between Algorithms: + +| Algorithm | Time Complexity | Edge Weights | Use Case | +|------------------|---------------------|----------------------|------------------------------------------------| +| **Dijkstra's** | \(O((V + E) \log V)\) | Non-negative only | GPS systems, network routing | +| **Bellman-Ford** | \(O(V \cdot E)\) | Can include negatives | Detecting negative cycles, financial markets | + +--- + +## Conclusion + +**Single Source Shortest Path (SSSP)** is a fundamental concept in graph theory with critical applications in routing, navigation, and network design. **Dijkstra's** and **Bellman-Ford Algorithms** provide robust solutions to this problem, each suited to different types of graphs and scenarios. + +Mastering these algorithms equips developers to effectively solve a variety of optimization challenges, enhancing their algorithmic thinking and problem-solving capabilities. Understanding SSSP is essential for advancing in areas such as computer science, operations research, and artificial intelligence. 
+ +--- diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Single_Source_Shortest_path_problems/bellman_ford.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Single_Source_Shortest_path_problems/bellman_ford.py new file mode 100644 index 0000000000..3a9d07911b --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Single_Source_Shortest_path_problems/bellman_ford.py @@ -0,0 +1,49 @@ +class Graph: + def __init__(self, vertices): + # Initialize the graph with the given number of vertices + self.V = vertices + self.graph = [] # List to store the edges of the graph + + def addEdge(self, u, v, w): + # Add an edge to the graph represented as a tuple (u, v, weight) + self.graph.append((u, v, w)) + + def printArr(self, dist): + # Print the distances from the source to each vertex + print("Vertex Distance from Source") + for i in range(self.V): + print(f"{i}\t\t{dist[i]}") + + def BellmanFord(self, src): + # Function to implement the Bellman-Ford algorithm + dist = [float("Inf")] * self.V # Initialize distances to all vertices as infinity + dist[src] = 0 # Distance to the source vertex is 0 + + # Relax all edges |V| - 1 times + for _ in range(self.V - 1): + for u, v, w in self.graph: + # Update distance if a shorter path is found + if dist[u] != float("Inf") and dist[u] + w < dist[v]: + dist[v] = dist[u] + w + + # Check for negative weight cycles + for u, v, w in self.graph: + if dist[u] != float("Inf") and dist[u] + w < dist[v]: + print("Graph contains negative weight cycle") + return + + # Print the computed distances + self.printArr(dist) + +if __name__ == '__main__': + vertices = int(input("Enter the number of vertices: ")) # Input the number of vertices + g = Graph(vertices) # Create a new graph instance + + edges = int(input("Enter the number of edges: ")) # Input the number of edges + for _ in range(edges): + # Input each edge in the format (u v weight) + u, v, w = map(int, input("Enter edge 
(u v weight): ").split()) + g.addEdge(u, v, w) # Add the edge to the graph + + src = int(input("Enter the source vertex: ")) # Input the source vertex for distance calculation + g.BellmanFord(src) # Call the Bellman-Ford algorithm to compute distances diff --git a/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Single_Source_Shortest_path_problems/dijkstra.py b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Single_Source_Shortest_path_problems/dijkstra.py new file mode 100644 index 0000000000..f6e1917e03 --- /dev/null +++ b/Algorithms_and_Data_Structures/Design_and_Analysis_of_Algorithms/Single_Source_Shortest_path_problems/dijkstra.py @@ -0,0 +1,56 @@ +import sys + +class Graph(): + def __init__(self, vertices): + # Initialize the graph with the number of vertices + self.V = vertices + # Create an adjacency matrix initialized to 0 + self.graph = [[0 for _ in range(vertices)] for _ in range(vertices)] + + def printSolution(self, dist): + # Print the shortest distances from the source vertex + print("Vertex \tDistance from Source") + for node in range(self.V): + print(f"{node} \t {dist[node]}") + + def minDistance(self, dist, sptSet): + # Find the vertex with the minimum distance from the set of vertices not yet processed + min_value = sys.maxsize + min_index = 0 + for u in range(self.V): + if dist[u] < min_value and not sptSet[u]: + min_value = dist[u] + min_index = u + return min_index + + def dijkstra(self, src): + # Implement Dijkstra's algorithm + dist = [sys.maxsize] * self.V # Initialize distances to all vertices as infinity + dist[src] = 0 # Distance to the source vertex is 0 + sptSet = [False] * self.V # To track vertices included in the shortest path tree + + # Loop through all vertices + for _ in range(self.V): + # Get the vertex with the minimum distance from the unvisited set + x = self.minDistance(dist, sptSet) + sptSet[x] = True # Mark the vertex as processed + + # Update the distance value of the neighboring 
vertices of the picked vertex + for y in range(self.V): + if self.graph[x][y] > 0 and not sptSet[y] and dist[y] > dist[x] + self.graph[x][y]: + dist[y] = dist[x] + self.graph[x][y] + + # Print the calculated shortest distances + self.printSolution(dist) + +if __name__ == "__main__": + vertices = int(input("Enter the number of vertices: ")) # Input for number of vertices + g = Graph(vertices) # Create a new graph instance + print("Enter the adjacency matrix (use 0 for no direct path):") + for i in range(vertices): + # Input the adjacency matrix row by row + row = list(map(int, input().split())) + g.graph[i] = row # Set the row in the graph + + source = int(input("Enter the source vertex: ")) # Input the source vertex + g.dijkstra(source) # Call Dijkstra's algorithm to find the shortest paths