Performance of Algorithms
I am Jyotiprakash, a deeply driven computer systems engineer, software developer, teacher, and philosopher. With a decade of professional experience, I have contributed to various cutting-edge software products in network security, mobile apps, and healthcare software at renowned companies like Oracle, Yahoo, and Epic. My academic journey has taken me to prestigious institutions such as the University of Wisconsin-Madison and BITS Pilani in India, where I consistently ranked among the top of my class.
At my core, I am a computer enthusiast with a profound interest in understanding the intricacies of computer programming. My skills are not limited to application programming in Java; I have also delved deeply into computer hardware, learning about various architectures, low-level assembly programming, Linux kernel implementation, and writing device drivers. The contributions of Linus Torvalds, Ken Thompson, and Dennis Ritchie—who revolutionized the computer industry—inspire me. I believe that real contributions to computer science are made by mastering all levels of abstraction and understanding systems inside out.
In addition to my professional pursuits, I am passionate about teaching and sharing knowledge. I have spent two years as a teaching assistant at UW Madison, where I taught complex concepts in operating systems, computer graphics, and data structures to both graduate and undergraduate students. Currently, I am an assistant professor at KIIT, Bhubaneswar, where I continue to teach computer science to undergraduate and graduate students. I am also working on writing a few free books on systems programming, as I believe in freely sharing knowledge to empower others.
In the rapidly evolving world of technology, efficient algorithms are the backbone of performance-critical applications. The effectiveness of an algorithm can dramatically influence the speed, scalability, and responsiveness of a system. Whether it’s for sorting large datasets, searching through massive amounts of data, or optimizing complex processes, the choice of algorithm has a significant impact on the overall performance.
Understanding algorithm performance involves analyzing their time and space complexities. The time complexity, typically expressed using Big-O notation, gives us a high-level understanding of the algorithm's efficiency as the input size grows. For example, a sorting algorithm with a time complexity of \(O(n \log n)\) will generally perform better than one with a time complexity of \(O(n^2)\) for large datasets.
Mathematically, if we consider an algorithm with an input size of $n$, the time complexity $T(n)$ can be expressed as:
$$T(n) = O(f(n))$$
where $f(n)$ represents the growth rate of the algorithm. This growth rate helps in comparing the efficiency of different algorithms and choosing the most appropriate one for a given problem.
Space complexity, on the other hand, measures the amount of memory an algorithm requires in relation to the input size. An algorithm with a space complexity of $O(1)$ uses constant space, whereas an algorithm with a space complexity of $O(n)$ uses space proportional to the input size.
Consider an example where we compare two algorithms for a search problem. Algorithm A has a time complexity of $O(n)$, while Algorithm B has a time complexity of \(O(\log n)\). For small inputs, the performance difference might be negligible. However, as the input size increases, Algorithm B will significantly outperform Algorithm A.
Asymptotic Notations
Asymptotic notations are mathematical tools used to describe the running time or space requirements of an algorithm in terms of the input size $n$. The most commonly used asymptotic notations are Big-O ($O$), Omega (\(\Omega\)), and Theta (\(\Theta\)) notations.
Big-O Notation ($O$)
Big-O notation provides an upper bound on the growth rate of a function. It describes the worst-case scenario, indicating the maximum amount of time or space an algorithm will require as the input size grows.
Definition:
A function $f(n)$ is said to be $O(g(n))$ if there exist positive constants $c$ and \(n_0\) such that for all \(n \geq n_0\),
$$0 \leq f(n) \leq c \cdot g(n)$$
In other words, $f(n)$ does not grow faster than $g(n)$ up to a constant factor.
Omega Notation (\(\Omega\))
Omega notation provides a lower bound on the growth rate of a function. It describes the best-case scenario, indicating the minimum amount of time or space an algorithm will require as the input size grows.
Definition:
A function $f(n)$ is said to be \(\Omega(g(n))\) if there exist positive constants $c$ and \(n_0\) such that for all \(n \geq n_0\),
$$0 \leq c \cdot g(n) \leq f(n)$$
In other words, $f(n)$ grows at least as fast as $g(n)$ up to a constant factor.
Theta Notation (\(\Theta\))
Theta notation provides a tight bound on the growth rate of a function. It describes both the upper and lower bounds, indicating that the algorithm's running time or space requirement grows at the same rate as the function $g(n)$.
Definition:
A function $f(n)$ is said to be \(\Theta(g(n))\) if there exist positive constants \(c_1\), \(c_2\), and \(n_0\) such that for all \(n \geq n_0\),
$$0 \leq c_1 \cdot g(n) \leq f(n) \leq c_2 \cdot g(n)$$
In other words, $f(n)$ grows at the same rate as $g(n)$ up to constant factors.
Explanation
Big-O ($O$) notation is used to describe the worst-case complexity of an algorithm. It provides an upper limit on the time or space required, ensuring that the algorithm will not exceed this bound regardless of the input size.
Omega (\(\Omega\)) notation is used to describe the best-case complexity of an algorithm. It provides a lower limit, indicating the minimum time or space required for certain input sizes.
Theta (\(\Theta\)) notation is used to describe a tight bound on complexity. It provides both an upper and a lower limit, ensuring that the algorithm's running time or space requirement will fall within these bounds for large input sizes. (A tight bound characterizes the growth rate exactly; it is not the same thing as average-case analysis.)
Examples of Asymptotic Notations
Example 1: Proving Big-O ($O$)
Consider the function \(f(n) = 3n^2 + 2n \log n + 5\).
To prove $f(n)$ is \(O(n^2)\):
We need to find constants $c$ and \(n_0\) such that for all \(n \geq n_0\),
$$0 \leq f(n) \leq c \cdot n^2$$
Start with the definition of $f(n)$:
$$f(n) = 3n^2 + 2n \log n + 5$$
For large $n$, the term \(3n^2\) will dominate the terms \(2n \log n\) and $5$. Hence, we can focus on \(3n^2\):
$$3n^2 + 2n \log n + 5 \leq 3n^2 + n^2 + n^2 \quad \text{for} \quad n \geq 4$$
$$= 5n^2$$
since \(2n \log n \leq n^2\) and \(5 \leq n^2\) both hold for all \(n \geq 4\). Therefore, we can choose \(c = 5\) and \(n_0 = 4\). Thus,
$$f(n) \leq 5n^2 \quad \text{for all} \quad n \geq 4$$
Hence, \(f(n) = 3n^2 + 2n \log n + 5\) is \(O(n^2)\).
Example 2: Proving Omega (\(\Omega\))
Consider the function \(f(n) = 3n^2 + 2n \log n + 5\).
To prove $f(n)$ is \(\Omega(n^2)\):
We need to find constants $c$ and \(n_0\) such that for all \(n \geq n_0\),
$$0 \leq c \cdot n^2 \leq f(n)$$
Start with the definition of $f(n)$:
$$f(n) = 3n^2 + 2n \log n + 5$$
For large $n$, the term \(3n^2\) will dominate the terms \(2n \log n\) and $5$. Hence, we can focus on \(3n^2\):
$$3n^2 \leq 3n^2 + 2n \log n + 5 \quad \text{for all} \quad n \geq 1$$
Therefore, we can choose \(c = 3\) and \(n_0 = 1\). Thus,
$$3n^2 \leq f(n) \quad \text{for all} \quad n \geq 1$$
Hence, \(f(n) = 3n^2 + 2n \log n + 5\) is \(\Omega(n^2)\).
Example 3: Proving Theta (\(\Theta\))
Consider the function \(f(n) = 3n^2 + 2n \log n + 5\).
To prove $f(n)$ is \(\Theta(n^2)\):
We need to find constants \(c_1\), \(c_2\), and \(n_0\) such that for all \(n \geq n_0\),
$$0 \leq c_1 \cdot n^2 \leq f(n) \leq c_2 \cdot n^2$$
Start with the definition of $f(n)$:
$$f(n) = 3n^2 + 2n \log n + 5$$
For the upper bound (Big-O proof):
- We already showed that \(f(n) \leq 5n^2\) for \(c_2 = 5\) and \(n_0 = 4\).
For the lower bound (Omega proof):
- We already showed that \(3n^2 \leq f(n)\) for \(c_1 = 3\) and \(n_0 = 1\).
Combining both results:
$$3n^2 \leq 3n^2 + 2n \log n + 5 \leq 5n^2 \quad \text{for all} \quad n \geq 4$$
Hence, \(f(n) = 3n^2 + 2n \log n + 5\) is \(\Theta(n^2)\).
Example 4: Proving Asymptotic Notations for a Logarithmic Function
Consider the function \(f(n) = n \log n + 3\).
To prove $f(n)$ is \(O(n \log n)\):
We need to find constants $c$ and \(n_0\) such that for all \(n \geq n_0\),
$$0 \leq f(n) \leq c \cdot n \log n$$
Start with the definition of $f(n)$:
$$f(n) = n \log n + 3$$
For large $n$, the term \(n \log n\) will dominate the term $3$. Hence, we can focus on \(n \log n\):
$$n \log n + 3 \leq n \log n + n \log n \quad \text{for} \quad n \geq 3$$
$$= 2n \log n$$
since \(3 \leq n \log n\) for all \(n \geq 3\). Therefore, we can choose \(c = 2\) and \(n_0 = 3\). Thus,
$$f(n) \leq 2n \log n \quad \text{for all} \quad n \geq 3$$
Hence, \(f(n) = n \log n + 3\) is \(O(n \log n)\).
To prove $f(n)$ is \(\Omega(n \log n)\):
We need to find constants $c$ and \(n_0\) such that for all \(n \geq n_0\),
$$0 \leq c \cdot n \log n \leq f(n)$$
Start with the definition of $f(n)$:
$$f(n) = n \log n + 3$$
For large $n$, the term \(n \log n\) will dominate the term $3$. Hence, we can focus on \(n \log n\):
$$n \log n \leq n \log n + 3 \quad \text{for all} \quad n \geq 1$$
Therefore, we can choose \(c = 1\) and \(n_0 = 1\). Thus,
$$n \log n \leq f(n) \quad \text{for all} \quad n \geq 1$$
Hence, \(f(n) = n \log n + 3\) is \(\Omega(n \log n)\).
To prove $f(n)$ is \(\Theta(n \log n)\):
We need to find constants \(c_1\), \(c_2\), and \(n_0\) such that for all \(n \geq n_0\),
$$0 \leq c_1 \cdot n \log n \leq f(n) \leq c_2 \cdot n \log n$$
Start with the definition of $f(n)$:
$$f(n) = n \log n + 3$$
For the upper bound (Big-O proof):
- We already showed that \(f(n) \leq 2n \log n\) for \(c_2 = 2\) and \(n_0 = 3\).
For the lower bound (Omega proof):
- We already showed that \(n \log n \leq f(n)\) for \(c_1 = 1\) and \(n_0 = 1\).
Combining both results:
$$n \log n \leq n \log n + 3 \leq 2n \log n \quad \text{for all} \quad n \geq 3$$
Hence, \(f(n) = n \log n + 3\) is \(\Theta(n \log n)\).
Example 5: Comparing Polynomial Functions
Let's compare different polynomial functions:
\(f(n) = n\)
\(g(n) = n^2\)
\(h(n) = n^3\)
Comparing $n$ and \(n^2\)
To prove $n$ is \(O(n^2)\):
We need to find constants $c$ and \(n_0\) such that for all \(n \geq n_0\),
$$0 \leq n \leq c \cdot n^2$$
Choose \(c = 1\) and \(n_0 = 1\). Thus,
$$n \leq n^2 \quad \text{for all} \quad n \geq 1$$
Hence, $n$ is \(O(n^2)\).
To prove \(n^2\) is \(\Omega(n)\):
We need to find constants $c$ and \(n_0\) such that for all \(n \geq n_0\),
$$0 \leq c \cdot n \leq n^2$$
Choose \(c = 1\) and \(n_0 = 1\). Thus,
$$n \leq n^2 \quad \text{for all} \quad n \geq 1$$
Hence, \(n^2\) is \(\Omega(n)\).
To prove $n$ is not \(\Theta(n^2)\):
- Since $n$ is \(O(n^2)\) but not \(\Omega(n^2)\) (for any constant \(c > 0\), the inequality \(c \cdot n^2 \leq n\) fails once \(n > 1/c\)), it cannot be \(\Theta(n^2)\).
Comparing \(n^2\) and \(n^3\)
To prove \(n^2\) is \(O(n^3)\):
We need to find constants $c$ and \(n_0\) such that for all \(n \geq n_0\),
$$0 \leq n^2 \leq c \cdot n^3$$
Choose \(c = 1\) and \(n_0 = 1\). Thus,
$$n^2 \leq n^3 \quad \text{for all} \quad n \geq 1$$
Hence, \(n^2\) is \(O(n^3)\).
To prove \(n^3\) is \(\Omega(n^2)\):
We need to find constants $c$ and \(n_0\) such that for all \(n \geq n_0\),
$$0 \leq c \cdot n^2 \leq n^3$$
Choose \(c = 1\) and \(n_0 = 1\). Thus,
$$n^2 \leq n^3 \quad \text{for all} \quad n \geq 1$$
Hence, \(n^3\) is \(\Omega(n^2)\).
To prove \(n^2\) is not \(\Theta(n^3)\):
- Since \(n^2\) is \(O(n^3)\) but not \(\Omega(n^3)\) (for any constant \(c > 0\), the inequality \(c \cdot n^3 \leq n^2\) fails once \(n > 1/c\)), it cannot be \(\Theta(n^3)\).
Example 6: Comparing \(n \log n\) and \(n^2\)
To prove \(n \log n\) is \(O(n^2)\):
We need to find constants $c$ and \(n_0\) such that for all \(n \geq n_0\),
$$0 \leq n \log n \leq c \cdot n^2$$
For large $n$, \(\log n\) grows slower than $n$. Choose \(c = 1\) and \(n_0 = 2\). Thus,
$$n \log n \leq n^2 \quad \text{for all} \quad n \geq 2$$
Hence, \(n \log n\) is \(O(n^2)\).
To prove \(n^2\) is \(\Omega(n \log n)\):
We need to find constants $c$ and \(n_0\) such that for all \(n \geq n_0\),
$$0 \leq c \cdot n \log n \leq n^2$$
For large $n$, \(n^2\) grows faster than \(n \log n\). Choose \(c = 1\) and \(n_0 = 2\). Thus,
$$n \log n \leq n^2 \quad \text{for all} \quad n \geq 2$$
Hence, \(n^2\) is \(\Omega(n \log n)\).
To prove \(n \log n\) is not \(\Theta(n^2)\):
- Since \(n \log n\) is \(O(n^2)\) but not \(\Omega(n^2)\) (the ratio \(n \log n / n^2 = \log n / n\) tends to 0), it cannot be \(\Theta(n^2)\).
Increasing Order of Complexity in Big-O Terms
\(\log n\)
$n$
\(n \log n\)
\(n^2\)
\(n^3\)
\(2^n\) (exponential)
$n!$ (factorial)
Motivation for Striving for Lower Complexity
In computer science and algorithm design, it is crucial to choose algorithms with the lowest possible complexity to ensure efficiency, scalability, and optimal performance. Here’s why:
\(\log n\):
Example: Binary search.
Performance: Extremely efficient, even for very large input sizes.
Motivation: Logarithmic time complexity means the algorithm’s execution time increases very slowly as the input size grows. This ensures that the algorithm remains fast even with significantly large datasets.
$n$:
Example: Linear search.
Performance: Directly proportional to the input size.
Motivation: Linear time complexity is still efficient for many practical applications. The execution time grows at a manageable rate with the input size.
\(n \log n\):
Example: Merge sort, quicksort.
Performance: Efficient for large datasets.
Motivation: Algorithms with \(n \log n\) complexity are suitable for sorting and other operations that need to handle large amounts of data effectively.
\(n^2\):
Example: Bubble sort, insertion sort (in the worst case).
Performance: Quadratic growth, which becomes impractical for large datasets.
Motivation: While manageable for small datasets, quadratic time complexity can be a significant bottleneck as the input size increases. Optimizing to \(n \log n\) or lower is preferred for better performance.
\(n^3\):
Example: Some matrix multiplication algorithms.
Performance: Cubic growth, which is typically inefficient for all but very small datasets.
Motivation: Cubic time complexity is generally impractical for most real-world applications. Algorithms with this complexity should be avoided or optimized if possible.
\(2^n\) (Exponential):
Example: Solving the traveling salesman problem using brute force.
Performance: Exponential growth, which is infeasible for even moderately sized datasets.
Motivation: Exponential time complexity leads to extremely long execution times as the input size grows. Finding approximate or heuristic solutions is often necessary to handle such problems efficiently.
$n!$ (Factorial):
Example: Generating all permutations of a set.
Performance: Factorial growth, which is impractical for all but the smallest datasets.
Motivation: Factorial time complexity grows faster than any other class listed here, making it unusable for practical purposes when dealing with anything beyond very small inputs. Striving for lower complexity is crucial.
What happens when n goes to infinity?
Insertion Sort (\(O(n^2)\))
Description: Insertion Sort is a simple and intuitive algorithm that builds the final sorted array one item at a time. It is efficient for small datasets or nearly sorted data.
Performance: For small values of $n$, such as 10 or 100, Insertion Sort can be quite fast because the overhead of the algorithm is minimal.
Merge Sort (\(O(n \log n)\))
Description: Merge Sort is a divide-and-conquer algorithm that divides the input array into two halves, recursively sorts them, and then merges the sorted halves.
Performance: For larger values of $n$, such as millions or billions, Merge Sort is significantly more efficient because it handles larger datasets with much better performance due to its lower time complexity.
Comparative Analysis
Let's illustrate the performance difference with an example:
Insertion Sort for small $n$:
For \(n = 100\):
Estimated operations: \(n^2 = 100^2 = 10,000\) (reading the \(O(n^2)\) bound as roughly \(n^2\) basic operations).
Execution time: Let's assume each operation takes 1 microsecond (\(10^{-6}\) seconds).
Total time: \(10,000 \times 10^{-6} = 0.01\) seconds (10 milliseconds).
Merge Sort for large $n$:
For \(n = 1,000,000,000\):
Estimated operations: \(n \log_2 n \approx 1,000,000,000 \times 30 = 30,000,000,000\) (since \(\log_2 1,000,000,000 \approx 30\)).
Execution time: Let's assume each operation takes 1 microsecond (\(10^{-6}\) seconds).
Total time: \(30,000,000,000 \times 10^{-6} = 30,000\) seconds (approximately 8.3 hours).
Motivation
For small $n$: Insertion Sort can be preferable because its simplicity and low overhead might make it faster for small datasets despite its higher theoretical time complexity. For example, sorting a small array of 100 elements in 10 milliseconds is quite efficient.
For large $n$: As the input size grows, the efficiency of the algorithm becomes critical. For $n$ in the billions, using Insertion Sort would be impractical:
For \(n = 1,000,000,000\), the estimated number of operations is \(n^2 = (10^9)^2 = 10^{18}\).
Assuming each operation takes 1 microsecond, the total time would be \(10^{18} \times 10^{-6} = 10^{12}\) seconds.
This translates to approximately 31,700 years!
In contrast, Merge Sort can handle the same dataset in just a few hours, making it vastly more practical for large-scale sorting tasks.
Choosing the right algorithm is crucial depending on the size of the input data:
For small datasets, simpler algorithms with higher theoretical complexity might perform adequately or even better due to lower overhead.
For large datasets, more efficient algorithms with lower time complexity are essential to ensure reasonable execution times.
Motivation for Space Complexity
In computer science, analyzing the space complexity of an algorithm is just as important as analyzing its time complexity. Space complexity refers to the amount of memory an algorithm uses in relation to the input size. Understanding space complexity is crucial for several reasons:
Resource Management: Efficient use of memory is critical in environments with limited resources, such as embedded systems, mobile devices, or systems with large datasets. Algorithms that use excessive memory can lead to resource exhaustion, causing programs to crash or slow down significantly.
Scalability: As applications handle increasingly large datasets, the memory requirements can grow rapidly. Analyzing space complexity helps ensure that algorithms can scale efficiently without consuming prohibitive amounts of memory.
Performance Optimization: Memory usage can impact the overall performance of an application. High memory consumption can lead to increased garbage collection, paging, or cache misses, which can degrade performance. Optimizing space complexity can help mitigate these issues.
Algorithm Design: Understanding space complexity aids in designing better algorithms. By being aware of the memory requirements, developers can make informed decisions about trade-offs between time and space efficiency, choosing the most appropriate data structures and approaches.
System Stability: In multi-user or multi-application environments, excessive memory usage by one algorithm can affect the performance and stability of other processes. Efficient algorithms help maintain overall system stability and performance.
Cost Efficiency: In cloud computing and data center environments, memory resources come with a cost. Efficient algorithms that use less memory can help reduce operational costs.
Examples
Example 1: Single Loop
#include <stdio.h>

void singleLoop(int n) {
    for (int i = 0; i < n; i++) {
        printf("%d ", i);
    }
}

int main() {
    int n = 10;
    singleLoop(n);
    return 0;
}
Analysis for Single Loop
Time Complexity:
Best Case: \(\Omega(n)\)
Average Case: \(\Theta(n)\)
Worst Case: $O(n)$
The time complexity is linear because the loop runs from 0 to \(n-1\), performing a constant time operation (printing) for each iteration.
Space Complexity:
Fixed Space: The function uses a constant amount of space for the integer i and the integer n in main.
Variable Space: There is no additional space required that depends on the input size n.
Total Space Complexity: $O(1)$
Example 2: Nested Loops with Array
#include <stdio.h>

void nestedLoopsWithArray(int n) {
    int arr[n][n]; /* C99 variable-length array, allocated on the stack */
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            arr[i][j] = i * j;
            printf("%d ", arr[i][j]);
        }
    }
}

int main() {
    int n = 3;
    nestedLoopsWithArray(n);
    return 0;
}
Analysis for Nested Loops with Array
Time Complexity:
Best Case: \(\Omega(n^2)\)
Average Case: \(\Theta(n^2)\)
Worst Case: \(O(n^2)\)
The time complexity is quadratic because there are two nested loops, each running from 0 to \(n-1\), resulting in \(n \times n = n^2\) iterations.
Space Complexity:
Fixed Space: The function uses a constant amount of space for the integers i, j, and the integer n in main.
Variable Space: The array arr requires \(n \times n\) space.
Total Space Complexity: \(O(n^2)\)
Example 3: Recursive Function
#include <stdio.h>

int factorial(int n) {
    if (n == 0 || n == 1)
        return 1;
    else
        return n * factorial(n - 1);
}

int main() {
    int n = 5;
    printf("Factorial of %d is %d\n", n, factorial(n));
    return 0;
}
Analysis for Recursive Function
Time Complexity:
Best Case: \(\Omega(n)\)
Average Case: \(\Theta(n)\)
Worst Case: $O(n)$
The time complexity is linear because the recursion depth is $n$, with each recursive call performing a constant time operation (multiplication).
Space Complexity:
Fixed Space: The function uses a constant amount of space for the integer n in main.
Variable Space: The recursion depth is $n$, so the call stack will use $O(n)$ space.
Total Space Complexity: $O(n)$
Example 4: Linear Search
#include <stdio.h>

int linearSearch(int arr[], int n, int x) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == x)
            return i;
    }
    return -1;
}

int main() {
    int arr[] = {2, 3, 4, 10, 40};
    int x = 10;
    int n = sizeof(arr) / sizeof(arr[0]);
    int result = linearSearch(arr, n, x);
    printf("Element is present at index %d\n", result);
    return 0;
}
Analysis for Linear Search
Time Complexity:
Best Case: \(\Omega(1)\)
Average Case: \(\Theta(n)\)
Worst Case: $O(n)$
The time complexity in the best case is constant when the element is found at the first position. In the average and worst case, it is linear because the loop may need to iterate through the entire array.
Space Complexity:
Fixed Space: The function uses a constant amount of space for the integer i, the integer x, and the integer n in main.
Variable Space: The array arr uses $O(n)$ space, where n is the size of the array.
Total Space Complexity: $O(n)$
Example 5: Binary Search
#include <stdio.h>

int binarySearch(int arr[], int l, int r, int x) {
    if (r >= l) {
        int mid = l + (r - l) / 2;
        if (arr[mid] == x)
            return mid;
        if (arr[mid] > x)
            return binarySearch(arr, l, mid - 1, x);
        return binarySearch(arr, mid + 1, r, x);
    }
    return -1;
}

int main() {
    int arr[] = {2, 3, 4, 10, 40};
    int x = 10;
    int n = sizeof(arr) / sizeof(arr[0]);
    int result = binarySearch(arr, 0, n - 1, x);
    printf("Element is present at index %d\n", result);
    return 0;
}
Analysis for Binary Search
Time Complexity:
Best Case: \(\Omega(1)\)
Average Case: \(\Theta(\log n)\)
Worst Case: \(O(\log n)\)
The time complexity in the best case is constant when the element is found at the middle of the array. In the average and worst case, it is logarithmic because the search space is halved in each recursive step.
Space Complexity:
Fixed Space: The function uses a constant amount of space for the integers mid, x, l, and r.
Variable Space: The array arr uses $O(n)$ space, where n is the size of the array. The recursion depth is \(\log n\), so the call stack will use \(O(\log n)\) space.
Total Space Complexity: \(O(n + \log n)\), which simplifies to $O(n)$
Example 6: Matrix Multiplication
#include <stdio.h>

void multiplyMatrices(int a[2][2], int b[2][2], int result[2][2]) {
    for (int i = 0; i < 2; i++) {
        for (int j = 0; j < 2; j++) {
            result[i][j] = 0;
            for (int k = 0; k < 2; k++) {
                result[i][j] += a[i][k] * b[k][j];
            }
        }
    }
}

int main() {
    int a[2][2] = {{1, 2}, {3, 4}};
    int b[2][2] = {{5, 6}, {7, 8}};
    int result[2][2];
    multiplyMatrices(a, b, result);
    printf("Result matrix is:\n");
    for (int i = 0; i < 2; i++) {
        for (int j = 0; j < 2; j++) {
            printf("%d ", result[i][j]);
        }
        printf("\n");
    }
    return 0;
}
Analysis for Matrix Multiplication
Time Complexity:
Best Case: \(\Omega(n^3)\)
Average Case: \(\Theta(n^3)\)
Worst Case: \(O(n^3)\)
The time complexity is cubic because there are three nested loops, each running from 0 to \(n-1\), resulting in \(n \times n \times n = n^3\) iterations for an \(n \times n\) matrix.
Space Complexity:
Fixed Space: The function uses a constant amount of space for the matrices a, b, and result.
Variable Space: There is no additional space required that depends on the input size n (since the matrices are of fixed size).
Total Space Complexity: $O(1)$