---
id: time-comp
title: "Time Complexity"
author: Darren Yao, Benjamin Qi
description: Measuring how long your algorithm takes to run in terms of the input size.
---
## Complexity Calculations

The following code is $O(n)$, because the loop runs $n$ iterations:
```cpp
int i = 0;
while (i < n) {
	// constant time code here
	i++;
}
```
Because we ignore constant factors and lower order terms, the following examples are also $O(n)$:
```cpp
for (int i = 1; i <= 5*n + 17; i++) {
	// constant time code here
}
```
```cpp
for (int i = 1; i <= n + 457737; i++) {
	// constant time code here
}
```
We can find the time complexity of multiple loops by multiplying together the time complexities of each loop. This example is $O(nm)$, because the outer loop runs $O(n)$ iterations and the inner loop $O(m)$.
```cpp
for (int i = 1; i <= n; i++) {
	for (int j = 1; j <= m; j++) {
		// constant time code here
	}
}
```
In this example, the outer loop runs $O(n)$ iterations, and the inner loop runs anywhere between $1$ and $n$ iterations, depending on $i$. Since Big O notation gives an upper bound on the running time, we take the maximum of $n$ for the inner loop. Thus, this code is $O(n^2)$.
```cpp
for (int i = 1; i <= n; i++) {
	for (int j = i; j <= n; j++) {
		// constant time code here
	}
}
```
If an algorithm contains multiple blocks, then its time complexity is the worst time complexity out of any block. For example, the following code is $O(n^2)$.
```cpp
for (int i = 1; i <= n; i++) {
	for (int j = 1; j <= n; j++) {
		// constant time code here
	}
}
for (int i = 1; i <= n + 58834; i++) {
	// more constant time code here
}
```
The following code is $O(n^2 + nm)$, because it consists of two blocks of complexity $O(n^2)$ and $O(nm)$, and neither term is of lower order than the other (we don't know the relationship between $n$ and $m$).
```cpp
for (int i = 1; i <= n; i++) {
	for (int j = 1; j <= n; j++) {
		// constant time code here
	}
}
for (int i = 1; i <= n; i++) {
	for (int j = 1; j <= m; j++) {
		// more constant time code here
	}
}
```
## Common Complexities and Constraints
The following are the time complexities of some common algorithms and data structures:
- Mathematical formulas that just calculate an answer: $O(1)$
- Unordered set/map: $O(1)$ per operation on average
- Binary search: $O(\log n)$
- Ordered set/map or priority queue: $O(\log n)$ per operation
- Prime factorization of an integer, or checking primality or compositeness of an integer naively: $O(\sqrt{n})$
- Reading in $n$ items of input: $O(n)$
- Iterating through an array or a list of $n$ elements: $O(n)$
- Sorting: usually $O(n \log n)$ for default sorting algorithms (e.g. mergesort, which is used by Java's `Collections.sort` and `Arrays.sort` on objects)
- Java's `Arrays.sort` on primitives uses a variant of quicksort, which is $O(n^2)$ in the worst case
- See "Introduction to Data Structures" for details.
- Iterating through all subsets of size $k$ of the input elements: $O(n^k)$. For example, iterating through all triplets is $O(n^3)$.
- Iterating through all subsets: $O(2^n)$
- Iterating through all permutations: $O(n!)$
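As a concrete illustration of the $O(2^n)$ bound, subsets are commonly enumerated with bitmasks: each integer from $0$ to $2^n - 1$ encodes one subset, where bit $i$ indicates whether the $i$-th element is included. A minimal sketch (the `allSubsets` helper is illustrative, not part of this module):

```cpp
#include <vector>

// enumerates every subset of a, returning each subset as a vector of elements
std::vector<std::vector<int>> allSubsets(const std::vector<int>& a) {
	int n = a.size();
	std::vector<std::vector<int>> subsets;
	// each mask from 0 to 2^n - 1 encodes one subset:
	// bit i set means a[i] is included
	for (int mask = 0; mask < (1 << n); mask++) {
		std::vector<int> cur;
		for (int i = 0; i < n; i++) {
			if (mask & (1 << i)) { cur.push_back(a[i]); }
		}
		subsets.push_back(cur);
	}
	return subsets;
}
```

Each of the $2^n$ masks is decoded in $O(n)$, so the full enumeration actually takes $O(2^n \cdot n)$ time.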
Here are conservative upper bounds on the value of $n$ for each time complexity. You can probably get away with more than this, but this should allow you to quickly check whether an algorithm is viable.