diff --git a/content/4_Silver/0_Silver_Complexity.md b/content/4_Silver/0_Silver_Complexity.md
index 1448798..8c6ae4b 100644
--- a/content/4_Silver/0_Silver_Complexity.md
+++ b/content/4_Silver/0_Silver_Complexity.md
@@ -5,31 +5,38 @@ author: Darren Yao
 order: 0
 ---
-In programming contests, there is a strict limit on program runtime. This means that in order to pass, your program needs to finish running within a certain timeframe. For USACO, this limit is 4 seconds for Java submissions. A conservative estimate for the number of operations the grading server can handle per second is $10^8$ (but could be closer to $5 \cdot 10^8$ given good constant factors).
+In programming contests, there is a strict limit on program runtime. This means that in order to pass, your program needs to finish running within a certain timeframe.
-# Big O Notation and Complexity Calculations
+
-We want a method of how many operations it takes to run each algorithm, in terms of the input size n. Fortunately, this can be done relatively easily using Big O notation, which expresses worst-case complexity as a function of n, as n gets arbitrarily large. Complexity is an upper bound for the number of steps an algorithm requires, as a function of the input size. In Big O notation, we denote the complexity of a function as O(f(n)), where f(n) is a function without constant factors or lower-order terms. We'll see some examples of how this works, as follows.
+For USACO, this limit is $4$ seconds for Java submissions. A conservative estimate for the number of operations the grading server can handle per second is $10^8$ (but could be closer to $5 \cdot 10^8$ given good constant factors).
-The following code is O(1), because it executes a constant number of operations.
-```
+## [Big O Notation](https://en.wikipedia.org/wiki/Big_O_notation) and Complexity Calculations
+
+We want a method of estimating how many operations it takes to run each algorithm, in terms of the input size $n$. Fortunately, this can be done relatively easily using **Big O notation**, which expresses worst-case complexity as a function of $n$ as $n$ gets arbitrarily large. Complexity is an upper bound for the number of steps an algorithm requires as a function of the input size. In Big O notation, we denote the complexity of a function as $O(f(n))$, where $f(n)$ is a function without constant factors or lower-order terms. We'll see some examples of how this works, as follows.
+
+The following code is $O(1)$, because it executes a constant number of operations.
+
+```cpp
 int a = 5;
 int b = 7;
 int c = 4;
 int d = a + b + c + 153;
 ```
-Input and output operations are also assumed to be O(1).
-In the following examples, we assume that the code inside the loops is O(1).
+Input and output operations are also assumed to be $O(1)$.
-The time complexity of loops is the number of iterations that the loop runs. For example, the following code examples are both O(n).
-```
+In the following examples, we assume that the code inside the loops is $O(1)$.
+
+The time complexity of loops is the number of iterations that the loop runs. For example, the following code examples are both $O(n)$.
+
+```cpp
 for(int i = 1; i <= n; i++){
 	// constant time code here
 }
 ```
-```
+```cpp
 int i = 0;
 while(i < n){
 	// constant time code here
@@ -37,22 +44,23 @@ while(i < n){
 	i++;
 }
 ```
-Because we ignore constant factors and lower order terms, the following examples are also O(n):
-```
+Because we ignore constant factors and lower-order terms, the following examples are also $O(n)$:
+
+```cpp
 for(int i = 1; i <= 5*n + 17; i++){
 	// constant time code here
 }
 ```
-```
+```cpp
 for(int i = 1; i <= n + 457737; i++){
 	// constant time code here
 }
 ```
-We can find the time complexity of multiple loops by multiplying together the time complexities of each loop. This example is O(nm), because the outer loop runs O(n) iterations and the inner loop O(m).
-```
+We can find the time complexity of multiple loops by multiplying together the time complexities of each loop. This example is $O(nm)$, because the outer loop runs $O(n)$ iterations and the inner loop $O(m)$.
+
+```cpp
 for(int i = 1; i <= n; i++){
 	for(int j = 1; j <= m; j++){
 		// constant time code here
@@ -60,8 +68,9 @@ for(int i = 1; i <= n; i++){
 }
 ```
-In this example, the outer loop runs O(n) iterations, and the inner loop runs anywhere between 1 and n iterations (which is a maximum of n). Since Big O notation calculates worst-case time complexity, we must take the factor of n from the inner loop. Thus, this code is O(n^2).
-```
+In this example, the outer loop runs $O(n)$ iterations, and the inner loop runs anywhere between $1$ and $n$ iterations (which is a maximum of $n$). Since Big O notation calculates worst-case time complexity, we must take the factor of $n$ from the inner loop. Thus, this code is $O(n^2)$.
+
+```cpp
 for(int i = 1; i <= n; i++){
 	for(int j = i; j <= n; j++){
 		// constant time code here
@@ -69,8 +78,9 @@ for(int i = 1; i <= n; i++){
 }
 ```
-If an algorithm contains multiple blocks, then its time complexity is the worst time complexity out of any block. For example, the following code is O(n^2).
-```
+If an algorithm contains multiple blocks, then its time complexity is the worst time complexity out of any block. For example, the following code is $O(n^2)$.
+
+```cpp
 for(int i = 1; i <= n; i++){
 	for(int j = 1; j <= n; j++){
 		// constant time code here
@@ -81,8 +91,9 @@ for(int i = 1; i <= n + 58834; i++){
 }
 ```
-The following code is O(n^2 + nm), because it consists of two blocks of complexity O(n^2) and O(nm), and neither of them is a lower order function with respect to the other.
-```
+The following code is $O(n^2 + nm)$, because it consists of two blocks of complexity $O(n^2)$ and $O(nm)$, and neither of them is a lower-order function with respect to the other.
+
+```cpp
 for(int i = 1; i <= n; i++){
 	for(int j = 1; j <= n; j++){
 		// constant time code here
@@ -95,37 +106,37 @@ for(int i = 1; i <= n; i++){
 }
 ```
-# Common Complexities and Constraints
+## Common Complexities and Constraints
+
 Complexity factors that come from some common algorithms and data structures are as follows:
-- Mathematical formulas that just calculate an answer: O(1)
-- Unordered set/map: O(1) per operation
-- Binary search: O(log n)
-- Ordered set/map or priority queue: O(log n) per operation
-- Prime factorization of an integer, or checking primality or compositeness of an integer: O(\sqrt{n})
-- Reading in n items of input: O(n)
-- Iterating through an array or a list of n elements: O(n)
-- Sorting: usually O(n log n) for default sorting algorithms (mergesort, for example Collections.sort or Arrays.sort on objects)
+- Mathematical formulas that just calculate an answer: $O(1)$
+- Unordered set/map: $O(1)$ per operation
+- Binary search: $O(\log n)$
+- Ordered set/map or priority queue: $O(\log n)$ per operation
+- Prime factorization of an integer, or checking primality or compositeness of an integer naively: $O(\sqrt{n})$
+- Reading in $n$ items of input: $O(n)$
+- Iterating through an array or a list of $n$ elements: $O(n)$
+- Sorting: usually $O(n \log n)$ for default sorting algorithms (mergesort, for example Collections.sort or Arrays.sort on objects)
   - the quicksort that Java's Arrays.sort uses on primitives can be quadratic on pathological worst-case data sets, so don't use it in CodeForces rounds
-- Iterating through all subsets of size k of the input elements: O(n^k). For example, iterating through all triplets is O(n^3).
-- Iterating through all subsets: O(2^n)
-- Iterating through all permutations: O(n!)
-
-
-Here are conservative upper bounds on the value of n for each time complexity. You can probably get away with more than this, but this should allow you to quickly check whether an algorithm is viable.
-
-- n | Possible complexities
-- n <= 10 | O(n!), O(n^7), O(n^6)
-- n <= 20 | O(2^n \cdot n), O(n^5)
-- n <= 80 | O(n^4)
-- n <= 400 | O(n^3)
-- n <= 7500 | O(n^2)
-- n <= 7 * 10^4 | O(n \sqrt n)
-- n <= 5 * 10^5 | O(n \log n)
-- n <= 5 * 10^6 | O(n)
-- n <= 10^18 | O(\log^2 n), O(\log n), O(1)
+- Iterating through all subsets of size $k$ of the input elements: $O(n^k)$. For example, iterating through all triplets is $O(n^3)$.
+- Iterating through all subsets: $O(2^n)$
+- Iterating through all permutations: $O(n!)$
+
+Here are conservative upper bounds on the value of $n$ for each time complexity. You can probably get away with more than this, but this should allow you to quickly check whether an algorithm is viable.
+
+$n$ | Possible complexities
+--- | ---
+$n \le 10$ | $O(n!)$, $O(n^7)$, $O(n^6)$
+$n \le 20$ | $O(2^n \cdot n)$, $O(n^5)$
+$n \le 80$ | $O(n^4)$
+$n \le 400$ | $O(n^3)$
+$n \le 7500$ | $O(n^2)$
+$n \le 7 \cdot 10^4$ | $O(n \sqrt n)$
+$n \le 5 \cdot 10^5$ | $O(n \log n)$
+$n \le 5 \cdot 10^6$ | $O(n)$
+$n \le 10^{18}$ | $O(\log^2 n)$, $O(\log n)$, $O(1)$
+
+## Other Resources
+
 - CPH 2
diff --git a/content/6_Plat/Olympiads.md b/content/6_Plat/0_Plat_Olympiads.md
similarity index 93%
rename from content/6_Plat/Olympiads.md
rename to content/6_Plat/0_Plat_Olympiads.md
index 24cfa62..fbbd1a4 100644
--- a/content/6_Plat/Olympiads.md
+++ b/content/6_Plat/0_Plat_Olympiads.md
@@ -1,9 +1,17 @@
-# Olympiads
+---
+slug: /plat/oly
+title: "Olympiads"
+author: Benjamin Qi
+order: 0
+---
+
 > Hello, Which online judge should I practice more to do well in **IOI** ?
 > the closest OJ for IOI style?
 > Do you have any magic problems sets to suggest?
 
+Once you've reached the platinum level, it may be helpful to practice with problems from other (inter)national olympiads.
+
 ## National
 
 See [here](https://ioinformatics.org/page/members/7) for additional links. The [OI Checklist](https://oichecklist.pythonanywhere.com/) is a great way to track your progress. :)