
It's been a while since I started thinking about going back to basics and covering the fundamental concepts of computer science. And I realized that before jumping into the pile of heavyweight topics like data structures, operating systems, OOP, databases, and system design (seriously, the list is endless), I should probably pick up the topic that all of us somehow don't want to touch: algorithm complexity analysis.
Yep! The concept that gets overlooked most of the time, because most of us developers think, "Hmm, I probably won't need to know that while I'm actually coding!"
Well, I'm not sure if you have ever felt the need to understand how algorithm analysis actually works. But if you have, here is my attempt to explain it in the most lucid way possible. I hope it helps someone like me.
What is algorithm analysis anyway, and why do we need it?
Before we get into algorithm complexity analysis, let's first get a brief idea of what algorithm analysis is. Algorithm analysis deals with comparing algorithms based upon the amount of computing resources each algorithm uses.
What we want to achieve by this practice is the ability to make an informed decision about which algorithm is the winner in terms of making efficient use of resources (time or memory, depending upon the use case). Does this make sense?
Let's take an example. Suppose we have a function product() which multiplies all the elements of an array except the element at the current index, and returns the new array. If I pass [1, 2, 3, 4, 5] as the input, I should get [120, 60, 40, 30, 24] as the result.
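A minimal sketch of what such a product() function might look like, written with two nested loops (the original implementation isn't shown here, so take this as an illustration of the idea rather than the exact code):

```javascript
// Illustrative sketch: for each index, multiply together every element
// except the one at that index, using two nested loops.
function product(arr) {
  var result = [];
  for (var i = 0; i < arr.length; i++) {
    var prod = 1;
    for (var j = 0; j < arr.length; j++) {
      // When both loops point at the same index, multiply by 1 so the
      // element at the current index is effectively skipped.
      prod *= (i === j) ? 1 : arr[j];
    }
    result.push(prod);
  }
  return result;
}

// product([1, 2, 3, 4, 5]) -> [120, 60, 40, 30, 24]
```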
The above function uses two nested for loops to calculate the desired result. In the first pass, it takes the element at the current position. In the second pass, it multiplies that element with every element in the array, except when the element of the first loop matches the current element of the second loop. In that case, it simply multiplies it by 1 to keep the product unchanged.
Are you able to follow? Great!
This is a simple approach that works well, but can we make it a little better? Can we modify it in such a way that we don't have to use two nested loops? Maybe by storing the result at each pass and making use of it?
Let's consider the following method. In this modified version, the principle applied is that for each element, calculate the product of the values on the right, calculate the product of the values on the left, and simply multiply those two values. Pretty sweet, isn't it?
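The exact code isn't reproduced here either, but a sketch of this left-product/right-product idea could look something like the following (names and structure are illustrative):

```javascript
// Illustrative sketch: combine the product of everything to the left of each
// index with the product of everything to its right, using two separate loops.
function product(arr) {
  var n = arr.length;
  var result = new Array(n);

  // First loop: result[i] holds the product of all elements to the left of i.
  var left = 1;
  for (var i = 0; i < n; i++) {
    result[i] = left;
    left *= arr[i];
  }

  // Second loop: fold in the product of all elements to the right of i.
  var right = 1;
  for (var j = n - 1; j >= 0; j--) {
    result[j] *= right;
    right *= arr[j];
  }

  return result;
}

// product([1, 2, 3, 4, 5]) -> [120, 60, 40, 30, 24]
```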
Here, instead of using nested loops to calculate values on each run, we use two non-nested loops, which reduces the overall complexity by a factor of O(n) (we will come to that later).
We can safely conclude that the latter algorithm performs better than the former. So far so good? Perfect!
At this point, we can also take a quick look at the different types of algorithm analyses which exist out there. We do not need to go into minute-level detail; we just need to have a basic understanding of the technical jargon.
Depending on when an algorithm is analyzed, that is, before implementation or after implementation, algorithm analysis can be divided into two stages:
- Apriori analysis - As the name suggests, in apriori (prior) analysis we analyze an algorithm (for space and time) before running it on a specific system. So fundamentally, this is a theoretical analysis of an algorithm. The efficiency of an algorithm is measured under the assumption that all other factors, for example processor speed, are constant and have no effect on the implementation.
- Apostiari analysis - Apostiari (a posteriori) analysis of an algorithm is performed only after running it on a physical system. The chosen algorithm is implemented using a programming language and executed on a target computer. This analysis depends directly on the system configuration and changes from system to system.
In the industry, we rarely perform Apostiari analysis, as software is generally made for anonymous users who may run it on different systems.
Since time and space complexity can vary from system to system, Apriori analysis is the most practical method for finding algorithm complexities. This is because we only look at the asymptotic variations (we will come to that later) of the algorithm, which give the complexity based on the input size rather than on system configurations.
Now that we have a basic understanding of what algorithm analysis is, we can move on to our main topic: algorithm complexity. We will focus on Apriori analysis, keeping the scope of this post in mind, so let's get started.
Diving deep into complexity with asymptotic analysis
Algorithm complexity analysis is a tool that allows us to explain how an algorithm behaves as the input grows larger.
So, if you want to run an algorithm with a data set of size n, we can define the complexity as a numerical function f(n): time versus the input size n.

Now you must be wondering: isn't it possible for an algorithm to take different amounts of time on the same inputs, depending on factors like processor speed, instruction set, disk speed, and the brand of compiler? If yes, then pat yourself on the back, because you are exactly right!
This is where asymptotic analysis comes into the picture. Here, the concept is to evaluate the performance of an algorithm in terms of the input size (without measuring the actual time it takes to run). So basically, we calculate how the time (or space) taken by an algorithm increases as we make the input size infinitely large.
Complexity analysis is performed on two parameters:
- Time: Time complexity gives an indication as to how long an algorithm takes to complete with respect to the input size. The resource which we are concerned about in this case is CPU (and wall-clock time).
- Space: Space complexity is similar, but is an indication as to how much memory is “required” to execute the algorithm with respect to the input size. Here, we are dealing with system RAM as a resource.
Are you still with me? Good! Now, there are different notations which we use for analyzing complexity through asymptotic analysis. We will go through all of them one by one and understand the fundamentals behind each.
The Big Oh (Big O)
The very first and most popular notation used for complexity analysis is BigO notation. The reason for this is that it gives the worst case analysis of an algorithm. The nerd universe is mostly concerned about how badly an algorithm can behave, and how it can be made to perform better. BigO provides us with exactly that.
Let’s get into the mathematical side of it to understand things at their core.

Let’s consider an algorithm which can be described by a function f(n). So, to define the BigO of f(n), we need to find a function, let’s say, g(n), which bounds it. Meaning, after a certain value, n0, the value of g(n), scaled by some constant, would always exceed f(n).
We can write it as,
f(n) ≤ C g(n)
where n ≥ n0; C > 0; n0 ≥ 1
If the above conditions are fulfilled, we can say that g(n) is the BigO of f(n), or
f(n) = O(g(n))
Can we apply the same to analyze an algorithm? This basically means that in the worst-case scenario of running an algorithm, its running time should not grow past a certain bound, which is g(n) in this case. Hence, g(n) is the BigO of f(n).
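To make the definition a little more concrete, here is a small worked example (the function is made up purely for illustration): take f(n) = 3n² + 2n + 1. If we pick g(n) = n², C = 6 and n0 = 1, then 3n² + 2n + 1 ≤ 6n² for every n ≥ 1 (since 2n ≤ 2n² and 1 ≤ n² once n ≥ 1). The conditions above are satisfied, so f(n) = O(n²).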
Let’s go through some commonly used BigO notations and their complexities, and understand them a little better.
- O(1): Describes an algorithm that will always execute in the same time (or space) regardless of the size of the input data set.
```javascript
function firstItem(arr) {
  return arr[0];
}
```
The above function firstItem() will always take the same time to execute, as it returns the first item from an array, irrespective of its size. The running time of this function is independent of the input size, and so it has a constant complexity of O(1).
Relating it to the above explanation, even in the worst case scenario of this algorithm (assuming input to be extremely large), the running time would remain constant and not go beyond a certain value. So, its BigO complexity is constant, that is O(1).
- O(N): Describes an algorithm whose performance will grow linearly and in direct proportion to the size of the input data set. Take a look at the example below. We have a function called matchValue() which returns true whenever a matching case is found in the array. Here, since we have to iterate over the whole of the array, the running time is directly proportional to the size of the array.
```javascript
function matchValue(arr, k) {
  for (var i = 0; i < arr.length; i++) {
    if (arr[i] == k) {
      return true;
    }
  }
  // Only report false once the whole array has been scanned.
  return false;
}
```
This also demonstrates how Big O favors the worst-case performance scenario. A matching case could be found during any iteration of the for loop and the function would return early. But Big O notation will always assume the upper limit (worst case), where the algorithm performs the maximum number of iterations.
- O(N²): This represents an algorithm whose performance is directly proportional to the square of the size of the input data set. This is common with algorithms that involve nested iterations over the data set. Deeper nested iterations will result in O(N³), O(N⁴), etc.
```javascript
function containsDuplicates(arr) {
  for (var outer = 0; outer < arr.length; outer++) {
    for (var inner = 0; inner < arr.length; inner++) {
      if (outer == inner) continue;
      if (arr[outer] == arr[inner]) return true;
    }
  }
  return false;
}
```
- O(2^N): Denotes an algorithm whose growth doubles with each addition to the input data set. The growth curve of an O(2^N) function is exponential — starting off very shallow, then rising meteorically. An example of an O(2^N) function is the recursive calculation of Fibonacci numbers:
```javascript
function recursiveFibonacci(number) {
  if (number <= 1) return number;
  return recursiveFibonacci(number - 2) + recursiveFibonacci(number - 1);
}
```
Are you getting the hang of this? Perfect. If not, feel free to fire up your queries in the comments below. :)
Moving on, now that we have a better understanding of the BigO notation, let us get to the next type of asymptotic analysis, which is the Big Omega (Ω).
The Big Omega (Ω)
The Big Omega(Ω) provides us with the best case scenario of running an algorithm. Meaning, it would give us the minimum amount of resources (time or space) an algorithm would take to run.
Let’s dive into the mathematics of it to analyze it graphically.

We have an algorithm which can be described by a function f(n). So, to define the BigΩ of f(n), we need to find a function, let’s say, g(n), which is tightest to the lower bound of f(n). Meaning, after a certain value, n0, the value of f(n) would always exceed g(n) scaled by some constant.
We can write it as,
f(n) ≥ C g(n)
where n ≥ n0; C > 0; n0 ≥ 1
If the above conditions are fulfilled, we can say that g(n) is the BigΩ of f(n), or
f(n) = Ω(g(n))
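Using the same illustrative f(n) = 3n² + 2n + 1 from the BigO section: picking g(n) = n², C = 1 and n0 = 1 gives 3n² + 2n + 1 ≥ n² for every n ≥ 1, so f(n) = Ω(n²).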
Can we infer that Ω(…) is complementary to O(…)? Moving on to the last section of this post…
The Big Theta (θ)
The Big Theta(θ) is a sort of combination of both BigO and BigΩ. It gives us the average case scenario of running an algorithm. Meaning, it would give us the mean of the best and worst case. Let’s analyze it mathematically.

Consider an algorithm which can be described by a function f(n). The Bigθ of f(n) would be a function, let’s say, g(n), which bounds it the tightest from both the lower and the upper side, such that,
C₁ g(n) ≤ f(n) ≤ C₂ g(n)
where C₁, C₂ > 0; n ≥ n0; n0 ≥ 1
Meaning, after a certain value, n0, the value of C₁ g(n) would always be at most f(n), and the value of C₂ g(n) would always be at least f(n).
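Once more with the illustrative f(n) = 3n² + 2n + 1: choosing g(n) = n², C₁ = 1, C₂ = 6 and n0 = 1 gives n² ≤ 3n² + 2n + 1 ≤ 6n² for every n ≥ 1, so f(n) = θ(n²).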
Now that we have a better understanding of the different types of asymptotic complexities, let’s have an example to get a clearer idea of how all this works practically.
Consider an array of size, say, n, in which we want to do a linear search to find an element x. Suppose the array is laid out in memory so that 9 happens to be the very first element and 14 the very last.

Going by the concept of linear search, if x = 9, then that would be the best case scenario (as we don’t have to iterate over the whole array; we find a match at the very first position). And from what we have just learned, the complexity for this can be written as Ω(1). Makes sense?
Similarly, if x were equal to 14, that would be the worst case scenario, and the complexity would be O(n).
What would be the average case complexity for this?
θ(n/2) => (1/2) θ(n) => θ(n) (as we ignore constant factors while calculating asymptotic complexities).
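For reference, a minimal linear search along the lines of the matchValue() example above could be sketched like this (purely illustrative):

```javascript
// Illustrative linear search: best case Ω(1) when x sits at the very first
// position, worst case O(n) when x sits at the end or is missing entirely.
function linearSearch(arr, x) {
  for (var i = 0; i < arr.length; i++) {
    if (arr[i] == x) {
      return i; // found: return the index without scanning the rest
    }
  }
  return -1; // not found after scanning the whole array
}
```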
So, there you go, folks. A fundamental insight into algorithmic complexities. Did it all make sense? Leave your advice, questions, and suggestions in the comments below. Thanks for reading! ❤️
References:
- A nice write-up by Dionysis “dionyziz” Zindros: //discrete.gr/complexity/
- A good series on algorithm & data structures: //interactivepython.org/runestone/static/pythonds/AlgorithmAnalysis/WhatIsAlgorithmAnalysis.html