
An algorithm is a series of steps taken to solve a problem or achieve a goal.
The accompanying flowchart shows an algorithm that describes how to make a cake.
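Since the flowchart image itself isn't reproduced in these notes, here is a minimal sketch of the same idea in code: an algorithm is just a series of steps, followed in order. This hypothetical example finds the largest number in a list rather than baking a cake.

```python
def find_largest(numbers):
    """A series of steps that solves a problem: find the largest number in a list."""
    largest = numbers[0]           # Step 1: assume the first number is the largest
    for number in numbers[1:]:     # Step 2: look at every remaining number
        if number > largest:       # Step 3: if it beats the current largest...
            largest = number       #         ...remember it instead
    return largest                 # Step 4: report the answer

print(find_largest([3, 41, 7, 19]))  # 41
```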


Source: danielmiessler.com/study/big-o-notation/
Big O time is the language and metric used to describe the efficiency of algorithms.
<aside> 🧠 This definition is intentionally broad, because the efficiency of an algorithm can be described in many different ways: how many CPU cycles it takes to run, the actual amount of time it takes, how many iterations it performs, and so on. What Big O measures is deliberately big picture; it doesn't concern itself with the unit of measurement, so we will refer to this broad idea simply as runtime. As you learn more about Big O, you'll see that it is very hand-wavy about detail. During this lesson we will describe the efficiency of algorithms using time, since that is a familiar unit of measurement for all of us.
</aside>
Big O gives us a way to talk about the efficiency of algorithms (broadly described as their runtime) and lets us judge exactly what makes one algorithm faster, slower, or more or less space efficient than a similar one. This means we can make better programming decisions!
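As a quick illustration (an assumed example, not taken from these notes), here are two algorithms that solve the same problem, summing the numbers 1 through N, where Big O lets us say precisely why one is faster: the loop does work that grows with N, while the formula does a fixed amount of work no matter how large N gets.

```python
def sum_loop(n):
    """O(N): the amount of work grows in step with N."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    """O(1): one multiplication and one division, regardless of N."""
    return n * (n + 1) // 2

print(sum_loop(1_000), sum_formula(1_000))  # 500500 500500
```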
Big O measures both the time complexity (runtime) and the space complexity of a given algorithm. Big O notation is written according to an algorithm's performance in the worst-case scenario.
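To make the time/space distinction concrete, here is a small sketch (an assumed example, not from the notes): both functions reverse a list in O(N) time, but one builds a whole new list (O(N) extra space) while the other swaps elements in place (O(1) extra space).

```python
def reversed_copy(items):
    """O(N) time, O(N) extra space: builds a brand-new list back to front."""
    result = []
    for index in range(len(items) - 1, -1, -1):
        result.append(items[index])
    return result

def reverse_in_place(items):
    """O(N) time, O(1) extra space: swaps elements inside the same list."""
    left, right = 0, len(items) - 1
    while left < right:
        items[left], items[right] = items[right], items[left]
        left += 1
        right -= 1
    return items

print(reversed_copy([1, 2, 3]))     # [3, 2, 1] — a new list is created
print(reverse_in_place([1, 2, 3]))  # [3, 2, 1] — the same list is modified
```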

The best case is the best possible result we could expect from an algorithm at runtime, expressed as Ω (Omega). It is rarely a useful measure on its own, because many algorithms can simply get lucky with their input; for example, a sorting algorithm handed an already-sorted list might finish in Ω(N).
The expected case is the average result we could expect from an algorithm at runtime, expressed as Θ (Theta). It is useful for differentiating searching and sorting algorithms and reflects the "real world" performance of an algorithm: outliers should be expected and taken into account, but they are not the norm. When two algorithms have the same worst-case runtime complexity, they can be quickly compared to one another using their expected case.
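All three cases are easiest to see on one concrete algorithm. The sketch below (an assumed example, not taken from these notes) is a linear search: best case Ω(1) when the target happens to be the first element, expected case Θ(N) because on average about half the list is checked, and worst case O(N) when the target is last or missing entirely.

```python
def linear_search(items, target):
    """Scan left to right until the target is found (or the list runs out)."""
    for index, item in enumerate(items):
        if item == target:
            return index          # found it: stop early
    return -1                     # checked every item: target is absent

data = [8, 3, 5, 9, 2]
print(linear_search(data, 8))   # best case, Ω(1): only the first element is checked
print(linear_search(data, 9))   # a typical case: roughly Θ(N) checks on average
print(linear_search(data, 7))   # worst case, O(N): every element is checked
```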