
# [Answers] Explanation of a simple algorithm's Big O complexity

Problem Detail:

I'm looking at a past paper, where there is the following Algorithm, and we are asked to give the runtime in O notation:

```
Loop2(n)
    for i := 0 to n
        j := 0
        s := 0
        while s <= i
            j := j + 1
            s := s + j
```

I can see that the outer loop (alone) is run $O(n)$ times, and that the inner loop is $O(\sqrt n)$. However, the part that confuses me is that the correct answer, according to the paper, is $O(n \cdot \sqrt n)$.

I thought that, since the inner loop depends on $i$, it would run $\sqrt i$ times for $i$ going from $0..n$, meaning that in total the loop would run roughly $n$ times, making the whole algorithm $O(n\cdot n)=O(n^2)$. I drew that logic from an analysis of insertion sort, where $$\sum_{i=0}^{n} (n-i) = O(n^2)$$

I think I remember the above correctly. Can anyone fix my broken logic here?

The inner loop is $O(\sqrt{i})$, you are correct: after $j$ iterations we have $s = 1 + 2 + \cdots + j = \frac{j(j+1)}{2}$, so the loop exits once $\frac{j(j+1)}{2} > i$, i.e. after about $\sqrt{2i}$ iterations. Since $i\leq n$, we have $\sqrt{i}\leq \sqrt{n}$, and the total work is at most $n\times\sqrt{n}$. Hence the complexity is $O(n\sqrt{n})$. (Your mistake was summing the $\sqrt{i}$ terms to roughly $n$; in fact $\sum_{i=0}^{n}\sqrt{i} = \Theta(n\sqrt{n})$, so the bound is tight.)
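You can check this empirically. The sketch below (a direct Python transcription of `Loop2`; the helper name `loop2_iterations` is my own) counts the total number of inner-loop iterations and divides by $n\sqrt{n}$; the ratio settles near a constant (about $\frac{2\sqrt{2}}{3}\approx 0.94$), rather than shrinking as it would if the true cost were $O(n)$:

```python
def loop2_iterations(n):
    """Run Loop2(n) and count total inner-loop iterations."""
    total = 0
    for i in range(n + 1):
        j = 0
        s = 0
        while s <= i:
            j += 1
            s += j
            total += 1
    return total

# The ratio total / n^1.5 should stay roughly constant as n grows.
for n in (1_000, 10_000, 100_000):
    print(n, loop2_iterations(n) / n**1.5)
```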