
[Answers] Accounting method amortized analysis of a sequence of operations (CLRS 17.2-2)

Problem Detail: 

I'm working through CLRS, Chapter 17 on Amortized Analysis. One of the problems I attempted to solve is 17.2-2 (described below), but my answer differs from the one in the instructor's manual, so I thought I'd solicit your feedback. I appreciate any help with this.

Problem. We have a data structure supporting some number of operations. Given a sequence of $n$ such operations, we're to determine the amortized cost of each operation using the Accounting Method [CLRS, 17.2]. The actual cost of the $i$-th operation is defined as follows:

$$ cost(i) = \begin{cases} i & \text{if $i$ is an exact power of 2} \\[2ex] 1 & \text{otherwise} \end{cases} $$

Assume that exact powers of 2 form the sequence: $1, 2, 4, 8, \ldots$.
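The cost function is easy to state in code. Here is a quick Python sketch (the name `cost` and the bit-trick power-of-2 test are ours, not from CLRS):

```python
def cost(i):
    """Actual cost of the i-th operation (1-indexed):
    i if i is an exact power of 2, otherwise 1."""
    is_power_of_two = i & (i - 1) == 0  # valid for i >= 1
    return i if is_power_of_two else 1

# The first ten costs:
print([cost(i) for i in range(1, 11)])  # → [1, 2, 1, 4, 1, 1, 1, 8, 1, 1]
```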

Solution. (My attempt) For the sake of concreteness, consider a sequence of 10 operations:

$$ \begin{array}{c|ccc} i & cost(i) & notes\\ \hline 1 & 1 & 1 = 2^0\\ 2 & 2 & 2 = 2^1\\ 3 & 1 &\\ 4 & 4 & 4 = 2^2\\ 5 & 1 &\\ 6 & 1 &\\ 7 & 1 &\\ 8 & 8 & 8 = 2^3\\ 9 & 1 &\\ 10 & 1 &\\ \end{array} $$

The approach is predicated on the observation, based on this data, that between consecutive power-of-2 operations at positions $i$ and $2i$ (say, 1 and 2, or 2 and 4, or 4 and 8, etc.) there are exactly $i - 1$ unit-cost operations. For example, if $i = 1$, then between operations 1 and 2 there should be $1 - 1 = 0$ unit-cost operations, and indeed there are none. As another example, consider $i = 4$: between operations 4 and 8 there are $4 - 1 = 3$ unit-cost operations.

Based on that hypothesis, we assign the following "charges" ($\widehat{cost}$) to each of the operations:

$$ \widehat{cost}(i) = \begin{cases} i + (i - 1) & \text{if $i$ is an exact power of 2} \\[2ex] 0 & \text{otherwise} \end{cases} \tag{$\alpha$} $$

The idea is that each power-of-2 operation pays for itself (that's the $i$ summand) and leaves just enough credit to cover the $cost$ of the next $i - 1$ unit-cost operations. Visually:

$$ \begin{array}{c|ccc} i & cost(i) & \widehat{cost}(i) & \text{"remaining credit"}\\ \hline 1 & 1 & 1 & 0\\ 2 & 2 & 3 & 1\\ 3 & 1 & 0 & 0\\ 4 & 4 & 7 & 3\\ 5 & 1 & 0 & 2\\ 6 & 1 & 0 & 1\\ 7 & 1 & 0 & 0\\ 8 & 8 & 15 & 7\\ 9 & 1 & 0 & 6\\ 10 & 1 & 0 & 5\\ \end{array} $$

At the end of the sequence, we're left with 5 units of credit, which will pay the price of the next 5 unit-cost operations ($i = 11, \ldots, 15$).
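A short simulation (a sketch; `charge` encodes the scheme ($\alpha$), and the function names are ours) reproduces the credit column and checks that the balance never goes negative, as the accounting method requires:

```python
def cost(i):
    # Actual cost: i if i is an exact power of 2, else 1.
    return i if i & (i - 1) == 0 else 1

def charge(i):
    # Scheme (alpha): a power-of-2 operation is charged i + (i - 1);
    # every other operation is charged nothing.
    return 2 * i - 1 if i & (i - 1) == 0 else 0

credit = 0
for i in range(1, 11):
    credit += charge(i) - cost(i)
    assert credit >= 0  # the bank balance must never go negative
print(credit)  # → 5, the leftover credit after operation 10
```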

The amortized cost of a given operation is $i + (i - 1)$, which is bounded by $2n$. The book has a $3n$ in the analysis, so I'm wondering where I'm messing up.

Asked By : Readingtao

Answered By : Lee Gao

An amortized cost is just the average cost of a sequence of $n$ operations (in the worst case). If I remember correctly, it is defined as

$$ \hat c(n) \sim \frac{\sum\limits_kc(k)}{n} $$

Now, $$ \hat c(n) = \begin{cases} 2n - 1 & \text{if $n$ is a power of 2} \\ 0 & \text{otherwise} \end{cases} $$ obviously fits that description. In the same vein, so does $$ \hat c(n)_{\text{alt}} = \begin{cases} n & \text{if $n$ is a power of 2} \\ 1 & \text{otherwise} \end{cases} $$ which is just our original cost function ;)

In fact, many such cost functions exist. However, none of them clearly spells out a tight bound on the cost of any arbitrary instruction.

For example, to say that $\hat c(n) \sim 2n$ is to say that the total cost of $n$ operations is approximately $\sum_{k=1}^n 2k = n(n+1) \approx n^2$. On the other hand, the true total cost of $n$ operations is \begin{align*} \sum_{k=1}^n c(k) &= n + (2^0 - 1) + (2^1 - 1) + \dots + (2^{\lfloor\log_2 n\rfloor} - 1) \\ &= n + 2^{\lfloor \log_2 n \rfloor + 1} - 1 - (\lfloor\log_2 n\rfloor + 1) \\ &\le n + 2n - \log_2 n - 2 \\ &\le 3n \end{align*} where the first inequality uses $2^{\lfloor\log_2 n\rfloor + 1} \le 2n$.
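The closed form above can be checked numerically with a brute-force sum (a quick sketch; `total_cost` is our name):

```python
from math import floor, log2

def total_cost(n):
    # Brute-force sum of the actual costs of operations 1..n.
    return sum(i if i & (i - 1) == 0 else 1 for i in range(1, n + 1))

for n in (10, 100, 1000):
    m = floor(log2(n))
    closed_form = n + 2 ** (m + 1) - 1 - (m + 1)
    assert total_cost(n) == closed_form  # matches the derivation
    assert total_cost(n) <= 3 * n        # and the 3n bound holds
```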

This means that a good "amortization" technique here would give an amortized cost of $\hat c(n) = 3$ for any arbitrary $n$.
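As a sanity check, we can charge a flat 3 credits per operation and verify that the bank never goes negative over a long sequence (variable names are ours):

```python
def cost(i):
    # Actual cost: i if i is an exact power of 2, else 1.
    return i if i & (i - 1) == 0 else 1

bank = 0
for i in range(1, 10_000 + 1):
    bank += 3 - cost(i)  # pay 3 credits, spend the actual cost
    assert bank >= 0     # a flat charge of 3 always stays solvent
```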

Now, why is $\hat c(n) = 3$ more "elucidating" than $\hat c(n) = \text{n is a power of 2} ~?~ 2n - 1 ~:~ 0$? Well, because it gives a uniform amortized value for every $n$. This then forms the (unspoken) cardinal rule of amortized runtime analysis:

Good amortized cost functions are not piecewise.

Why? If $\hat c(n) = \square$ uniformly, then we can easily say that each and every operation is bounded by the same $\square$.

Now, I can definitely hear some readers protest: "but I see case analyses in CLRS itself!" Do you? Well, for algorithms with multiple operations (say PUSH, POP, and MULTIPOP), we need to give a cost function for each individual operation. However, the amortized cost function given for each operation is "continuous"!

Fret not, this is a pretty common problem. I haven't gone through CLRS myself, but I've stumbled upon these confusions as well when I first went through this technique.

We've already seen that $\hat c(n) = 3$ provides a valid bound, but it doesn't really give us any intuition about the problem (beyond pushing around some inequalities). Why 3? How do we get to that answer without working with (or around) arithmetic and geometric series?

There is good news and bad news. The bad news first: the most general framework you have here is the vague phrase you already know; just figure out how much you have to pay upfront so you can cover everything up to the next "big shock." While this might not seem like very much to work with, it does give us some general intuition: the important aspect of the problem is not only the magnitude of these shocks (heh), but also their location.

In our case, we know that the locations of these shocks are logarithmically distributed (there are only $\lfloor\log_2 n\rfloor + 1$ of them among the first $n$ operations), which means that most of the time you just incur a cost of one. However, these shocks are also pretty big, so you'll need to pay enough up front to cover them. In particular, by the time you've completed $2^k$ operations, you'll have had to pay for at least $2^k - 1$ unit-cost operations plus the $2^k$-cost operation itself. This immediately suggests that $2^k + 2^k - 1 \le \hat c(2^k)$, where $\hat c$ now denotes the total amount paid. Following a similar argument, we also have $\hat c(2^k) \le 2^k - k + k \times 2^k$. This refines our bound to $2n - 1 \le \hat c(n) \le \lfloor \log_2 n + 1\rfloor n - \lfloor \log_2 n\rfloor$. After this, you can keep refining the bound until you get something as tight as possible. Unfortunately, there's really no way to do this unless you stumble onto the geometric series and realize that $\sum_{k=0}^n 2^k = 2^{n+1} - 1$, but at least this should give you a way to make gradual progress in your analysis.

Good luck!


Question Source : http://cs.stackexchange.com/questions/65981
