
$\Omega(\sqrt[3]{n})$ lower bound on random-access memory?

Problem Detail: 

Here is a perhaps naive question that has been nagging at me: is there an $\Omega(\sqrt[3]{n})$ asymptotic lower bound on the time to randomly address arbitrarily large memory? My reason for believing this is that the shortest path to any physically stored memory cell must pass through three-dimensional space, and the diagonal of the region holding the memory must have some minimum length that grows with the amount of memory.

For example, when sorting arbitrarily many elements, addressing those elements must eventually cost something proportional to the distance to them, and even if you had a high-speed cable between every pair of points in some space, it seems as if there is a geometric limit pushing the cost above $O(n \lg n)$.

What is wrong with my argument?

Asked By : Simon Shine

Answered By : D.W.

If you are talking about latency, yes, that sounds right to me. A lower bound on distance implies a lower bound on latency, due to speed-of-light considerations. However, in practice these speed-of-light considerations probably do not dominate until the amount of memory is extremely large.
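To make the speed-of-light point concrete, here is a back-of-envelope calculation (the storage-density figure is an assumption chosen for illustration, roughly one terabit per cubic millimetre, not a value from the answer):

```python
# If n bits are stored at a fixed density, they occupy volume proportional
# to n, so the farthest bit sits at distance ~ n^(1/3) and a round trip
# takes at least 2 * distance / c.

C = 3.0e8          # speed of light, m/s
DENSITY = 1e21     # assumed storage density, bits per cubic metre

def min_round_trip_seconds(n_bits: float) -> float:
    """Lower bound on round-trip latency to the farthest bit of an
    n-bit memory packed into a cube at the assumed density."""
    side = (n_bits / DENSITY) ** (1.0 / 3.0)   # cube side length, metres
    return 2.0 * side / C

for exponent in (12, 18, 24, 30):   # terabit up to 10^30 bits
    n = 10.0 ** exponent
    print(f"10^{exponent} bits: >= {min_round_trip_seconds(n):.2e} s")
```

Under these assumed numbers, even $10^{18}$ bits fit in a 10 cm cube with sub-nanosecond light-speed round trips; the cube-root term only becomes noticeable at truly enormous capacities, which matches the point above.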

By the way, the situation is different if we are talking about bandwidth (i.e., the number of random-access memory operations performed per second), rather than latency. One can process many random-access memory operations concurrently, by using sorting networks.
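As an illustration of how a sorting network permits many memory operations in flight at once (a minimal sketch of a bitonic sorting network, not the construction from any particular paper): within each inner pass, every compare-exchange touches a disjoint pair of positions, so hardware could perform the whole pass concurrently.

```python
def bitonic_sort(a):
    """In-place bitonic sort; len(a) must be a power of two."""
    n = len(a)
    k = 2
    while k <= n:
        j = k // 2
        while j > 0:
            # Every compare-exchange in this pass pairs i with i^j, a
            # disjoint set of pairs, so all of them are independent and
            # could run in parallel in a hardware sorting network.
            for i in range(n):
                partner = i ^ j
                if partner > i:
                    ascending = (i & k) == 0
                    if (a[i] > a[partner]) == ascending:
                        a[i], a[partner] = a[partner], a[i]
            j //= 2
        k *= 2
    return a

print(bitonic_sort([3, 7, 4, 8, 6, 2, 1, 5]))  # -> [1, 2, 3, 4, 5, 6, 7, 8]
```

The network performs $O(n \log^2 n)$ compare-exchanges but only $O(\log^2 n)$ sequential passes, which is the sense in which bandwidth can be high even when each individual operation has latency.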

One important caveat is that you seem to be assuming an architecture where we have one big CPU on the left and one big memory on the right, and they are separated somehow. However, that's not necessarily the only way to structure a computation. One can also structure a computation, for instance, as a parallel computation, where you have a 2-D or 3-D mesh of small processors, each with its own local working memory. Now accessing your local memory can be very fast, whereas accessing far-away memory is slow. In some sense there is still a $\sqrt{n}$ or $\sqrt[3]{n}$ bound operating, but the $n$ is much smaller for local memory than for far-away memory, so if your algorithm is designed in a way that ensures that most memory operations are to local memory, then you may be able to "beat" the lower bound you described (in some sense).
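A toy cost model makes the locality point concrete (the unit-cost-per-hop assumption is purely for illustration):

```python
# Model each memory access on a 2-D mesh as costing one unit per hop
# (Manhattan distance between processors) - an assumed cost model.

def access_cost(src, dst):
    """Hops between two processors at grid coordinates src and dst."""
    return abs(src[0] - dst[0]) + abs(src[1] - dst[1])

p = 32  # a p x p mesh, so n = p*p processors
print(access_cost((0, 0), (0, 0)))          # local memory: 0 hops
print(access_cost((0, 0), (p - 1, p - 1)))  # far corner: 62 hops ~ 2*sqrt(n)
```

The worst-case distance on a $p \times p$ mesh grows like $\sqrt{n}$ for $n = p^2$, but an algorithm that keeps most accesses local pays nearly nothing for the bulk of its operations.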

In computer architecture, one often measures the performance of a circuit by the $AT$ measure: in other words, the area of the circuit ($A$) multiplied by the time it takes for the circuit to complete a computation ($T$). If you like, you can think of this as a price-to-performance ratio: it is the price (which is assumed to be proportional to the area of the circuit) divided by the performance (the number of computations that can be performed per second, i.e., the inverse of $T$). A standard architecture with a single beefy CPU and one big memory is not always the optimal architecture. Sometimes you can get large speedups (more than a constant factor) by using parallel computation or other architectures. This is at least partly due to exactly the issue that you mentioned: it is much faster to access memory that is near you than to access memory that is far away. Caching (e.g., L1 cache, L2 cache, etc.) is also based upon the same principle.
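A quick numeric sketch of the $AT$ measure (all numbers invented for illustration, not taken from any real design):

```python
def at_cost(area, time_s):
    """AT measure: circuit area multiplied by time per computation."""
    return area * time_s

# One beefy processor (normalized units):
serial_at = at_cost(1.0, 100.0)    # AT = 100

# Sixteen small processors: if locality lets the runtime fall faster
# than the area grows (assumed numbers), the AT product improves:
parallel_at = at_cost(16.0, 5.0)   # AT = 80

print(serial_at, parallel_at)      # 100.0 80.0 -> parallel wins on AT
```

If the parallel design had merely achieved a perfect 16x speedup (time 6.25), the two AT products would be equal; it is the better-than-linear effect of local memory access that lets parallel designs beat the standard architecture by more than a constant factor.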

Below is an example of a paper in the world of cryptography that considers design of special-purpose circuits for cryptanalytic tasks, and takes these issues into account. For instance, it has a bunch of processors, a big memory, and a sorting network in between the two to route memory accesses from the processors to memory. This enables a high number of memory operations to be "in flight" at once, even though it does not eliminate the latency.

If you want to dive further into this line of thinking, there is an interesting (but mostly theoretical) debate one can have over whether the right metric is area or volume, i.e., whether the right lower bound is $\sqrt{n}$ or $\sqrt[3]{n}$. Yes, the world is three-dimensional, but heat dissipation is two-dimensional; a massive cube of computers would be limited by its ability to dissipate heat, which scales with its surface area, not its volume. One can continue onward in this vein. If this sounds like a fascinating topic, see Q.9 (pp.45-46) of the following paper:
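The area-versus-volume point is easy to check numerically: for a cube of side $s$, compute capacity scales with the volume $s^3$ while heat removal scales with the surface area $6s^2$, so the heat each unit of surface must shed grows linearly with $s$.

```python
# Surface-to-volume ratio of a cube of side s (arbitrary units).
def surface(s):
    return 6.0 * s * s

def volume(s):
    return s ** 3

for s in (1.0, 10.0, 100.0):
    # volume/surface = s/6, so doubling the side doubles the heat
    # that every square unit of surface must dissipate
    print(s, volume(s) / surface(s))
```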

Best Answer from StackOverflow
