
Effective computation on linear data without random access

Problem Detail: 

Recently I've started thinking about caching problems in modern CPUs, which struggle to fetch program data (not instructions) into the cache in time for it to be processed.
So I came up with an idea: why not restrict random access to program data and arrange all of it (the input and all eventual locals) as a continuous stream that moves strictly in one direction through a "processing head"? At the head position you can only read a window of x symbols, write, let symbols pass through, and append symbols (this is a data stream, not an instruction stream).
The question is whether you can run computations effectively in such a system: is it possible to order all functions and allocate all local data/variables at compile time so that each function leaves behind a usable stream of data for all subsequent functions, and you never need to "loop" the tape to access a particular element (i.e., emulate random access by waiting for the desired element to come by)?
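
To make the model concrete, here is a minimal Python sketch of the tape I have in mind (the class and method names are made up for illustration, not an existing design). It also shows what "looping" would mean: a random read is emulated by letting cells pass until the wanted one comes around, which can cost a full revolution of the tape per access.

```python
from collections import deque

class StreamTape:
    """Toy model of a one-directional data stream passing a processing head.

    The head always sits over the leftmost cell. The only operations are:
    read a small window, overwrite the cell under the head, let it pass
    through (it re-enters at the back), and append a fresh cell at the back.
    There is no random access.
    """

    def __init__(self, values):
        # Each cell carries a (name, value) pair so the sketch can talk about
        # "the cell holding variable 5" without any notion of an address.
        self.cells = deque(enumerate(values))
        self.next_name = len(values)
        self.steps = 0            # how many cells have streamed past the head

    def read_window(self, x):
        """Peek at the next x cells without moving the tape."""
        return list(self.cells)[:x]

    def write(self, value):
        """Overwrite the value of the cell currently under the head."""
        name, _ = self.cells[0]
        self.cells[0] = (name, value)

    def let_through(self):
        """Let the current cell pass; it re-enters at the back of the stream."""
        self.cells.rotate(-1)
        self.steps += 1

    def append(self, value):
        """Append a new cell behind the stream; it reaches the head last."""
        self.cells.append((self.next_name, value))
        self.next_name += 1

    def emulated_random_read(self, name):
        """Emulate random access: wait until the cell called `name` comes by.

        Worst case this is one full revolution of the tape, i.e. O(n) per read.
        """
        while self.cells[0][0] != name:
            self.let_through()
        return self.cells[0][1]


# Three "random" reads in an unlucky order cost nearly three full revolutions.
tape = StreamTape([10, 11, 12, 13, 14, 15, 16, 17])
for name in (5, 1, 6):
    print(name, "->", tape.emulated_random_read(name))
print("cells streamed past the head:", tape.steps)
```

Even this tiny example lets 14 cells stream past the head to serve 3 reads on an 8-cell tape, which is exactly the overhead a compile-time ordering would have to avoid.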
I've tried to look up this topic on the internet and found nothing. Maybe that's because the idea is obviously stupid and no one bothers with it :)
Thank you for your help!

Asked By : artemonster
Answered By : D.W.

No. Consider an algorithm on a graph (e.g., network flow). It fundamentally requires random access to memory: which cells it touches depends on the data itself, so no compile-time ordering can arrange them into a single forward pass. The restriction you propose would be painful and would make many standard algorithms impossible.
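
To see the problem, take breadth-first search over an adjacency list (a stand-in for the traversal inside a network-flow algorithm): the order in which it reads the per-vertex cells depends on the edges it discovers at run time. Here is a rough Python sketch with a made-up graph, counting how many cells would have to stream past the head if every read had to wait for its cell to come around on a one-way tape:

```python
from collections import deque

# A small directed graph as an adjacency list laid out "flat" in memory:
# cell i holds the neighbour list of vertex i. The order in which BFS
# touches these cells depends on the graph, not on any layout a compiler
# could fix in advance.
graph = {
    0: [4, 2],
    1: [3],
    2: [1, 5],
    3: [0],
    4: [5],
    5: [1],
}

def bfs_access_trace(graph, start):
    """Return the sequence of vertex cells BFS reads, in the order it reads them."""
    seen = {start}
    queue = deque([start])
    trace = []
    while queue:
        v = queue.popleft()
        trace.append(v)              # reading the neighbour list of v
        for w in graph[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return trace

def one_way_stream_cost(trace, n):
    """Cells that stream past the head if each read must wait for its cell."""
    head, cost = 0, 0
    for cell in trace:
        cost += (cell - head) % n    # forward-only distance to the wanted cell
        head = cell
    return cost

trace = bfs_access_trace(graph, 0)
print("access order:", trace)        # [0, 4, 2, 5, 1, 3] for this graph
print("reads:", len(trace), "stream cost:", one_way_stream_cost(trace, len(graph)))
```

In this toy run, 6 reads already force 15 cells past the head; in general each data-dependent access can cost a full revolution of the tape, so the running time picks up a factor proportional to the size of the data.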

Question Source : http://cs.stackexchange.com/questions/25828
