
[Solved]: If $\log xy=\log x+\log y$ then why is multiplication harder than addition?

Problem Detail:

Someone told me that the $\log$ function was introduced to make calculation easier. If we have to calculate $xy$, we can instead calculate $\log x+\log y$, since $\log xy=\log x+\log y$. How can this make the calculation easier? Maybe from a mathematician's point of view, but what about a computer scientist's?

If it makes the calculation easier, why don't people use it to reduce the complexity of multiplication algorithms?

To my mind, this transformation makes the calculation more difficult. How can we even calculate the $\log x$ and $\exp x$ functions on a computer?

If $\log xy = \log x + \log y$ then why is multiplication harder than addition?

That's not a fair comparison: you're not comparing like with like. If you, instead, phrase it as "If $xy = \exp(\log x + \log y)$ then why is multiplication harder than addition?" then the answer is obvious. Multiplication, done that way, is harder than addition because doing addition just involves doing addition, whereas multiplication involves doing addition, taking logs twice and exponentiating.
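To make the overhead concrete, here is a minimal sketch of multiplication done "the logarithm way". The function name is illustrative, not from the original post; the point is simply that one multiplication is replaced by two log evaluations, an addition, and an exponentiation.

```python
import math

def multiply_via_logs(x, y):
    """Multiply two positive numbers by adding their logarithms.

    This replaces a single multiplication x * y with:
    two log evaluations, one addition, and one exponentiation --
    strictly more work than the multiplication it avoids.
    """
    return math.exp(math.log(x) + math.log(y))

print(multiply_via_logs(6.0, 7.0))  # approximately 42.0
```

(With pencil-and-paper log tables the trade-off went the other way: the log and exp steps were cheap table look-ups, while long multiplication was the expensive part. On a computer, it's the reverse.)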

How can we calculate the $\log x$ and $\exp x$ functions in a computer?

The main methods are to use either something like a Taylor series, or table look-up and interpolation. Taylor series express functions as sums, e.g., $\exp x = \sum_{i=0}^{\infty} x^i/i!$. Add up as many terms as you need to get the desired level of accuracy – note that this involves many additions and many multiplications.

Table look-up and interpolation is essentially the same way that paper log tables work. To calculate, say, $\log 4.3$, you'd look up $\log 4$ and $\log 5$ and approximate $\log 4.3$ as being three-tenths of the way between them. (In reality, the table would have more decimal places.) This involves a few additions and multiplications, and a lot of memory.
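Both methods can be sketched in a few lines. The code below is a minimal illustration, not a production implementation: `exp_taylor` sums the Taylor series term by term, and `log_interpolated` mimics a paper log table with linear interpolation; the table of integer logs and the function names are assumptions made for the example.

```python
import math

def exp_taylor(x, n_terms=20):
    """Approximate exp(x) with the Taylor series sum_{i=0}^{n} x^i / i!.

    Each term is derived from the previous one, so every term costs
    one multiplication and one division, plus an addition to the sum.
    """
    term = 1.0    # x^0 / 0! = 1
    total = 1.0
    for i in range(1, n_terms):
        term *= x / i     # x^i / i! from x^(i-1) / (i-1)!
        total += term
    return total

# A tiny "log table", like a paper table of logarithms.
LOG_TABLE = {n: math.log(n) for n in range(1, 11)}

def log_interpolated(x):
    """Approximate log(x) for 1 <= x <= 10 by linear interpolation
    between the two nearest table entries, as described above."""
    lo = int(x)
    frac = x - lo          # e.g. 0.3 for x = 4.3
    if frac == 0:
        return LOG_TABLE[lo]
    return LOG_TABLE[lo] + frac * (LOG_TABLE[lo + 1] - LOG_TABLE[lo])

print(exp_taylor(1.0))        # close to e = 2.71828...
print(log_interpolated(4.3))  # roughly log(4.3), off in the third decimal
```

Note that even these approximation routines are built out of additions and multiplications, which is why "reduce multiplication to addition via logs" can't make multiplication asymptotically cheaper.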