
Which bound is better: a logarithmic bound, or a polynomial bound with arbitrarily small degree?

Problem Detail: 

I have a randomized approximation algorithm which can be tuned by selecting the randomization probabilities. I found out that:

  • For any $\epsilon >0$, there are probabilities for which the approximation factor is $O(n^\epsilon)$.
  • There are probabilities for which the approximation factor is $O(\ln n)$ (smaller factor is better).

I know that $O(\ln n)$ is asymptotically smaller than $O(n^\epsilon)$ for any fixed $\epsilon > 0$, but here $\epsilon$ can be made arbitrarily small, so I don't know which of the two bounds is considered better. Specifically:

A. Theoretically, which of these two bounds should I report in a paper? Is the bound $O(n^\epsilon)$ interesting, or is it considered useless given the logarithmic bound?

B. Practically, if I have to run the algorithm on very large inputs, with no possibility of running tests in advance, which tuning is better?

(Just to get some feeling, I compared $n^{0.1}$ with $\ln n$. It turns out that $n^{0.1}$ is smaller, and hence better, even for very large $n$: the two only become equal at about $n = 4 \cdot 10^{15}$.)
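
This crossover is easy to check numerically. Below is a minimal Python sketch (the bisection interval and step count are arbitrary choices, not part of the original question) that locates the point where $n^{0.1}$ overtakes $\ln n$:

```python
import math

# Locate the large crossover where n**0.1 overtakes ln(n), by bisecting
# on the exponent: search over n = 10**e for e between 10 and 20.
def gap(exponent: float) -> float:
    n = 10.0 ** exponent
    return n ** 0.1 - math.log(n)

lo, hi = 10.0, 20.0   # gap(10) < 0 (ln n larger), gap(20) > 0 (n^0.1 larger)
for _ in range(60):   # 60 bisection steps, far more precision than needed
    mid = (lo + hi) / 2
    if gap(mid) < 0:
        lo = mid
    else:
        hi = mid

print(f"crossover near n = 10^{lo:.3f} = {10.0 ** lo:.2e}")
# prints a value around 3.4e15, in line with the ~4*10^15 estimate above
```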

Asked By : Erel Segal-Halevi

Answered By : Yuval Filmus

Since $\log n = O(n^{\epsilon})$ for every $\epsilon > 0$, if you can prove an approximation ratio of $O(\log n)$, then an approximation ratio of $O(n^{\epsilon})$ (for any $\epsilon > 0$) follows immediately. You should always prove the best approximation ratio that you can, unless:

  • The best approximation ratio holds only in expectation, and some other approximation ratio holds with high probability.
  • You have several incomparable approximation ratios (this is more common in expressions involving more than one parameter).

That is, your worse guarantee needs to have some advantage over your better guarantee for you to report both.
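
For completeness, the containment $\log n = O(n^{\epsilon})$ used above follows from one application of L'Hôpital's rule:

$$\lim_{n \to \infty} \frac{\ln n}{n^{\epsilon}} = \lim_{n \to \infty} \frac{1/n}{\epsilon\, n^{\epsilon - 1}} = \lim_{n \to \infty} \frac{1}{\epsilon\, n^{\epsilon}} = 0 \qquad \text{for every fixed } \epsilon > 0,$$

so in fact $\ln n = o(n^{\epsilon})$, which is stronger than $O(n^{\epsilon})$.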

Regarding practical performance, you are highlighting the fact that asymptotic performance can be misleading when it comes to evaluating algorithms in practice. The most well-known instance is probably "fast" matrix multiplication, which is usually slower than the trivial algorithm. Here you have two options:

  • Prove non-asymptotic guarantees on the approximation ratio, say $100\log n$ and $(2/\epsilon)n^{\epsilon}$. This allows you to obtain concrete guarantees for every $n$ (see the first sketch after this list).

  • Do experiments. Experiments reveal the actual approximation ratio on average. If you can fit your results to a nice function, you can say that, empirically, the average approximation ratio is (say) $10\log n$, even though in the worst case all you can prove is (say) $100\log n$ (see the second sketch below). Unfortunately, experiments are not always welcome in theoretical papers.
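
To illustrate the first option: once the constants are explicit, the two guarantees can be compared directly at any concrete input size. A minimal Python sketch, using the purely illustrative constants $100$ and $2/\epsilon$ from the bullet above:

```python
import math

# Evaluate the two illustrative concrete bounds at sample input sizes.
# The constants 100 and 2/eps are placeholders taken from the text above,
# not guarantees for any actual algorithm.
def log_bound(n: float) -> float:
    return 100 * math.log(n)

def poly_bound(n: float, eps: float) -> float:
    return (2 / eps) * n ** eps

for n in (1e6, 1e9, 1e12):
    print(f"n = {n:.0e}: 100 ln n = {log_bound(n):7.1f}, "
          f"(2/0.1) n^0.1 = {poly_bound(n, 0.1):6.1f}")
```

With these particular constants the polynomial bound wins by a wide margin at every practical size, which is exactly the kind of conclusion the asymptotic forms alone cannot support.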
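
And for the second option, here is a sketch of the fitting step, assuming measured average ratios are available. The data below is fabricated to follow $10 \ln n$ exactly, just to show the mechanics:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit c * ln(n) to measured average approximation ratios.
# The "measured" values below are placeholders that follow 10 * ln n;
# in practice they would come from actually running the algorithm.
ns = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
avg_ratios = np.array([69.1, 92.1, 115.1, 138.2, 161.2])  # placeholder data

def model(n, c):
    return c * np.log(n)

(c_hat,), _cov = curve_fit(model, ns, avg_ratios)
print(f"empirical fit: average ratio ~ {c_hat:.2f} * ln n")  # ~ 10.00 here
```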


Question Source : http://cs.stackexchange.com/questions/41700
