
[Answers] Approximating the sign function by tanh in multi-layer perceptrons

Problem Detail: 

This is an exercise problem from the book "Learning from Data".

Given $w_1$ and $\epsilon \gt 0$, find $w_2$ such that $$\lvert \mbox{sign}(w_1^Tx_n)-\tanh(w_2^Tx_n)\rvert \le \epsilon,$$ where $w_1, w_2, x_n$ are all real vectors.

I know that for large $y$, $\mbox{sign}(y) \approx \tanh(y)$. But I am unable to proceed. Any help would be appreciated.

Thanks

Asked By : CSStudent

Answered By : Lieuwe Vinkhuijzen

Here's one solution. The central insight is that the error becomes exponentially smaller as $w^Tx$ increases. In particular, $\tanh(x)$ approaches $1$ exponentially fast, i.e. $1-\tanh(x)$ decays to $0$ exponentially. That is, for positive $x$:

$$\frac{1-\tanh(x)}{1-\tanh(x+1)} \geq \frac{1-\tanh(0)}{1-\tanh(1)} \approx 4.19\ldots = \alpha$$
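As a quick numerical sanity check of this bound (a sketch added here, assuming NumPy; it is not part of the original answer), the snippet below evaluates the ratio on a grid of non-negative $x$ and confirms it never drops below $\alpha$:

```python
# Sanity check of the ratio bound (1 - tanh(x)) / (1 - tanh(x + 1)) >= alpha
# for x >= 0, where alpha = (1 - tanh(0)) / (1 - tanh(1)) ~= 4.19.
import numpy as np

alpha = (1 - np.tanh(0)) / (1 - np.tanh(1))        # ~= 4.1945
x = np.linspace(0, 10, 1001)
ratio = (1 - np.tanh(x)) / (1 - np.tanh(x + 1))

print(f"alpha          = {alpha:.4f}")
print(f"min ratio      = {ratio.min():.4f}")       # attained at x = 0, equals alpha
print(f"ratio at x=10  = {ratio[-1]:.4f}")         # close to e^2 ~= 7.389
```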

In fact, the ratio tends to $e^2 \approx 7.39$ as $x \to \infty$. I didn't know that, but Wolfram Alpha did. You can use this fact to estimate your error. Let $\tau = \mbox{sign}(w_1^Tx)$ be the target, and let $E(w_2^Tx)=\lvert\tau-\tanh(w_2^Tx)\rvert$ be the error. Pick a weight vector $w$ with $w^Tx=\tau$; for instance $w = w_1/\lvert w_1^Tx\rvert$ works, assuming $w_1^Tx \neq 0$. Since $\tanh$ is odd, $E(k\,w^Tx) = 1-\tanh(k)$, and $E(k\,w^Tx) \geq \alpha \cdot E((k+1)\,w^Tx)$ for every $k \geq 0$, because of the exponential improvement we just discovered. This is great; now we've reduced the problem to finding a good $k$ and setting $w_2 = k\,w$. We know that for $k=0$ the estimate is $\tanh(0)=0$, so $E(0)=1$, and every time you increment $k$ by $1$ the error shrinks by a factor of at least $\alpha \approx 4.19$. So, we solve the following simple inequality:

$$\frac{1}{\alpha^k} \leq \epsilon, \quad\text{so} \quad k \geq \log_\alpha\!\left(\frac{1}{\epsilon}\right)$$
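Putting the pieces together, here is a minimal sketch (added here, not from the original answer) of the construction: rescale $w_1$ so that $w^Tx_n = \mbox{sign}(w_1^Tx_n)$, pick $k \geq \log_\alpha(1/\epsilon)$, and return $w_2 = k\,w$. The function name and the sample numbers are made up for illustration, and it assumes $w_1^Tx_n \neq 0$.

```python
# Minimal sketch of the construction above; assumes w1 . xn != 0.
import math
import numpy as np

def approximate_sign_weights(w1, xn, eps):
    """Return w2 such that |sign(w1.xn) - tanh(w2.xn)| <= eps."""
    s = np.dot(w1, xn)
    assert s != 0, "the construction assumes x_n is not on the decision boundary"
    w = w1 / abs(s)                                   # now w.xn = sign(w1.xn) = +/- 1
    alpha = (1 - math.tanh(0)) / (1 - math.tanh(1))   # ~= 4.19
    k = max(1, math.ceil(math.log(1 / eps, alpha)))   # k >= log_alpha(1 / eps)
    return k * w

# Illustrative usage with made-up numbers.
w1 = np.array([0.3, -1.2, 0.5])
xn = np.array([1.0, 0.4, -2.0])
eps = 1e-3
w2 = approximate_sign_weights(w1, xn, eps)
err = abs(np.sign(np.dot(w1, xn)) - np.tanh(np.dot(w2, xn)))
print(err, err <= eps)                                # prints a value <= eps and True
```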

Hope this helps.

Best Answer from Stack Exchange

Question Source : http://cs.stackexchange.com/questions/48220
