
[Solved]: How do you determine the number of errors in the Welch-Berlekamp algorithm?

Problem Detail: 

In the Welch-Berlekamp algorithm for decoding Reed-Solomon codes, one is given a list of points $(a_i, b_i)$ representing a message in which $e$ of the values $b_i$ are in error at unknown locations (and $e$ is given to the algorithm). The output is a polynomial passing through all of the given points except those at which errors occurred.

The method involves solving a system of linear equations of the form

$$b_i E(a_i) = Q(a_i)$$

for all $i$, where $E$ has degree exactly $e$ and $Q$ has degree at most $e + k - 1$. The variables are the coefficients of $E$ and $Q$.

To ensure that $E$ has degree $e$, one usually adds the constraint that the coefficient of $x^e$ is 1 to the linear system above. However, in practice one doesn't necessarily know $e$. One inefficient (but still polynomial-time) way to deal with this is to try values of $e$ starting at $(n+k-1)/2 - 1$ and going down until a solution is found.
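The search just described can be sketched as follows. This is my own illustration, not code from the post: it assumes a prime modulus `p`, sets free variables to 0 rather than 1, and starts the search at $(n-k)/2$, the standard unique-decoding bound, rather than the starting value quoted above.

```python
# Sketch of the try-each-e strategy over a prime field GF(p).
# For each candidate e: build the linear system b_i*E(a_i) - Q(a_i) = 0
# plus the constraint that E's leading coefficient is 1, solve it, and
# accept the first solution whose E divides Q exactly.

def solve_mod(A, rhs, p):
    """Gaussian elimination over GF(p); returns one solution
    (free variables set to 0) or None if inconsistent."""
    rows, cols = len(A), len(A[0])
    M = [row[:] + [b] for row, b in zip(A, rhs)]
    piv_cols, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)          # Fermat inverse (p prime)
        M[r] = [x * inv % p for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c]:
                f = M[i][c]
                M[i] = [(x - f * y) % p for x, y in zip(M[i], M[r])]
        piv_cols.append(c)
        r += 1
    if any(M[i][cols] for i in range(r, rows)):
        return None                           # 0 = nonzero: no solution
    x = [0] * cols
    for i, c in enumerate(piv_cols):
        x[c] = M[i][cols]
    return x

def poly_divmod(num, den, p):
    """Polynomial division over GF(p), coefficients low-degree first."""
    num, d = num[:], len(den) - 1
    inv = pow(den[-1], p - 2, p)
    quot = [0] * max(len(num) - d, 1)
    for i in range(len(num) - 1, d - 1, -1):
        c = num[i] * inv % p
        quot[i - d] = c
        for j, dc in enumerate(den):
            num[i - d + j] = (num[i - d + j] - c * dc) % p
    return quot, num[:d]

def wb_decode(points, k, p):
    """Try e = (n-k)//2 down to 0 until the solved-for E divides Q."""
    n = len(points)
    for e in range((n - k) // 2, -1, -1):
        A, rhs = [], []
        for a, b in points:                   # row: b_i E(a_i) - Q(a_i) = 0
            A.append([b * pow(a, j, p) % p for j in range(e + 1)]
                     + [(-pow(a, j, p)) % p for j in range(e + k)])
            rhs.append(0)
        lead = [0] * (2 * e + k + 1)          # constraint: coeff of x^e is 1
        lead[e] = 1
        A.append(lead)
        rhs.append(1)
        sol = solve_mod(A, rhs, p)
        if sol is None:
            continue
        E, Q = sol[:e + 1], sol[e + 1:]
        P, rem = poly_divmod(Q, E, p)
        if not any(rem):
            return P[:k]                      # message coefficients
    return None
```

For instance, `wb_decode([(0, 2), (1, 0), (2, 3), (3, 1), (4, 4)], 3, 7)` recovers the coefficient list `[2, 3, 2]`.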

My question is: is there a more efficient way to determine $e$? Alternatively, is there a modification to the linear system that allows one to use an upper bound on $e$ instead of the exact value?

In particular I want to use this specific decoder for Reed-Solomon codes, and not a completely different algorithm based on other techniques.


In response to DW's answer, here is my working example. Everything is done modulo 7.

```
plain message is: [2, 3, 2]
polynomial is: 2 + 3 t^1 + 2 t^2
encoded message is: [[0, 2], [1, 0], [2, 2], [3, 1], [4, 4]]
corrupted message is: [[0, 2], [1, 0], [2, 3], [3, 1], [4, 4]]
```

So the error is in the third point.
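A quick sanity check of the encoding above (a throwaway script of mine, not the asker's code): evaluating $P(t) = 2 + 3t + 2t^2$ modulo 7 at $t = 0, \dots, 4$ reproduces the encoded message, and the corruption touches only the point with $a = 2$.

```python
p = 7
coeffs = [2, 3, 2]                    # P(t) = 2 + 3t + 2t^2
P = lambda t: sum(c * t**i for i, c in enumerate(coeffs)) % p
encoded = [[a, P(a)] for a in range(5)]
print(encoded)                        # [[0, 2], [1, 0], [2, 2], [3, 1], [4, 4]]
corrupted = [[0, 2], [1, 0], [2, 3], [3, 1], [4, 4]]
# positions where the corrupted message disagrees with the codeword
print([a for (a, b), (_, d) in zip(encoded, corrupted) if b != d])   # [2]
```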

When $e=2$ the polynomial equation in question is

$$b_i(e_0 + e_1x + e_2x^2) - q_0 - q_1x - q_2 x^2 - q_3x^3 - q_4x^4 = 0$$

And plugging in $x = 0,1,2,3,4$ gives the system in matrix form:

```
[2, 0, 0, 6, 0, 0, 0, 0, 0]
[0, 0, 0, 6, 6, 6, 6, 6, 0]
[3, 6, 5, 6, 5, 3, 6, 5, 0]
[1, 3, 2, 6, 4, 5, 1, 3, 0]
[4, 2, 1, 6, 3, 5, 6, 3, 0]
[0, 0, 1, 0, 0, 0, 0, 0, 1]
```

The last row is the constraint that $e_2 = 1$. Applying Gaussian elimination, we get

```
[1, 0, 0, 0, 0, 0, 1, 4, 0]
[0, 1, 0, 0, 0, 0, 3, 3, 1]
[0, 0, 1, 0, 0, 0, 0, 0, 1]
[0, 0, 0, 1, 0, 0, 2, 1, 0]
[0, 0, 0, 0, 1, 0, 2, 2, 5]
[0, 0, 0, 0, 0, 1, 4, 5, 2]
```

Picking 1 for both free variables, we get the solution vector

```
[2, 2, 1, 4, 1, 0, 1, 1]
```

which translates to

```
E is 2 + 2 t^1 + 1 t^2
Q is 4 + 1 t^1 + 0 t^2 + 1 t^3 + 1 t^4
```

And $E$ does not divide $Q$. Note that $Q$ factors as $(t + 6)(t^3 + 2t^2 + 2t + 3) \bmod 7$.
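As a cross-check (a short script of mine, not part of the original post), this $e = 2$ solution really does satisfy every linear constraint $b_i E(a_i) = Q(a_i) \bmod 7$; the failure is only in the divisibility step.

```python
p = 7
pts = [(0, 2), (1, 0), (2, 3), (3, 1), (4, 4)]   # corrupted message
ev = lambda cs, t: sum(c * pow(t, i, p) for i, c in enumerate(cs)) % p
E = [2, 2, 1]            # E = 2 + 2t + t^2
Q = [4, 1, 0, 1, 1]      # Q = 4 + t + t^3 + t^4
# every constraint b_i * E(a_i) = Q(a_i) holds mod 7
print(all(b * ev(E, a) % p == ev(Q, a) for a, b in pts))   # True
```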

For $e=1$ I get a good solution:

```
system is:

[2, 0, 6, 0, 0, 0, 0]
[0, 0, 6, 6, 6, 6, 0]
[3, 6, 6, 5, 3, 6, 0]
[1, 3, 6, 4, 5, 1, 0]
[4, 2, 6, 3, 5, 6, 0]
[0, 1, 0, 0, 0, 0, 1]

reduced system is:

[1, 0, 0, 0, 0, 0, 5]
[0, 1, 0, 0, 0, 0, 1]
[0, 0, 1, 0, 0, 0, 3]
[0, 0, 0, 1, 0, 0, 3]
[0, 0, 0, 0, 1, 0, 6]
[0, 0, 0, 0, 0, 1, 2]

solution is [5, 1, 3, 3, 6, 2]
Q is 3 + 3 t^1 + 6 t^2 + 2 t^3
E is 5 + 1 t^1
P(x) = 2 + 3 t^1 + 2 t^2  # this is correct!
r(x) = 0
```
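Both division claims can be verified with a short polynomial-division routine over GF(7) (my own sketch; coefficient lists are lowest degree first): the $e = 2$ pair leaves a nonzero remainder, while the $e = 1$ pair divides exactly and recovers the message polynomial.

```python
def poly_divmod(num, den, p):
    """Divide num by den over GF(p); return (quotient, remainder)."""
    num, d = num[:], len(den) - 1
    inv = pow(den[-1], p - 2, p)          # inverse of the leading coefficient
    quot = [0] * (len(num) - d)
    for i in range(len(num) - 1, d - 1, -1):
        c = num[i] * inv % p
        quot[i - d] = c
        for j, dc in enumerate(den):
            num[i - d + j] = (num[i - d + j] - c * dc) % p
    return quot, num[:d]

# e = 2 attempt: Q = 4 + t + t^3 + t^4, E = 2 + 2t + t^2
print(poly_divmod([4, 1, 0, 1, 1], [2, 2, 1], 7))
# ([0, 6, 1], [4, 3])  -> remainder 4 + 3t, nonzero

# e = 1 attempt: Q = 3 + 3t + 6t^2 + 2t^3, E = 5 + t
print(poly_divmod([3, 3, 6, 2], [5, 1], 7))
# ([2, 3, 2], [0])     -> P = 2 + 3t + 2t^2, zero remainder
```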

Note that while the counterexample above was generated by code I wrote from scratch (it was basically the first thing I tried), the solutions can be checked by hand, so even if my code is buggy, it is still a valid counterexample to the claim that using $e = 2$ works.

Asked By : JeremyKun

Answered By : D.W.

The same procedure actually works to correct any number of errors up to $e$.

The requirement is that the error polynomial $E(x)$ has to be zero at every point $a_i$ where there was an error. Nothing says it has to be zero at only those points; you can have an $E(x)$ that is zero at other points too, and that's OK, as long as its degree is $e$.

So, if $e$ is an upper bound on the number of errors, there will exist a polynomial $E(x)$ with all the desired properties (i.e., has degree exactly $e$ and is zero at every point where there is an error). For instance, if there are fewer than $e$ errors, then there exists a polynomial $E(x)$ that's zero at every error, and zero at a few more points to get the number of zeros up to exactly $e$.

Finally, the correctness theorem says that if such a polynomial $E(x)$ exists, then the Berlekamp-Welch algorithm will be able to find it. So, even if there are fewer than $e$ errors, the procedure will still work correctly to identify $E(x)$. Once you have $E(x)$, you can identify $n-e$ positions that are error-free, and then you can decode in a straightforward way.
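To make the padding argument concrete, here is a small illustration of mine (hypothetical numbers, mod 7): with zero actual errors and $e = 1$, the locator $E(t) = t$ has degree exactly 1, its only root $t = 0$ sits at an error-free position, and with $Q = P \cdot E$ every constraint $b_i E(a_i) = Q(a_i)$ still holds, because $b_i = P(a_i)$ at every point.

```python
p = 7
P = lambda t: (2 + 3 * t + 2 * t * t) % p
points = [(a, P(a)) for a in range(5)]   # an error-free codeword
E = lambda t: t % p                      # degree-1 E whose root is an "extra" zero
Q = lambda t: P(t) * E(t) % p            # Q = P * E, degree 3 <= e + k - 1
print(all(b * E(a) % p == Q(a) for a, b in points))   # True
```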


To document the result of the conversation about the "counterexample" in the question:

That's actually not a valid counterexample. The flaw was in the calculation of how many errors you should expect Berlekamp-Welch to be able to correct. The distance is $n-k+1$, so you should expect it to be able to correct up to $(n-k)/2$ errors (as Ran G. points out). In your counterexample $n=5$ and $k=3$, so $(n-k)/2=1$, so you should only expect this procedure to be able to correct one error, i.e., $e=1$. So, when you ran the procedure on an example with $e=2$, there's no reason to expect the procedure to work correctly.
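For the example's parameters, the arithmetic works out as follows (a trivial check of my own):

```python
n, k = 5, 3
print(n - k + 1)       # 3: minimum distance of the code
print((n - k) // 2)    # 1: errors correctable by unique decoding
```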

So, the counterexample isn't actually a counterexample, and it doesn't contradict my answer above.


Question Source : http://cs.stackexchange.com/questions/45265
