**Problem Detail:**

I have an ordered set of $M$ software modules compiled together. The interaction of some $N$-tuple of these modules is causing a bug when the program is run.

I can run the program with any desired subset of the modules enabled/disabled, so I can test the program by running it with a certain subset of modules enabled and seeing whether it crashes or not (call that "a test"). I would like to identify which $N$ modules are causing this issue with the minimal number of tests. Assume that any particular test will lead to a crash if and only if it includes all of the $N$ problematic modules (and possibly others; but at least those $N$).

$N$ is known ahead of time.

For $N=1$, I can efficiently identify the culprit in $\log_2(M)$ tests by binary search.
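Concretely, that binary search can be sketched like this (assuming a hypothetical `crashes(modules)` oracle that runs the program with exactly `modules` enabled and reports whether it crashed):

```python
def find_single_culprit(modules, crashes):
    """Binary search for the N = 1 case: the crash occurs iff the single
    culprit is enabled, so testing one half of the candidates tells us
    which half contains it."""
    candidates = list(modules)
    while len(candidates) > 1:
        mid = len(candidates) // 2
        first_half = candidates[:mid]
        # Run the program with only the first half enabled.
        if crashes(first_half):
            candidates = first_half
        else:
            candidates = candidates[mid:]
    return candidates[0]
```

With $M = 16$ modules this needs exactly 4 tests.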

For other values of $N$, what approach can I take to identify the $N$ culprits?

###### Asked By : mappu

#### Answered By : D.W.

Now that the question has been edited to make it clearer what you are asking, there is a simple and elegant answer. The solution is: delta debugging. Delta debugging solves exactly this problem.

Let me elaborate.

## Problem statement

I am interpreting your question as asking about the following problem:

We have $M$ software modules, and some way to run the program in a way where only a subset of them are active (and we can choose the subset freely).

There is a subset $S$ of $N$ of the modules. There's a fault that is triggered whenever we run the program with all of those modules active. In other words, if the set of active modules is a superset of $S$, a fault will occur; otherwise, no fault will occur.

The fault is perfectly reproducible and detectable.

Now, we want to find the set $S$ using as few tests (invocations of the program) as possible.

## Solution: delta debugging

Delta debugging will solve this problem. If you have $M$ software modules, and some subset of $N$ of them is responsible for the fault, delta debugging can find the minimal subset that triggers the fault.

Delta debugging works by starting with all of the modules active, and gradually removing some of them. It picks out some chunk of modules, and tries running the program with those modules inactive. If the fault still occurs, then we know that none of them are responsible for the fault, so they can be permanently deactivated in all future tests. Delta debugging starts with large chunks, and gradually reduces the chunk size: it first tries deactivating the first $M/2$ modules, then deactivating the second $M/2$ modules; if neither of those offers any progress, then it tries deactivating the first $M/4$ modules, then the second $M/4$ modules, then the third $M/4$ modules, and then the last $M/4$ modules; then it moves on to chunks of size $M/8$; and so on.
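The chunk-removal strategy above can be sketched as follows. This is a simplified variant of the ddmin algorithm (not a full implementation), again assuming a hypothetical `crashes(modules)` oracle:

```python
def delta_debug(modules, crashes):
    """Repeatedly try deactivating a chunk of the active modules; if the
    fault still occurs without them, they are not needed and stay removed.
    Start with large chunks and halve the chunk size when stuck."""
    active = list(modules)
    n = 2  # number of chunks the active set is split into
    while len(active) > 1:
        chunk = max(1, len(active) // n)
        removed_any = False
        i = 0
        while i < len(active):
            trial = active[:i] + active[i + chunk:]
            if trial and crashes(trial):
                # The removed chunk is irrelevant to the fault:
                # deactivate it permanently.
                active = trial
                removed_any = True
            else:
                i += chunk
        if removed_any:
            n = 2  # progress was made; reset to coarse granularity
        elif chunk == 1:
            break  # no single module can be removed: `active` is minimal
        else:
            n = min(len(active), n * 2)  # halve the chunk size
    return active
```

Because a test crashes exactly when the active set is a superset of $S$, every removal this procedure keeps is safe, and it terminates with `active` equal to $S$.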

For more details, allow me to point you to a nice description of delta debugging.

## Delta debugging is efficient

In this setting, delta debugging takes roughly $O(N \lg M)$ tests to find the subset.

You might notice how this is a nice generalization of binary search. When $N=1$, binary search needs $O(\lg M)$ tests, just as delta debugging does. The difference is that delta debugging also scales nicely when $N>1$.

You might also notice that this is close to optimal. We can prove a lower bound of roughly $\Omega(N \lg(M/N))$ tests. In particular, there are ${M \choose N}$ possibilities for the subset $S$, and each test provides at most one bit of information (either it triggers a fault or it does not), so we need at least $\lg {M \choose N}$ tests. Using the standard bound ${M \choose N} \ge (M/N)^N$,

$$\lg {M \choose N} \ge \lg \left(\frac{M}{N}\right)^{\!N} = N(\lg M - \lg N) = N \lg(M/N).$$
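As a quick numeric sanity check of this information-theoretic bound (the values $M = 1024$, $N = 8$ are my own choice for illustration):

```python
import math

M, N = 1024, 8

# Exact information content: lg C(M, N), the number of bits needed
# to pin down which N-subset of the M modules is the culprit set.
exact = math.log2(math.comb(M, N))

# The lower bound N * lg(M/N) = 8 * lg(128) = 8 * 7 = 56 bits.
lower = N * math.log2(M / N)

assert exact >= lower
```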

Therefore, delta debugging is approximately asymptotically optimal, in terms of the number of tests required.

For a more detailed performance analysis, see this analysis by Jesse Ruderman.

## Discussion

That said, I am somewhat skeptical of whether this will be useful in practice. A crucial assumption is that you can run the program with a chosen subset of its modules active and the rest artificially deactivated, and it is not obvious how to do that in practice.

I suspect your theoretical model of the problem probably needs to be adjusted, but since you didn't tell us anything about your particular application setting or the justification for your model, I don't have enough information to specify how to fix the model.

Typically, delta debugging is used for a different purpose: to reduce the size of the test case, not the size of the program. Say you have a test case with $M$ lines that crashes your program, but only a subset of $N$ of the lines is actually necessary to trigger the crash (the rest are irrelevant), and you want to find a minimal subset of lines. In other words, given a long, complex test case that crashes your program, you want to "minimize" it: to find a smaller test case that also crashes the program, because small test cases usually make debugging easier. Delta debugging solves that problem too; in fact, that is what it was invented for and what it is most commonly used for.

Question Source : http://cs.stackexchange.com/questions/27911
