
[Answers] A clarification on the taxonomy of Evolutionary Algorithms

Problem Detail: 

A rather basic question, but I am confused about how to characterize a certain local search method that I want to describe in the framework of EAs. In particular, consider an EA which in every step of the evolution has a neighborhood of size 2: one element of the neighborhood is the current hypothesis (hypothesis in the traditional sense of learning theory), and the other is a hypothesis that has arisen from the current one by applying some transformation/mutation.

It is clear that for such algorithms the neighborhood contains at most one hypothesis with strictly better fitness than the current hypothesis (the current hypothesis is, of course, neutral with respect to itself). Thus my question is: how would one characterize an algorithm of this form that always picks from the beneficial set if that set is non-empty, and otherwise picks from the neutral set (at random, or with some prescribed probability distribution)? Note that the neutral set is always non-empty, since the current hypothesis is always in it. So, can this algorithm be described as a $(1+1)$-EA, or should it be described as a $(1, 1)$-EA? Is there a difference between a $(1+1)$-EA and a $(1, 1)$-EA?

And while we are at it, I would be indebted if anyone could clarify the following: what is the difference between a $(1+k)$-EA and a $(1, k)$-EA for $k > 1$? As far as I understand, the comma notation (i.e. $(1, k)$-EA) refers to the fact that the algorithm picks the fittest hypothesis among the $k+1$ elements in the neighborhood. My understanding of $(1+k)$-EAs is that they pick at random (however that is defined) among the hypotheses that have strictly larger fitness than the current hypothesis. Is this understanding correct?

Thank you in advance for your time.

Asked By : MightyMouse

Answered By : deong

I'm not 100% sure I understand your formulation (things like "our current hypothesis is neutral compared to our current hypothesis" are a bit confusing), but if I understand it correctly, it's this:

You have a current solution and at each time step, you generate a new one based on the current one and some random variation operator. If the new one is better than the current one, you accept the new one. Otherwise, you keep the current one.

That's a $(1+1)$-ES.
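As a concrete illustration, here is a minimal sketch of that accept-if-strictly-better loop in Python. The bit-string representation, the OneMax fitness function, and the bit-flip mutation operator are illustrative assumptions on my part, not anything from the question itself.

```python
import random

def onemax(x):
    """Toy fitness to maximize: number of ones in a bit string (an assumption for the demo)."""
    return sum(x)

def mutate(x, p=None):
    """Flip each bit independently with probability p (default 1/n)."""
    p = p if p is not None else 1.0 / len(x)
    return [1 - b if random.random() < p else b for b in x]

def one_plus_one_es(n=20, steps=1000):
    """(1+1)-ES/EA: one parent, one offspring per step, parent survives unless the offspring is strictly better."""
    current = [random.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        candidate = mutate(current)
        # "+" selection: keep the current hypothesis unless the new one is strictly fitter.
        if onemax(candidate) > onemax(current):
            current = candidate
    return current

if __name__ == "__main__":
    best = one_plus_one_es()
    print(onemax(best), best)
```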

A $(1,1)$-ES always accepts the new hypothesis/solution, regardless of quality; it is just a random walk on the graph defined by your variation operator. For a $(\mu, \lambda)$-ES, the algorithm generates $\lambda$ hypotheses at each step and selects the best $\mu$ of them to form the population for the next generation -- the current hypotheses never survive (unless your variation operator produces a copy of one of them). The $(\mu+\lambda)$ strategy takes the best $\mu$ from the union of the current population and the new offspring.
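The distinction between the two selection schemes is easy to see side by side. The sketch below is a hedged illustration, not deong's own code: `fitness` is any maximization objective and `mutate` is any variation operator, both placeholder names I introduce for the example.

```python
import random

def comma_selection(parents, lam, mutate, fitness):
    """(mu, lambda) selection: the next generation is drawn only from the lambda offspring.
    Requires lam >= len(parents); parents never survive."""
    mu = len(parents)
    offspring = [mutate(random.choice(parents)) for _ in range(lam)]
    return sorted(offspring, key=fitness, reverse=True)[:mu]

def plus_selection(parents, lam, mutate, fitness):
    """(mu + lambda) selection: parents compete with offspring, so fitness never decreases."""
    mu = len(parents)
    offspring = [mutate(random.choice(parents)) for _ in range(lam)]
    return sorted(parents + offspring, key=fitness, reverse=True)[:mu]
```

With `mu = lam = 1`, `plus_selection` reduces to the accept-if-better loop above, while `comma_selection` reduces to the random walk.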


Question Source : http://cs.stackexchange.com/questions/41827


