Making Your Own Strategy: A Simple EDA

As seen in the Covariance Matrix Adaptation Evolution Strategy example, the eaGenerateUpdate() algorithm is suitable for strategies that learn the problem distribution from the population. Here we'll cover how to implement a strategy that generates individuals from a sampling distribution updated from the previously sampled population.

Estimation of distribution

The basic concept behind EDA is to sample \lambda individuals from a certain distribution and estimate the problem distribution from the \mu best of them. This very simple concept adheres to the generate-update logic: the strategy contains a random number generator whose parameters are adapted from the population. The following EDA class does just that.
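In symbols, one generation of this strategy can be written as follows (a sketch matching the sampling and update steps performed by the class below, with x_{(1)}, ..., x_{(\mu)} denoting the \mu best sampled individuals):

```latex
% Sampling: each of the \lambda individuals is drawn independently
x_i \sim \mathcal{N}\!\left(\mathit{loc},\, \operatorname{diag}(\sigma^2)\right),
\qquad i = 1, \ldots, \lambda

% Update: re-estimate the centre and the per-dimension standard deviation
\mathit{loc}' = \frac{1}{\mu} \sum_{i=1}^{\mu} x_{(i)},
\qquad
\sigma'_j = \sqrt{\frac{1}{\mu - 1} \sum_{i=1}^{\mu}
    \left(x_{(i),j} - \mathit{loc}'_j\right)^2}
```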

import numpy
from operator import attrgetter

class EDA(object):
    def __init__(self, centroid, sigma, mu, lambda_):
        self.dim = len(centroid)
        self.loc = numpy.array(centroid)
        self.sigma = numpy.array(sigma)
        self.lambda_ = lambda_
        self.mu = mu
    
    def generate(self, ind_init):
        # Generate lambda_ individuals and put them into the provided class
        arz = self.sigma * numpy.random.randn(self.lambda_, self.dim) + self.loc
        return list(map(ind_init, arz))
    
    def update(self, population):
        # Sort individuals so the best (highest fitness) come first
        sorted_pop = sorted(population, key=attrgetter("fitness"), reverse=True)
        
        # Deviations of the mu best individuals from the current centre
        z = numpy.array(sorted_pop[:self.mu]) - self.loc
        avg = numpy.mean(z, axis=0)
        
        # Re-estimate the variance of the distribution in each dimension
        self.sigma = numpy.sqrt(numpy.sum((z - avg)**2, axis=0) / (self.mu - 1.0))
        self.loc = self.loc + avg

A normal random number generator is initialized with a mean (centroid) and a standard deviation (sigma) for each dimension. The generate() method uses numpy to draw lambda_ samples in dim dimensions, then uses these samples to initialize individuals of the class given in the ind_init argument. Finally, update() computes the average (new centre) of the mu best individuals and estimates the variance of each attribute over those individuals. Once update() has been called, the distribution's parameters are changed and a new population can be generated.
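To see the generate-update loop in action without DEAP, here is a minimal standalone sketch of the same strategy. The sphere objective (sum of squares) and all parameter values here are illustrative assumptions, not part of the example above:

```python
import numpy

# Hypothetical settings, mirroring the EDA strategy above on a small problem
numpy.random.seed(0)
dim, lambda_, mu = 5, 200, 50
loc = numpy.full(dim, 5.0)     # initial centroid
sigma = numpy.full(dim, 5.0)   # initial standard deviations

for gen in range(50):
    # generate: sample lambda_ individuals from N(loc, diag(sigma**2))
    pop = sigma * numpy.random.randn(lambda_, dim) + loc
    # evaluate: sphere function (to be minimized), then keep the mu best
    fitness = numpy.sum(pop**2, axis=1)
    best = pop[numpy.argsort(fitness)[:mu]]
    # update: re-estimate the centre and per-dimension standard deviation
    z = best - loc
    avg = numpy.mean(z, axis=0)
    sigma = numpy.sqrt(numpy.sum((z - avg)**2, axis=0) / (mu - 1.0))
    loc = loc + avg

print(loc, sigma)
```

Note that this plain variance re-estimation can shrink sigma faster than the centre moves, a known premature-convergence weakness of simple EDAs.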

Objects Needed

Two classes are needed: a minimization fitness and an individual that combines the fitness and the real-valued attributes. Moreover, we will use numpy.ndarray as the base class for our individuals.

creator.create("FitnessMin", base.Fitness, weights=(-1.0,))
creator.create("Individual", numpy.ndarray, fitness=creator.FitnessMin) 

Operators

The eaGenerateUpdate() algorithm requires an evaluation function, a generate method, and an update method to be registered in a toolbox. We will use the methods of an initialized EDA. When registering the generate method, we pass our Individual class, which holds a fitness, so that the generated individuals are transferred into it.

def main():
    N, LAMBDA = 30, 1000
    MU = int(LAMBDA/4)
    strategy = EDA(centroid=[5.0]*N, sigma=[5.0]*N, mu=MU, lambda_=LAMBDA)
    
    toolbox = base.Toolbox()
    toolbox.register("evaluate", benchmarks.rastrigin)
    toolbox.register("generate", strategy.generate, creator.Individual)
    toolbox.register("update", strategy.update)

    hof = tools.HallOfFame(1)
    stats = tools.Statistics(lambda ind: ind.fitness.values)
    stats.register("avg", numpy.mean)
    stats.register("std", numpy.std)
    stats.register("min", numpy.min)
    stats.register("max", numpy.max)
    
    algorithms.eaGenerateUpdate(toolbox, ngen=150, stats=stats, halloffame=hof)
    
    return hof[0].fitness.values[0]

The complete example: [source code].
