## Practical Evolutionary Algorithms

A practical book on Evolutionary Algorithms that teaches you the concepts and how they’re implemented in practice.

Get the book

## Preamble

import numpy as np                   # for multi-dimensional containers


## Introduction

In an earlier section, we briefly covered selection in single-objective problems. There is a fundamental difference between selection in single-objective problems and selection in multi-objective problems. In a single-objective problem, we look for the single solution with the best objective value, whereas this is not possible in a multi-objective problem because its objectives often conflict: a solution with the best value in one objective will typically be degraded in one or more of the other objectives. This is where a trade-off emerges in the objective space, and it is unlikely that a single solution exists that can be considered optimal.

Therefore, the solution to a multi-objective optimisation problem is not a single solution vector, but instead an approximation set. This is a set of many candidate solutions that present trade-offs between the multiple objectives, where any improvement in one objective value will result in the degradation of one or more of the other objective values. This notion of "optimum" solutions is called Pareto-optimality.

Pareto-optimality and other approaches to determining dominance relationships between multiple solutions in a population are important during the selection stage of an optimisation algorithm, highlighted below.

## Notation

Let's define a multi-objective function $f\left(x\right)$ consisting of two objectives.

$$f(x) = \big(f_1(x), f_2(x)\big) \tag{1}$$

We have a population $\mathrm{X}$ of $\mathrm{N}$ candidate solutions.

$\mathbf{X} = \langle \mathrm{X}_1, \mathrm{X}_2, \dots, \mathrm{X}_\mathrm{N} \rangle$

where $\mathrm{N}$ refers to the number of solutions in the population, ${\mathrm{X}}_{n}$ refers to the $n$-th solution in the population, and ${x}_{dn}$ refers to the $d$-th decision variable of the $n$-th solution in the population.

${\mathrm{X}}_{n} = \langle x_{1n}, x_{2n}, \dots, x_{\mathrm{D}n} \rangle$

We will also define the corresponding objective values $\mathrm{Y}$ which are calculated when evaluating the function in Equation 1.

$\mathbf{Y} = \langle \mathrm{Y}_1, \mathrm{Y}_2, \dots, \mathrm{Y}_\mathrm{N} \rangle$

where $\mathrm{N}$ refers to the number of objective value sets, ${\mathrm{Y}}_{n}$ refers to the $n$-th set of objective values in the population, and ${y}_{mn}$ refers to the $m$-th objective value of the $n$-th set of objective values in the population.

${\mathrm{Y}}_{n} = \langle y_{1n}, y_{2n} \rangle$
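
To make this notation concrete, here is a minimal sketch that builds a population $\mathbf{X}$ and its objective values $\mathbf{Y}$. The two-objective function below is a hypothetical example chosen only for illustration, not one used elsewhere in the book.

```python
import numpy as np

# hypothetical two-objective function, chosen only to illustrate the notation:
# f1 favours small decision variables, f2 favours values close to one
def f(x):
    return np.array([np.mean(x), np.mean((x - 1) ** 2)])

D = 4                                        # decision variables per solution
N = 5                                        # solutions in the population
rng = np.random.default_rng(seed=42)

X = rng.random((N, D))                       # population: N rows of D decision variables
Y = np.array([f(X[n]) for n in range(N)])    # corresponding objective values

print(X.shape)                               # (5, 4): N solutions, D variables each
print(Y.shape)                               # (5, 2): one pair of objective values per solution
```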

## Dominance

When using or designing algorithms to solve multi-objective optimisation problems, we will often encounter the concept of domination. This concept is useful for comparing two solutions to determine whether one is better than the other.

We can now use our notation to define dominance relationships. Let's take two solutions to a two-objective problem: ${\mathrm{X}}_{1}$ and ${\mathrm{X}}_{2}$, with their corresponding objective values ${\mathrm{Y}}_{1} = \langle y_{1,1}, y_{2,1} \rangle$ and ${\mathrm{Y}}_{2} = \langle y_{1,2}, y_{2,2} \rangle$.

Definition 1: A solution ${\mathrm{X}}_{1}$ is said to dominate another solution ${\mathrm{X}}_{2}$, if both of the following conditions are satisfied:

1. The objective values of ${\mathrm{X}}_{1}$ are no worse than those of ${\mathrm{X}}_{2}$ in all objectives, i.e. for this two-objective problem ${f}_{m}\left({\mathrm{X}}_{1}\right)\le {f}_{m}\left({\mathrm{X}}_{2}\right)$ for all $m \in \{1, 2\}$.

2. The objective values of solution ${\mathrm{X}}_{1}$ are strictly better than at least one of those of solution ${\mathrm{X}}_{2}$, i.e. for this two-objective problem ${f}_{m}\left({\mathrm{X}}_{1}\right)<{f}_{m}\left({\mathrm{X}}_{2}\right)$ for at least one $m \in \{1, 2\}$.

If either of the two conditions is violated, the solution ${\mathrm{X}}_{1}$ does not dominate the solution ${\mathrm{X}}_{2}$. Otherwise, we can claim ${\mathrm{X}}_{1}$ dominates ${\mathrm{X}}_{2}$.

Definition 2: Two solutions ${\mathrm{X}}_{1}$ and ${\mathrm{X}}_{2}$ are said to be non-dominated with respect to each other if both of the following conditions are satisfied:

1. The objective values of solution ${\mathrm{X}}_{1}$ are strictly better than at least one of those of solution ${\mathrm{X}}_{2}$, i.e. for this two-objective problem ${f}_{m}\left({\mathrm{X}}_{1}\right)<{f}_{m}\left({\mathrm{X}}_{2}\right)$ for at least one $m \in \{1, 2\}$.

2. The objective values of solution ${\mathrm{X}}_{1}$ are strictly worse than at least one of those of solution ${\mathrm{X}}_{2}$, i.e. for this two-objective problem ${f}_{m}\left({\mathrm{X}}_{1}\right)>{f}_{m}\left({\mathrm{X}}_{2}\right)$ for at least one $m \in \{1, 2\}$.

## Selecting Solutions

Generally, one of our goals throughout the optimisation process is to select the best solutions. This means we are interested in solutions that either dominate others or are non-dominated. We are not interested in dominated solutions: for each of them, some other solution in the population is at least as good in every objective and strictly better in at least one.

Let's use Python to demonstrate these dominance relations that are often used for selection. Here, we will assume a minimisation problem, where smaller values are better. We will initialise four sets of solutions by synthetically assigning objective values that will demonstrate our dominance relations.

Our first set ${\mathrm{Y}}_{1}$ and ${\mathrm{Y}}_{2}$ will demonstrate the scenario where ${\mathrm{Y}}_{1}$ dominates ${\mathrm{Y}}_{2}$.

Y1 = np.array([0, 0.5])
Y2 = np.array([0.5, 0.5])


Our second set ${\mathrm{Y}}_{3}$ and ${\mathrm{Y}}_{4}$ will demonstrate the scenario where ${\mathrm{Y}}_{3}$ is identical to ${\mathrm{Y}}_{4}$.

Y3 = np.array([0.5, 0.5])
Y4 = np.array([0.5, 0.5])


Our third set ${\mathrm{Y}}_{5}$ and ${\mathrm{Y}}_{6}$ will demonstrate the scenario where ${\mathrm{Y}}_{5}$ and ${\mathrm{Y}}_{6}$ are non-dominated with respect to each other.

Y5 = np.array([0, 0.5])
Y6 = np.array([0.5, 0])


Our fourth set ${\mathrm{Y}}_{7}$ and ${\mathrm{Y}}_{8}$ will demonstrate the scenario where ${\mathrm{Y}}_{7}$ is dominated by ${\mathrm{Y}}_{8}$.

Y7 = np.array([0.5, 0.5])
Y8 = np.array([0, 0.25])


First, we will define a function to determine whether a solution dominates another or not.

def dominates(X1, X2):
    if np.any(X1 < X2) and np.all(X1 <= X2):
        return True
    else:
        return False


Now, let's test it with our four sets of objective values.

dominates(Y1, Y2)

True

dominates(Y3, Y4)

False

dominates(Y5, Y6)

False

dominates(Y7, Y8)

False

As expected, the only solution pairing that satisfies the criteria for dominance is our first set ${\mathrm{Y}}_{1}$ and ${\mathrm{Y}}_{2}$.

Next, let's define a function to determine whether two solutions are non-dominated.

def nondominated(X1, X2):
    if np.any(X1 < X2) and np.any(X1 > X2):
        return True
    else:
        return False


Again, we will test it with our four sets of objective values.

nondominated(Y1, Y2)

False

nondominated(Y3, Y4)

False

nondominated(Y5, Y6)

True

nondominated(Y7, Y8)

False

As expected, the only solution pairing that satisfies the criteria for non-dominance is our third set ${\mathrm{Y}}_{5}$ and ${\mathrm{Y}}_{6}$.

We can combine these two functions within a decision structure to determine the dominance relation between any two solutions.

def dominance_relation(X1, X2):
    if np.all(X1 == X2):
        print("The solutions are identical.")
    elif dominates(X1, X2):
        print("The first solution dominates the second.")
    elif nondominated(X1, X2):
        print("The two solutions are nondominating.")
    else:
        print("The first solution is dominated by the second.")


Finally, we will test it with our four sets of objective values.

dominance_relation(Y1, Y2)

The first solution dominates the second.

dominance_relation(Y3, Y4)

The solutions are identical.

dominance_relation(Y5, Y6)

The two solutions are nondominating.

dominance_relation(Y7, Y8)

The first solution is dominated by the second.
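
The pairwise checks above extend naturally to a whole population: a solution is worth keeping only if no other solution in the population dominates it. The following sketch (with arbitrary objective values chosen for illustration) reuses the same dominance test to filter out dominated solutions:

```python
import numpy as np

def dominates(X1, X2):
    # X1 dominates X2 (minimisation): no worse in every objective,
    # strictly better in at least one
    return np.any(X1 < X2) and np.all(X1 <= X2)

def nondominated_indices(Y):
    # indices of solutions not dominated by any other solution in Y
    return [n for n in range(len(Y))
            if not any(dominates(Y[m], Y[n]) for m in range(len(Y)) if m != n)]

Y = np.array([[0.0, 0.5],     # non-dominated
              [0.5, 0.5],     # dominated by [0.0, 0.5]
              [0.5, 0.0],     # non-dominated
              [0.25, 0.25]])  # non-dominated

print(nondominated_indices(Y))   # [0, 2, 3]
```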


## Visualisation

Dominance relations can be clearly visualised when working in a two-objective space. Let's do this with some arbitrary solutions. We'll use the ParetoFront visualisation type from PlotAPI.

The figure presents arbitrary solutions to a hypothetical minimisation problem. That is, the lower the values for Objective 1 and Objective 2, the better. PlotAPI ParetoFront has colour-coded the solutions according to their rank, where solutions of the same colour can be considered non-dominated with respect to each other.
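
As a rough sketch of how such rank colour-coding can be computed (this is an illustrative NumPy-only implementation, not the method PlotAPI uses internally), the following repeatedly peels off the current non-dominated layer of the population and assigns it an increasing rank. Solutions that share a rank are non-dominated with respect to each other.

```python
import numpy as np

def dominates(X1, X2):
    # minimisation: no worse everywhere, strictly better somewhere
    return np.any(X1 < X2) and np.all(X1 <= X2)

def dominance_ranks(Y):
    # repeatedly peel off the current non-dominated layer;
    # solutions sharing a rank are non-dominated with respect to each other
    ranks = np.full(len(Y), -1)
    rank, remaining = 0, list(range(len(Y)))
    while remaining:
        layer = [n for n in remaining
                 if not any(dominates(Y[m], Y[n]) for m in remaining if m != n)]
        for n in layer:
            ranks[n] = rank
        remaining = [n for n in remaining if n not in layer]
        rank += 1
    return ranks

Y = np.array([[0.0, 0.5], [0.5, 0.0],      # rank 0: mutually non-dominated
              [0.25, 0.75], [0.75, 0.25],  # rank 1: each dominated by a rank-0 solution
              [0.9, 0.9]])                 # rank 2: dominated by rank-1 solutions
print(dominance_ranks(Y))                  # [0 0 1 1 2]
```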

## Conclusion

In this section we introduced the concept of Pareto-optimality and looked at dominance relations in more detail, complete with examples in Python. The next challenge arises when we need to select a subset of solutions from a population that consists entirely of non-dominated solutions. For example, which 100 solutions do we select from a population of 200 non-dominated solutions? We will offer some approaches to this challenge in later sections.


## ISBN

978-1-915907-00-4

## Cite

Rostami, S. (2020). Practical Evolutionary Algorithms. Polyra Publishing.