Workshop Programme

ThRaSH 2013 runs over two days, starting at 9am and ending at about 7pm on both days. We suggest arriving one day early (i.e., on 12 September 2013).


Below is a preliminary schedule; please note that it may still change.

Schedule

[Schedule at a Glance] [Schedule for Friday] [Schedule for Saturday] [Abstracts]


Schedule at a Glance

Friday

Time     What
9am      Opening
9.10am   Tutorial (P.K. Lehre: Drift Analysis)
10.40am  Coffee Break
11.10am  Talks (C. Witt; D.-C. Dang; A. Umre)
12.55pm  Lunch
1.55pm   Tutorial (C. Cannings: Evolutionary Conflicts)
3.25pm   Talks (D. Sudholt; C. Zarges)
4.35pm   Coffee Break
5.05pm   Talks (A. Buzdalova; C. Gießen; F. Alanazi)
6.50pm   Wrap-Up Day 1
8pm      Dinner

Saturday

Time     What
9am      Tutorial (B. Doerr: Black-Box Complexity)
10.30am  Talk (D. Çörüş)
11.05am  Coffee Break
11.35am  Talks (B. Mitavskiy; B. Doerr; C.M. Fonseca)
1.20pm   Lunch
2.20pm   Talks (M.-L. Cauwet; S. Astete-Morales; C. Cannings)
4.05pm   Coffee Break
4.35pm   Talks (A. Mambrini; T. Kötzing)
5.45pm   Wrap-Up Discussion
6.20pm   Closing



Schedule for Friday

Time     Who                 What
9am      Organisers          Opening
9.10am   Per Kristian Lehre  Tutorial: Drift Analysis
10.40am                      Coffee Break
11.10am  Carsten Witt        Adding Tail Bounds to Fitness Levels and Variable Drift
11.45am  Duc-Cuong Dang      Improved Fitness-Levels for Non-elitist Populations with Applications
12.20pm  Ashish Umre         Local Search Heuristics and Social Learning Strategies in Stochastic Environments
12.55pm                      Lunch
1.55pm   Chris Cannings      Tutorial: Evolutionary Conflicts
3.25pm   Dirk Sudholt        A Theoretical Runtime and Empirical Analysis of Different Alternating Variable Searches for Search-Based Testing
4pm      Christine Zarges    Randomised Search Heuristics for the Longest Common Subsequence Problem
4.35pm                       Coffee Break
5.05pm   Arina Buzdalova     A First Step Towards the Runtime Analysis of Evolutionary Algorithm Adjusted with Reinforcement Learning
5.40pm   Christian Gießen    Hybridizing Evolutionary Algorithms with Opportunistic Local Search
6.15pm   Fawaz Alanazi       A Runtime Analysis of Two Learning Mechanisms Within a Selection Hyper-Heuristic Framework
6.50pm   Organisers          Wrap-Up Day 1
8pm                          Dinner



Schedule for Saturday

Time     Who                    What
9am      Benjamin Doerr         Tutorial: Black-Box Complexity
10.30am  Doğan Çörüş            Crossover Operator in Multi-Objective Setting
11.05am                         Coffee Break
11.35am  Boris Mitavskiy        Generalized Schema Theorem, Fitness Levels and Complexity Analysis of Population-Based Hybrid EAs
12.10pm  Benjamin Doerr         Collecting Coupons with Random Initial Stake or Very Precise Runtime Estimates for Randomized Local Search on Monotonic Functions
12.45pm  Carlos M. Fonseca      Lyapunov Stability Analysis and Adaptation Law Synthesis for a Derandomized Self-Adaptive (1, 2)-ES
1.20pm                          Lunch
2.20pm   Marie-Liesse Cauwet    Noisy Optimization with Second Order Information
2.55pm   Sandra Astete-Morales  Additive Noise in Discrete Optimization
3.30pm   Chris Cannings         Graphic Deviation
4.05pm                          Coffee Break
4.35pm   Andrea Mambrini        Theory-Laden Design of Mutation-Based Geometric Semantic Genetic Programming for Basis Functions Regression
5.10pm   Timo Kötzing           Some Tight Bounds for Genetic Programming on MAJORITY and ORDER
5.45pm                          Wrap-Up Discussion
6.20pm   Organisers             Closing



Abstracts

Friday, 9.10am-10.40am
Per Kristian Lehre: Drift Analysis (Tutorial)

Drift analysis is among the most powerful theoretical tools available for estimating the optimisation time of randomised search heuristics (RSHs) such as evolutionary algorithms, ant colony optimisation, simulated annealing and others. Informally, it shows how the challenging problem of predicting the long-term behaviour of an RSH, in particular its runtime distribution, can be reduced to the often trivial problem of describing how the state of the RSH changes during one generation.

Drift analysis has dramatically simplified the analysis of RSHs, and has led to significant advances in the theory of randomised search heuristics. Many of the most important results about the optimisation time of RSHs were obtained with the help of drift analysis.

This tutorial considers drift analysis from a new angle, that of concentration of measure (CoM), which is exploited elsewhere in the analysis of RSHs. We begin by reviewing concentration of measure, and then focus on Hajek's drift theorem. Illustrative applications showing how drift theorems can be applied in analysing the optimisation time of RSHs will be given throughout the tutorial.
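As a small illustration of this reduction (our own sketch, not part of the tutorial material): for randomised local search (RLS) on LeadingOnes, the distance $X_t = n - \text{LO}(x_t)$ decreases in expectation by at least $1/n$ per step, since flipping the first zero-bit has probability $1/n$; an additive drift theorem then bounds the expected runtime by $n \cdot X_0 \leq n^2$. The Python sketch below compares empirical runtimes against this bound.

    import random

    def leading_ones(x):
        """Number of leading one-bits of a bit list."""
        count = 0
        for bit in x:
            if bit == 0:
                break
            count += 1
        return count

    def rls_runtime(n, rng):
        """Steps until RLS (flip one uniform bit, accept if not worse)
        finds the all-ones optimum of LeadingOnes."""
        x = [rng.randint(0, 1) for _ in range(n)]
        fx = leading_ones(x)
        steps = 0
        while fx < n:
            i = rng.randrange(n)
            x[i] ^= 1
            fy = leading_ones(x)
            if fy >= fx:
                fx = fy          # accept the move
            else:
                x[i] ^= 1        # reject: undo the flip
            steps += 1
        return steps

    rng = random.Random(1)
    n = 100
    avg = sum(rls_runtime(n, rng) for _ in range(50)) / 50
    print(avg, "<= additive drift bound", n * n)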



Friday, 11.10am-11.45am
Carsten Witt: Adding Tail Bounds to Fitness Levels and Variable Drift
(partly joint work with Per Kristian Lehre)

We reconsider two famous methods from the running time analysis of randomized search heuristics: the fitness level method and drift analysis. Both methods have been successfully applied to bound the expected running time of various randomized search heuristics. Only in very particular cases could the techniques be used to prove stronger statements such as tail bounds/concentration inequalities w.r.t. the running time.

To address this lack of general tail bounds, we propose two new theorems generalizing the fitness level method and variable drift analysis. The new theorems include conditions under which upper and lower tail bounds on the running time can be obtained. As a proof of concept, we apply the theorems to analyze the running time of RLS and (1+1) EA on OneMax, linear functions and LeadingOnes. In all cases, we obtain a very sharp concentration, revealing that the underlying stochastic process is "almost deterministic".



Friday, 11.45am-12.20pm
Duc-Cuong Dang: Improved Fitness-Levels for Non-elitist Populations with Applications
Per Kristian Lehre and Duc-Cuong Dang

A large number of powerful tools have been developed to analyse the runtime of evolutionary algorithms (EAs) [5]. However, many of these are tailored to the (1+1) EA or other simple algorithms without populations. Furthermore, elitist selection schemes are most often assumed, i.e., it is impossible to lose the best solution found so far. Recently, an easy-to-use fitness-level technique was introduced in [3, 4] to analyse the expected runtime of EAs with non-elitist populations and unary variation operators. A weakness of the theorem in [4] is that the upper bound on the runtime in terms of generations grows linearly with the population size. This bound was proved using the additive drift theorem [2].

In this work, we simplify the proof of the theorem in [4], and improve it in two directions.

Firstly, we show that the number of generations only grows logarithmically with the population size, thus giving a much more precise characterisation of the runtime. The new result is obtained by a more detailed analysis of the process describing the fittest part of the population. This process is very similar to that in the multiplicative drift theorem [1], but in reverse*. In particular, the drift decreases instead of increases with the distance. Upper bounds on such processes can only be obtained by carefully analysing the distribution of the drift. In our case, the distribution is similar to a concentrated binomial one, allowing us to derive the improved result.

Secondly, the new theorem makes a weaker assumption about the selective pressure in the EA. In particular, the refined theorem describes the behaviour of EAs in the weak selection regime, where selection is close to uniform. This selection regime covers a large number of applications. As an example, we discuss partial fitness evaluation where only a small random fraction of the bit positions is evaluated.

References

  1. B. Doerr, D. Johannsen, and C. Winzen. Multiplicative drift analysis. Algorithmica, 64(4): 673-697, 2012.
  2. J. He and X. Yao. Drift analysis and average time complexity of evolutionary algorithms. Artificial Intelligence, 127(1):57-85, 2001.
  3. P. K. Lehre. Negative drift in populations. In Proceedings of the 11th International Conference on Parallel Problem Solving from Nature (PPSN XI), LNCS 6238, pages 244-253, 2010.
  4. P. K. Lehre. Fitness-levels for non-elitist populations. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2011), 2011.
  5. P. S. Oliveto, J. He, and X. Yao. Time complexity of evolutionary algorithms for combinatorial optimization: A decade of results. International Journal of Automation and Computing, 4(3):281-293, 2007.

Footnote

*The multiplicative drift theorem states that a process which satisfies $E[X_{t+1}\mid X_t] \leq (1 - \delta) X_t$ for $0 < \delta < 1$ needs in expectation at most $(1+\ln X_0)/\delta$ iterations to reach $X_t = 0$. In our case, the process satisfies $E[X_{t+1} \mid X_t] \geq (1+\delta)X_t$ for $\delta > 0$, and we would like an upper bound on the time to reach $X_t > b$. The issue in this case is that the drift decreases with the distance to the final state. Readers interested in this issue may also try to find a lower bound of the expected time until the process reaches $X_t = 0$ given that $E[X_{t+1} \mid X_t] \geq (1 - \delta) X_t$ with $0 < \delta < 1$.
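To make the forward case concrete (an illustrative instance of our own, not taken from the talk): for RLS on OneMax, let $X_t$ be the number of zero-bits. A uniformly random one-bit flip decreases $X_t$ by one with probability $X_t/n$ and is rejected otherwise, so $E[X_{t+1} \mid X_t] = X_t - X_t/n = (1 - 1/n)X_t$. Hence $\delta = 1/n$, and the multiplicative drift theorem yields $E[T] \leq (1+\ln X_0)/\delta \leq n(1 + \ln n)$.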



Friday, 12.20pm-12.55pm
Ashish Umre: Local Search Heuristics and Social Learning Strategies in Stochastic Environments

Complex stochastic environments present a challenge for agents to make optimal decisions regarding effective strategies to forage for information pertaining to resources (e.g. food). The interplay between individual and social learning plays an important role in reducing uncertainty and allows adaptive switching between exploration and exploitation. The utility of social learning in unstable environments is not entirely clear, as there is a relatively high probability that socially acquired information is outdated. We are interested in scenarios where individual foraging might be a useful and effective strategy and how the topology and distribution of resources in the environment might affect this. We investigate the adaptive value of social learning in a dynamic uncertain environment both theoretically and empirically.

Social interactions secure a stable supply of resources by collectively buffering the uncertainties associated with their acquisition. In biological systems, evolution has created a large number of distributed systems which balance utility and resource usage while optimising energy expenditure. Can we improve and understand the incentive utility structures of distributed applications on the internet through examination of biologically inspired algorithms?

Foraging can be modelled as an optimisation process where an animal seeks to maximise the energy (information) obtained per unit time spent foraging. We have overviewed research in foraging theory and evolutionary game theory, relevant to both individual and collective foraging, and the significance of these areas to optimisation.

We introduce the behavioural, evolutionary and game-theoretic dynamics of well-known biological social systems, and through simulation attempt to understand the similarities between existing and proposed adaptive information discovery and sharing protocols and the foraging behaviour of ants, bees and other social animals, in order to investigate the dynamics of social interaction in uncertain environments.



Friday, 1.55pm-3.25pm
Chris Cannings: Evolutionary Conflicts (Tutorial)




Friday, 3.25pm-4pm
Dirk Sudholt: A Theoretical Runtime and Empirical Analysis of Different Alternating Variable Searches for Search-Based Testing

The Alternating Variable Method (AVM) has been shown to be a surprisingly effective and efficient means of generating branch-covering inputs for procedural programs. However, there has been little work that has sought to analyse the technique and further improve its performance. This paper proposes two new local searches that may be used in conjunction with the AVM, Geometric and Lattice Search. A theoretical runtime analysis proves that, under certain conditions, these searches outperform the original AVM. These theoretical results are confirmed by an empirical study with four programs, which shows that speed-ups of over 50% are possible in practice.



Friday, 4pm-4.35pm
Christine Zarges: Randomised Search Heuristics for the Longest Common Subsequence Problem
joint work with Thomas Jansen

Computing a longest common subsequence of a number of strings is a classical NP-hard combinatorial optimisation problem with many applications in computer science and bioinformatics. Evolutionary algorithms have been reported to be successful heuristics in practice; however, a theoretical analysis has proven that a large class of evolutionary algorithms using mutation and crossover fail to solve or approximate the problem efficiently, even for problem instances that are easy to solve using a problem-specific algorithm based on dynamic programming.

In this talk we will first briefly discuss the existing results for evolutionary algorithms. Afterwards, we consider the immune-inspired B-cell algorithm on the very same instances and show that it also fails to be efficient if started with random candidate solutions, but that it is very efficient if started with trivial empty candidate solutions. This shows for the second time that immune-inspired methods can be a simple and very efficient alternative to evolutionary algorithms and that they are able to outperform them by far.

In order to investigate the performance of the considered algorithms further, we will finish the talk by presenting results from a preliminary experimental study on real-world instances.



Friday, 5.05pm-5.40pm
Arina Buzdalova: A First Step Towards the Runtime Analysis of Evolutionary Algorithm Adjusted with Reinforcement Learning (slides)
Maxim Buzdalov, Arina Buzdalova

A runtime analysis of the previously proposed EA + RL method [1] is performed. The method adaptively adjusts an evolutionary algorithm (EA) by selecting an efficient auxiliary fitness function for each population. The selection is performed using reinforcement learning (RL). There is no prior knowledge about the properties of the auxiliary fitness functions, so reinforcement learning has to discover which of them are efficient.

The EA + RL method was empirically shown to be efficient for a number of model problems [1], as well as for a real-world application [2]. However, there have been no theoretical results for this method so far. There also exist a number of other methods that adjust evolutionary algorithms using reinforcement learning. In some of them, reinforcement is used to select evolutionary operators; in others, real-valued parameters such as the mutation rate are adjusted [3]. There is no runtime analysis of these methods either.

In this work, a first step towards a better theoretical understanding of how reinforcement learning adjusts evolutionary algorithms is made. We consider a modified OneMax problem and solve it with a Random Mutation Hill Climber adjusted using Q-learning with a greedy exploration strategy. In this problem, the only auxiliary fitness function is an inefficient one, so the reinforcement agent should learn to ignore it. We prove that EA + RL with an inefficient auxiliary fitness function performs on par with a conventional evolutionary algorithm in this case, namely in Θ(N log N) fitness function evaluations, where N is the problem size.
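To illustrate the shape of such an EA + RL loop, here is a toy sketch of our own (not the authors' implementation; we use greedy selection with random tie-breaking as a stand-in for the paper's greedy exploration strategy, and the auxiliary fitness below is a hypothetical inefficient one):

    import random

    def onemax(x):
        return sum(x)

    def aux(x):
        return -sum(x)  # hypothetical inefficient auxiliary fitness: rewards zeros

    def ea_rl(n, rng, alpha=0.1):
        """Toy EA + RL: a Random Mutation Hill Climber whose guiding fitness
        is chosen by single-state Q-learning over {onemax, aux}."""
        fitnesses = [onemax, aux]
        q = [0.0, 0.0]                        # one Q-value per fitness function
        x = [rng.randint(0, 1) for _ in range(n)]
        steps = 0
        while onemax(x) < n:
            if q[0] == q[1]:                  # greedy choice, random tie-breaking
                a = rng.randrange(2)
            else:
                a = 0 if q[0] > q[1] else 1
            y = list(x)
            y[rng.randrange(n)] ^= 1          # RMHC: flip one uniform bit
            if fitnesses[a](y) >= fitnesses[a](x):
                reward = onemax(y) - onemax(x)  # reward: progress on the target
                x = y
            else:
                reward = 0
            q[a] += alpha * (reward - q[a])   # single-state Q-update
            steps += 1
        return steps

    rng = random.Random(7)
    print(ea_rl(100, rng))

Here the agent punishes aux the first time it accepts a fitness-decreasing move, after which the process behaves like a plain RMHC on OneMax.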

Further work should involve formulating a model problem with an efficient auxiliary fitness function and showing that EA + RL outperforms the adjusted evolutionary algorithm on this problem. It is also very important to analyze a broader range of evolutionary algorithms (first of all, the (1+1) evolution strategy), as well as reinforcement learning algorithms. The long-term goal is to work out a theoretical framework for the runtime analysis of evolutionary algorithms adjusted with reinforcement learning in different ways.

References

  1. Arina Buzdalova and Maxim Buzdalov. Increasing efficiency of evolutionary algorithms by choosing between auxiliary fitness functions with reinforcement learning. In 11th International Conference on Machine Learning and Applications, volume 1, pages 150-155, 2012.
  2. Maxim Buzdalov and Arina Buzdalova. Adaptive selection of helper-objectives for test case generation. In 2013 IEEE Congress on Evolutionary Computation, volume 1, pages 2245-2250, June 20-23 2013.
  3. A. E. Eiben, M. Horvath, W. Kowalczyk, and M. C. Schut. Reinforcement learning for online control of evolutionary algorithms. In Proceedings of the 4th International Conference on Engineering Self-Organising Systems (ESOA'06), pages 151-160. Springer-Verlag, Berlin, Heidelberg, 2006.



Friday, 5.40pm-6.15pm
Christian Gießen: Hybridizing Evolutionary Algorithms with Opportunistic Local Search

There is empirical evidence that memetic algorithms (MAs) can outperform plain evolutionary algorithms (EAs). Recently, the first runtime analyses have been presented that prove this conjecture rigorously by investigating Variable-Depth Search, VDS for short (Sudholt, 2008). Sudholt raised the question of whether there are problems where VDS performs badly. We answer this question in the affirmative in the following way. We analyze MAs with VDS, which is also known as Kernighan-Lin for the TSP, on an artificial problem and show that MAs with a simple first-improvement local search outperform VDS. Moreover, we show that the performance gap is exponential. We analyze the features leading to a failure of VDS and derive a new local search operator, coined Opportunistic Local Search, that can easily overcome regions of the search space where local optima are clustered. The power of this new operator is demonstrated on the Rastrigin function encoded for binary hypercubes. Our results provide further insight into the problem of how to prevent local search algorithms from getting stuck in local optima from a theoretical perspective. The methods stem from discrete probability theory and combinatorics.



Friday, 6.15pm-6.50pm
Fawaz Alanazi: A Runtime Analysis of Two Learning Mechanisms Within a Selection Hyper-Heuristic Framework
Per Kristian Lehre, Fawaz Alanazi

Hyper-heuristics are search techniques used to solve optimisation problems by selecting and employing heuristics from a set of pre-defined low-level heuristics, or by generating heuristics from pre-existing components. Low-level heuristics are easy-to-implement heuristics that solve problems partially, such as mutation heuristics and crossover heuristics. Hyper-heuristics are categorised into selection and generation hyper-heuristics. This paper focuses on selection hyper-heuristics. A number of so-called learning mechanisms have been proposed in the literature. They can be described as strategies used to choose a low-level heuristic based on some probability distribution. We seek to answer the question of whether the performance of a search process can be improved via learning.

Despite the great success of selection hyper-heuristic applications in various problem domains, the theoretical background of these heuristics is almost non-existent. Runtime analysis refers to theoretical studies which rigorously estimate the runtime and success probability of randomised search heuristics. It also assists in gaining insights into the behaviour of search heuristics and in understanding when a search heuristic can be successful.

We analysed the runtime of a selection hyper-heuristic with two learning mechanisms, namely Simple Random and Random Gradient (sketched below). We consider a general model of fitness landscapes and describe conditions under which Random Gradient outperforms Simple Random. The analysis also shows that learning is not helpful if all low-level heuristics have the same performance. In addition, the runtime analysis of a selection hyper-heuristic on a classical optimisation example problem is presented. This paper is one of the initial studies on the runtime analysis of selection hyper-heuristics. It not only defines the relationship between the expected runtime and both problem and selection hyper-heuristic characteristics, but also provides a guideline for practitioners.
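For intuition, here is a minimal self-contained sketch (our own illustration, not the framework analysed in the paper) of the two mechanisms as commonly defined: Simple Random draws a low-level heuristic uniformly at every step, while Random Gradient keeps the current heuristic as long as its last application improved the fitness and otherwise draws a new one uniformly. The flip_k heuristics and the OneMax target are our hypothetical examples.

    import random

    def simple_random(num_h, improved, current, rng):
        """Simple Random: pick a low-level heuristic uniformly at every step."""
        return rng.randrange(num_h)

    def random_gradient(num_h, improved, current, rng):
        """Random Gradient: keep the heuristic while it improves, else re-draw."""
        if current is not None and improved:
            return current
        return rng.randrange(num_h)

    def flip_k(k):
        """Hypothetical low-level heuristic: flip k uniformly chosen bits."""
        def h(x, rng):
            y = list(x)
            for _ in range(k):
                y[rng.randrange(len(y))] ^= 1
            return y
        return h

    def hyper_heuristic(n, heuristics, select, rng):
        """Generic selection hyper-heuristic loop maximising OneMax."""
        x = [rng.randint(0, 1) for _ in range(n)]
        fx = sum(x)
        current, improved, steps = None, False, 0
        while fx < n:
            current = select(len(heuristics), improved, current, rng)
            y = heuristics[current](x, rng)
            fy = sum(y)
            improved = fy > fx
            if fy >= fx:
                x, fx = y, fy
            steps += 1
        return steps

    rng = random.Random(5)
    hs = [flip_k(1), flip_k(2)]
    for select in (simple_random, random_gradient):
        print(select.__name__, hyper_heuristic(100, hs, select, rng))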



Saturday, 9am-10.30am
Benjamin Doerr: Black-Box Complexity (Tutorial)

Black-box complexity is one of the current hot topics in the theory of randomized search heuristics. It aims at providing a deeper understanding of how difficult a problem is when we aim to solve it via randomized search heuristics. A large black-box complexity tells us that there is no use in trying to find a good evolutionary algorithm for the problem; a low black-box complexity indicates that there is room for good randomized search heuristics even where our current ones are not convincing yet.

In this tutorial, I will give a brief introduction to this field targeted at an audience with (moderate) experience in theory. Despite black-box complexity theory being a young field (the first journal paper, by Droste, Jansen, and Wegener, dates only from 2006), I will not try to give a complete survey. Rather, I will use the example of jump functions to demonstrate in an interactive manner some characteristics of the field: its game-theoretic (fun!) flavor, the mostly non-standard (creative!) methods, and the surprising results. Towards the end of the tutorial, I will discuss how such results, beyond giving fundamental insights into the working principles of randomized search heuristics, may help us develop superior genetic algorithms. This tutorial is orthogonal to the one given at GECCO: if you attended the latter, you will mostly discover new material; if you didn't, you will still understand everything.



Saturday, 10.30am-11.05am
Doğan Çörüş: Crossover Operator in Multi-Objective Setting
Doğan Çörüş, Per Kristian Lehre

Motivated by recent work justifying the use of the crossover operator on the single-objective test function OneMax, we investigate the effects of crossover in multi-objective optimisation, where the population consists of all the non-dominated solutions visited so far. Working with the bi-objective function (LeadingOnes, TrailingZeroes), as in earlier work covering the performance of the Global Simple Evolutionary Multi-objective Optimizer (GSEMO), we derive a bound on the expected change in population size, showing that one-point crossover helps keep the population of non-dominated solutions small without sacrificing any gain in the objective values until the Pareto front is reached. We tested this effect in experiments, and we conjecture that this bounded population size yields an upper bound on the expected time to reach the Pareto front which is asymptotically better than the corresponding bound for the GSEMO. A sketch of GSEMO on this function is given below.
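The following minimal GSEMO sketch on (LeadingOnes, TrailingZeroes) is our own illustration (variants differ in details such as the handling of equal objective vectors, and the one-point crossover studied in the talk is omitted):

    import random

    def lotz(x):
        """Bi-objective (LeadingOnes, TrailingZeroes) value of a bit string."""
        n = len(x)
        lo = next((i for i, b in enumerate(x) if b == 0), n)
        tz = next((i for i, b in enumerate(reversed(x)) if b == 1), n)
        return (lo, tz)

    def weakly_dominates(u, v):
        return all(a >= b for a, b in zip(u, v))

    def gsemo(n, steps, rng):
        """GSEMO: keep an archive of mutually non-dominated solutions and
        mutate a uniformly chosen member by standard bit mutation (rate 1/n)."""
        x = tuple(rng.randint(0, 1) for _ in range(n))
        pop = {x: lotz(x)}
        for _ in range(steps):
            parent = rng.choice(list(pop))
            child = tuple(1 - b if rng.random() < 1.0 / n else b for b in parent)
            fc = lotz(child)
            if not any(weakly_dominates(fv, fc) for fv in pop.values()):
                pop = {y: fy for y, fy in pop.items()
                       if not weakly_dominates(fc, fy)}
                pop[child] = fc
        return pop

    rng = random.Random(11)
    front = gsemo(20, 50000, rng)
    print(len(front))  # approaches n + 1, the size of the LOTZ Pareto front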



Saturday, 11.35am-12.10pm
Boris Mitavskiy: Generalized Schema Theorem, Fitness Levels and Complexity Analysis of Population-Based Hybrid EAs

Unlike traditional EAs, hybrid and mixed-strategy EAs employ an entire collection of families of recombination, mutation and selection operators. Frequently, some of these operators are intended to implement a well-known greedy algorithm. In this talk I intend to exhibit the connection between families of recombination (or local mutation) operators, their corresponding families of invariant subsets, generalized schema theory and its relationship to auxiliary fitness levels and time complexity analysis. It is hoped that the proposed mathematical framework will provide a deeper systematic understanding of the general circumstances under which this novel type of EA is successful. This is an open-ended talk where I intend to find collaborators for future investigation in this direction.



Saturday, 12.10pm-12.45pm
Benjamin Doerr: Collecting Coupons with Random Initial Stake or Very Precise Runtime Estimates for Randomized Local Search on Monotonic Functions
joint work with Carola Doerr

We provide a very precise bound for the expected runtime of the randomized local search heuristic optimizing monotonic functions. This problem is equivalent to a variant of the coupon collector problem in which the collector starts with a random set of coupons (chosen uniformly from all sets). We show that the expected number of rounds needed to solve this problem is $nH_{n/2} - 1/2 \pm o(1)$, where $H_{n/2}$ denotes the $(n/2)$th harmonic number when $n$ is even, and $H_{n/2} := \frac{1}{2} H_{\lfloor n/2 \rfloor} + \frac{1}{2} H_{\lceil n/2 \rceil}$ when $n$ is odd.
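A quick simulation (our own illustration) of the equivalent coupon collector variant checks the formula for even $n$:

    import random

    def rounds_with_random_stake(n, rng):
        """Coupon collector starting from a uniformly random set of coupons;
        equivalently, RLS on a monotone function flipping one uniform bit per
        round, counting rounds until all n coupons/bits are collected."""
        have = [rng.random() < 0.5 for _ in range(n)]
        missing = have.count(False)
        rounds = 0
        while missing > 0:
            i = rng.randrange(n)
            if not have[i]:
                have[i] = True
                missing -= 1
            rounds += 1
        return rounds

    def harmonic(m):
        return sum(1.0 / k for k in range(1, m + 1))

    rng = random.Random(13)
    n = 1000  # even, so the predicted value is n * H_{n/2} - 1/2
    emp = sum(rounds_with_random_stake(n, rng) for _ in range(200)) / 200
    print(emp, "vs", n * harmonic(n // 2) - 0.5)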



Saturday, 12.45pm-1.20pm
Carlos M. Fonseca: Lyapunov Stability Analysis and Adaptation Law Synthesis for a Derandomized Self-Adaptive (1, 2)-ES
Elizabeth F. Wanner, Carlos M. Fonseca, Rodrigo T. N. Cardoso and Ricardo H. C. Takahashi

A Lyapunov synthesis procedure for the adaptation parameters of a simple derandomized self-adaptive (1, 2)-ES is proposed, providing exponentially-bounded convergence to the function optimum. The proposed design methodology is based on a particular candidate function, which becomes a stochastic Lyapunov function through a suitable choice of the algorithm adaptation parameters. Theoretical convergence is investigated on a class of unidimensional, unimodal and symmetric functions. Through the appropriate setting of the algorithm parameters, the stochastic process becomes a supermartingale, which ensures that it converges to some random variable almost surely. Under additional conditions, it is also possible to prove that both the decision variable and the mutation step-size converge almost surely, the former to the optimum and the latter to zero. Simulation results support the theoretical conclusions.



Saturday, 2.20pm-2.55pm
Marie-Liesse Cauwet: Noisy Optimization with Second Order Information
Work done in collaboration with S. Astete-Morales, J. Liu, O. Teytaud

Various papers have recently analyzed the uniform rate, the simple regret, and the cumulative regret of noisy optimization of strictly convex functions. On sphere functions corrupted by noise, we propose an iterative optimization framework, a particular instance of which, using Hessian approximations, provably (i) reaches the same rate as stochastic gradient descent when the noise has constant variance; (ii) reaches the same rate as evolution strategies when the noise variance decreases quadratically as a function of the simple regret; and (iii) reaches the same rate as racing-based optimization algorithms when the noise variance decreases linearly as a function of the simple regret.



Saturday, 2.55pm-3.30pm
Sandra Astete-Morales: Additive Noise in Discrete Optimization (slides)
This talk is a collaboration with Y. Akimoto, S. Astete-Morales, A. Prugel-Bennett, J. Rowe, J. Shapiro and O. Teytaud, started at Dagstuhl seminar 13271.

We assume we have an evolutionary algorithm EA (possibly randomized) for optimizing fitness functions on $\{0, 1\}^n$. The set of instances will be called $G_n$: a set of fitness functions from $\{0, 1\}^n$ to ${\mathbb N}$.

We assume that EA solves Gn with complexity $f(n,\delta)$, in the sense that, with probability at least $1 - \delta$, EA finds the optimum $x^* := \text{arg max} g$ of any function $g \in G_n$ within $f(n, \delta)$ fitness evaluations.

We redefine the algorithm so that it computes the fitness values by averaging over numerous evaluations. The number of evaluations $k$ is defined so that the probability of at least one misranking in the $f(n, \delta)$ comparisons is less than $\delta$. We will call the redefined algorithm $k$-Reevaluation-EA, and we define Reevaluation-EA as $k$-Reevaluation-EA with $k = \lceil\sigma^2 \log\left(\frac{f(n, \delta)}{-\log(1-\delta)}\right)\rceil$.

We then show that if an algorithm EA solves $G_n$ with complexity $f(n, \delta)$, then $k$-Reevaluation-EA solves $G_n + \sigma {\cal N}$, i.e. $G_n$ corrupted by an additive Gaussian noise, with probability $(1-\delta)^2$ and a number of fitness evaluations $O\left(f(n, \delta) \left\lceil\sigma^2 \log\left(\frac{f(n, \delta)}{-\log(1-\delta)}\right)\right\rceil\right)$, thanks to an ad hoc choice of $k$. Notice that $G_n + \sigma {\cal N}$ means that each function $g \in G_n$ is perturbed by a Gaussian noise ${\cal N}$ with variance $\sigma^2$.

Then, we consider a wider family of instances, dropping the restriction to Gaussian noise and replacing it by an arbitrary noise, say $N$, with finite variance. For this case we define a new algorithm, ManyReevaluation-EA, similar to Reevaluation-EA but with a different number of reevaluations. We show that ManyReevaluation-EA solves $G_n + \sigma N$ ($N$ an arbitrary noise with variance 1) with probability $(1-\delta)^2$ and a number of evaluations $O(f(n, \delta) \max(1, 4\sigma^2 f(n, \delta)/\delta))$. It is also important to compare these results with the performance of the algorithms without reevaluations. In a result by Adam Prugel-Bennett, John Rowe and Jonathan Shapiro, it is proved that, with high probability, the Random Local Search algorithm needs $\exp(cn)$ evaluations, for some $c > 0$, before finding the optimum of the OneMax function with additive Gaussian noise with variance 1 in dimension $n$. This shows that algorithms without reevaluation perform far worse.

In summary, to make a discrete evolutionary algorithm robust to Gaussian noise, it is enough to use a logarithmic number of reevaluations (leading to an overall complexity $O(f(n,\delta)\log f(n,\delta))$). For robustness to an arbitrary noise with finite variance, the number of reevaluations is linear in the complexity (leading to a roughly squared number of evaluations). A sketch of the wrapping construction follows.
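A sketch of the wrapping construction (our own code; the function names and the max(1, ...) guard are ours, and only the formula for $k$ is taken from the abstract):

    import math
    import random

    def num_reevaluations(runtime_bound, delta, sigma=1.0):
        """k as in the abstract: ceil(sigma^2 * log(f(n,delta) / -log(1-delta)))."""
        k = sigma**2 * math.log(runtime_bound / -math.log(1.0 - delta))
        return max(1, math.ceil(k))

    def reevaluated(f, k, sigma, rng):
        """Turn f into a noisy fitness (additive Gaussian noise of standard
        deviation sigma) that is averaged over k independent evaluations."""
        def noisy_avg(x):
            return sum(f(x) + rng.gauss(0.0, sigma) for _ in range(k)) / k
        return noisy_avg

    rng = random.Random(17)
    onemax = sum                      # an example instance g in G_n
    k = num_reevaluations(runtime_bound=10000, delta=0.1)
    g = reevaluated(onemax, k, sigma=1.0, rng=rng)
    x = [rng.randint(0, 1) for _ in range(100)]
    print(k, onemax(x), round(g(x), 2))  # the average concentrates near sum(x)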



Saturday, 3.30pm-4.05pm
Chris Cannings: Graphic Deviation

A sequence of positive integers is said to be graphic if there exists a simple graph (i.e. one with no self-edges or multiple edges) whose vertices have degrees matching the elements of the sequence. There is a simple algorithm (Havel-Hakimi, sketched below) for checking whether any given sequence is graphic. A new question is examined: if we are given a sequence of positive integers, how can we find the graphs which achieve the minimum deviation (in a well-defined sense) from that sequence?
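For reference, a compact sketch of the Havel-Hakimi test (a standard textbook routine; this implementation is ours):

    def is_graphic(seq):
        """Havel-Hakimi: can the degree sequence be realised by a simple graph?
        Repeatedly remove a largest degree d and subtract 1 from the next d."""
        degs = sorted(seq, reverse=True)
        while degs:
            d = degs.pop(0)
            if d == 0:
                return True      # all remaining degrees are zero
            if d > len(degs):
                return False     # not enough other vertices to connect to
            for i in range(d):
                degs[i] -= 1
                if degs[i] < 0:
                    return False
            degs.sort(reverse=True)
        return True

    print(is_graphic([3, 3, 2, 2, 1, 1]))  # True: realisable
    print(is_graphic([4, 4, 4, 1, 1]))     # False: not realisable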

We shall demonstrate how a variation on the Havel-Hakimi algorithm can supply the value of the minimum possible deviation, and how consideration of the Ruch-Gutman condition and the Ferrers diagram can yield the complete set of graphs achieving this minimum. We prove that the set of minimal graphs for a given sequence is connected under a certain system of transitions.



Saturday, 4.35pm-5.10pm
Andrea Mambrini: Theory-Laden Design of Mutation-Based Geometric Semantic Genetic Programming for Basis Functions Regression (slides)
Alberto Moraglio, Andrea Mambrini

Traditionally, the search operators used by genetic programming manipulate the syntactic representation of functions (programs) without considering their semantics. Geometric Semantic Genetic Programming (GSGP) is a recently introduced framework for designing domain-specific search operators for Genetic Programming (GP) that search the semantic space of functions directly. The fitness landscape seen by GSGP is, by construction, unimodal with a constant slope for any domain and any problem, as the fitness of an individual is by definition its semantic distance to the optimum. Moreover, the search performed by GSGP is formally equivalent to the search of a GA on a OneMax-like problem. This makes the search for the optimum much easier than for traditional GP, and it opens the way to a straightforward theoretical analysis of the optimisation time of GSGP in a general setting. Recent work proposed a design and runtime analysis of mutation-based GSGP on the class of Boolean functions and on classification trees. In this talk, the design and analysis of GSGP operators for all regression problems with generic basis functions (encompassing, e.g., polynomial regression and trigonometric regression) is presented.



Saturday, 5.10pm-5.45pm
Timo Kötzing: Some Tight Bounds for Genetic Programming on MAJORITY and ORDER (slides)

Recently there have been better and better runtime bounds for mutation-based genetic programming (GP) algorithms, with and without bloat control. Getting tight bounds in these settings is somewhat more complicated than in settings with static representations (such as bit strings). This work aims at getting tight runtime bounds for the problems ORDER and MAJORITY.

In particular, using drift analysis, we show that the 1+1 GP takes (i) $\Theta(T_{\text{init}} + n\log n)$ iterations with bloat control on ORDER as well as MAJORITY and (ii) $\Theta(T_{\text{init}} \log T_{\text{init}} + n \log n)$ iterations without bloat control on MAJORITY.

Interestingly, getting a good bound for ORDER without bloat control seems to be more challenging than for MAJORITY, contrary to intuitions derived from other settings.
