EVOLVE International Conference

EVOLVE 2014 July 1-4
Beijing, China


Evolutionary reinforcement learning or reinforcement evolutionary algorithms?


Instructor Madalina M. Drugan


A recent trend in evolutionary algorithms (EAs) transfers expertise from and to other areas of machine learning. An interesting novel symbiosis brings together: i) reinforcement learning, which learns difficult, dynamic, elaborate tasks that require substantial computational resources, and ii) evolutionary algorithms, whose main strengths are their elegance and computational efficiency. These two techniques address the same problem: maximizing an agent's reward in a potentially unknown, difficult environment that can include partial observations and/or abstract credit assignment. These machine learning methods exchange techniques in order to improve their theoretical and empirical efficiency, such as computational speed for on-line learning and robust behavior for off-line optimization algorithms.


Reinforcement learning (RL) is considered the most general on-line/off-line learning technique that includes a trade-off between long-term and short-term reward. RL is successfully applied in disciplines such as game theory, robot control, control theory, operations research, etc. On-line learning involves finding a balance between exploration of uncharted territory and exploitation of current knowledge. For example, a robot learns to act in an unknown and changing environment by receiving feedback from the environment.
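The exploration/exploitation balance mentioned above can be illustrated with a minimal epsilon-greedy action-selection sketch (illustrative only; the value estimates and epsilon parameter here are hypothetical, not part of the tutorial material):

```python
import random

def epsilon_greedy(q_values, epsilon, rng):
    """Pick an action index: with probability epsilon explore a random
    action, otherwise exploit the action with the highest value estimate."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))                      # explore
    return max(range(len(q_values)), key=q_values.__getitem__)   # exploit
```

With epsilon set to zero the agent always exploits; a small positive epsilon keeps it sampling the other actions, which is what lets it adapt when the environment changes.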


There are a few examples of the use of evolutionary algorithms in reinforcement learning and vice versa.


Multi-objective reinforcement learning is a variant of reinforcement learning that uses tuples of rewards instead of a single reward. Multi-objective RL differs from standard RL in important ways, since several actions can be considered best according to their reward tuples. Techniques from multi-objective EAs can be used in the multi-objective RL framework to improve the exploration/exploitation trade-off in complex and large multi-objective environments.
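The key point that several actions can be "best" at once follows from Pareto dominance over reward tuples. A minimal sketch (the reward tuples below are hypothetical examples):

```python
def dominates(r1, r2):
    """r1 Pareto-dominates r2: no worse in every objective, strictly
    better in at least one."""
    return all(a >= b for a, b in zip(r1, r2)) and \
           any(a > b for a, b in zip(r1, r2))

def pareto_front(rewards):
    """Indices of actions whose reward tuples no other action dominates."""
    return [i for i, r in enumerate(rewards)
            if not any(dominates(other, r)
                       for j, other in enumerate(rewards) if j != i)]

# Actions 0 and 2 are incomparable (each is better in one objective),
# so both belong to the Pareto front; action 1 is dominated by action 0.
rewards = [(3, 1), (2, 1), (1, 3)]
front = pareto_front(rewards)
```

Because the front can contain many incomparable actions, a multi-objective RL agent must decide how to explore among them, which is exactly where multi-objective EA techniques can help.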


The performance of EAs often depends on the optimal use of genetic operators to explore/exploit promising parts of the search space. The problem of selecting the best genetic operator is similar to the problem an agent faces when choosing between alternatives to achieve its goal of maximizing its cumulative expected reward. Practical approaches identify, among the various multi-armed bandit (or reinforcement learning) algorithms, the ones that best solve the operator selection problem.
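As one possible instantiation of this idea, a UCB1 bandit can treat each genetic operator as an arm and the fitness improvement it produces as the reward. The class name, parameters, and simulated rewards below are illustrative assumptions, not the tutorial's specific method:

```python
import math
import random

class UCB1OperatorSelector:
    """UCB1 bandit over genetic operators: each arm is an operator, and
    the observed 'reward' is the fitness improvement it produced."""
    def __init__(self, n_ops, c=1.0):
        self.counts = [0] * n_ops     # times each operator was applied
        self.values = [0.0] * n_ops   # running mean reward per operator
        self.c = c                    # exploration strength
        self.t = 0

    def select(self):
        self.t += 1
        for i, n in enumerate(self.counts):
            if n == 0:                # apply every operator once first
                return i
        return max(range(len(self.counts)),
                   key=lambda i: self.values[i]
                   + self.c * math.sqrt(2 * math.log(self.t) / self.counts[i]))

    def update(self, op, reward):
        self.counts[op] += 1
        self.values[op] += (reward - self.values[op]) / self.counts[op]

# Toy simulation: operator 1 yields a higher average fitness improvement,
# so the bandit should apply it more often over time.
rng = random.Random(1)
sel = UCB1OperatorSelector(2)
for _ in range(500):
    op = sel.select()
    sel.update(op, rng.random() + (0.5 if op == 1 else 0.0))
```

The UCB term keeps the rarely used operators alive, so the selector can recover if an operator's usefulness changes during the run.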


The scope of this tutorial is to discuss the resemblances and differences in learning with reinforcement learning and evolutionary algorithms.


First, we introduce the use of RL to improve the performance of EAs, and second, the use of EAs to improve the performance of RL. Although both paradigms optimize some quantity of interest, the methodology, terminology, and basic assumptions about the environment are quite different. During this tutorial, we will compare the exploration/exploitation trade-off, which is important in both EAs and RL but carries a different meaning in each.


Madalina M. Drugan (biography)


She is a senior researcher at the Artificial Intelligence Lab, Vrije Universiteit Brussel, Belgium. She received a PhD (2006) from the Computer Science Department, University of Utrecht, The Netherlands. Her PhD thesis, "Conditional log-likelihood MDL and Evolutionary MCMC", researches (designing, analyzing, and experimenting with) various machine learning and optimization algorithms in fields such as Bayesian network classifiers, feature selection, evolutionary computation, and Markov chain Monte Carlo. She has done research in evolutionary-computation-related algorithmic design for bioinformatics, multi-objective optimization, meta-heuristics, operational research, and evolutionary computation for more than 10 years.


Recently, she has been involved in developing a theoretical and algorithmic framework for the new branch of reinforcement learning that uses multi-objective rewards. She has experience with research grants and reviewing services, a strong publication record in international peer-reviewed journals and conferences, and various academic prizes.

