Mendel Conference
24th International Conference on Soft Computing, June 26-28, Brno, Czech Republic
 
 
 
Invited Speakers and Tutorials: Mendel's Hall of Fame
It has been our honour to welcome many distinguished guests over the past years of the International Soft Computing MENDEL Conference. The invited speakers who have appeared at the conference since 2008 are listed here:


2017


Evolutionary Algorithms for Industrial Problems

Prof. dr. Thomas Bäck
Professor for Natural Computing
Head of the Natural Computing Research Group
Leiden Inst of Advanced Computer Science (LIACS)
Leiden University
Netherlands

Industrial optimization problems are often characterized by a number of challenging properties, such as time-consuming function evaluations, high dimensionality, a large number of constraints, and multiple optimization criteria. Over the past decades, we have adapted Evolution Strategies to such optimization problems. In this presentation, we will illustrate these aspects with industrial optimization problems as they occur in the automotive and many other industries. We will show that evolution strategies can be very effective even with very small numbers of function evaluations. In the second part of the talk, some recent experiments on configuring evolution strategies are presented, showing that evolution strategies can be further improved by automatic search methods. This opens up a promising new direction towards constructing optimization algorithms based on a modularized evolution strategy superstructure. Moreover, by combining this with data mining techniques, it is possible to characterize the relation between problem characteristics and beneficial algorithmic features.
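The core loop of an evolution strategy is compact. The following is a minimal (mu, lambda)-ES sketch in Python on the classic sphere benchmark; the population sizes, the fixed step-size decay, and the benchmark itself are illustrative choices only, not the tuned industrial variants the talk describes.

```python
import random

def evolution_strategy(f, dim, mu=5, lam=20, sigma=1.0, generations=200, seed=0):
    """Minimal (mu, lambda) Evolution Strategy with a single global step size.

    Minimizes f over R^dim using lam function evaluations per generation,
    illustrating ES behaviour under a limited evaluation budget.
    """
    rng = random.Random(seed)
    parents = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(mu)]
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            p = rng.choice(parents)  # pick a random parent to mutate
            child = [x + sigma * rng.gauss(0, 1) for x in p]
            offspring.append((f(child), child))
        offspring.sort(key=lambda t: t[0])          # comma selection:
        parents = [c for _, c in offspring[:mu]]    # keep best mu offspring
        sigma *= 0.98  # simple deterministic step-size decay
    return min(parents, key=f)

def sphere(x):  # classic benchmark: global minimum 0 at the origin
    return sum(v * v for v in x)

best = evolution_strategy(sphere, dim=5)
```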

Thomas Bäck is full professor of computer science at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, The Netherlands, where he has headed the Natural Computing group since 2002. He received his PhD (adviser: Hans-Paul Schwefel) in computer science from Dortmund University, Germany, in 1994. From 2000 to 2009, Thomas was Managing Director of NuTech Solutions GmbH and CTO of NuTech Solutions, Inc. He gained ample experience in solving real-life optimization and data mining problems through work with global enterprises such as BMW, Beiersdorf, Daimler, Ford, Honda, and many others. Thomas Bäck has more than 200 publications on natural computing, as well as two books on evolutionary algorithms: Evolutionary Algorithms in Theory and Practice (1996) and Contemporary Evolution Strategies (2013). He is co-editor of the Handbook of Evolutionary Computation and, most recently, the Handbook of Natural Computing. He is also an editorial board member and associate editor of a number of journals on evolutionary and natural computing. Thomas received the best dissertation award from the German Society of Computer Science (Gesellschaft für Informatik, GI) in 1995 and is an elected fellow of the International Society for Genetic and Evolutionary Computation for his contributions to the field.

More information can be found here.



Competitive Co-Evolution of Multi-Layer Perceptron Neural Networks

Dr Marco Castellani, Ph.D
Department of Mechanical Engineering
University of Birmingham
United Kingdom

Darwin recognised predator-prey mechanisms as a major driver of natural evolution. This talk discusses the competitive co-evolutionary training of multi-layer perceptron (MLP) neural networks. Classical evolutionary algorithms evolve a population of MLPs, measuring the fitness of the individuals by their ability to correctly map a fixed set of training examples. Competitive co-evolutionary algorithms pit a population of MLPs against a population of training patterns. The classifiers are regarded as predators that need to 'capture' (correctly categorise or map) the prey (training patterns). Success for the predators is measured by their ability to capture the prey; success for the prey is measured by their ability to escape predation (be mapped incorrectly). The aim of the procedure is to create an evolutionary tug-of-war between the best classifiers and the most difficult data samples. Tested on different classification tasks, competitive co-evolution showed promise in terms of robustness to corrupted data patterns, accuracy of the solutions, and reduced computational costs. Thanks to its ability to focus on as-yet unlearned examples, competitive co-evolution also showed great promise on tasks involving unbalanced data classes.
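The interaction-based scoring at the heart of this scheme can be sketched in a few lines. In this toy version, all names and the data are hypothetical and a single linear threshold unit stands in for the MLP; each predator is scored by the fraction of prey it captures and each prey by the fraction of predators it escapes:

```python
import random

def predict(weights, features):
    """Linear threshold unit standing in for the MLP (kept minimal here)."""
    return 1 if sum(w * x for w, x in zip(weights, features)) > 0 else 0

def coevolutionary_fitness(predators, prey):
    """Score both populations from their pairwise interactions."""
    # predator fitness: fraction of prey classified correctly ('captured')
    pred_fit = [sum(predict(p, x) == y for x, y in prey) / len(prey)
                for p in predators]
    # prey fitness: fraction of predators that misclassify it ('escaped')
    prey_fit = [sum(predict(p, x) != y for p in predators) / len(predators)
                for x, y in prey]
    return pred_fit, prey_fit

rng = random.Random(1)
# toy 2-D patterns: label 1 iff x0 + x1 > 0 (a hypothetical separable task)
prey = []
for _ in range(20):
    x = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    prey.append((x, 1 if x[0] + x[1] > 0 else 0))
predators = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(10)]
pred_fit, prey_fit = coevolutionary_fitness(predators, prey)
```

Note the built-in tug-of-war: every interaction counts as either a capture or an escape, so the mean predator fitness and mean prey fitness always sum to one.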

Marco Castellani is Lecturer in Advanced Robotics and Intelligent Systems at the Department of Mechanical Engineering of the University of Birmingham. He has 20 years of research experience in the private sector, universities, and research centres in various European countries. His work has spanned a broad interdisciplinary area encompassing engineering, biology, and computer science, including machine learning, machine vision, pattern recognition, swarm intelligence, soft computing, intelligent control, optimisation, ecological modelling, natural language processing, and the general AI field. He has published about 50 peer-reviewed research papers in scientific journals and international conferences, and is currently an Editor of the Cogent Engineering journal.

More information can be found here.



Handling Data Irregularities in Classification: Some Recent Approaches and Future Challenges

Dr. Swagatam Das, Ph.D.
Electronics and Communication Sciences Unit
Indian Statistical Institute, Kolkata
India

Data emerging from the real world may very often be plagued with prominent irregularities, including class imbalance (under-represented classes in the training sets), missing or absent features, and small disjuncts (under-represented sub-concepts within classes). The performance of traditional classifiers usually falls far short of its theoretical limits in the face of such data irregularities. This talk will outline some very effective recent approaches based on Support Vector Machines (SVMs), boosting, and k-Nearest Neighbor classifiers (kNNs) for handling such irregularities. The talk will also discuss a few open-ended research issues in this direction.
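For orientation, the simplest baseline against which such methods are compared is random oversampling of the minority class. This is not one of the talk's approaches, just the standard starting point, sketched here on a made-up 90/10 imbalanced dataset:

```python
import random

def oversample_minority(data, seed=0):
    """Random oversampling: duplicate examples of the smaller classes
    (sampled with replacement) until all class counts match the largest.
    """
    rng = random.Random(seed)
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append((x, y))
    target = max(len(items) for items in by_label.values())
    balanced = []
    for label, items in by_label.items():
        balanced.extend(items)
        # top up this class with randomly duplicated examples
        balanced.extend(rng.choice(items) for _ in range(target - len(items)))
    return balanced

# toy 90/10 imbalanced dataset: 90 majority-class and 10 minority-class points
data = [([i], 0) for i in range(90)] + [([i], 1) for i in range(10)]
balanced = oversample_minority(data)
```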

Swagatam Das is currently serving as a faculty member at the Electronics and Communication Sciences Unit of the Indian Statistical Institute, Kolkata, India. His research interests include machine learning and non-convex optimization. Dr. Das has published more than 250 research articles in peer-reviewed journals and international conferences. He is the founding co-editor-in-chief of Swarm and Evolutionary Computation, an international journal from Elsevier. He has also served, or is serving, as an associate editor of the IEEE Trans. on Systems, Man, and Cybernetics: Systems, IEEE Computational Intelligence Magazine, IEEE Access, Pattern Recognition (Elsevier), Neurocomputing (Elsevier), Engineering Applications of Artificial Intelligence (Elsevier), and Information Sciences (Elsevier). Dr. Das has 11500+ Google Scholar citations and an H-index of 53 to date. He has acted as guest editor for special issues in journals such as IEEE Transactions on Evolutionary Computation, ACM Transactions on Adaptive and Autonomous Systems, and IEEE Transactions on SMC, Part C. He is the recipient of the 2012 Young Engineer Award from the Indian National Academy of Engineering (INAE). He is also the recipient of the 2015 Thomson Reuters Research Excellence India Citation Award as the most highly cited researcher from India in the Engineering and Computer Science category between 2010 and 2014.

More information can be found here.



Theory of Evolutionary Computation - what is it and why bother?

Dr. Carola Doerr, Ph.D.
French National Center for Scientific Research
Université Pierre et Marie Curie
France

Evolutionary algorithms (EAs) are bio-inspired heuristics that, thanks to their high flexibility and their ability to produce high-quality solutions for a broad range of problems, are today well-established problem solvers in industrial and academic applications. EAs are often used as subroutines for particularly difficult parts of an optimization problem, as well as for several pre- or post-processing steps of state-of-the-art optimization routines. The predominant part of research on EAs focuses on engineering and empirical work. This research is complemented by the theory of evolutionary computation, which aims at providing mathematically founded statements about the working principles of such optimization techniques. Historically, the role of the theory of evolutionary computation was somewhat restricted to debunking common misconceptions about the performance of EAs. More recently, we see a growing number of examples where theory has served as a source of inspiration for designing more efficient EAs. In this talk, we shall discuss some of these recent developments. Our main focus will be
(1) on the role of crossover in evolutionary computation, and
(2) on the importance of dynamic parameter choices.
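A classic, theory-backed example of a dynamic parameter choice is Rechenberg's one-fifth success rule, which adapts the mutation strength online. The sketch below applies it to a (1+1)-ES on the sphere function; the constants and the benchmark are illustrative, not the talk's own results.

```python
import random

def sphere(x):
    """Classic benchmark: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def one_plus_one_es(f, x, sigma=1.0, steps=500, seed=0):
    """(1+1)-ES with the 1/5-th success rule: enlarge the mutation
    strength after an improving step, shrink it (four times more gently)
    otherwise, so the empirical success rate hovers around 1/5.
    """
    rng = random.Random(seed)
    fx = f(x)
    for _ in range(steps):
        y = [xi + sigma * rng.gauss(0, 1) for xi in x]
        fy = f(y)
        if fy <= fx:
            x, fx = y, fy
            sigma *= 1.5            # success: take larger steps
        else:
            sigma *= 1.5 ** -0.25   # failure: take slightly smaller steps
    return x, fx

x_best, f_best = one_plus_one_es(sphere, [5.0, 5.0, 5.0])
```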

Carola Doerr is a CNRS researcher at the Université Pierre et Marie Curie (Paris 6). She studied mathematics at Kiel University (Germany, Diploma in 2007) and computer science at the Max Planck Institute for Informatics and Saarland University (Germany, PhD in 2011). From Dec. 2007 to Nov. 2009, Carola Doerr worked as a business consultant for McKinsey & Company, mainly in the area of network optimization, where she used randomized search heuristics to compute more efficient network layouts and schedules. Before joining the CNRS she was a post-doc at the Université Paris Diderot (Paris 7) and the Max Planck Institute for Informatics. Carola Doerr's research interest is in the theory of randomized algorithms, both in the design of efficient algorithms and in finding a suitable complexity theory for randomized search heuristics. After contributing to the revival of black-box complexity, a theory-guided approach to exploring the limitations of heuristic search algorithms, she recently started a series of works aimed at exploiting insights from the theory of evolutionary computation to design more efficient EAs, in particular ones with a dynamic choice of parameters.

More information can be found here.



Evolutionary Computation for Dynamic Optimization Problems

Prof. Shengxiang Yang
Centre for Computational Intelligence (CCI)
School of Computer Science and Informatics
De Montfort University
United Kingdom

Evolutionary Computation (EC) encapsulates a class of stochastic optimisation algorithms inspired by principles of natural and biological evolution. EC has been widely used for optimisation problems in many fields. Traditionally, EC methods have been applied to static problems. However, many real-world problems are dynamic optimisation problems (DOPs), which are subject to change over time due to many factors. DOPs have attracted growing interest from the EC community in recent years due to their importance in real-world applications of EC. This talk will first briefly introduce the concepts of EC and DOPs, then review the main approaches developed to enhance EC methods for solving DOPs, and describe several of these approaches in detail. Finally, some conclusions will be drawn from the work presented, and future work on EC for DOPs will be briefly discussed.
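One of the simplest enhancement schemes in this family is random immigrants, which keeps the population diverse (and hence reactive to change) by replacing part of it with fresh random individuals each generation. A minimal sketch, with a hypothetical fitness function and replacement rate:

```python
import random

def inject_immigrants(population, fitness, rate, rng, new_individual):
    """Keep the best (1 - rate) fraction of the population and replace the
    rest with random 'immigrants', so the search can react when the
    optimum moves.
    """
    k = int(len(population) * rate)
    ranked = sorted(population, key=fitness, reverse=True)  # best first
    return ranked[:len(population) - k] + [new_individual(rng) for _ in range(k)]

rng = random.Random(3)
fitness = lambda x: -abs(x - 7.0)   # optimum of the (currently) stationary toy problem
pop0 = [rng.uniform(0, 10) for _ in range(10)]
pop = inject_immigrants(pop0, fitness, 0.3, rng, lambda r: r.uniform(0, 10))
```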

Shengxiang Yang is a Professor of Computational Intelligence (CI) and the Director of the Centre for Computational Intelligence, De Montfort University, UK. He has worked extensively for 20 years on CI methods, including EC and artificial neural networks, and their applications to real-world problems. He has over 230 publications in these domains, with 5800+ Google Scholar citations and an H-index of 40. His work has been supported by UK research councils (e.g., the Engineering and Physical Sciences Research Council (EPSRC), the Royal Society, and the Royal Academy of Engineering), EU FP7 and Horizon 2020, the Chinese Ministry of Education, and industry partners (e.g., BT, Honda, the Rail Safety and Standards Board, and Network Rail), with total funding of over £2M, of which two EPSRC standard research projects have focused on EC for DOPs. He serves as an Associate Editor or Editorial Board Member of eight international journals, including IEEE Transactions on Cybernetics, Evolutionary Computation, Neurocomputing, Information Sciences, and Soft Computing. He is the founding chair of the Task Force on Intelligent Network Systems (TF-INS) and the chair of the Task Force on EC in Dynamic and Uncertain Environments (ECiDUEs) of the IEEE CI Society (CIS). He has organised/chaired over 30 workshops and special sessions relevant to ECiDUEs for several major international conferences. He is the founding co-chair of the IEEE Symposium on CI in Dynamic and Uncertain Environments. He has co-edited 12 books, proceedings, and journal special issues. He has been invited to give over 10 keynote speeches/tutorials at international conferences, and over 30 seminars in different countries.

More information can be found here.



Opening the Black Box: Alternative Search Drivers for Genetic Programming and Test-based Problems

Prof. Krzysztof Krawiec
Institute of Computing Science
Poznan University of Technology
Poland

In genetic programming and other types of test-based problems, candidate solutions interact with multiple tests in order to be evaluated. The conventional approach involves aggregating the interaction outcomes into a scalar objective. However, passing different tests may require unrelated 'skills' on which candidate solutions may vary. Scalar fitness is inherently incapable of capturing such differences and leaves a search algorithm largely uninformed about the diverse qualities of individual candidate solutions. In this talk, I will discuss the implications of this fact and present a range of new methods that avoid scalarization by turning the outcomes of interactions between programs and tests into 'search drivers': heuristic, transient pseudo-objectives that form multifaceted characterizations of candidate solutions. I will also demonstrate the feasibility of this approach with experimental evidence and embed this research in the broader context of behavioral program synthesis.
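For concreteness, one well-known way to drive selection from individual test outcomes rather than a scalar fitness (related in spirit to, though not identical with, the talk's search drivers) is lexicase parent selection. The interaction matrix below is hypothetical:

```python
import random

def lexicase_select(population, test_outcomes, rng):
    """Select a parent without scalarizing: filter candidates through the
    tests in random order, keeping only those with the best outcome on
    each test, until one candidate (or a tie pool) remains.
    """
    candidates = list(population)
    tests = list(range(len(test_outcomes[population[0]])))
    rng.shuffle(tests)
    for t in tests:
        best = max(test_outcomes[c][t] for c in candidates)
        candidates = [c for c in candidates if test_outcomes[c][t] == best]
        if len(candidates) == 1:
            break
    return rng.choice(candidates)

# outcomes[i][t] = 1 iff program i passes test t (a made-up interaction matrix)
outcomes = {
    "p0": [1, 1, 0, 0],
    "p1": [0, 0, 1, 1],
    "p2": [1, 0, 1, 0],
    "p3": [0, 0, 0, 0],   # passes nothing: should never be selected
}
rng = random.Random(5)
parent = lexicase_select(list(outcomes), outcomes, rng)
```

Programs with different 'skills' (p0 versus p1) each get selected depending on the test ordering, while a program dominated on every test never survives the first filter.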

Krzysztof Krawiec is an Associate Professor in the Institute of Computing Science at Poznan University of Technology, Poland, where he heads the Computational Intelligence Group. His primary research areas are genetic programming, machine learning, and coevolutionary algorithms, with applications in program synthesis, modeling, pattern recognition, and games. Dr. Krawiec co-chaired the European Conference on Genetic Programming in 2013 and 2014, GP track at GECCO'16, is an associate editor of Genetic Programming and Evolvable Machines journal, and has been a visiting researcher at Computer Science and Artificial Intelligence Laboratory at MIT and Centre for Research in Intelligent Systems at University of California.

More information can be found here.



The Computational Power of Neural Networks and Representations of Numbers in Non-Integer Bases

Dr. Jiri Sima, DrSc.
Department of Theoretical Computer Science
Institute of Computer Science
The Czech Academy of Sciences
Czech Republic

(Artificial) neural networks (NNs) are biologically inspired computational devices that serve as an alternative to conventional computers, especially in the area of machine learning, with a plethora of successful commercial applications in AI. The limits and potential of particular NNs for general-purpose computation have been studied by classifying them within the Chomsky hierarchy (e.g. finite or pushdown automata, Turing machines) and/or more refined complexity classes (e.g. polynomial time). It has been shown that the computational power of NNs depends essentially on the information content of the weight parameters. For example, the analysis is fine-grained when moving from rational to arbitrary real weights, while the classification between integer and rational weights is still not complete. For this purpose, we introduce an intermediate model of integer-weight NNs with one extra analog neuron having rational weights, and we classify this model within the Chomsky hierarchy roughly between context-free and context-sensitive languages. Our analysis reveals an interesting link to an active research field on non-standard positional numeral systems with non-integer bases.
In our talk we will briefly survey the basic concepts and results concerning the computational power of neural networks. Then we will discuss the representations of numbers in non-integer bases (beta-expansions) using arbitrary real digits, which generalize the usual decimal or binary expansions. For example, a single number can typically be represented by infinitely many distinct beta-expansions. We will introduce so-called quasi-periodic beta-expansions, which may be composed of different repeating blocks of digits. Finally, we will formulate a sufficient condition under which an extra analog neuron does not bring additional computational power to integer-weight NNs; it is based on quasi-periodic weight parameters, all of whose beta-expansions are eventually quasi-periodic. We will illustrate the concepts introduced on numerical examples so that the presentation is accessible to a wide audience.
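A beta-expansion in a non-integer base is easy to compute greedily: repeatedly scale by beta and peel off the largest admissible digit. A small sketch, using the golden-ratio base as the example (since 1/phi + 1/phi^2 = 1, the digit set is {0, 1}); the choice of x and the truncation length are arbitrary:

```python
def greedy_beta_expansion(x, beta, n_digits):
    """Greedy expansion of x in (0, 1) in a non-integer base beta > 1:
    at each step, take the largest digit not exceeding the scaled remainder.
    """
    digits = []
    for _ in range(n_digits):
        x *= beta
        d = int(x)        # greedy digit choice
        digits.append(d)
        x -= d            # remainder stays in [0, 1)
    return digits

phi = (1 + 5 ** 0.5) / 2
digits = greedy_beta_expansion(0.5, phi, 12)
# value of the truncated expansion: sum of d_i * phi^-(i+1)
value = sum(d * phi ** -(i + 1) for i, d in enumerate(digits))
```

The truncation error after n digits is below beta^-n, so the reconstructed value is already close to 0.5 after 12 digits.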

Jiří Šíma is a senior scientist at the Institute of Computer Science, The Czech Academy of Sciences, in Prague, where he has served in the past as the chair of the Scientific Council and the head of the Department of Theoretical Computer Science. He has lectured and supervised theses at Charles University (associate professor), the Czech Technical University in Prague, and Masaryk University in Brno. He has been a program committee member of 30 international conferences, principal investigator of several successful grant projects involving tens of researchers, and a member of grant evaluation panels (INTAS Brussels, Czech Science Foundation). His main research interests include neural networks, computational complexity, learning theory, alternative complexity measures, and derandomization. He has achieved fundamental results in the theory of neural networks regarding the time complexity of the most common practical learning algorithm, backpropagation, and the computational characteristics of continuous, analog, and symmetric models. He has published ca. 100 papers, including 2 monographs, 4 book chapters (e.g. MIT Press, Springer), 20 journal papers (e.g. JACM, Neural Computation, Neural Networks, Theoretical Computer Science), and 30 papers in conference proceedings (e.g. STOC, LATA, ALT, ICANN, ICANNGA, ICONIP, IJCNN), which have attracted 843 Google Scholar citations with an H-index of 15. He has been awarded the Otto Wichterle Award for his monograph on theoretical issues of neural networks.

More information can be found here.



 Tutorial:
Deep Learning for Computer Vision with MATLAB

MSc. Jaroslav Jirkovsky
Senior Application Engineer
HUMUSOFT s.r.o.
International Reseller of MathWorks, Inc., U.S.A.
Czech Republic

Convolutional neural networks (CNNs) are essential tools for deep learning, and are especially useful for image classification, object detection, and recognition tasks. CNNs are implemented as a series of interconnected layers. In MATLAB, you can construct a CNN architecture, train a network, and use the trained network to predict class labels or detect objects using the R-CNN, Fast R-CNN, and Faster R-CNN object detectors. You can also extract features from a pretrained network and use these features to train a classifier, or train convolutional neural networks for regression tasks. What a network learns during training is sometimes unclear. Deep Dream is a feature visualization technique in deep learning that synthesizes images that strongly activate network layers. By visualizing these images, you can highlight the image features learned by a network.
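The tutorial itself uses MATLAB; purely for illustration, the sliding-window operation at the heart of a convolutional layer can be written out in a few lines of Python (no padding, stride 1, single channel, made-up 4x4 image):

```python
def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation: slide the kernel over the image
    and record the weighted sum at each position.
    """
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            row.append(sum(image[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

# a toy 4x4 image with a vertical edge down the middle
image = [[0, 0, 1, 1]] * 4
kernel = [[-1, 1]]   # responds where intensity jumps left-to-right
edges = conv2d(image, kernel)
```

The filter fires (value 1) exactly where the intensity jumps, which is the hand-crafted analogue of the edge-like features early CNN layers learn on their own.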

More information can be found here.



2015


Cartesian Genetic Programming

Assoc. Prof. Julian Miller
Department of Electronics
University of York
United Kingdom

Cartesian Genetic Programming (CGP) is a well-known form of Genetic Programming developed by Julian Miller in 1999. In its classic form, it uses a very simple integer address-based genetic representation of computational structures in the form of directed graphs. Graphs are very useful program representations and can be applied to many domains (e.g. electronic circuits, neural networks, image object detectors, algorithm generation, evolutionary art). CGP has been shown to be competitive in efficiency with other GP techniques and is also simple to program. The classical form of CGP has undergone a number of developments which have made it more useful, efficient, and flexible in various ways. These include self-modifying CGP (SMCGP), cyclic connections (recurrent CGP), encoding of artificial neural networks, and automatically defined functions (modular CGP).
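The integer address-based representation is compact enough to sketch directly. Below, a hypothetical genome of (function index, input address, input address) triples plus one output gene is decoded and evaluated; addresses point at program inputs or earlier nodes, so the graph is feed-forward by construction:

```python
def eval_cgp(genome, inputs, functions, arity=2):
    """Decode and evaluate a classic CGP genome: a flat list of integer
    genes, one (function, address, address) triple per node, plus a
    single output address at the end.
    """
    values = list(inputs)                 # addresses 0..n-1 are the inputs
    genes_per_node = 1 + arity
    body = genome[:-1]
    for i in range(0, len(body), genes_per_node):
        f = functions[body[i]]
        args = [values[a] for a in body[i + 1:i + 1 + arity]]
        values.append(f(*args))           # each node gets the next address
    return values[genome[-1]]             # the output gene selects a node

functions = [lambda a, b: a + b, lambda a, b: a * b]
# two inputs (addresses 0, 1); node 2 = in0 + in1; node 3 = node2 * in0;
# the final gene selects node 3 as the program output
genome = [0, 0, 1,   1, 2, 0,   3]
result = eval_cgp(genome, [3, 4], functions)   # computes (3 + 4) * 3
```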

Julian F. Miller has a BSc in Physics (Lond), a PhD in Nonlinear Mathematics (City), and a PGCLTHE (Bham) in Teaching. He is a Reader in the Department of Electronics at the University of York. He has chaired or co-chaired sixteen international workshops, conferences, and conference tracks in Genetic Programming (GP) and Evolvable Hardware. He is a former associate editor of IEEE Transactions on Evolutionary Computation and an associate editor of the journals Genetic Programming and Evolvable Machines and Natural Computing. He is on the editorial boards of the journals Evolutionary Computation, International Journal of Unconventional Computing, and Journal of Natural Computing Research. He has publications in genetic programming, evolutionary computation, quantum computing, artificial life, evolvable hardware, computational development, and nonlinear mathematics. He is a highly cited author with over 5,600 citations and over 230 publications in related areas. He has given ten tutorials on genetic programming and evolvable hardware at leading conferences in evolutionary computation. He received the prestigious EvoStar award in 2011 for outstanding contributions to the field of evolutionary computation. He is the inventor of a highly cited method of genetic programming known as Cartesian Genetic Programming and edited the first book on the subject in 2011.

More information can be found here.



Metaheuristic Approaches to Clustering and Image Segmentation – Some Recent Developments and Future Challenges

Dr. Swagatam Das
Electronics and Communication Sciences Unit
Indian Statistical Institute, Kolkata
India

Cluster analysis aims at the organization of an unlabeled collection of objects or patterns into separate groups based on their similarity. Modern data mining tools, which predict future trends and behaviors to allow businesses to make proactive, knowledge-driven decisions, demand fast and fully automatic clustering of very large datasets with minimal or no user intervention. On the other hand, image segmentation, the process of partitioning an image into meaningful parts or objects, is a fundamental step in many image, video, and computer vision related applications. It is a critical step towards content analysis and interpretation of various types of images, such as medical images, satellite images, and natural images. Segmentation is very often carried out by pixel clustering. This talk will focus on some recent swarm and evolutionary computation based approaches to automatic pattern clustering and image segmentation. In particular, applications of the Differential Evolution (DE) and Particle Swarm Optimization (PSO) algorithms to clustering, which can be formulated as an optimization problem, will be discussed. We shall also discuss some evolutionary approaches to shape detection in images. Finally, the talk will present a few challenging open problems in the context of so-called 'big data' clustering and noisy image segmentation.
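The formulation of clustering as an optimization problem can be made concrete with a bare-bones DE/rand/1/bin loop that evolves the cluster centers directly, minimizing the within-cluster squared error. Everything below (1-D synthetic data, parameter settings) is illustrative, not the specific algorithms the talk surveys:

```python
import random

def sse(centers, points):
    """Within-cluster sum of squared errors for 1-D points."""
    return sum(min((p - c) ** 2 for c in centers) for p in points)

def de_cluster(points, k=2, pop_size=15, gens=60, F=0.7, CR=0.9, seed=0):
    """Differential Evolution (DE/rand/1/bin) over candidate center sets:
    each individual encodes the k cluster centers as a flat vector.
    """
    rng = random.Random(seed)
    lo, hi = min(points), max(points)
    pop = [[rng.uniform(lo, hi) for _ in range(k)] for _ in range(pop_size)]
    fit = [sse(ind, points) for ind in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            # mutation + binomial crossover
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if rng.random() < CR else pop[i][d] for d in range(k)]
            f = sse(trial, points)
            if f <= fit[i]:            # greedy one-to-one replacement
                pop[i], fit[i] = trial, f
    return sorted(pop[fit.index(min(fit))])

# two well-separated 1-D blobs around 0 and 10 (synthetic toy data)
data_rng = random.Random(42)
points = ([data_rng.gauss(0, 0.5) for _ in range(30)]
          + [data_rng.gauss(10, 0.5) for _ in range(30)])
centers = de_cluster(points, k=2)
```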

Swagatam Das is currently serving as an assistant professor at the Electronics and Communication Sciences Unit of the Indian Statistical Institute, Kolkata, India. His research interests include evolutionary computing, pattern recognition, multi-agent systems, and wireless communication. Dr. Das has published one research monograph, one edited volume, and more than 200 research articles in peer-reviewed journals and international conferences. He is the founding co-editor-in-chief of Swarm and Evolutionary Computation, an international journal from Elsevier. He also serves as an associate editor of the IEEE Trans. on Systems, Man, and Cybernetics: Systems, IEEE Computational Intelligence Magazine, IEEE Access, and Neurocomputing (Elsevier). He is an editorial board member of Progress in Artificial Intelligence (Springer), PeerJ Computer Science, the International Journal of Artificial Intelligence and Soft Computing, and the International Journal of Adaptive and Autonomous Communication Systems. Dr. Das has 7000+ Google Scholar citations and an H-index of 40 to date. He has been associated with the international program committees and organizing committees of several regular international conferences, including IEEE CEC, GECCO, and SEMCCO. He has acted as guest editor for special issues in journals such as IEEE Transactions on Evolutionary Computation. He is the recipient of the 2012 Young Engineer Award from the Indian National Academy of Engineering (INAE).

More information can be found here.



Scenario-Free Approximations of Stochastic Programs

Assoc. Prof. Wolfram Wiesemann
KPMG Centre for Advanced Business Analytics
Imperial College Business School, London
United Kingdom

Traditional optimization models only involve deterministic parameters whose values are assumed to be known precisely. However, many practical decision problems involve uncertain parameters such as future prices and resource availabilities. It has been shown that treating these parameters as deterministic quantities can lead to severely suboptimal or even infeasible decisions. Stochastic programming overcomes this deficiency by faithfully treating the uncertain parameters as random variables.
Stochastic programs are usually solved by approximating the realizations of the random variables with a finite number of scenarios. Such scenario-based approaches suffer from a curse of dimensionality, that is, the optimization models scale exponentially with the number of uncertain parameters. In this talk, we survey recent scenario-free approximations to stochastic programming. Instead of discretising the random variables, these approximations employ low-dimensional representations of the problem’s decision variables. The resulting optimization models scale polynomially with the number of uncertain parameters and are thus computationally tractable. We illustrate the computational behavior of these techniques in the context of operations management.
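To make the scenario-based baseline concrete, here is a tiny sample average approximation of the classic newsvendor problem (the price, cost, and demand distribution are made up). Note that with several uncertain parameters the scenario set, and hence the model, grows multiplicatively, which is exactly the scaling the scenario-free decision-rule approximations avoid:

```python
import random

def saa_newsvendor(demands, price, cost):
    """Sample average approximation of the newsvendor problem: choose the
    order quantity maximizing average profit over the demand scenarios.
    """
    def avg_profit(q):
        # sell min(q, d) units in scenario d, having paid for q units
        return sum(price * min(q, d) - cost * q for d in demands) / len(demands)
    return max(range(int(max(demands)) + 1), key=avg_profit)

rng = random.Random(7)
demands = [rng.uniform(50, 150) for _ in range(1000)]  # 1000 demand scenarios
order = saa_newsvendor(demands, price=2.0, cost=1.0)
```

With price 2 and cost 1 the critical fractile is 1/2, so the optimal order sits near the median demand of 100; the scenario-based solution recovers this from the samples alone.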

Wolfram Wiesemann is an Associate Professor at the Imperial College Business School and a Fellow of the KPMG Centre for Advanced Business Analytics. He has been a visiting researcher at the Institute of Statistics and Mathematics at Vienna University of Economics and Business, the Computer-Aided Systems Laboratory at Princeton University, the Automatic Control Group at ETH Zurich and the Industrial Engineering and Operations Research Department at Columbia University. He holds a Joint Masters Degree in Management and Computing from Darmstadt University of Technology and a PhD in Operations Research from Imperial College. His current research focuses on the development of tractable computational methods for the solution of stochastic and robust optimization problems, as well as applications in energy systems and operations management.

More information can be found here.



Exploration of Topologies of Coupled Nonlinear Maps
(Chaos Theory 2/2)

Prof. René Lozi
Laboratoire J.A. Dieudonné
Université de Nice Sophia-Antipolis
France

The tremendous development of new IT technologies (e-banking, e-purchasing, the Internet of Things, etc.) nowadays incessantly increases the need for new and more secure cryptosystems. These are used for information encryption, pushing forward the demand for more efficient and secure pseudo-random number generators in the scope of chaos-based cryptography. Indeed, chaotic maps show up as perfect candidates able to generate independent and secure pseudo-random sequences (used as information carriers or directly involved in the process of encryption/decryption). We explore several topologies of networks of coupled 1-D chaotic maps (mainly the tent map and the logistic map) in order to obtain good chaotic pseudo-random number generators (CPRNGs). We focus first on two-dimensional networks. Two topologies are studied: TTLRC non-alternative and TTLSC alternative. In this case, those networks are equivalent to 2-D maps which achieve excellent randomness properties and uniform density in the phase plane, thus guaranteeing maximum security when used for chaos-based cryptography.
Moreover, an extra new nonlinear CPRNG, MTTLSC, is proposed. In addition, we explore topologies in higher dimensions, where the proposed ring coupling with an injection mechanism enables us to achieve the strongest security requirements.
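To give a flavour of such constructions (the actual TTLRC/TTLSC/MTTLSC schemes differ in their coupling details), here is a toy ring of coupled tent maps, with arbitrary seeds and coupling strength:

```python
def tent(x):
    """Tent map on [0, 1]."""
    return 2 * x if x < 0.5 else 2 * (1 - x)

def ring_coupled_sequence(seeds, eps=0.1, n=1000):
    """Ring-coupled tent maps: each unit's next state mixes its own tent
    iterate with that of its left neighbour (Python's index -1 closes the
    ring). The first unit's state is emitted as the output sequence.
    """
    state = list(seeds)
    out = []
    for _ in range(n):
        state = [(1 - eps) * tent(state[i]) + eps * tent(state[i - 1])
                 for i in range(len(state))]
        out.append(state[0])
    return out

seq = ring_coupled_sequence([0.123, 0.456, 0.789])
```

A convex combination of tent iterates stays in [0, 1], and the coupling keeps the float iteration from collapsing the way an isolated tent map can; assessing actual cryptographic quality would of course require the statistical test batteries the talk alludes to.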

Professor René Lozi received his Ph.D. (on bifurcation theory) from the University of Nice in 1975 and the French State Thesis under the supervision of Prof. René Thom in 1983. In 1991, he became Full Professor at the Laboratoire J. A. Dieudonné, University of Nice, and IUFM (Institut Universitaire de Formation des Maîtres). He served as the Director of IUFM (2001-2006). He is a member of the Editorial Boards of the Indian Journal of Industrial and Applied Mathematics and the Journal of Nonlinear Systems and Applications, and a member of the Honorary Editorial Board of the International Journal of Bifurcation and Chaos. In 1977 he entered the domain of dynamical systems, in which he discovered a particular mapping of the plane producing a very simple strange attractor (now known as the "Lozi map"). Nowadays his research areas include complexity and emergence theories, dynamical systems, bifurcation and chaos, control of chaos, chaos-based cryptography, and recently memristors. He works in this field with renowned researchers such as Professors Leon O. Chua (inventor of the "Chua circuit" and the memristor) and Alexander Sharkovsky (who introduced "Sharkovsky's order"). He received the Dr. Zakir Husain Award 2012 of the Indian Society of Industrial and Applied Mathematics during the 12th biennial conference of ISIAM at the University of Punjab, Patiala, in January 2015.

More information can be found here.



 Special Talk:
J. G. Mendel and the 150th anniversary of his discovery lecture

Prof. Eva Matalová
Centrum Mendelianum
Czech Republic

Johann Gregor MENDEL (1822-1884) became world famous as a scientist; however, he was a truly multifaceted personality. Mendel's strong scientific background was built on physics: he was a student of Ch. Doppler at the Institute of Experimental Physics at the University of Vienna in 1851-1853, and also a teacher of physics at the technical high school in Brno from 1854 to 1868. Additionally, Mendel was a pioneer of statistical methods in the study of heredity. Mendel's use of statistics in his now famous discovery was an exception, quite independent of the academic tradition. Gregor Mendel was able to demonstrate in the garden pea the statistical regularities of inheritance, which have since been verified. 150 years ago, on February 8 and March 8, 1865, Mendel presented his famous work Experiments in Plant Hybrids as a two-part lecture to the Nature Research Society (Naturforschender Verein) in Brno, and it appeared as a printed paper in the Society's journal a year later.

Eva Matalová is a research scientist at the Institute of Animal Physiology and Genetics of the Academy of Sciences. She is also a Professor at the University of Veterinary and Pharmaceutical Sciences in Brno. Eva Matalová has cooperated with the Mendelianum of the Moravian Museum for more than 20 years. She is the scientific guarantor of several Mendel-related projects and one of the key authors of the modern Centrum Mendelianum, covering the scientific and visitor centres as well as Mendel's interactive school of genetics. Eva Matalová spent several years abroad, particularly at King's College London, and she continues to develop international collaboration not only in her research area but also in Mendel networking.

More information can be found here.




2014


Morphogenetic self-organisation of swarm robots for adaptive pattern formation
foto

Prof. Yaochu Jin
Department of Computing (Head of NICE group)
University of Surrey
United Kingdom

Morphogenesis is the biological process in which a fertilized cell proliferates, producing a large number of cells that interact with each other to generate the body plan of an organism. Biological morphogenesis, governed by gene regulatory networks through cellular and molecular interactions, can be seen as a self-organizing process. This talk presents a methodology that uses genetic and cellular mechanisms inspired by biological morphogenesis to self-organize swarm robots for adaptive pattern formation in changing environments. We show that the morphogenetic approach is able to self-organize swarm robots without centralized control, while still generating predictable global behaviors.
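The core idea of local rules producing a predictable global pattern can be illustrated with a deliberately simple toy, which is not the gene-regulatory model of the talk: agents on a line, each seeing only its two nearest neighbours, nevertheless converge to uniform spacing.

```python
def self_organise_line(positions, steps=200):
    """Decentralised pattern-formation toy: each interior agent repeatedly
    moves to the midpoint of its two nearest neighbours, using only local
    information. The emergent global pattern is uniform spacing.
    Illustrative sketch only, not the gene-regulatory model of the talk."""
    pos = sorted(positions)
    for _ in range(steps):
        new = pos[:]
        for i in range(1, len(pos) - 1):
            new[i] = 0.5 * (pos[i - 1] + pos[i + 1])
        pos = new
    return pos

# Five agents start badly clustered; the two outermost agents act as anchors.
final = self_organise_line([0.0, 0.3, 0.35, 0.9, 10.0])
gaps = [b - a for a, b in zip(final, final[1:])]
```

After enough steps the gaps become equal, even though no agent ever sees the whole formation.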

Professor Yaochu Jin is Professor of Computational Intelligence and heads the Nature Inspired Computing and Engineering (NICE) Group, Department of Computing, University of Surrey, Guildford, Surrey, UK. He was Principal Scientist and Group Leader at the Honda Research Institute Europe, Germany, before he was appointed Chair in Computational Intelligence at the University of Surrey in June 2010. He has published over 150 journal and conference papers and has been granted 7 US/EU/Japan patents. His papers have received over 6,000 citations. Since June 2010, he has successfully attracted funding from the EU FP7, UK EPSRC, and industry, including Santander, Bosch UK, HR Wallingford, Intellas UK Ltd, Aero Optimal and Honda. Professor Jin is Vice President for Technical Activities of the IEEE Computational Intelligence Society (2014-2015) and an IEEE Distinguished Lecturer. He currently serves as an Associate Editor of BioSystems, the IEEE Transactions on Cybernetics, the IEEE Transactions on Nanobioscience, the IEEE Computational Intelligence Magazine, the Soft Computing Journal and the International Journal of Fuzzy Systems. He is an Editorial Board Member of the Evolutionary Computation Journal (MIT Press).

More information can be found here.



Analysing Evolutionary Algorithms and Other Randomised Search Heuristics: From Run Time Analysis to Fixed Budget Computations
foto

Dr. Thomas Jansen
Dept. of Computer Science
Aberystwyth University
United Kingdom

Evolutionary algorithms and other randomised search heuristics are often used to tackle difficult optimisation problems. In this context the most important question we want an analysis of these algorithms to answer is that of their efficiency: how well does a specific heuristic perform on a specific problem? In the last 20 years the theory of evolutionary algorithms has developed run time analysis to answer a specific version of this question: how long does it take a specific heuristic, on average, to find an optimal solution to a specific problem? There is a large number of impressive successes, developing analytical tools and applying them to increasingly complex heuristics and problems. However, it is not clear how practitioners in the field have benefitted from these developments. One can argue that evolutionary algorithm theory has concentrated on an aspect of performance that is at odds with real applications of evolutionary algorithms. A rather novel perspective, fixed budget computations, is a better match with the way evolutionary algorithms are applied. The talk provides an overview of the state of the art in evolutionary algorithm theory, introduces fixed budget computations and explains how results from run time theory can systematically be transformed into results of this new and more useful type.
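The two perspectives can be contrasted on a standard theory benchmark. The sketch below uses the (1+1) EA on OneMax (our choice of illustrative example, not necessarily the speaker's): run time analysis asks when the optimum is first hit, while the fixed budget view asks how good the best solution is after exactly a given number of evaluations.

```python
import random

def one_max(x):
    # Fitness: the number of ones in the bit string.
    return sum(x)

def one_plus_one_ea(n, budget, seed=1):
    """(1+1) EA on OneMax: flip each bit with probability 1/n, keep the
    offspring if it is not worse. Returns (best fitness after `budget`
    evaluations, evaluation index at which the optimum was first hit, or
    None if it was not hit within the budget)."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = one_max(x)
    hit = 1 if fx == n else None
    for t in range(2, budget + 1):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        fy = one_max(y)
        if fy >= fx:
            x, fx = y, fy
        if hit is None and fx == n:
            hit = t
    return fx, hit

# Run-time view: `hitting_time` (may be None if the budget ran out first).
# Fixed-budget view: `best`, the fitness reached after exactly 500 evaluations.
best, hitting_time = one_plus_one_ea(n=50, budget=500)
```

The fixed budget answer is defined for every run, whereas the hitting time may simply not exist within a practical budget, which is one motivation for the fixed budget perspective.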

Thomas Jansen is a Senior Lecturer at the Department of Computer Science at Aberystwyth University, Wales, UK (since January 2013). He studied Computer Science at the University of Dortmund, Germany, and received his diploma and Ph.D. there. From September 2001 to August 2002 he was a post-doc at Kenneth De Jong's EClab at George Mason University in Fairfax, VA. He has published 19 journal papers and 40 conference papers, contributed seven book chapters, and authored one book on evolutionary algorithm theory. His research centres on the design and theoretical analysis of artificial immune systems, evolutionary algorithms and other randomised search heuristics. He is associate editor of Evolutionary Computation (MIT Press) and Artificial Intelligence (Elsevier), a member of the steering committee of the Theory of Randomised Search Heuristics workshop series, and co-track chair of the Genetic Algorithm track at GECCO 2013 and 2014. In 2015 he will be co-organising FOGA 2015.

More information can be found here.



Adaptation for Differential Evolution
foto

Assoc. Prof. Ponnuthurai Nagaratnam Suganthan
School of Electrical and Electronic Engineering
Nanyang Technological University
Singapore

Differential evolution has recently become one of the most competitive real-parameter optimizers in diverse scenarios. This talk will present some of the important parameter and operator adaptation methods currently used with differential evolution. The talk will also touch on several optimization problem scenarios, such as single-objective, multi-objective, dynamic and multimodal optimization, and will identify some future research directions.
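One widely cited parameter adaptation scheme for DE is jDE (Brest et al.), in which each individual carries its own F and CR values that survive only when they produce successful trials. The sketch below is a minimal illustrative version on the sphere function; the parameter ranges and test problem are standard defaults, not the specific methods covered in the talk.

```python
import random

def jde_sphere(dim=5, pop_size=20, gens=200, seed=0):
    """DE/rand/1/bin with jDE-style self-adaptation of F and CR.
    Minimises the sphere function sum(x_i^2); returns the best value found.
    Illustrative sketch with conventional jDE settings."""
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    F = [0.5] * pop_size
    CR = [0.9] * pop_size
    for _ in range(gens):
        for i in range(pop_size):
            # jDE: regenerate this individual's F and CR with probability 0.1
            # before producing its trial vector.
            Fi = rng.uniform(0.1, 1.0) if rng.random() < 0.1 else F[i]
            CRi = rng.random() if rng.random() < 0.1 else CR[i]
            a, b, c = rng.sample([k for k in range(pop_size) if k != i], 3)
            jr = rng.randrange(dim)  # ensure at least one mutated component
            trial = [pop[a][j] + Fi * (pop[b][j] - pop[c][j])
                     if (rng.random() < CRi or j == jr) else pop[i][j]
                     for j in range(dim)]
            ft = f(trial)
            if ft <= fit[i]:  # successful parameters survive with the trial
                pop[i], fit[i], F[i], CR[i] = trial, ft, Fi, CRi
    return min(fit)

best = jde_sphere()
```

The key adaptation idea is visible in the selection step: F and CR are only kept when the trial vector they generated replaces its parent.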

Ponnuthurai Nagaratnam Suganthan received his B.A. and M.A. degrees in Electrical and Information Engineering from the University of Cambridge, UK. He obtained his Ph.D. degree from the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore. He is an Editorial Board Member of the Evolutionary Computation Journal (MIT Press) and an associate editor of the IEEE Transactions on Cybernetics (formerly IEEE T-SMC-B), the IEEE Transactions on Evolutionary Computation, Information Sciences (Elsevier) and Pattern Recognition (Elsevier). He is a founding co-editor-in-chief of Swarm and Evolutionary Computation, an Elsevier journal. His SaDE paper (April 2009) won the IEEE Transactions on Evolutionary Computation outstanding paper award in 2012. His research interests include evolutionary computation, pattern recognition, multi-objective evolutionary algorithms, applications of evolutionary computation, and neural networks. His publications have been well cited (Google Scholar citations: 11k); his SCI-indexed publications attracted over 1,000 SCI citations in the calendar year 2013 alone. He is a Senior Member of the IEEE and an elected AdCom member of the IEEE CIS (2014-2016).

More information can be found here.



Mathematical chaotic circuits: an efficient tool for shaping numerous architectures of mixed Chaotic/Pseudo Random Number Generators
(Chaos Theory 1/2)
foto

Prof. René Lozi
Laboratoire J.A. Dieudonné
Université de Nice Sophia-Antipolis
France

During the last decades, on the one hand, the duality between chaotic numbers and pseudo-random numbers has been highlighted (e.g. chaotic numbers used in particle swarm optimisation are sometimes more efficient than pseudo-random numbers, while high-quality pseudo-random numbers are needed for cryptography). On the other hand, the emergence of pseudo-randomness from chaos via various under-sampling methods has recently been discovered. Instead of opposing both qualities of numbers (chaos and pseudo-randomness), it is more interesting to shape mixed chaotic/pseudo-random number generators, which can modulate the desired properties between chaos and pseudo-randomness. Because there is nowadays an increasing demand for new and more efficient number generators of this type (arising from different applications, such as multi-agent competition, global optimisation via evolutionary algorithms or secure information transmission), it is important to develop new tools to shape, more or less automatically, various families of such generators. Mathematical chaotic circuits have recently been introduced for this purpose. By analogy with electronic circuitry (i.e. the design of electronic circuits composed of individual electronic components, such as resistors, transistors, capacitors, inductors and diodes, connected by conductive wires through which electric current can flow), mathematical chaotic circuits are composed of individual components (generators, couplers, samplers, mixers, reducers, ...) connected through streams of data. The combination of such mathematical components leads to several new applications, such as improving the performance of well-known chaotic attractors (Hénon, Chua, Lorenz, Rössler, ...). They can also be used on a larger scale to shape numerous architectures of mixed chaotic/pseudo-random number generators, as we will show in this talk.
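A minimal illustration of the under-sampling idea, using the speaker's own Lozi map with its classical parameter values (the rescaling to [0, 1] and the sampling rate are our illustrative assumptions, and the result is nowhere near cryptographic quality):

```python
def lozi_stream(n, a=1.7, b=0.5, keep_every=16, x=0.1, y=0.1):
    """Toy chaotic number generator: iterate the Lozi map
        x' = 1 - a*|x| + y,   y' = b*x
    and keep only every `keep_every`-th x-value (under-sampling),
    rescaled to [0, 1]. Illustrative sketch only."""
    out = []
    i = 0
    while len(out) < n:
        x, y = 1.0 - a * abs(x) + y, b * x
        i += 1
        if i % keep_every == 0:
            out.append((x + 2.0) / 4.0)  # the attractor roughly fits [-2, 2]
    return out

samples = lozi_stream(1000)
mean = sum(samples) / len(samples)
```

Keeping only every sixteenth iterate is the "sampler" component of a mathematical chaotic circuit in miniature: it discards the short-range determinism of the orbit while retaining its statistical spread.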

Professor René Lozi received his Ph.D. from the University of Nice in 1975 and the French State Thesis under the supervision of Prof. René Thom in 1983. In 1991, he became Full Professor at Laboratoire J.A. Dieudonné, University of Nice and IUFM. He has served as the Director of IUFM (2001-2006) and as Vice-Chairman of the French Board of Directors of IUFM (2004-2006). He is a member of the Editorial Board of the Indian Journal of Industrial and Applied Mathematics and the Journal of Nonlinear Systems and Applications, and a member of the Honorary Editorial Board of the International Journal of Bifurcation and Chaos. In 1977 he entered the domain of dynamical systems, in which he discovered a particular mapping of the plane producing a very simple strange attractor (now known as the "Lozi map"). His current research areas include complexity and emergence theories, dynamical systems, bifurcation and chaos, control of chaos and chaos-based cryptography. He works in this field with renowned researchers such as Professors Leon O. Chua (inventor of the "Chua circuit") and Alexander Sharkovsky (who introduced "Sharkovsky's order").

More information can be found here.




2013


Coarse Graining the Dynamics of Evolutionary Algorithms
foto

Prof. Riccardo Poli
School of Computer Science and Electronic Engineering
University of Essex
United Kingdom

The study of complex adaptive systems is among the key modern tasks in science. Such systems show radically different behaviours at different scales and in different environments, and mathematical modelling of such emergent behaviour is very difficult, even at the conceptual level. We require a new methodology to study and understand complex, emergent macroscopic phenomena. Coarse graining, a technique that originated in statistical physics, involves taking a system with many microscopic degrees of freedom and finding an appropriate subset of collective variables that offer a compact, computationally feasible description of the system, in terms of which the dynamics looks “natural”. This talk will present the key ideas of the approach and will show how it can be applied to evolutionary dynamics.

Riccardo Poli is a professor in the School of Computer Science and Electronic Engineering at Essex, UK. His research interests include genetic programming, particle swarm optimisation, the theory of evolutionary algorithms, and brain-computer interfaces. He was a recipient of the Evo* award for outstanding contributions to the field of evolutionary computation. He has published over 300 refereed papers on evolutionary algorithms, biomedical engineering, neural networks and image/signal processing. He has co-authored the books Foundations of Genetic Programming (Springer, 2002) and A Field Guide to Genetic Programming (Lulu, 2008). He has been chair of numerous international conferences. He is an advisory board member of the Evolutionary Computation journal and an associate editor of the Genetic Programming and Evolvable Machines and Swarm Intelligence journals.

More information can be found here.



Capabilities of Radial and Kernel Networks
foto

RNDr. Věra Kůrková, DrSc.
Institute of Computer Science
Academy of Sciences of the Czech Republic
Czech Republic

Originally, artificial neural networks were built from biologically inspired units called perceptrons. Later, other types of units became popular in neurocomputing due to their good mathematical properties. Among them, radial-basis-function (RBF) units and kernel units became the most popular. The talk will discuss the advantages and limitations of networks with these two types of computational units. The higher flexibility in the choice of free parameters in RBF networks will be compared with the benefits of the geometrical properties of kernel models, which allow the application of maximal-margin classification algorithms, the modelling of generalization in learning from data in terms of regularization, and the characterization of optimal solutions of learning tasks. The critical influence of input dimension on the behavior of these two types of networks will be described. General results will be illustrated by the paradigmatic examples of Gaussian kernel and radial networks.
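To make the "kernel units with regularization" idea concrete, here is a minimal 1-D kernel ridge regression sketch built from Gaussian kernel units. The toy data, the regularization weight and the simple Gauss-Seidel solver are our illustrative assumptions, not constructions from the talk.

```python
import math

def gaussian_kernel(u, v, width=1.0):
    return math.exp(-((u - v) ** 2) / (2 * width ** 2))

def kernel_ridge_fit(xs, ys, lam=0.1, width=1.0):
    """Regularised learning with Gaussian kernel units: solve
    (K + lam*I) c = y by Gauss-Seidel iteration, giving the model
    f(x) = sum_j c_j * k(x, x_j). The system is strictly diagonally
    dominant for these widths, so the iteration converges."""
    n = len(xs)
    K = [[gaussian_kernel(xs[i], xs[j], width) for j in range(n)] for i in range(n)]
    c = [0.0] * n
    for _ in range(200):  # simple iterative solver, fine for tiny n
        for i in range(n):
            s = sum(K[i][j] * c[j] for j in range(n) if j != i)
            c[i] = (ys[i] - s) / (K[i][i] + lam)
    return lambda x: sum(c[j] * gaussian_kernel(x, xs[j], width) for j in range(n))

# Four training points; the regularizer lam trades data fit for smoothness.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, 0.0, -1.0]
f = kernel_ridge_fit(xs, ys)
```

The learned function is a linear combination of one Gaussian "unit" per training point, which is exactly the network form the talk contrasts with RBF networks having freely placed centres and widths.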

RNDr. Věra Kůrková, DrSc., received her Ph.D. in topology from Charles University, Prague. Since 1990, she has been a scientist at the Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague. From 2002 to 2008, she was Head of the Department of Theoretical Computer Science. Her research interests include the mathematical theory of neurocomputing and soft computing, machine learning, and nonlinear approximation theory. In 2010 she was awarded the Bolzano Medal by the Czech Academy of Sciences for her contributions to mathematics. She is a member of the Editorial Boards of the journals Neural Networks and Neural Processing Letters, and in 2008-2009 was also on the Editorial Board of the IEEE Transactions on Neural Networks. She is a member of the Board of the European Neural Networks Society and was the general chair of the conferences ICANNGA 2001 and ICANN 2008.

More information can be found here.



Reliable Learning by Conformal Predictors
foto

Prof. Alex Gammerman
Co-Director of the Computer Learning Research Centre
Royal Holloway and Bedford New College, University of London
United Kingdom

The talk describes a new machine learning technique called Conformal Predictors. The technique is based on recently developed computable approximations of Kolmogorov's algorithmic notion of randomness, and it allows us to make reliable predictions, with valid measures of confidence, in both "batch" and "online" modes of learning. The advantages are as follows:

  • it can control the number of erroneous predictions by selecting a suitable confidence level;
  • unlike many conventional techniques, the approach makes no assumption about the data beyond the i.i.d. assumption: the examples are independent and identically distributed;
  • it allows estimating the confidence of the prediction for individual examples;
  • it can be used as a region predictor with a number of possible predicted values;
  • it can be used in high-dimensional problems where the number of attributes greatly exceeds the number of objects;
  • it gives well-calibrated predictions that can be used in online and offline learning, as well as in "intermediate" types of learning, e.g. "slow" or "lazy" learning.
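A minimal transductive conformal predictor fits in a few lines. The 1-NN nonconformity score and the toy 1-D data below are illustrative assumptions, not the speaker's specific constructions; the region-prediction behaviour from the list above is what the sketch demonstrates.

```python
def conformal_label_set(train, x_new, labels, epsilon=0.2):
    """Transductive conformal predictor with a 1-NN nonconformity score:
    alpha = (distance to nearest same-label point) /
            (distance to nearest different-label point),
    so conforming points (near their own class) get small scores.
    Returns every label whose p-value exceeds the significance epsilon.
    `train` is a list of (x, y) pairs with 1-D numeric x. Toy sketch."""
    region = []
    for cand in labels:
        bag = train + [(x_new, cand)]  # tentatively label the new point
        scores = []
        for i, (xi, yi) in enumerate(bag):
            same = [abs(xi - xj) for j, (xj, yj) in enumerate(bag) if j != i and yj == yi]
            diff = [abs(xi - xj) for j, (xj, yj) in enumerate(bag) if j != i and yj != yi]
            scores.append(min(same, default=float("inf")) /
                          (min(diff, default=float("inf")) + 1e-12))
        # p-value: fraction of examples at least as nonconforming as the new one.
        p = sum(1 for s in scores if s >= scores[-1]) / len(bag)
        if p > epsilon:
            region.append(cand)
    return region

train = [(1.0, "a"), (1.2, "a"), (0.8, "a"), (5.0, "b"), (5.3, "b"), (4.7, "b")]
region = conformal_label_set(train, 1.1, ["a", "b"])
```

For a point deep inside the "a" cluster the region contains only "a"; raising the confidence level (lowering epsilon) can only enlarge the region, which is the calibration property the bullet list describes.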

Professor Gammerman was appointed to the established Chair in Computer Science at the University of London (Royal Holloway and Bedford New College) in 1993 and served as Head of the Computer Science Department from 1995 until 2005. He is also Founding Director of the Computer Learning Research Centre at Royal Holloway, University of London. Professor Gammerman is a Fellow of the Royal Statistical Society, a Fellow of the Royal Society of Arts, and a Fellow of the British Computer Society. He has chaired and participated in organising committees of many international conferences and workshops on Machine Learning and Bayesian methods in Europe, Russia and the United States. Professor Gammerman's current research interest lies in the field of Algorithmic Randomness Theory and its applications to machine learning and conformal predictors. Areas in which these techniques have been applied include medical diagnosis, forensic science, genomics, proteomics and the environment. Professor Gammerman has published over a hundred research papers and several books on computational learning and probabilistic reasoning.

More information can be found here.




2012


Genetic Improvement Programming
foto

Dr. William B. Langdon
Research Fellow
University College London
United Kingdom

Evolutionary computing, particularly genetic programming, can optimise software and software engineering, including evolving test benchmarks, search meta-heuristics, protocols, composing web services, improving hashing and garbage collection, redundant programming and even automatically fixing bugs. Often there are many potential ways to balance functionality with resource consumption, but a human programmer cannot try them all. Moreover, the optimal trade-off may be different on each hardware platform, and it may vary over time or as usage changes. Genetic programming may be able to automatically suggest different trade-offs for each new market.

Dr. Langdon worked in the power supply industry and as a software consultant before returning to university to gain a PhD on evolving software with genetic programming. Bill has worked on both applications and the theoretical foundations of GP, has written three books on GP and has given presentations on five continents. Applications include scheduling, e-commerce, data mining, evolving combinations of classifiers (MCS), swarm systems (PSO) and bioinformatics (e.g. non-human contaminants in the human genome databases). His theoretical work includes GP schema theory, Markov analysis, the halting probability and elementary fitness landscapes. Recent work includes using grammar-based GP to re-engineer nVidia CUDA software.

More information can be found here.



Nature-Inspired Metaheuristics: Theory and Applications
foto

Dr. Xin-She Yang
Senior Research Scientist
National Physical Laboratory
United Kingdom

Nowadays, nature-inspired metaheuristics have become an integrated part of soft computing and computational intelligence, and they have been applied to solve a wide range of tough optimization problems. Seemingly simple algorithms can often deal with complex, even NP-hard, optimization problems with surprisingly good performance and results. In this talk, I will review some of the recent metaheuristic algorithms such as firefly algorithm and cuckoo search and their differences from particle swarm optimization and other metaheuristics. We will try to analyse the key components of metaheuristic methods in terms of convergence and search characteristics. We will also give a few examples in real-world applications and suggest some open problems for further research.
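As a hedged sketch of one of the algorithms mentioned, here is a minimal firefly algorithm on the sphere function. The parameter values are illustrative defaults, not tuned recommendations from the talk.

```python
import math
import random

def firefly_sphere(dim=2, n_fireflies=15, gens=100, seed=0,
                   alpha=0.2, beta0=1.0, gamma=1.0):
    """Minimal firefly algorithm sketch minimising the sphere function
    sum(x_i^2). Each firefly moves toward every brighter one with
    attractiveness beta0*exp(-gamma*r^2), plus a random step scaled by
    alpha; brightness here is simply (lower) objective value."""
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)
    pos = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(n_fireflies)]
    for _ in range(gens):
        bright = [f(x) for x in pos]  # lower objective = brighter firefly
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if bright[j] < bright[i]:
                    r2 = sum((a - b) ** 2 for a, b in zip(pos[i], pos[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pos[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                              for a, b in zip(pos[i], pos[j])]
        alpha *= 0.97  # cool the random step so the swarm can settle
    return min(f(x) for x in pos)

best = firefly_sphere()
```

Note the structural difference from particle swarm optimization that the abstract alludes to: attraction here is pairwise and distance-dependent (through the exponential), rather than mediated by a single global best.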

Dr Xin-She Yang received his DPhil in Applied Mathematics from the University of Oxford, then worked at Leeds University and Cambridge University for a few years, and is now a Senior Research Scientist at the National Physical Laboratory. He is a Guest Professor at Shandong University and Harbin Engineering University. He has authored a dozen books and published more than 110 papers. He is the Editor-in-Chief of the Int. J. Mathematical Modelling and Numerical Optimisation (IJMMNO, Inderscience) and serves as an editorial board member of several international journals, including Elsevier's Journal of Computational Science (JoCS). He is also the vice chair of the IEEE CIS task force on business intelligence and knowledge management.

More information can be found here.



Differential Evolution: from Theoretical Analysis to Practical Insights
foto

Prof. Daniela Zaharie
Faculty of Mathematics and Computer Science
West University of Timisoara
Romania

Differential Evolution (DE) is currently one of the most widely used population-based stochastic metaheuristics. Its popularity is mainly due to its simplicity and effectiveness in solving various types of problems, including multi-objective, multi-modal, dynamic and constrained optimization problems. Since Rainer Storn and Kenneth Price proposed the first DE versions more than fifteen years ago, dozens of differential evolution flavours, involving changes to the main operators, hybridization with other optimization methods, automated parameter tuning, (self-)adaptation schemes, structured populations and so on, have been proposed.
Despite the large number of reported applications of DE and the huge volume of experimental results, it is still difficult to answer questions like "Why is DE successful for one class of problems and why does it fail for others?" or "How can we control DE's behavior?". The theoretical analysis of DE still lags well behind the experimental results, with most of the current knowledge on differential evolution based on empirical observations.
This presentation will review the existing theoretical results concerning the convergence properties of DE and the influence of the choice of DE parameters on the population evolution, and will focus on using these results to derive practical insights for designing effective and efficient optimization tools.

Prof. Daniela Zaharie received her PhD degree in Probability and Statistics from the West University of Timisoara, Romania, in 1997 and is currently a professor at the Department of Computer Science of the same university. Her main research interests are evolutionary computing, machine learning, data mining, statistical modelling, image processing and high performance computing.

More information can be found here.



Emergent Phenomena in Evolution of Shapes and in Problem Solving
foto

Prof. Jiri Bila
Department of Instrumentation and Control Engineering
Czech Technical University in Prague
Czech Republic

The proposed lecture maps emergent phenomena in the following fields:
  – In the evolution of natural shapes and processes.
  – In multi-agent systems.
  – In the behavior of ant colonies.
  – In urban and non-urban development of large cities.
  – In conceptual design.
  – In problem solving.

The lecture surveys essential and breakthrough views on emergence and emergent phenomena up to the present day. It introduces Complex Systems as one of the fields appropriate for investigating the conditions under which emergent phenomena arise. Principal obstacles that prevent a rigorous investigation of emergent phenomena at the level of today's analytical science are introduced. Cases of emergent phenomena in problem solving are illustrated.

Prof. Jiri Bila finished his studies at the Faculty of Mechanical Engineering, CTU Prague, in 1969. He defended his CSc. thesis in the field of Technical Cybernetics in 1977, and in 1987 he achieved the degree of DrSc. in the same field. He has been a full professor of Technical Cybernetics at the Faculty of Mechanical Engineering, CTU in Prague, since 1989. The core of his scientific activity is the analysis of non-linear systems with deterministic chaos, the application of artificial intelligence and neural networks to biosignals, and intelligent methods in diagnostics. He was author or co-author of 17 grant projects at various levels during 1991-2011, and author or co-author of 5 books and over 250 journal and conference papers.

More information can be found here.



 Special Invited Talk:
On the nature of the electron and other particles
foto

Dr John G Williamson
Senior Lecturer
University of Glasgow
Scotland

For many thousands of years man has wondered about the underlying nature of the universe. Much progress has been made, but many mysteries remain. One such mystery is the underlying structure of the “elementary” particles. The quantum-mechanical spin of the electron would suggest, for example, that it must have an underlying structure. In one of our most advanced theories, however, quantum electrodynamics, it is taken to be structureless. In ordinary quantum mechanics it has a wave nature, but what this wave looks like is very variable: large in the solid state, minuscule in high-energy scattering. How does it do this? What is it? What is in there? One gets used to not understanding. This lack of understanding is even raised to the level of a principle on which further development is based (the uncertainty principle). The talk will outline a new theory, developed using a new kind of mathematics, which allows the underlying structure of elementary particles to be considered.

Dr. John Williamson worked for many years at CERN. He joined the advanced theoretical and experimental physics group at Philips in 1985, where he was responsible for the design of the quantum point contact and the world's first single-electron pump. These devices were revolutionary at the time and continue to be the subject of much experiment even now. Since 1991 he has been working on the development of a new theoretical basis in which light and matter may be treated on the same footing. He has (co-)authored around a hundred papers.

More information can be found here.



2011


Visualization of Digital High Dynamic Range Images
foto

Professor Miloslav Druckmüller
Head of Dept. of Computer Graphics and Geometry
Brno University of Technology
Czech Republic

Human vision can distinguish fewer than 200 brightness levels on today's computer displays. However, modern digital cameras produce images with a dynamic range of 16 or more bits per pixel. Without sophisticated image processing, these images therefore contain information invisible to human vision. An extreme example is images of the solar corona, with their extreme contrast gradients. The lecture presents mathematical tools for solving this problem.
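The compression problem can be illustrated with a deliberately crude tone-mapping sketch: a simple power-law curve (our illustrative stand-in, not the adaptive corona filters the lecture actually discusses) squeezes a 16-bit range into the few hundred levels a display and a viewer can handle.

```python
def tone_map(pixels16, gamma=0.35):
    """Map 16-bit intensities (0..65535) to 8-bit display levels with a
    power-law (gamma) curve, compressing a huge dynamic range so that
    shadow detail stays visible. Minimal sketch, not the adaptive
    filters described in the talk."""
    out = []
    for v in pixels16:
        norm = v / 65535.0
        out.append(round(255 * norm ** gamma))
    return out

# A linear 8-bit scaling would crush the values 100 and 1000 to 0 and 4;
# gamma compression keeps them clearly distinguishable on screen.
levels = tone_map([0, 100, 1000, 10000, 65535])
```

The global curve already shows why naive scaling loses information; the corona problem is harder still because the useful gamma varies across the image, which is what adaptive filtering addresses.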

Prof. Druckmüller specializes in functional analysis and analysis in the complex domain, image processing and image analysis. Recently, he has focused on the processing of images of the solar corona taken during total solar eclipses and on the physics of the solar corona. He developed a special adaptive filter for solar corona structure enhancement and a complete algorithm for processing solar corona images taken by both digital and classical cameras. He is an author or co-author of several important papers on the processing of solar corona images and the physics of the solar corona. He was a member of scientific expeditions that successfully observed total solar eclipses in 2005, 2006, 2008, 2009 and 2010. An image of the 2008 total solar eclipse appeared on the cover of Nature.

More information can be found here.



Fuzzy Rule-Based Classification: Theory and Applications
foto

Assoc. Prof. Tomoharu Nakashima
Dept. of Computer Science and Intelligent Systems
Osaka Prefecture University
Japan

Fuzzy systems based on fuzzy if-then rules have been shown to be effective, especially in the fields of control and classification. Fuzzy systems can be constructed by various modes of learning, such as supervised learning, unsupervised learning, and reinforcement learning. The main difference between fuzzy and non-fuzzy rule-based classification is that fuzzy systems can produce non-linear classification boundaries while non-fuzzy ones cannot. In this presentation, first, various learning modes for fuzzy rule-based systems are explained. Second, the application of fuzzy rule-based systems to classification and decision-making problems is described. Specifically, examples of medical diagnosis and autonomous game agents are illustrated.
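A minimal fuzzy rule-based classifier can be sketched as follows. The two-rule base, the triangular membership functions and the feature ranges are illustrative assumptions, not the systems discussed in the talk.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical two-rule base over two normalised features:
#   IF x1 is LOW  AND x2 is LOW  THEN class 0
#   IF x1 is HIGH AND x2 is HIGH THEN class 1
LOW = (-0.5, 0.0, 0.6)
HIGH = (0.4, 1.0, 1.5)
RULES = [((LOW, LOW), 0), ((HIGH, HIGH), 1)]

def classify(x1, x2):
    """Fire each rule with the product t-norm and return the class of the
    rule with the highest compatibility (single-winner fuzzy reasoning)."""
    best_class, best_strength = None, -1.0
    for (m1, m2), cls in RULES:
        strength = tri(x1, *m1) * tri(x2, *m2)
        if strength > best_strength:
            best_class, best_strength = cls, strength
    return best_class

pred = classify(0.9, 0.8)
```

Because membership grades vary continuously across the overlap of LOW and HIGH, the decision boundary between the two classes is non-linear, which is the contrast with crisp rule-based classifiers the abstract points to.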

Tomoharu Nakashima received his bachelor's, master's, and Ph.D. degrees in engineering from Osaka Prefecture University in 1995, 1997, and 2000, respectively. His research interests include fuzzy rule-based systems, RoboCup soccer simulation, pattern classification, agent-based simulation in financial engineering, evolutionary computation, neural networks, reinforcement learning and game theory.

More information can be found here.



Evolutionary Optimization for Decision Making under Uncertainty
foto

Assistant Prof. Ronald Hochreiter
Institute of Statistics and Mathematics
WU Vienna University of Economics and Business
Austria

The scientific discipline of Decision Making under Uncertainty is an important tool for solving a vast range of quantitative management problems, especially in the area of risk management. There exists a range of optimization methodologies to numerically solve optimization models with integrated real-world uncertainty, e.g. Stochastic Programming and Robust Optimization. However, for solving real-world problems within an uncertain setting and with real-world constraints, standard optimization solvers are known to be problematic; indeed, the computational challenge is an important factor in why companies are reluctant to apply optimization methods in general. Custom solvers have to be developed. Evolutionary optimization (especially Genetic Algorithms for Optimization) can be used to create fast and highly flexible solvers. Examples from the application areas of Finance and Energy will show the applicability of Evolutionary Optimization.

Ronald Hochreiter received his PhD in Computational Management Science from the University of Vienna in 2005. His research is focused on computational optimization under uncertainty for applications in risk management. Furthermore, he is concerned with the implementation of large-scale decision support systems to solve complex real-world decision problems. Before joining the WU Vienna University of Economics and Business in 2009, he worked as a risk manager for a large pension fund in Austria. Besides his research, he actively consults banks, insurance companies and pension funds on improving risk management processes using the latest results from his research.

More information can be found here.



2010


Recent Adventures with Grammar-based Genetic Programming
foto

Dr. Michael O'Neill
Director, UCD NCRAG
University College Dublin
Ireland

Following an introduction to grammar-based Genetic Programming, with a particular emphasis on Grammatical Evolution, we outline recent adventures in this domain. Highlights of both the research and real-world applications of grammar-based Genetic Programming by UCD's Natural Computing Research & Applications group are presented.

Dr. O'Neill is a founder of the UCD Natural Computing Research & Applications group with Prof. Anthony Brabazon, and is a Senior Lecturer in the UCD School of Computer Science & Informatics. He is the lead author of the seminal book on Grammatical Evolution and is independently ranked as one of the top 10 researchers in Genetic Programming.

Dr. O'Neill has published in excess of 200 peer-reviewed publications, including three monographs. In 2009 he published the second book on Grammatical Evolution, "Foundations in Grammatical Evolution for Dynamic Environments", with Ian Dempsey and Anthony Brabazon, and in 2006 he published the book "Biologically Inspired Algorithms for Financial Modelling" with Prof. Brabazon. He has over 1,500 citations and an H-index of 18. Dr. O'Neill has co-authored a number of successful funding applications with a total value of over €7 million.

More information can be found at http://www.ucd.ie/research/people/computerscienceinformatics/drmichaeloneill



Fuzzy rule-based classification systems and their application in the medical domain
foto

Dr. Gerald Schaefer
Department of Computer Science
Loughborough University
United Kingdom

Many medical applications contain a decision-making process which can be regarded as a pattern classification problem. In the literature, many pattern classification techniques have been introduced, ranging from statistical methods to intelligent soft computing techniques. In my talk I will focus on the latter, and in particular on fuzzy rule-based classifiers. I will highlight a particular group of fuzzy classifiers, show how a fuzzy rule base can be turned into a cost-sensitive classifier, and present how a compact yet effective rule base can be derived through the application of genetic algorithms. Finally, I will show how these classifiers can be employed in a number of medical applications, including the analysis of breast cancer data and gene expression analysis.

Dr. Schaefer gained his BSc. in Computing from the University of Derby and his PhD in Computer Vision from the University of East Anglia. He worked at the Colour & Imaging Institute, University of Derby (1997-1999), in the School of Information Systems, University of East Anglia (2000-2001), in the School of Computing and Informatics at Nottingham Trent University (2001-2006), and in the School of Engineering and Applied Science at Aston University (2006-2009) before joining the Department of Computer Science at Loughborough University. His research interests are mainly in the areas of image analysis, computer vision and computational intelligence. He has published extensively in these areas with a total publication count exceeding 200. He is a member of the editorial board of several international journals, reviews for over 50 journals and served on the programme committee of about 150 conferences.



Fractals in Physics
foto

Prof. Oldrich Zmeskal
Professor, Institute of Physical and Applied Chemistry
Brno University of Technology
Czech Republic

This contribution is concerned with the use of fractal theory for the description of elementary stationary physical fields (e.g. gravitational, electric) of spherical and toroidal geometry. This theory, defined generally in E-dimensional Euclidean space, was applied to the description of stationary effects in one-, two- and three-dimensional space, respectively. A further extension of fractal theory is presented in the area of so-called pseudo-Euclidean coordinates, where the E-dimensional space consists of p Euclidean and q pseudo-Euclidean dimensions (space-time).

Prof. Oldrich Zmeskal is the author of about 15 program products, 12 textbooks (Physics, Physical Chemistry, Computer Science), 7 book chapters (Computer Science), 30 articles published in international journals, and about 15 lectures and 30 posters at Czech and international conferences.

He cooperates with Czech and international research institutions and industrial enterprises; his research covers the characterization and application of organic semiconductors, the thermoelectric properties of materials, and fractal physics.



2009


Differential Evolution Algorithm in Practical Applications
foto

D.Sc. Jouni Lampinen
Professor D.Sc. - Department of Computer Science
University of Vaasa
Finland

The Differential Evolution algorithm was first introduced 14 years ago, in 1995, by Kenneth V. Price and Rainer Storn. Since then Differential Evolution has grown from a research topic pursued by a few pioneers into one of the most frequently applied evolutionary computation approaches. Currently Differential Evolution is widely recognized by the research community in the field, as well as widely adopted by practitioners in both academia and industry. The remarkably high number of practical applications of Differential Evolution reported in the literature suggests that the algorithm is particularly suitable for practical applications and for solving real-life global optimization problems. It seems that the algorithm can also be applied successfully by those who are best characterized as experts in an application area (e.g. in industry) rather than experts in global optimization.
The presentation focuses on the properties of the Differential Evolution algorithm that could explain its success in practical applications, making it particularly easy to apply and use even by those whose main expertise lies in the application area and who focus on the problem to be solved rather than on the tools used for solving it. The current limitations and weaknesses of the Differential Evolution algorithm are also discussed, especially from the practical applications point of view, identifying points where further research is needed to improve the applicability and ease of use of the algorithm.
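For reference, the classic DE/rand/1/bin variant of Price and Storn can be sketched as follows; the population size, F and CR values are typical textbook settings, not values recommended in the talk:

```python
import random

def differential_evolution(f, bounds, np_=20, F=0.8, CR=0.9, gens=200, seed=1):
    """DE/rand/1/bin: mutant v = a + F*(b - c), binomial crossover,
    greedy one-to-one selection between trial and target vectors."""
    rng = random.Random(seed)
    D = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            # three mutually distinct donors, all different from the target
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(D)   # guarantees at least one mutated gene
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(D)]
            ft = f(trial)
            if ft <= fit[i]:           # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(np_), key=fit.__getitem__)
    return pop[best], fit[best]

# Minimize the 2-D sphere function as a toy example
x, fx = differential_evolution(lambda v: sum(t * t for t in v),
                               [(-5, 5), (-5, 5)])
print(x, fx)
```

The small number of control parameters (NP, F, CR) and the simple greedy selection are part of what makes the algorithm so easy for non-specialists to deploy.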



Evolutionary synthesis of ordinary and polymorphic digital circuits
foto

Dr. Lukas Sekanina
Associate Professor - Faculty of Information Technology
Brno University of Technology
Czech Republic

In the first part of the talk, fundamental concepts of evolutionary circuit design and evolvable hardware will be introduced. Examples of evolved innovative designs will be presented in the domains of small combinational circuits, middle-size circuits and large circuits, thus covering circuit complexity from a few gates to millions of gates. The second part of the talk will be devoted to evolutionary synthesis of polymorphic circuits. Polymorphic circuits contain polymorphic gates that are able to change their logic function according to the state of the external environment (light, temperature, power supply voltage - Vdd, ...). An experimental polymorphic reconfigurable ASIC (developed at FIT) will be described, which contains configurable ordinary gates and polymorphic NAND/NOR gates controlled by Vdd. This chip makes it possible to investigate the electrical properties of polymorphic circuits and to demonstrate applications of polymorphic electronics.
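The behaviour of a polymorphic NAND/NOR gate can be modelled in a few lines; the voltage threshold below is purely illustrative, not a parameter of the FIT chip:

```python
# Toy model of a polymorphic NAND/NOR gate: the logic function switches
# with an environmental signal (here, the supply voltage Vdd).
def poly_gate(a, b, vdd, threshold=3.3):
    """Acts as NAND below the Vdd threshold and as NOR above it."""
    if vdd < threshold:
        return int(not (a and b))   # NAND mode
    return int(not (a or b))        # NOR mode

# The same gate realizes two different truth tables in the two environments
nand_table = [poly_gate(a, b, 1.8) for a in (0, 1) for b in (0, 1)]
nor_table = [poly_gate(a, b, 5.0) for a in (0, 1) for b in (0, 1)]
print(nand_table, nor_table)   # [1, 1, 1, 0] [1, 0, 0, 0]
```

An evolutionary synthesizer then searches for a single netlist of such gates whose two "modes" implement two desired circuits at once.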



Projective Geometry and the Law of Mass Action
foto

Dr. Ales Gottvald
Senior Researcher, Institute of Scientific Instruments, v.v.i.
Academy of Sciences of the CR
Czech Republic

The law of chemical equilibrium is one of the most general and best verified laws of nature. Its algebraic expression, also called the Law of Mass Action, was pioneered by Guldberg and Waage at about the same time when J. G. Mendel published his seminal findings in genetics. Thus it is very surprising that a hidden geometrical meaning and geometrical characterization of the law of chemical equilibrium remained unrecognized until very recently. We postulate that chemical equilibria and chemical kinetics are governed by fundamental principles of projective geometry. The equilibrium constants of chemical reactions are the fundamental invariants of projective geometry in disguise. Chemical reactions may geometrically be represented by incidence structures, which are preserved under projective transformations. The intrinsically projective Riccati differential equation, being also generic to many other equations of mathematical physics, governs the parametric dependence of the equilibrium constants. These findings contain the germs of many remarkable generalizations and extensions of the Law of Mass Action, including a new projective-geometric approach to soft computing of very complex problems. It appears that surprisingly many phenomena of nature can advantageously be treated as abstract chemical reactions, and given a remarkably simple rationale in terms of projective geometry. Consequently, we claim a new fundamental law of nature: chemical reactions are governed by projective geometry.
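For a generic reaction, the algebraic form of the Law of Mass Action referred to above is the familiar equilibrium expression:

```latex
% Law of Mass Action for the generic reaction aA + bB <=> cC + dD:
% at equilibrium the concentrations (activities) satisfy
K = \frac{[\mathrm{C}]^{c}\,[\mathrm{D}]^{d}}{[\mathrm{A}]^{a}\,[\mathrm{B}]^{b}}
```

The talk's thesis is that such constants K behave like projective invariants under suitable transformations of the concentration variables.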



2008


SASS: a Control-Parameter-Less Black-Box Optimisation Algorithm
foto

Dr. Lars Nolle
Senior Lecturer, School of Computing and Informatics
Nottingham Trent University
United Kingdom

Many scientific and engineering problems can be viewed as search or optimisation problems, where an optimum input parameter vector for a given system has to be found in order to maximize or to minimize the system response to that input vector. Often, auxiliary information about the system, like its transfer function and derivatives, etc., is not known, and the measurements might be incomplete and distorted by noise. This makes such problems difficult to solve with traditional mathematical methods. Here, heuristic optimisation algorithms, like Genetic Algorithms (GA) or Simulated Annealing (SA), can offer a solution. But because of the lack of a standard methodology for matching a problem with a suitable algorithm, and for setting the control parameters for the algorithm, practitioners often seem not to consider heuristic optimisation. The main reason for this is that a practitioner, who wants to apply an algorithm to a specific problem, and who has no experience with heuristic search algorithms, would need to become an expert in optimisation algorithms before being able to choose a suitable algorithm for the problem at hand. Also, finding suitable control parameter settings would require carrying out a large number of experiments. This might not be an option for a scientist or engineer who simply wants to use heuristic search as a tool. In such a case, Self-Adaptive Step-size Search (SASS), a new, population-based adaptation scheme with a self-adaptive step size, can offer an alternative.
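The flavour of such a population-based scheme with self-adaptive step sizes can be conveyed by the following sketch; it is a simplified illustration in the spirit of SASS, not Nolle's exact algorithm, and the function names and update rules are this sketch's own:

```python
import random

def self_adaptive_search(f, bounds, n=10, gens=300, seed=2):
    """Each individual carries its own step size; offspring inherit a
    perturbed copy of the current best step size, so good step sizes are
    selected along with good solutions and no user tuning is needed."""
    rng = random.Random(seed)
    pop = [([rng.uniform(lo, hi) for lo, hi in bounds],
            rng.uniform(0.1, 1.0)) for _ in range(n)]
    for _ in range(gens):
        pop.sort(key=lambda xs: f(xs[0]))
        best_x, best_s = pop[0]
        new = [pop[0]]                        # keep the current best
        for x, s in pop[1:]:
            # self-adaptation: perturb the best step size, then step
            s_new = best_s * rng.uniform(0.5, 1.5)
            x_new = [min(max(xi + rng.gauss(0, s_new), lo), hi)
                     for xi, (lo, hi) in zip(best_x, bounds)]
            new.append((x_new, s_new) if f(x_new) < f(x) else (x, s))
        pop = new
    pop.sort(key=lambda xs: f(xs[0]))
    return pop[0][0], f(pop[0][0])

# Toy run on the 2-D sphere function
x, fx = self_adaptive_search(lambda v: sum(t * t for t in v),
                             [(-5, 5), (-5, 5)])
print(x, fx)
```

The point of the design is that the step size is evolved rather than set by the user, which is exactly the "control-parameter-less" property the talk title advertises.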


Optimization of network topology
foto

Professor Jiří Pospíchal
Professor DrSc. - Department of Mathematics, FCHPT
Professor DrSc. - Institute of Applied Informatics, FIIT
Slovak University of Technology in Bratislava
Slovakia

The lecture studies the optimization of networks with vertices on a toroidal grid that have the smallest possible number of connections, each as short as possible. This requirement, crucial to the cost of building the network, is in direct opposition to the second requirement of achieving the smallest possible graph distance between vertices, which is substantial to the cost of communication. The relation of the resulting networks to small-world networks is discussed.
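The communication-cost side of this trade-off can be quantified by the mean graph distance, computed here by breadth-first search on a small toroidal grid (the 4x4 size and 4-neighbour connectivity are arbitrary choices for the example):

```python
from collections import deque

def torus_graph(n):
    """4-neighbour n x n toroidal grid as an adjacency dict."""
    adj = {(i, j): [] for i in range(n) for j in range(n)}
    for i in range(n):
        for j in range(n):
            for di, dj in ((1, 0), (0, 1)):   # right and down, wrapping
                a, b = (i, j), ((i + di) % n, (j + dj) % n)
                adj[a].append(b)
                adj[b].append(a)
    return adj

def avg_distance(adj):
    """Mean shortest-path distance over all ordered vertex pairs (BFS)."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

print(avg_distance(torus_graph(4)))   # 32/15 for the plain 4x4 torus
```

An optimizer in the spirit of the lecture would then trade this quantity off against the number and length of the connections when adding or rewiring edges.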


Evolution, Chaos, Fractals and Information - Relations and Perspectives
foto

Dr. Ivan Zelinka
Associate Professor, Faculty of Applied Informatics
Tomas Bata University in Zlín
Czech Republic

The lecture will focus on the principles of evolutionary algorithms, fractal geometry, chaos and information theory from the point of view of possible new relations among them. Possible relations among evolution, information and deterministic chaos will be outlined, and the possible use of deterministic chaos in optimization will also be discussed. Evolutionary computation will be examined from the point of view of information transmission via a communication channel. The lecture will close with the limits that our universe imposes on computers, even on hypothetical computers based on quantum technology.
