Mendel Conference
25th International Conference on Soft Computing, July 10-12, Brno, Czech Republic
 
 
 
Invited Speakers 2017 (speakers for 2018 will be announced soon)

Evolutionary Algorithms for Industrial Problems

Prof. Dr. Thomas Bäck
Professor for Natural Computing
Head of the Natural Computing Research Group
Leiden Institute of Advanced Computer Science (LIACS)
Leiden University
Netherlands

Industrial optimization problems are often characterized by a number of challenging properties, such as time-consuming function evaluations, high dimensionality, a large number of constraints, and multiple optimization criteria. Working with evolution strategies, we have tailored them over the past decades to such optimization problems. In this presentation, we will illustrate these aspects with industrial optimization problems as they occur in the automotive and many other industries. We will show that evolution strategies can be very effective even with very small numbers of function evaluations. In the second part of the talk, some recent experiments on configuring evolution strategies are presented, showing that evolution strategies can be further improved by automatic search methods. This opens up a promising new direction towards constructing optimization algorithms from a modularized evolution strategy superstructure. Moreover, by combining this with data mining techniques, it is possible to characterize the relation between problem characteristics and beneficial algorithmic features.
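To make the kind of algorithm the abstract refers to concrete, the following is a minimal (1+1)-ES with a 1/5th success rule on a toy sphere function. This is a hedged sketch only: the benchmark, budget, and adaptation constants are illustrative choices, not the speaker's actual industrial setup.

```python
import random

def one_plus_one_es(f, x0, sigma=1.0, budget=200, seed=0):
    """Minimal (1+1)-ES with a 1/5th success rule (illustrative sketch)."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    successes = 0
    for evals in range(1, budget + 1):
        # mutate every coordinate with Gaussian noise of step size sigma
        y = [xi + rng.gauss(0, sigma) for xi in x]
        fy = f(y)
        if fy <= fx:                 # minimization: accept non-worse offspring
            x, fx = y, fy
            successes += 1
        if evals % 10 == 0:          # adapt the step size every 10 evaluations
            sigma *= 1.5 if successes / 10 > 0.2 else 0.6
            successes = 0
    return x, fx

sphere = lambda v: sum(c * c for c in v)
best, value = one_plus_one_es(sphere, [5.0, -3.0])
```

Because only non-worsening offspring are accepted, the returned objective value can never exceed that of the starting point, even under a tiny evaluation budget.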

Thomas Bäck is full professor of computer science at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, The Netherlands, where he has been head of the Natural Computing group since 2002. He received his PhD (adviser: Hans-Paul Schwefel) in computer science from Dortmund University, Germany, in 1994. From 2000 to 2009, Thomas was Managing Director of NuTech Solutions GmbH and CTO of NuTech Solutions, Inc. He gained ample experience in solving real-life optimization and data mining problems through working with global enterprises such as BMW, Beiersdorf, Daimler, Ford, Honda, and many others. Thomas Bäck has more than 200 publications on natural computing, as well as two books on evolutionary algorithms: Evolutionary Algorithms in Theory and Practice (1996) and Contemporary Evolution Strategies (2013). He is co-editor of the Handbook of Evolutionary Computation and, most recently, the Handbook of Natural Computing. He is also an editorial board member and associate editor of a number of journals on evolutionary and natural computing. Thomas received the best dissertation award from the German Society of Computer Science (Gesellschaft für Informatik, GI) in 1995 and is an elected fellow of the International Society for Genetic and Evolutionary Computation for his contributions to the field.

More information can be found here.



Competitive Co-Evolution of Multi-Layer Perceptron Neural Networks

Dr. Marco Castellani, Ph.D.
Department of Mechanical Engineering
University of Birmingham
United Kingdom

Darwin recognised predator-prey mechanisms as a major driver of natural evolution. This talk discusses the competitive co-evolutionary training of multi-layer perceptron (MLP) neural networks. Classical evolutionary algorithms evolve a population of MLPs, measuring the fitness of the individuals by their capability to correctly map a fixed set of training examples. Competitive co-evolutionary algorithms pit a population of MLPs against a population of training patterns. The classifiers are regarded as predators that need to 'capture' (correctly categorise or map) the prey (training patterns). Success for the predators is measured by their ability to capture the prey. Success for the prey is measured by their ability to escape predation (be mapped incorrectly). The aim of the procedure is to create an evolutionary tug-of-war between the best classifiers and the most difficult data samples. Tested on different classification tasks, competitive co-evolution showed promise in terms of robustness to corrupted data patterns, accuracy of the solutions, and reduced computational costs. Thanks to its ability to focus on yet-unlearned examples, competitive co-evolution also showed great promise on tasks involving unbalanced data classes.
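The predator-prey fitness coupling described above can be sketched in miniature with 1-D threshold classifiers standing in for MLPs. Everything here (the threshold model, population sizes, mutation scale) is an illustrative invention, not the speaker's implementation; only the two fitness definitions mirror the abstract.

```python
import random

def predict(threshold, x):          # a "predator": 1-D threshold classifier
    return 1 if x >= threshold else 0

def coevolve(samples, generations=30, n_pred=20, n_prey=10, seed=0):
    rng = random.Random(seed)
    predators = [rng.uniform(-1, 1) for _ in range(n_pred)]
    prey = rng.sample(samples, n_prey)
    for _ in range(generations):
        # predator fitness: fraction of current prey "captured" (classified correctly)
        pred_fit = [sum(predict(t, x) == y for x, y in prey) / len(prey)
                    for t in predators]
        # prey fitness: fraction of predators the pattern escapes (misclassifies it)
        prey_fit = [sum(predict(t, x) != y for t in predators) / len(predators)
                    for x, y in prey]
        # truncation selection plus Gaussian mutation for the predators
        elite = [t for _, t in sorted(zip(pred_fit, predators), reverse=True)][:n_pred // 2]
        predators = elite + [t + rng.gauss(0, 0.1) for t in elite]
        # keep the hardest prey, refill the rest with fresh random samples
        hard = [p for _, p in sorted(zip(prey_fit, prey), reverse=True)][:n_prey // 2]
        prey = hard + rng.sample(samples, n_prey - len(hard))
    # return the predator that generalises best over all samples
    return max(predators, key=lambda t: sum(predict(t, x) == y for x, y in samples))

# linearly separable toy data: label is 1 iff x >= 0
data = [(x / 10, int(x >= 0)) for x in range(-10, 11)]
best_threshold = coevolve(data)
```

Note how the prey population drifts towards the hardest patterns each generation, producing the "tug-of-war" the abstract describes.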

Marco Castellani is Lecturer in Advanced Robotics and Intelligent Systems at the Department of Mechanical Engineering of the University of Birmingham. He has 20 years of research experience in the private sector, universities, and research centres in various European countries. His work has spanned a broad interdisciplinary area encompassing engineering, biology, and computer science, including machine learning, machine vision, pattern recognition, swarm intelligence, soft computing, intelligent control, optimisation, ecological modelling, natural language processing, and the general AI field. He has published about 50 peer-reviewed research papers in scientific journals and international conferences, and is currently an Editor of the Cogent Engineering journal.

More information can be found here.



Handling Data Irregularities in Classification: Some Recent Approaches and Future Challenges

Dr. Swagatam Das, Ph.D.
Electronics and Communication Sciences Unit
Indian Statistical Institute, Kolkata
India

Data emerging from the real world may very often be plagued with prominent irregularities, including class imbalance (under-represented classes in the training sets), missing or absent features, and small disjuncts (under-represented sub-concepts within classes). The performance of traditional classifiers is usually far from its theoretical limits in the face of such data irregularities. This talk will outline some very effective recent approaches based on Support Vector Machines (SVMs), boosting, and k-Nearest Neighbor classifiers (kNNs) to handle such irregularities. The talk will also discuss a few open-ended research issues in this direction.
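As a simple baseline for the class-imbalance problem mentioned above, random oversampling replicates minority-class examples until the training set is balanced. This is a widely used preprocessing step shown for illustration; it is not one of the speaker's specific SVM, boosting, or kNN methods.

```python
import random
from collections import Counter

def oversample(X, y, seed=0):
    """Random oversampling: replicate minority-class samples until every
    class has as many examples as the largest one."""
    rng = random.Random(seed)
    counts = Counter(y)
    target = max(counts.values())
    Xb, yb = list(X), list(y)
    for label, n in counts.items():
        idx = [i for i, lbl in enumerate(y) if lbl == label]
        for _ in range(target - n):      # add copies until the class is full
            i = rng.choice(idx)
            Xb.append(X[i])
            yb.append(label)
    return Xb, yb

X = [[0.0], [0.1], [0.2], [0.9], [1.0]]
y = [0, 0, 0, 1, 1]                      # class 1 is under-represented
Xb, yb = oversample(X, y)
```

Duplicating samples this way can encourage overfitting to the copied points, which is one reason the more refined approaches the talk covers are needed.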

Swagatam Das is currently serving as a faculty member at the Electronics and Communication Sciences Unit of the Indian Statistical Institute, Kolkata, India. His research interests include machine learning and non-convex optimization. Dr. Das has published more than 250 research articles in peer-reviewed journals and international conferences. He is the founding co-editor-in-chief of Swarm and Evolutionary Computation, an international journal from Elsevier. He has served or is serving as an associate editor of IEEE Transactions on Systems, Man, and Cybernetics: Systems, IEEE Computational Intelligence Magazine, IEEE Access, Pattern Recognition (Elsevier), Neurocomputing (Elsevier), Engineering Applications of Artificial Intelligence (Elsevier), and Information Sciences (Elsevier). Dr. Das has 11500+ Google Scholar citations and an H-index of 53 to date. He has acted as a guest editor for special issues of journals such as IEEE Transactions on Evolutionary Computation, ACM Transactions on Adaptive and Autonomous Systems, and IEEE Transactions on SMC, Part C. He is the recipient of the 2012 Young Engineer Award from the Indian National Academy of Engineering (INAE). He is also the recipient of the 2015 Thomson Reuters Research Excellence India Citation Award as the highest-cited researcher from India in the Engineering and Computer Science category between 2010 and 2014.

More information can be found here.



Theory of Evolutionary Computation - what is it and why bother?

Dr. Carola Doerr, Ph.D.
French National Center for Scientific Research
Université Pierre et Marie Curie
France

Evolutionary algorithms (EAs) are bio-inspired heuristics that, thanks to their high flexibility and their ability to produce high-quality solutions for a broad range of problems, are today well-established problem solvers in industrial and academic applications. EAs are often used as subroutines for particularly difficult parts of an optimization problem, as well as for several pre- or post-processing steps of state-of-the-art optimization routines. The predominant part of research on EAs focuses on engineering and empirical work. This research is complemented by the theory of evolutionary computation, which aims at providing mathematically founded statements about the working principles of such optimization techniques. Historically, the role of the theory of evolutionary computation was somewhat restricted to debunking common misconceptions about the performance of EAs. More recently, we see a growing number of examples where theory has served as a source of inspiration for designing more efficient EAs. In this talk, we shall discuss some of these recent developments. Our main focus will be
(1) on the role of crossover in evolutionary computation, and
(2) on the importance of dynamic parameter choices.
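Point (2) can be sketched on OneMax, the canonical benchmark of EA theory. The success-based doubling/halving rule below is a simplified stand-in for the self-adjusting parameter schemes studied in this line of work; the exact update constants are illustrative, not the speaker's.

```python
import random

def one_plus_one_ea(n=50, seed=1):
    """(1+1)-EA on OneMax with a self-adjusting (dynamic) mutation rate:
    double the rate after an improvement, halve it otherwise."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    rate = 1.0 / n                          # classical static choice as a start
    evaluations = 0
    while sum(x) < n:                       # until the all-ones optimum is found
        y = [1 - bit if rng.random() < rate else bit for bit in x]
        evaluations += 1
        if sum(y) > sum(x):
            rate = min(0.5, 2 * rate)       # success: search more boldly
            x = y
        else:
            rate = max(1.0 / n, rate / 2)   # failure: search more carefully
            if sum(y) == sum(x):
                x = y                       # accept equally good offspring, too
    return evaluations

evaluations = one_plus_one_ea()
```

The appeal of such dynamic choices is that the rate adapts on its own, instead of the user fixing a single mutation rate for the whole run.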

Carola Doerr is a CNRS researcher at the Université Pierre et Marie Curie (Paris 6). She studied mathematics at Kiel University (Germany, Diploma in 2007) and computer science at the Max Planck Institute for Informatics and Saarland University (Germany, PhD in 2011). From Dec. 2007 to Nov. 2009, Carola Doerr worked as a business consultant for McKinsey & Company, mainly in the area of network optimization, where she used randomized search heuristics to compute more efficient network layouts and schedules. Before joining the CNRS she was a post-doc at the Université Diderot (Paris 7) and the Max Planck Institute for Informatics. Carola Doerr's research interest is in the theory of randomized algorithms, both in the design of efficient algorithms and in finding a suitable complexity theory for randomized search heuristics. After contributing to the revival of black-box complexity, a theory-guided approach to exploring the limitations of heuristic search algorithms, she recently started a series of works aimed at exploiting insights from the theory of evolutionary computation to design more efficient EAs, in particular ones with a dynamic choice of parameters.

More information can be found here.



Evolutionary Computation for Dynamic Optimization Problems

Prof. Shengxiang Yang
Centre for Computational Intelligence (CCI)
School of Computer Science and Informatics
De Montfort University
United Kingdom

Evolutionary Computation (EC) encapsulates a class of stochastic optimisation algorithms inspired by principles of natural and biological evolution. EC has been widely used for optimisation problems in many fields. Traditionally, EC methods have been applied to static problems. However, many real-world problems are dynamic optimisation problems (DOPs), which are subject to changes over time due to many factors. DOPs have attracted growing interest from the EC community in recent years due to their importance for real-world applications of EC. This talk will first briefly introduce the concepts of EC and DOPs, then review the main approaches developed to enhance EC methods for solving DOPs, and describe several of these approaches in detail. Finally, some conclusions will be drawn from the work presented, and future work on EC for DOPs will be briefly discussed.
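One classic diversity-maintaining approach for DOPs is the random-immigrants scheme: each generation, a fraction of the population is replaced by fresh random individuals so the algorithm can track a moving optimum. The sketch below (a mutation-only GA on a matching problem whose target flips mid-run) is an illustrative toy; all parameters are assumptions, not from the talk.

```python
import random

def random_immigrants_step(target, pop, rng, immigrant_rate=0.2):
    """One generation of a GA with random immigrants (illustrative sketch)."""
    n = len(target)
    fit = lambda ind: sum(a == b for a, b in zip(ind, target))
    pop = sorted(pop, key=fit, reverse=True)
    k = int(len(pop) * immigrant_rate)
    survivors = pop[:len(pop) - k]           # drop the k worst individuals
    immigrants = [[rng.randint(0, 1) for _ in range(n)] for _ in range(k)]
    # lightly mutate survivors (no crossover, to keep the sketch minimal)
    children = [[1 - b if rng.random() < 1.0 / n else b for b in ind]
                for ind in survivors]
    return children + immigrants

rng = random.Random(0)
target = [rng.randint(0, 1) for _ in range(20)]
pop = [[rng.randint(0, 1) for _ in range(20)] for _ in range(30)]
for gen in range(60):
    if gen == 30:                            # environment change: target inverts
        target = [1 - b for b in target]
    pop = random_immigrants_step(target, pop, rng)
best = max(sum(a == b for a, b in zip(ind, target)) for ind in pop)
```

The immigrants keep genetic material in the population that a converged GA would otherwise have lost, which is what allows recovery after the change at generation 30.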

Shengxiang Yang is a Professor of Computational Intelligence (CI) and the Director of the Centre for Computational Intelligence, De Montfort University, UK. He has worked extensively for 20 years in the areas of CI methods, including EC and artificial neural networks, and their applications to real-world problems. He has over 230 publications in these domains, with 5800+ Google Scholar citations and an H-index of 40. His work has been supported by UK research councils (e.g., the Engineering and Physical Sciences Research Council (EPSRC), the Royal Society, and the Royal Academy of Engineering), EU FP7 and Horizon 2020, the Chinese Ministry of Education, and industry partners (e.g., BT, Honda, Rail Safety and Standards Board, and Network Rail), with total funding of over £2M, of which two EPSRC standard research projects have focused on EC for DOPs. He serves as an Associate Editor or Editorial Board Member of eight international journals, including IEEE Transactions on Cybernetics, Evolutionary Computation, Neurocomputing, Information Sciences, and Soft Computing. He is the founding chair of the Task Force on Intelligent Network Systems (TF-INS) and the chair of the Task Force on EC in Dynamic and Uncertain Environments (ECiDUEs) of the IEEE CI Society (CIS). He has organised or chaired over 30 workshops and special sessions relevant to ECiDUEs at several major international conferences. He is the founding co-chair of the IEEE Symposium on CI in Dynamic and Uncertain Environments. He has co-edited 12 books, proceedings, and journal special issues, has been invited to give over 10 keynote speeches and tutorials at international conferences, and has given over 30 seminars in different countries.

More information can be found here.



Opening the Black Box: Alternative Search Drivers for Genetic Programming and Test-based Problems

Prof. Krzysztof Krawiec
Institute of Computing Science
Poznan University of Technology
Poland

In genetic programming and other types of test-based problems, candidate solutions interact with multiple tests in order to be evaluated. The conventional approach aggregates the interaction outcomes into a scalar objective. However, passing different tests may require unrelated 'skills' in which candidate solutions may vary. Scalar fitness is inherently incapable of capturing such differences and leaves a search algorithm largely uninformed about the diverse qualities of individual candidate solutions. In this talk, I will discuss the implications of this fact and present a range of new methods that avoid scalarization by turning the outcomes of interactions between programs and tests into 'search drivers': heuristic, transient pseudo-objectives that form multifaceted characterizations of candidate solutions. I will also demonstrate the feasibility of this approach with experimental evidence and embed this research in the broader context of behavioral program synthesis.
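One well-known selection scheme that, like the methods above, works on the raw per-test interaction outcomes instead of a scalar fitness is lexicase selection. It is shown here purely as an illustration of non-scalarizing evaluation in test-based problems, not as the talk's own method; the population and outcome matrix are made up.

```python
import random

def lexicase_select(population, outcomes, rng):
    """Pick a parent by filtering the population through tests in random
    order, keeping only individuals that pass each test considered.

    outcomes[i][t] is True iff individual i passes test t."""
    candidates = list(range(len(population)))
    tests = list(range(len(outcomes[0])))
    rng.shuffle(tests)
    for t in tests:
        passing = [i for i in candidates if outcomes[i][t]]
        if passing:                      # only filter if someone passes test t
            candidates = passing
        if len(candidates) == 1:
            break
    return population[rng.choice(candidates)]

rng = random.Random(42)
pop = ["prog_a", "prog_b", "prog_c"]
# prog_a passes tests 0 and 1, prog_b passes test 2, prog_c passes nothing
outs = [[True, True, False], [False, False, True], [False, False, False]]
parent = lexicase_select(pop, outs, rng)
```

A program with mediocre aggregate score but a unique 'skill' (here, prog_b on test 2) retains a real chance of being selected, which scalar fitness would deny it.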

Krzysztof Krawiec is an Associate Professor in the Institute of Computing Science at Poznan University of Technology, Poland, where he heads the Computational Intelligence Group. His primary research areas are genetic programming, machine learning, and coevolutionary algorithms, with applications in program synthesis, modeling, pattern recognition, and games. Dr. Krawiec co-chaired the European Conference on Genetic Programming in 2013 and 2014 and the GP track at GECCO'16, is an associate editor of the Genetic Programming and Evolvable Machines journal, and has been a visiting researcher at the Computer Science and Artificial Intelligence Laboratory at MIT and the Centre for Research in Intelligent Systems at the University of California.

More information can be found here.



The Computational Power of Neural Networks and Representations of Numbers in Non-Integer Bases

Dr. Jiri Sima, DrSc.
Department of Theoretical Computer Science
Institute of Computer Science
The Czech Academy of Sciences
Czech Republic

(Artificial) neural networks (NNs) are biologically inspired computational devices that are an alternative to conventional computers, especially in the area of machine learning, with a plethora of successful commercial applications in AI. The limits and potential of particular NNs for general-purpose computation have been studied by classifying them within the Chomsky hierarchy (e.g. finite or pushdown automata, Turing machines) and/or more refined complexity classes (e.g. polynomial time). It has been shown that the computational power of NNs basically depends on the information content of their weight parameters. For example, the analysis is fine-grained when moving from rational to arbitrary real weights, while the classification between integer and rational weights is still not complete. For this purpose, we introduce an intermediate model of integer-weight NNs with one extra analog neuron having rational weights, and we classify this model within the Chomsky hierarchy roughly between context-free and context-sensitive languages. Our analysis reveals an interesting link to an active research field on non-standard positional numeral systems with non-integer bases.
In our talk we will briefly survey the basic concepts and results concerning the computational power of neural networks. Then we will discuss representations of numbers in non-integer bases (beta-expansions) using arbitrary real digits, which generalize the usual decimal or binary expansions. For example, a single number can typically be represented by infinitely many distinct beta-expansions. We will introduce so-called quasi-periodic beta-expansions, which may be composed of different repeating blocks of digits. Finally, we will formulate a sufficient condition under which an extra analog neuron does not bring additional computational power to integer-weight NNs, based on quasi-periodic weight parameters all of whose beta-expansions are eventually quasi-periodic. We will illustrate the introduced concepts with numerical examples so that the presentation is accessible to a wide audience.
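To make the beta-expansion idea concrete, the classical greedy (Rényi) algorithm computes integer digits of a number in a non-integer base. Below it is applied to 1/2 in the golden-ratio base, where the digit sequence turns out to be eventually periodic; this simple sketch uses integer digits only, not the arbitrary real digits or quasi-periodic constructions of the talk.

```python
def greedy_beta_expansion(x, beta, n_digits):
    """Greedy beta-expansion of x in a non-integer base beta > 1:
    x ~ sum(d_i * beta**-(i + 1)) with digits d_i in {0, ..., floor(beta)}."""
    digits = []
    for _ in range(n_digits):
        x *= beta                 # shift one "beta-ary" place to the left
        d = int(x)                # the next digit is the integer part
        digits.append(d)
        x -= d                    # keep the fractional remainder
    return digits

phi = (1 + 5 ** 0.5) / 2          # golden ratio, a classic non-integer base
digits = greedy_beta_expansion(0.5, phi, 8)
```

Truncating after n digits leaves an error of at most beta**-n times a constant, so the partial sums reconstruct the original number to increasing accuracy.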

Jiří Šíma is a senior scientist at the Institute of Computer Science, The Czech Academy of Sciences, in Prague, where he has served in the past as the chair of the Scientific Council and the head of the Department of Theoretical Computer Science. He has lectured and supervised theses at Charles University (as an associate professor), the Czech Technical University in Prague, and Masaryk University in Brno. He has been a program committee member of 30 international conferences, principal investigator of several successful grant projects involving tens of researchers, and a member of grant evaluation panels (INTAS Brussels, Czech Science Foundation). His main research interests include neural networks, computational complexity, learning theory, alternative complexity measures, and derandomization. He has achieved fundamental results in the theory of neural networks regarding the time complexity of the most common practical learning algorithm, backpropagation, and the computational characteristics of continuous, analog, and symmetric models. He has published ca. 100 papers, including 2 monographs, 4 book chapters (e.g. MIT Press, Springer), 20 journal papers (e.g. JACM, Neural Computation, Neural Networks, Theoretical Computer Science), and 30 papers in conference proceedings (e.g. STOC, LATA, ALT, ICANN, ICANNGA, ICONIP, IJCNN), which have attracted 843 Google Scholar citations with an H-index of 15. He has been awarded the Otto Wichterle Award for his monograph on theoretical issues of neural networks.

More information can be found here.



Tutorial:
Deep Learning for Computer Vision with MATLAB

MSc. Jaroslav Jirkovsky
Senior Application Engineer
HUMUSOFT s.r.o.
International Reseller of MathWorks, Inc., U.S.A.
Czech Republic

Convolutional neural networks (CNNs) are essential tools for deep learning, and are especially useful for image classification, object detection, and recognition tasks. CNNs are implemented as a series of interconnected layers. In MATLAB, you can construct a CNN architecture, train a network, and use the trained network to predict class labels or detect objects using R-CNN, Fast R-CNN, and Faster R-CNN object detectors. You can also extract features from a pretrained network and use these features to train a classifier, or train convolutional neural networks for regression tasks. What a network learns during training is sometimes unclear. Deep Dream is a feature visualization technique in deep learning that synthesizes images that strongly activate network layers. By visualizing these images, you can highlight the image features learned by a network.
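The "series of interconnected layers" idea can be illustrated outside MATLAB as well: a bare-bones 'valid' convolution followed by a ReLU, the two operations at the heart of every CNN layer. This plain-Python sketch is for intuition only; real frameworks add biases, strides, padding, and many channels.

```python
def conv2d(image, kernel):
    """Minimal 'valid' 2-D convolution (technically cross-correlation),
    the core operation of a convolutional layer."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(ow)]
            for i in range(oh)]

def relu(feature_map):
    """Elementwise rectified linear activation."""
    return [[max(0.0, v) for v in row] for row in feature_map]

# a dark-to-bright vertical edge detector on a tiny 3x4 "image"
img = [[0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 9, 9]]
edge = [[-1, 1],
        [-1, 1]]
fmap = relu(conv2d(img, edge))
```

The resulting feature map responds strongly exactly where the image brightness jumps, which is precisely what the learned filters visualized by techniques such as Deep Dream do at a much larger scale.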

More information can be found here.



Brno University of Technology    
Indexed in:
Scopus
Brno City homepage