2024 | Book

Machine Learning Assisted Evolutionary Multi- and Many-Objective Optimization

Authors: Dhish Kumar Saxena, Sukrit Mittal, Kalyanmoy Deb, Erik D. Goodman

Publisher: Springer Nature Singapore

Book Series: Genetic and Evolutionary Computation


About this book

This book focuses on machine learning (ML) assisted evolutionary multi- and many-objective optimization (EMâO). EMâO algorithms (EMâOAs) iteratively evolve a set of solutions towards a good Pareto Front approximation. The availability of multiple solution sets over successive generations makes EMâOAs amenable to the application of ML for different pursuits.

Recognizing the immense potential for ML-based enhancements in the EMâO domain, this book intends to serve as an exclusive resource for both domain novices and experienced researchers and practitioners. To achieve this goal, the book first covers the foundations of optimization, including problem and algorithm types. Then, well-structured chapters present some of the key studies on ML-based enhancements in the EMâO domain, systematically addressing important aspects. These include learning to understand the problem structure, converge better, diversify better, simultaneously converge and diversify better, and analyze the Pareto Front. In doing so, this book broadly summarizes the literature, beginning with foundational work on innovization (2003) and objective reduction (2006), and extending to the most recently proposed innovized progress operators (2021-23). It also highlights the utility of ML interventions in the search, post-optimality, and decision-making phases pertaining to the use of EMâOAs. Finally, this book shares insightful perspectives on the future potential for ML-based enhancements in the EMâOA domain.

To aid readers, the book includes working codes for the developed algorithms. This book will not only strengthen this emergent theme but also encourage ML researchers to develop more efficient and scalable methods that cater to the requirements of the EMâOA domain. It serves as an inspiration for further research and applications at the synergistic intersection of EMâOA and ML domains.

Table of Contents

Frontmatter
Chapter 1. Introduction
Abstract
The formulation of an optimization problem, in generic terms, can be given by Equation 1.1.
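For readers' reference, the standard generic form of such a formulation (shown here as the common textbook statement, not necessarily identical to the book's Equation 1.1) is:

```latex
\begin{align}
\text{Minimize} \quad & f_m(\mathbf{x}), \quad m = 1, \ldots, M, \\
\text{subject to} \quad & g_j(\mathbf{x}) \ge 0, \quad j = 1, \ldots, J, \\
& h_k(\mathbf{x}) = 0, \quad k = 1, \ldots, K, \\
& x_i^{(L)} \le x_i \le x_i^{(U)}, \quad i = 1, \ldots, n,
\end{align}
```

where \(M = 1\) gives a single-objective problem, \(M = 2\) or \(3\) a multi-objective problem, and \(M > 3\) a many-objective problem.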
Dhish Kumar Saxena, Sukrit Mittal, Kalyanmoy Deb, Erik D. Goodman
Chapter 2. Optimization Problems and Algorithms
Abstract
This chapter starts by highlighting some domains of practical problems where optimization is, or can be, commonly applied. The focus then shifts to different problem classes based on the number of objectives, and to the popular point- and population-based optimization algorithms. Finally, to contextualize the suitability of different optimization algorithms for different problem types, the no-free-lunch (NFL) theorem is discussed.
Dhish Kumar Saxena, Sukrit Mittal, Kalyanmoy Deb, Erik D. Goodman
Chapter 3. Foundational Studies on ML-Based Enhancements
Abstract
Many efficient evolutionary multi- and many-objective optimization algorithms, jointly referred to as EMâOAs, have been proposed in the last three decades. However, while solving complex real-world problems, EMâOAs that rely only on natural variation and selection operators may not produce an efficient search [14, 33, 45]. Therefore, it may be desirable or essential to enhance the capabilities of EMâOAs by introducing synergistic concepts from probability, statistics, machine learning (ML), etc. This chapter highlights some of the key studies that have laid the foundations for ML-based enhancements for EMâOAs and inspired further research that has been shared in subsequent chapters.
Dhish Kumar Saxena, Sukrit Mittal, Kalyanmoy Deb, Erik D. Goodman
Chapter 4. Learning to Understand the Problem Structure
Abstract
This chapter focuses on an important aspect of learning the preference structure of the objectives, inherent in multi- and many-objective optimization problem formulations. This involves identifying the non-essential (redundant) objectives, and also determining the relative importance of the essential objectives. Such an approach to knowledge discovery is based on the following rationale. Modeling an optimization problem, analytically or through experiments, involves a lot of time and physical resources, possibly from multiple disciplines, in conjunction with, or in isolation from, each other. Often, it can be intriguing for analysts or decision makers (DMs) to know if the developed model represents the underlying problem in a minimal form or is marked by redundancy. Any redundancy among objectives, if revealed, could shed insightful light on the physics of the underlying problem, in addition to reducing its complexity and promising greater search efficiency for evolutionary multi- and many-objective optimization algorithms (EMâOAs). Furthermore, the revelation of the relative preferences among the essential objectives that are inherent in the problem models could also be significantly useful, as highlighted below.
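The core idea of detecting non-essential objectives can be illustrated with a minimal sketch. The book's foundational objective-reduction work is more sophisticated (e.g., PCA-based analysis of the objective set); the correlation check below, with illustrative function names, is only a toy in the same spirit: an objective that varies in lock-step with another adds no trade-off and is a candidate for removal.

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length lists of objective values."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def redundant_pairs(objectives, threshold=0.95):
    """Flag objective pairs whose values are (near-)perfectly positively
    correlated across the solution set -- a hint of redundancy."""
    M = len(objectives)
    return [(i, j) for i in range(M) for j in range(i + 1, M)
            if pearson(objectives[i], objectives[j]) > threshold]

# Toy solution set: f2 is a scaled copy of f0, hence non-essential.
f0 = [0.1, 0.4, 0.7, 0.9, 1.3]
f1 = [1.2, 0.8, 0.5, 0.3, 0.1]
f2 = [2 * v for v in f0]
print(redundant_pairs([f0, f1, f2]))  # [(0, 2)]
```

In practice, correlation alone is too crude for nonlinear problems, which is precisely why the chapter's ML-based treatment is needed.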
Dhish Kumar Saxena, Sukrit Mittal, Kalyanmoy Deb, Erik D. Goodman
Chapter 5. Learning to Converge Better: IP2 Operator
Abstract
In the context of online innovization (Section 3.1.2, Chapter 3), it has been discussed that inter-variable relationships with pre-specified structures can be extracted in any intermediate generation of an evolutionary multi- and many-objective optimization algorithm (EMâOA) run. Subsequently, these relationships can be used for offspring repair, within the same EMâOA run, to help induce better convergence [7, 8]. Any attempt to eliminate the a priori specification of the relationship structure would require alternative criteria that could guide the improvement in convergence.
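The repair idea with a pre-specified relationship structure can be sketched minimally. Assuming, purely for illustration, the structure \(x_2 = c \cdot x_1\): the constant is learned from the current elite solutions and then imposed on offspring (function names here are hypothetical, not the book's):

```python
def learn_ratio(elite):
    """Learn c in the pre-specified relationship x2 = c * x1 from elite
    solutions, via a least-squares fit through the origin."""
    num = sum(x1 * x2 for x1, x2 in elite)
    den = sum(x1 * x1 for x1, _ in elite)
    return num / den

def repair(offspring, c):
    """Repair offspring by resetting x2 to satisfy x2 = c * x1."""
    return [(x1, c * x1) for x1, _ in offspring]

elite = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # obeys x2 = 2 * x1
c = learn_ratio(elite)                          # c = 2.0
print(repair([(1.5, 9.9), (0.5, -1.0)], c))    # [(1.5, 3.0), (0.5, 1.0)]
```

The IP2 operator discussed in this chapter removes exactly this a priori choice of structure, replacing it with a learned mapping.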
Dhish Kumar Saxena, Sukrit Mittal, Kalyanmoy Deb, Erik D. Goodman
Chapter 6. Learning to Diversify Better: IP3 Operator
Abstract
It was emphasized earlier that evolutionary multi- and many-objective optimization algorithms, jointly referred to as EMâOAs, pursue the dual goals of convergence to and diversity across the true Pareto front (\(P\!F\)). Here, diversity must be interpreted in terms of the extent of spread (coverage of \(P\!F\)) and uniformity of distribution within a given spread.
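The uniformity aspect of diversity can be quantified; a common choice (used here as an illustration, not necessarily the metric the chapter employs) is Schott's spacing metric, the standard deviation of each point's nearest-neighbour distance:

```python
import math

def spacing(front):
    """Schott's spacing metric: std. deviation of each point's distance to
    its nearest neighbour; lower means a more uniform distribution."""
    d = []
    for i, p in enumerate(front):
        d.append(min(math.dist(p, q) for j, q in enumerate(front) if j != i))
    mean = sum(d) / len(d)
    return math.sqrt(sum((x - mean) ** 2 for x in d) / len(d))

# Two bi-objective fronts with identical spread but different uniformity.
uniform = [(0.0, 1.0), (0.25, 0.75), (0.5, 0.5), (0.75, 0.25), (1.0, 0.0)]
clumped = [(0.0, 1.0), (0.05, 0.95), (0.5, 0.5), (0.95, 0.05), (1.0, 0.0)]
print(spacing(uniform), spacing(clumped))  # 0.0 for the uniform front
```

Spread, the other aspect, is typically judged by how close the extreme points are to the extremes of \(P\!F\).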
Dhish Kumar Saxena, Sukrit Mittal, Kalyanmoy Deb, Erik D. Goodman
Chapter 7. Learning to Simultaneously Converge and Diversify Better: UIP Operator
Abstract
It has been highlighted earlier that all evolutionary multi- and many-objective optimization algorithms (EMâOAs), including the reference vector (RV)-based EMâOAs or RV-EMâOAs, pursue the dual goals of convergence to and diversity across the true Pareto front (\(P\!F\)). In previous chapters, IP2 and IP3 operators have been discussed with a focus solely on convergence enhancement and diversity enhancement, respectively.
Dhish Kumar Saxena, Sukrit Mittal, Kalyanmoy Deb, Erik D. Goodman
Chapter 8. Investigating Innovized Progress Operators with Different ML Methods
Abstract
Chapters 5 and 6 have shown how efficient search directions, learned from the solutions of intermediate generations, could be utilized to create pro-convergence and pro-diversity offspring, enabling better convergence and diversity, respectively. The constituent steps of dataset preparation, training of ML models, and utilization of these models have been encapsulated as Innovized Progress operators, namely IP2 for convergence improvement and IP3 for diversity improvement. In these chapters, the goal was to establish that ML-based operators can potentially enhance the performance of RV-EMâOAs. In doing so, major emphasis was laid on the design of these operators adhering to the key considerations of convergence–diversity balance and ML risk–reward trade-off, and avoiding ad hoc parameter fixations and extra solution evaluations. Noticeably, the impact of the choice of the specific ML methods used in these operators was not discussed. However, to endorse the robustness of the proposed (IP2, IP3, and UIP) operators, it is imperative to investigate how significantly their performance can be influenced when the underlying ML methods are varied.
Dhish Kumar Saxena, Sukrit Mittal, Kalyanmoy Deb, Erik D. Goodman
Chapter 9. Learning to Analyze the Pareto-Optimal Front
Abstract
As mentioned in the previous chapters, evolutionary multi- and many-objective optimization algorithms (EMâOAs) attempt to find a set of well-converged and well-diversified solutions to approximate the true Pareto front (\(P\!F\)). In general, a uniform distribution of solutions across \(P\!F\) is desired. However, this cannot be guaranteed due to the stochasticity involved in EMâOAs. In a contrasting scenario, even a biased distribution of solutions across \(P\!F\), with a higher concentration of solutions in specific parts of \(P\!F\), may be desired by the decision maker for a subsequent multi-criterion decision-making (MCDM) task. To meet such requirements, this chapter presents a machine learning (ML)-based approach, which treats a given \(P\!F\)-approximation as input and trains an ML model to capture the relationship between pseudo-weight vectors derived from the objective vectors in the \(P\!F\)-approximation (F in \(\mathcal{Z}\)), and their underlying variable vectors (X in \(\mathcal{X}\)). Subsequently, the trained ML model is applied to predict the solution’s X vector for any desired pseudo-weight vector. In other words, the trained ML model is used to create new non-dominated solutions in any desired region of the obtained \(P\!F\)-approximation. Such new solutions could be created to fill apparent gaps in the input \(P\!F\)-approximation toward a more uniform distribution, or to enhance the concentration of solutions as desired by the decision maker. The working and usefulness of this post-optimality analysis approach have been demonstrated over several problem instances. However, this approach also has the potential to be integrated within an EMâOA to arrive at the desired distribution.
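The pseudo-weight mapping at the heart of this approach can be sketched compactly. Pseudo-weights follow the usual definition \(w_i \propto (f_i^{\max} - f_i)/(f_i^{\max} - f_i^{\min})\), normalized to sum to 1; the "model" below is a 1-nearest-neighbour stand-in for the chapter's actual ML regressor, used purely to keep the sketch self-contained:

```python
import math

def pseudo_weights(F):
    """Pseudo-weight vector for each objective vector in F:
    w_i proportional to (f_max - f_i) / (f_max - f_min), summing to 1."""
    M = len(F[0])
    fmin = [min(f[i] for f in F) for i in range(M)]
    fmax = [max(f[i] for f in F) for i in range(M)]
    W = []
    for f in F:
        raw = [(fmax[i] - f[i]) / (fmax[i] - fmin[i]) for i in range(M)]
        s = sum(raw)
        W.append([r / s for r in raw])
    return W

def predict_x(W, X, w_query):
    """Stand-in 'model': return the X of the nearest pseudo-weight.
    The chapter trains a proper ML regressor; 1-NN keeps the sketch tiny."""
    i = min(range(len(W)), key=lambda k: math.dist(W[k], w_query))
    return X[i]

# Toy bi-objective front f2 = 1 - f1, with x = (f1,) as the variable vector.
F = [(0.0, 1.0), (0.25, 0.75), (0.5, 0.5), (0.75, 0.25), (1.0, 0.0)]
X = [(0.0,), (0.25,), (0.5,), (0.75,), (1.0,)]
W = pseudo_weights(F)
print(predict_x(W, X, [0.5, 0.5]))  # (0.5,) -- the mid-front solution
```

With a real regressor in place of 1-NN, querying pseudo-weights between the training points yields genuinely new X vectors, which is what fills gaps in the front.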
Dhish Kumar Saxena, Sukrit Mittal, Kalyanmoy Deb, Erik D. Goodman
Chapter 10. Conclusion and Future Research Directions
Abstract
This chapter shares the authors’ concluding perspectives and points out some potential research directions that could help consolidate the emerging theme of machine learning (ML)-assisted evolutionary multi- and many-objective optimization (EMâO).
Dhish Kumar Saxena, Sukrit Mittal, Kalyanmoy Deb, Erik D. Goodman
Backmatter
Metadata
Title
Machine Learning Assisted Evolutionary Multi- and Many-Objective Optimization
Authors
Dhish Kumar Saxena
Sukrit Mittal
Kalyanmoy Deb
Erik D. Goodman
Copyright Year
2024
Publisher
Springer Nature Singapore
Electronic ISBN
978-981-99-2096-9
Print ISBN
978-981-99-2095-2
DOI
https://doi.org/10.1007/978-981-99-2096-9
