Mini workshop: Machine Learning meets Optimization
16 December 2021, 09:00 - Location: HYBRID / Building Pulse, Hall 9
Machine learning and optimization are closely related: on the one hand, many machine learning tasks can be solved using optimizers; on the other hand, combinatorial optimization problems (static, dynamic, stochastic) are increasingly solved effectively with the help of machine learning. The aim of this half-day workshop is to bring together researchers from Machine Learning and Operations Research, through talks and open discussion, in order to identify how techniques from the two fields cross-fertilize.
Registration is required for both in-person and online attendance, at no cost, via: https://survey.ewi.tudelft.nl/s/#lfeop.
09:00  Patrick De Causmaecker (KU Leuven): A New Class of Hard Problem Instances for the 0-1 Knapsack Problem and their Position in Instance Space
09:25  Laurens Bliek (TU Eindhoven): EXPObench: Benchmarking Surrogate-based Optimisation Algorithms on Expensive Black-box Functions
09:50  Herke van Hoof (University of Amsterdam): Learning New Heuristics for Combinatorial Problems
10:40  Hoong Chuin Lau (Singapore Management University): Machine Learning meets Combinatorial Optimization: Real World Applications in Routing and Scheduling
11:05  Wendelin Böhmer (TU Delft): Learning from Failure: Deep Reinforcement Learning for Optimization
Organizers:
- Patrick De Causmaecker (KU Leuven, BE)
- Neil Yorke-Smith (TU Delft, NL)
- Yingqian Zhang (TU Eindhoven, NL)
Patrick De Causmaecker (KU Leuven)
The 0-1 knapsack problem is an important optimization problem because it arises as a special case of a wide variety of optimization problems and has been generalized in several ways. Very powerful algorithms solve large knapsack problem instances with thousands of decision variables very efficiently, and current problem instances in the literature no longer challenge these algorithms. We propose a new class of hard problem instances for the 0-1 knapsack problem and provide theoretical support that helps explain why these problem instances are hard to solve to optimality. A large dataset of 3240 hard problem instances was generated and subsequently solved on a supercomputer, using approximately 810 CPU-hours. The analysis of the results shows the extent to which different parameters influence the hardness of the problem instances. This analysis demonstrates that the proposed problem instances are considerably harder, despite being orders of magnitude smaller, than the previously known hardest instances. Instance space analysis is applied to position these new instances in the problem's instance space, which leads to some open questions for future work. Joint work with Jorik Jooken and Pieter Leyman.
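For readers unfamiliar with the problem, here is a minimal sketch of the 0-1 knapsack problem solved exactly by textbook dynamic programming. This is a generic illustration only, not the instance generator or the state-of-the-art solvers discussed in the talk.

```python
def knapsack_01(values, weights, capacity):
    """Exact 0-1 knapsack via dynamic programming over capacities.

    Illustrative sketch only: real solvers use branch-and-bound with
    strong bounds to handle instances with thousands of variables.
    """
    # best[c] = maximum value achievable within capacity c
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # iterate capacities downwards so each item is taken at most once
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# small example: pick items 2 and 3 (value 100 + 120, weight 20 + 30)
print(knapsack_01([60, 100, 120], [10, 20, 30], 50))  # 220
```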
Laurens Bliek (TU Eindhoven)
Surrogate algorithms such as Bayesian optimisation are especially designed for black-box optimisation problems with expensive objectives, such as hyperparameter tuning or simulation-based optimisation. In the literature, these algorithms are usually evaluated on synthetic benchmarks, which are well established but have no expensive objective, and on only one or two real-life applications, which vary wildly between papers. There is a clear lack of standardisation when it comes to benchmarking surrogate algorithms on real-life, expensive, black-box objective functions. A new benchmark library, EXPObench, provides first steps towards such a standardisation. The library is used to provide an extensive comparison of six different surrogate algorithms on four expensive optimisation problems from different real-life applications. This has led to new insights regarding the relative importance of exploration, the evaluation time of the objective, and the model used.
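The loop that surrogate algorithms share can be sketched in a few lines: evaluate a few points, fit a cheap model of the objective, and pick the next evaluation by trading off the model's prediction against its uncertainty. The sketch below is a deliberately minimal 1-D toy (piecewise-linear surrogate, interval width as an uncertainty proxy), not EXPObench's API or any of the six algorithms compared in the talk.

```python
def surrogate_minimize(f, lo, hi, n_init=4, n_iter=12, kappa=0.5):
    """Toy surrogate-based minimisation of an expensive 1-D function f.

    Surrogate: linear interpolation between sampled points.
    Acquisition: prediction minus an exploration bonus (interval width),
    a crude lower-confidence-bound rule.
    """
    xs = [lo + (hi - lo) * i / (n_init - 1) for i in range(n_init)]
    ys = [f(x) for x in xs]                      # expensive evaluations
    for _ in range(n_iter):
        pts = sorted(zip(xs, ys))
        best_score, best_x = None, None
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            mid = 0.5 * (x0 + x1)
            pred = 0.5 * (y0 + y1)               # surrogate prediction at mid
            bonus = kappa * (x1 - x0)            # wider interval = less certain
            score = pred - bonus                 # lower confidence bound
            if best_score is None or score < best_score:
                best_score, best_x = score, mid
        xs.append(best_x)
        ys.append(f(best_x))                     # one expensive call per round
    i = min(range(len(ys)), key=ys.__getitem__)
    return xs[i], ys[i]

x_best, y_best = surrogate_minimize(lambda x: (x - 2) ** 2, 0.0, 5.0)
# x_best converges towards the true minimiser x = 2
```

Practical libraries replace the interpolation with Gaussian processes or random forests and the bonus with a principled acquisition function, but the evaluate-fit-decide structure is the same.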
Herke van Hoof (University of Amsterdam)
Large-scale combinatorial problems usually cannot be solved exactly, and engineered heuristics are a popular alternative. I will present joint work with Wouter Kool and Max Welling on learning such heuristics from data rather than hand-engineering them. In our work, we focus on finding methods and models that suit the characteristic properties of such problems, such as symmetries and determinism. The resulting approach shows strong performance on routing problems such as the TSP. I will also present preliminary ideas and results on challenges such as scaling to larger instances and solving dynamic and stochastic routing problems.
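As a concrete example of the kind of hand-engineered heuristic that learned policies are benchmarked against, here is the classic nearest-neighbour construction for the TSP. This is a standard baseline, not the learned method from the talk.

```python
def nearest_neighbour_tour(dist, start=0):
    """Greedy TSP construction: always visit the closest unvisited city.

    Classic hand-crafted heuristic; learned policies replace the
    'closest city' rule with a trained model's choice.
    """
    n = len(dist)
    tour = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: dist[last][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(dist, tour):
    # closed tour: include the edge back from the last city to the first
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

# 4-city example (symmetric distance matrix)
dist = [
    [0, 1, 4, 3],
    [1, 0, 2, 5],
    [4, 2, 0, 1],
    [3, 5, 1, 0],
]
tour = nearest_neighbour_tour(dist)
print(tour, tour_length(dist, tour))  # [0, 1, 2, 3] 7
```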
Hoong Chuin Lau (Singapore Management University)
Traditionally, Machine Learning belongs to the field of AI, while Combinatorial Optimization is a key area in Operations Research.
The line is blurring today as the communities come together to tackle new problems in both disciplines. In this broad-level talk, I will discuss my recent work on tackling real-world problems arising in urban settings -- last-mile logistics and emergency response. The underlying challenge is generating robust routes and schedules in dynamic environments by combining learning and planning in interesting ways.
Time permitting, I will take a quantum leap to talk about an emerging research topic on solving constrained optimization problems with quantum/quantum-inspired annealing, where machine learning is used to tune both problem and algorithm parameters.
Wendelin Böhmer (TU Delft)
The talk will contrast reinforcement learning (RL) and combinatorial optimization using the examples of the knapsack and the travelling salesman problem. We implemented a deep RL method that scales to an arbitrary number of objects/locations and compared it with heuristics and OR methods. The talk will summarize the problems RL methods face in optimization domains and the lessons we learned from our experiments.