The objective of this virtual seminar series on scheduling research and applications is to discuss both the field's newest advancements and to survey traditional areas. Seminars typically take place every second Wednesday, rotating across three time-zone groups (Europe, the Middle East & Africa; North America & South America; and Asia, Australia & Oceania).
Flow-shop and job-shop robust scheduling problems with budgeted uncertainty
Abstract
We study different solution methods for two two-stage robust multi-machine scheduling problems under budgeted uncertainty. Compact formulations of the problems are proposed, and two decomposition approaches are presented: a logic-based Benders decomposition approach and a column-and-constraint generation approach. Computational experiments show that for small instances, a compact formulation of the problem quickly yields optimal solutions. For larger instances, however, the decomposition methods, particularly column-and-constraint generation with a master problem solved using constraint programming, provide better-quality solutions. An acceleration method for the column-and-constraint generation algorithm is proposed; this method is generic and can be applied to any two-stage robust optimisation problem.
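To make the uncertainty model concrete, here is a small illustrative sketch (not the paper's flow-shop or job-shop model): on a single machine with a fixed job sequence, each job j may deviate from its nominal processing time p[j] by up to d[j], and the adversary can perturb at most gamma jobs. For the total completion time objective, the worst case reduces to a greedy choice:

```python
def worst_case_total_completion(seq, p, d, gamma):
    """Worst-case total completion time of a fixed single-machine sequence
    under budgeted uncertainty: an adversary may raise the processing time
    of at most `gamma` jobs from p[j] to p[j] + d[j].
    """
    n = len(seq)
    t = nominal = 0
    for j in seq:
        t += p[j]
        nominal += t
    # Raising the job in position i (0-based) delays the n - i jobs from
    # position i onward, so its weighted deviation is (n - i) * d[j].
    gains = sorted(((n - i) * d[j] for i, j in enumerate(seq)), reverse=True)
    return nominal + sum(gains[:gamma])
```

Two-stage robust methods such as column-and-constraint generation repeatedly solve adversarial subproblems of this flavour inside a master problem.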
At night, trains are parked on shunting yards. Here, cleaning and minor maintenance take place, after which the trains have to leave in the morning at the planned departure time and in the desired composition, which may require splitting and coupling during their stay at the yard. As NS (Dutch Railways) continues to expand its fleet while the available yard space remains fixed, the shunting yards are becoming increasingly congested, with occupation rates reaching up to 90%. Consequently, planning all train movements and related activities has become a complex challenge. In this talk, I will discuss the main challenges encountered in planning the shunting yards and the algorithms that we have designed to tackle these problems. First, I want to discuss the shunting problem on a single yard. We solve this problem using local search; our algorithm is the first that is capable of solving real-world instances of the complete shunting and scheduling problem. Next, I want to extend the problem to the entire station area, which requires distributing the trains over two (or more) shunting yards while avoiding interference with through traffic and respecting the capacity of the separate shunting yards. The main difficulty here is that the capacity of a shunting yard is not known; we use data analysis to estimate it. Finally, I want to discuss the problem of assigning shunting tasks to train drivers. This is a variant of the technician routing problem without skills, but with synchronization. We have developed an algorithm for this based on a novel decomposition approach.
Shunt yard planning · AI · Local Search · Constraint Programming
This paper studies outpatient appointment scheduling with waiting time limits, in the presence of uncertain service times, patient no-shows, and unpunctual arrivals. To tackle the problem of excessively long waiting times, policymakers may impose a waiting time limit. The introduction of waiting time limits increases the complexity of model formulation by significantly increasing the number of problem scenarios. To address this challenge, we introduce the concept of virtual waiting time, representing the additional waiting time that a patient would have to incur to see the doctor beyond the imposed waiting time limit. Using this construct, we are able to unify the modeling of the system dynamics of all the different scenarios into one stochastic program. We develop a tailored integer L-shaped method to solve this model and test its effectiveness against two benchmarks. Specifically, the subproblem is a mixed integer nonlinear program with good properties, which allow us to deduce its optimal value without using an optimization solver. We find that the presence of waiting time limits increases the job allowance between two adjacent patients, and that the optimal schedule does not necessarily exhibit the well-known dome-shaped pattern. We also find that waiting time limits help reduce the variation in patient waiting times across different positions in the schedule, thereby enhancing fairness. Furthermore, our results indicate that in the presence of waiting time limits, the total cost of the system is minimized when patients tend to arrive slightly late on average. Finally, we find that when a social planner sets the limit, the clinic has an incentive to misreport its true cost of serving each diverted patient, and an additional fine on top of the time limits helps improve social welfare.
Appointment scheduling · Waiting time limits · Uncertain service times
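The virtual waiting time construct can be illustrated with a simple deterministic recursion for one realized scenario (a hypothetical sketch with punctual arrivals and no no-shows, not the paper's stochastic program):

```python
def waits_with_limit(appts, services, limit):
    """For each patient, return (wait, virtual_wait): the delay before
    service starts, and the part of that delay exceeding `limit` (the
    'virtual waiting time' the clinic must divert or absorb)."""
    finish = 0.0
    out = []
    for a, s in zip(appts, services):
        start = max(a, finish)          # single doctor, FCFS by appointment
        wait = start - a
        out.append((wait, max(0.0, wait - limit)))
        finish = start + s
    return out
```

With appointments at 0, 10, 20 and realized service times 15, 15, 5, a limit of 3 leaves the second patient 2 units and the third 7 units of waiting beyond the limit.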
We study the problem of optimizing Large Language Model (LLM) inference scheduling to minimize total completion time. LLM inference is an online, multi-task, and energy-intensive service process in which a pre-trained LLM processes input requests and generates output tokens sequentially. It is therefore vital to improve scheduling efficiency and reduce power consumption while large volumes of prompt requests arrive. There are two key challenges. (i) Each request has heterogeneous prefill and decode lengths. In LLM serving, the prefill length corresponds to the input prompt length, which determines the initial memory usage in the KV cache. The decode length refers to the number of output tokens generated sequentially, with each additional token increasing the KV cache memory usage by one unit. We show that minimizing total completion time is NP-hard due to the interplay of batching, placement constraints, precedence relationships, and linearly increasing memory usage. We then analyze commonly used scheduling strategies in practice, such as First-Come-First-Serve (FCFS) and Shortest-First (SF), and prove that their competitive ratios are unbounded. To address this, we propose a novel algorithm based on a new selection metric that efficiently forms batches over time. We prove that this algorithm achieves a constant competitive ratio. (ii) The output length, which critically impacts memory usage and processing time, is unknown. We first design a conservative algorithm, Amax, which schedules requests based on the upper bound of predicted output lengths to prevent memory overflow. However, this approach is overly conservative: as prediction accuracy decreases, performance degrades significantly due to potential overestimation. To overcome this limitation, we propose Amin, an adaptive algorithm that initially treats the predicted lower bound as the output length and dynamically refines this estimate during inference.
We prove that Amin achieves a log-scale competitive ratio.
Scheduling · Optimization for LLM inference · Approximation online algorithms
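The memory dynamics in challenge (i) can be made concrete with a toy FCFS simulation (an illustrative sketch, not the authors' algorithms): admitting a request reserves its prefill memory, each decoded token adds one unit, and memory is freed on completion. The conservative admission rule below reserves each request's peak need up front and assumes every request fits in memory on its own:

```python
def fcfs_total_completion(requests, memory):
    """Total completion time of FCFS batching. Each request is a
    (prefill, decode) pair; all requests are present at time 0 and each
    running request generates one token per unit time step."""
    queue = list(range(len(requests)))
    running = {}                       # request id -> tokens generated
    used = t = total = 0
    while queue or running:
        # admit in arrival order while the head's peak need fits
        while queue:
            pre, dec = requests[queue[0]]
            if used + pre + dec > memory:
                break
            used += pre
            running[queue.pop(0)] = 0
        t += 1
        for i in list(running):        # one decoding step for the batch
            running[i] += 1
            used += 1
            if running[i] == requests[i][1]:
                used -= sum(requests[i])   # release the KV cache
                del running[i]
                total += t
    return total
```

With enough memory, requests run as one batch; with tight memory, the same requests are forced into sequential batches and the total completion time grows.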
Combinatorial optimization problems arising in transportation are often NP-hard, making them computationally challenging to solve at scale. Recent advances in machine learning have opened new avenues for tackling such problems, either as standalone solution strategies or by enhancing traditional optimization algorithms. This talk surveys a spectrum of learning-based approaches for transportation optimization, including: (i) end-to-end learning models, (ii) integration within exact algorithms, (iii) learning to guide local search, (iv) accelerating metaheuristics, (v) embedding within optimization formulations, and (vi) test-time search strategies. This talk will discuss the principles behind each approach, highlight representative applications, and reflect on both their current potential and open challenges for the future of transportation optimization.
It is by now well understood that fairness plays a key role in customer satisfaction. Yet, there is still a lack of models that help organizations make fair operational decisions, in particular when it comes to scheduling customers’ jobs. In this talk, I will present a novel framework for fair decision-making in repetitive scheduling environments. We study a setting with n clients, where in each of m consecutive periods (e.g., days), every client submits a job to be processed, and the scheduler must guarantee each client a minimum quality of service (QoS). I will demonstrate how this framework can be applied in different scheduling contexts and discuss some of the algorithmic challenges it raises. Joint work with Dvir Shabtay, Michael Pinedo, Rolf Niedermeier, Hendrik Molter, Klaus Heeger, and Danny Segev.
Fairness · Repetitive scheduling · Tardy jobs · Total completion time
We consider a large family of problems in which an ordering (or, more precisely, a chain of subsets) of a finite set must be chosen to minimize some weighted sum of costs. This family includes variations of min sum set cover, several scheduling and search problems, and problems in Boolean function evaluation. We define a problem, called the min sum ordering problem (MSOP), which generalizes all these problems using a cost and a weight function defined on subsets of a finite set. By making certain assumptions on the structure of the cost and weight functions, we derive general approximation results that can be applied to several problems. This talk will be based on two joint works with Robbert Fokkink, László Végh, Felix Happach and Lisa Hellerstein.
Scheduling · Search theory · Submodular functions · Min sum problems
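For min sum set cover, one of the special cases mentioned above, the natural greedy rule (always pick the set covering the most uncovered elements) is the classical benchmark; Feige, Lovász and Tetali proved it is a 4-approximation. A minimal sketch:

```python
def greedy_min_sum_set_cover(universe, sets):
    """Greedy for min sum set cover: the objective is the sum, over all
    elements, of the step at which each element is first covered.
    Returns the chosen ordering of set indices and the objective value."""
    uncovered = set(universe)
    remaining = list(range(len(sets)))
    order, cost, t = [], 0, 0
    while uncovered and remaining:
        t += 1
        best = max(remaining, key=lambda i: len(uncovered & sets[i]))
        newly = uncovered & sets[best]
        if not newly:
            break                       # nothing new can be covered
        cost += t * len(newly)
        uncovered -= newly
        order.append(best)
        remaining.remove(best)
    return order, cost
```

The MSOP framework generalizes this pattern by replacing "elements covered" with abstract cost and weight functions on subsets.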
As a growing number of companies adopt a make-to-order business model or sell their products online, we are seeing numerous decision problems that can be modeled as integrated production and distribution scheduling (IPDS) problems. In such problems, order processing and delivery must be scheduled jointly in order to achieve an optimal balance between total operational costs and overall customer service. Offline IPDS problems, in which the information about every order is known in advance with certainty, have been extensively studied. However, research on online IPDS problems, in which orders arrive randomly with their information unknown until they arrive, is relatively recent but growing rapidly. In this talk, we first describe two real-world applications to illustrate the importance of studying online IPDS problems from a practical point of view, and highlight the challenges in deriving good algorithms for them. We then focus on one specific problem and analyze an online algorithm for it. Finally, we provide an overview of this problem area and discuss possible topics for future research.
Make-to-order · E-commerce · Order processing · Last mile delivery
In this talk, we consider single-machine hierarchical scheduling problems with release dates and preemption, where the primary criterion is the total completion time and the secondary criterion is an arbitrary regular scheduling criterion of either the sum-form or the max-form. We aim to find a feasible preemptive schedule that minimizes the secondary criterion, subject to the condition that the primary criterion is minimized. We show that the variants of the problems under study are polynomially solvable. To address these problems, we establish some hereditary properties of the feasible schedules and instances, and present a complete description of the feasible schedules through some elaborately constructed job permutations.
Scheduling · Hierarchical criteria · Release date · Preemption
Problem decomposition refers to general techniques for efficiently solving large instances. The fact that decomposition splits the problem into smaller subproblems and then combines their solutions into a solution to the original problem opens up many possibilities for applying machine learning. Two properties make it particularly suitable. The first is that the subproblems are solved repeatedly or recursively, so similar instances are solved multiple times. The second is that the subproblems are smaller and thus easier to combine with machine learning. In this talk, we show two successful applications of machine learning to speed up scheduling algorithms based on decomposition techniques, namely branch and price and Lawler's decomposition.
Scheduling · Machine learning · Decomposition · Branch and price
Vehicle routing is a hard and extensively studied combinatorial optimization problem with numerous practical applications in transportation and logistics. In the last two decades, several effective methods have been proposed for the heuristic solution of the vehicle routing problem (VRP) and its many variants, but most of these methods do not scale well in computing time as the problem size grows. We discuss a family of approaches, originating from the FILO framework, that were explicitly designed to achieve linear growth of the computing time, making it possible to solve very large instances with up to one million customers within a very limited computing time.
Vehicle routing · Logistics optimization · Metaheuristics · Pruning and acceleration of local search
We propose an exact decomposition approach to solve the Quay Crane Scheduling Problem (QCSP) in container terminals. The problem is decomposed into two simpler problems: a sequencing master problem and a scheduling sub-problem. The master problem explores possible operation sequencing solutions for the tasks, while the sub-problem determines the completion times for each task to avoid collisions among quay cranes. A series of combinatorial cuts based on minimum infeasible systems and minimum sub-optimal systems are generated to accelerate the algorithm convergence. Furthermore, we provide a cut selection strategy to generate tighter multiple cuts. Computational tests on benchmark instances verify the effectiveness of the proposed approach.
The widely discussed Resource-Constrained Project Scheduling Problem (RCPSP) consists in devising a schedule for the execution of the project activities such that (a) the project duration is minimized, (b) the completion-start precedence between given pairs of activities is respected, and (c) at no time does the total demand of the in-progress activities exceed the available capacity of the various resource types required for the execution of the activities. In the literature, different formulations of the RCPSP have been presented as binary or mixed-binary linear optimization problems; according to the time representation, the two groups of discrete-time and continuous-time formulations can be distinguished. In many applications, resource types represent pools of equipment units or teams of people with specific skills. In this talk, we consider two novel variants of the RCPSP in which additional constraints related to the individual units of the various resource types have to be considered. In the first variant, the execution of the project is distributed across multiple sites, i.e., for the execution of each activity, a particular site must be selected; moreover, while some resource units are available only at a particular site, other resource units can be moved between sites, requiring some transportation time. In the second variant, the workload should be balanced among the individual units of each resource type. For both variants, we present a continuous-time assignment-based formulation as a mixed-binary linear optimization problem.
Project scheduling · Multi-site projects · Workload balancing · Mixed-integer linear programming
Hexaly Optimizer is a model-and-run mathematical optimization solver that addresses a broad range of industrial optimization problems in the areas of supply chain and workforce management, such as routing, scheduling, packing, clustering, matching, assignment, and facility location. Its mathematical formalism extends classical Mixed-Integer Linear Programming with set, permutation, and interval variables to which any usual algebraic operator (arithmetic, logic, relational, etc.) can be applied. Hexaly Optimizer is widely used in industry today, often delivers performance comparable to the best dedicated algorithms, allows compact modeling, scales well with problem size and complexity, and is constantly improving. This seminar focuses on the use of Hexaly Optimizer to model and solve industrial scheduling problems. We show how to exploit the mathematical concepts of the input formalism to model several classic scheduling problems in an elegant and compact manner, and give an idea of the solver's performance compared to the state of the art. Next, we outline the various techniques employed under the hood to produce good-quality primal and dual solutions, such as constraint propagation, local search, large neighborhood search, linear relaxations, scheduling heuristics, and exact scheduling algorithms on particular sub-problems.
Mathematical optimization solver · Algebraic model · Industrial problems · Scalability
Artelys has developed a scheduler service for Toyota Motor Europe that creates plans for post-production workshop operations across Europe. The problem consists in scheduling operations on vehicles across different production lines with the available resources. This case study details how this industrial problem differs from the classical Resource-Constrained Project Scheduling Problem (RCPSP) through its additional operational constraints (multiple shifts, preemptive breaks, sequence constraints, etc.) and its multiple objectives (minimizing late tasks and late vehicles, maximizing efficiency and worker ergonomics, etc.). Artelys has worked closely with Toyota to deploy this scheduler service as a micro-service in order to solve this complex problem in a real-time context and as a flexible decision-aid software. The associated constraint programming model is implemented in the FICO Mosel modelling language and solved using the Artelys Kalis solver. This micro-service has replaced a tedious manual task that was carried out differently in every workshop by delivering faster, higher-quality solutions.
In this talk, divisible load theory (DLT) will be introduced. The theory is a scheduling and computer performance model applicable to data-parallel applications. Its basic assumption is that the computing work can be divided into pieces of arbitrary sizes and that these pieces can be processed independently in parallel. In the talk we will proceed from the basic formulation of divisible load scheduling on a heterogeneous system to a formulation for multi-installment divisible load processing in a heterogeneous system with hierarchical memory and energy constraints. The NP-hardness of various DLT scheduling problem variants will be considered. The application of DLT to isoefficiency maps, which visualize parallel processing performance relationships, will be demonstrated.
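For the basic single-installment case on a star of workers, the classical optimality principle, that all workers finish simultaneously, gives a closed-form recursion for the load fractions. A sketch under these textbook assumptions (worker i receives its chunk at communication rate z[i], then computes at rate w[i]):

```python
def load_fractions(w, z):
    """Fractions alpha of a divisible load sent to workers 0..n-1 in
    order, so that all workers finish at the same time: equating the
    finish times of workers i and i+1 yields
        alpha[i+1] = alpha[i] * w[i] / (z[i+1] + w[i+1]),
    then the fractions are normalized to sum to 1."""
    k = [1.0]
    for i in range(1, len(w)):
        k.append(k[-1] * w[i - 1] / (z[i] + w[i]))
    s = sum(k)
    return [x / s for x in k]
```

For two identical workers with w = z = 1, the first worker receives 2/3 of the load and both finish at time 4/3.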
Hyper-heuristics are powerful search methodologies that operate on low-level heuristics or heuristic components to tackle computationally hard optimisation problems. The current state of the art in hyper-heuristic research contains classes of algorithms that focus on intelligently selecting or generating a suitable heuristic for a given situation. Hence, there are two main types of hyper-heuristics: selection and generation hyper-heuristics. A typical selection hyper-heuristic chooses a low-level heuristic and applies it to the current solution at each step of a search, before deciding whether to accept or reject the newly created solution. Generation hyper-heuristics, in contrast, automatically build heuristics or heuristic components during the search process. Machine learning is revolutionising various fields, and its integration with hyper-heuristics holds immense potential. This talk will first offer a concise overview of hyper-heuristics, followed by illustrative case studies demonstrating how we have successfully applied machine learning to automatically design more effective selection hyper-heuristics.
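A minimal selection hyper-heuristic can be sketched in a few lines. The toy below (illustrative only, on a max-ones problem) pairs an epsilon-greedy heuristic selection rule, scored by observed improvements, with accept-if-not-worse move acceptance; all names and parameters are invented for the example:

```python
import random

def selection_hyper_heuristic(n=30, steps=300, eps=0.2, seed=0):
    """Epsilon-greedy selection over three low-level heuristics on a
    max-ones bitstring; a move is accepted if it is not worse."""
    rng = random.Random(seed)
    sol = [rng.randint(0, 1) for _ in range(n)]

    def flip_one(s):
        s = s[:]; s[rng.randrange(n)] ^= 1; return s

    def flip_two(s):
        s = s[:]
        for i in rng.sample(range(n), 2):
            s[i] ^= 1
        return s

    def set_random(s):
        s = s[:]; s[rng.randrange(n)] = 1; return s

    llh = [flip_one, flip_two, set_random]
    score = [0.0] * len(llh)            # running reward per heuristic
    for _ in range(steps):
        h = (rng.randrange(len(llh)) if rng.random() < eps
             else max(range(len(llh)), key=score.__getitem__))
        cand = llh[h](sol)
        delta = sum(cand) - sum(sol)
        score[h] += delta               # learn from observed change
        if delta >= 0:                  # move acceptance criterion
            sol = cand
    return sum(sol)
```

Replacing the epsilon-greedy scores with a learned model is one way machine learning enters the selection step.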
Mixed-integer linear programming (MILP) is at the core of many advanced algorithms for solving fundamental problems in combinatorial optimization. The complexity of solving MILPs directly correlates with their support size, which is the minimum number of non-zero integer variables in an optimal solution. A hallmark result by Eisenbrand and Shmonin (Oper. Res. Lett., 2006) shows that any feasible integer linear program (ILP) has a solution with support size s ≤ 2m · log(4m∆), where m is the number of constraints and ∆ is the largest absolute coefficient in any constraint. Our main combinatorial result is an improved support size bound for ILPs: any ILP has a solution with support size s ≤ m · (log(3∥A∥_1) + √(log(∥A∥_1))), where ∥A∥_1 denotes the 1-norm of the constraint matrix A. Our upper bounds also hold with ∥A∥_1 replaced by √(m∆), which improves on the previously best constants in the linearized form. Our main algorithmic result is the fastest known approximation scheme for fundamental scheduling problems, which uses the improved support bounds as one ingredient. We design an efficient approximation scheme (EPTAS) for makespan minimization on uniformly related machines (Q||Cmax). Our EPTAS yields a (1 + ε)-approximation for Q||Cmax on N jobs in time 2^{O((1/ε) log^3(1/ε) log(log(1/ε)))} + O(N), which improves exponentially over the previously fastest algorithm by Jansen, Klein and Verschae (Math. Oper. Res., 2020). Arguably, our approximation scheme is also simpler than all previous EPTASes for Q||Cmax, as we reduce the problem to a novel MILP formulation with small support size.
Uniformly related machine scheduling · Makespan minimization · Efficient approximation schemes · Mixed-integer linear programming
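To get a feel for the magnitudes, the two support size bounds can be compared numerically (a sketch; logarithms are taken base 2 here, which is an assumption about the stated forms):

```python
import math

def eisenbrand_shmonin(m, delta):
    """s <= 2m * log(4 m Delta), the 2006 bound."""
    return 2 * m * math.log2(4 * m * delta)

def improved_bound(m, a_one_norm):
    """s <= m * (log(3 ||A||_1) + sqrt(log ||A||_1)), the new bound."""
    return m * (math.log2(3 * a_one_norm) + math.sqrt(math.log2(a_one_norm)))
```

For m = 10 constraints and ∆ = 100 (so ∥A∥_1 ≤ m∆ = 1000), the old bound gives about 239 while the new one gives about 147.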
We consider a variant of the simple assembly line balancing problem in which the number of workstations and the cycle time are fixed, but some tasks are subject to uncertainty (their processing times can increase). The purpose is to maintain the feasibility of the schedule when task durations increase. To achieve this, we consider two stability measures (to be maximized): the stability radius and the stability factor, which correspond to different types of task duration increase. The scheduling problem is formulated as a mixed integer linear program, and this formulation is strengthened by an upper bound and a heuristic. More specifically, the allocation intervals of the tasks, which are based on the precedence constraints, are narrowed; these narrowed intervals are in turn used to improve the results of the heuristic and of the mixed integer linear programming formulation.
Operations research · Integer linear programming · Decomposition methods
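As a simplified illustration of the stability radius idea (a hypothetical reduction of the measures in the talk, not their exact definition): if every uncertain task's duration increases by the same amount rho, feasibility means each station's load plus rho times its number of uncertain tasks stays within the cycle time, so the radius is a simple minimum:

```python
def stability_radius(loads, uncertain_counts, cycle_time):
    """Largest uniform increase rho applicable to every uncertain task
    while each workstation load stays within the cycle time:
    load[s] + rho * k[s] <= cycle_time for every station s."""
    rho = float("inf")
    for load, k in zip(loads, uncertain_counts):
        if k > 0:
            rho = min(rho, (cycle_time - load) / k)
    return max(0.0, rho)
```

A balance with station loads 8 and 6, holding 2 and 1 uncertain tasks under cycle time 10, tolerates a uniform increase of at most 1.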
In this talk, we study the problem of generating personnel rosters that are robust with respect to disruptions caused by employee absenteeism. If an employee unexpectedly becomes absent, they must be replaced by another employee. This in turn may affect the working hours of other employees, creating an undesirable ripple effect that changes a large part of the roster. We investigate how the use of on-call duties may avoid such large perturbations. We begin by introducing a metric to quantify the robustness of a roster, which is then used to generate sufficiently robust rosters. In the second part of the talk, we discuss the conditions under which machine learning can lead to better solutions. A novel methodology is presented to determine the minimum prediction performance required to outperform the non-data-driven rostering approach.
Personnel rostering · Robustness · On-call shift rostering · Operations research
The CP-SAT solver is developed by the Operations Research team at Google and is part of the OR-Tools open-source optimization suite. It is an implementation of a purely integral Constraint Programming solver on top of a SAT solver using Lazy Clause Generation. It draws its inspiration from the chuffed solver and from the CP 2013 plenary by Peter Stuckey on Lazy Clause Generation. The CP-SAT solver improves upon the chuffed solver in two main directions. First, it uses a simplex alongside the SAT engine. Second, it implements and relies upon a portfolio of diverse workers for its search part. The use of the simplex brings the obvious advantages of a linear relaxation of the linear part of the full model. It also started the integration of MIP technology into CP-SAT. This is a huge endeavour, as MIP solvers are mature and complex. It includes presolve -- which was already a part of CP-SAT --, dual reductions, specific branching rules, cuts, reduced-cost fixing, and more advanced techniques. It also allows the tight integration of research on scheduling with MIP along with the most advanced scheduling algorithms. This has enabled breakthroughs in solving and proving hard instances of Job-Shop and Resource-Constrained Project Scheduling Problems. Using a portfolio of different workers makes it easier to try new ideas and to incorporate orthogonal techniques with little complication, apart from controlling the explosion of potential workers. These workers can be categorized along multiple criteria, such as finding primal solutions -- using complete solvers, Local Search, or Large Neighborhood Search --, improving dual bounds, and trying to reduce the problem with the help of continuous probing. This diversity of behaviors has increased the robustness of the solver, while the continuous sharing of information between workers has produced massive speedups when running multiple workers in parallel.
All in all, CP-SAT is a state-of-the-art solver, with unsurpassed performance in the Constraint Programming community, breakthrough results on Scheduling benchmarks (with the closure of many open problems), and competitive results with the best MIP solvers (on purely integral problems).
Constraint programming · SAT solver · Integer linear programming
Driven by the success of e-commerce, today's warehouses are evolving into fully automated fulfillment factories. Many of them follow the parts-to-picker paradigm, using mobile shelf-lifting robots or conveyors to deliver stock keeping units (SKUs) to stationary order pickers working in picking workstations. This talk aims to structure and review the family of fulfillment-related scheduling problems that arise in this environment: On the input side, the sequence in which totes of SKUs are delivered to a workstation must be synchronized with the orders that are being assembled there at the same time on the output side. In this way, a more efficient fulfillment process can be achieved, and the bin supply system can be relieved. This talk classifies the family of slightly different order fulfillment scheduling problems that arise with different workstation setups in alternative warehouses. This classification scheme is used to survey the existing literature, analyze the computational complexity, and systematically quantify the gains of alternative workstation setups. The talk also highlights valuable future research ideas for this emerging area of scheduling research.
Warehousing · E-commerce · Order fulfillment scheduling · Synchronization
Firms operating on a make-to-order basis may not satisfy the entire demand due to limited capacity and tight delivery times. This necessitates selecting only part of the customer orders to maximize the total revenue, which gives rise to order acceptance and scheduling (OAS) problems. In this study, we investigate a generalized version of the OAS (GOAS) problem originating from a real-life setting. Due to several components of the problem, such as release times, due dates, deadlines, and sequence-dependent setup times, finding an exact solution to the GOAS problem, which determines which orders to accept and how to schedule them simultaneously to maximize revenue, in reasonable time is difficult even in a single machine environment. Hence, we develop an effective and efficient matheuristic for the GOAS problem, consisting of a time-bucket-based mixed integer linear programming model, a variable neighborhood search algorithm, and a tabu search algorithm. Computational results show that the proposed matheuristic outperforms the state-of-the-art algorithms developed for the GOAS problem. The boundary of optimally solved instance size is pushed further, and near-optimal solutions are obtained in reasonable time for instances beyond this boundary. Joint work with İstenç Tarhan.
Order acceptance and scheduling · Matheuristic · Variable neighborhood search · Tabu search
We consider the 1 || ∑ wj Uj problem, the problem of minimizing the weighted number of tardy jobs on a single machine. This problem is one of the most basic and fundamental problems in scheduling theory, with several different applications both in theory and practice. We prove that 1 || ∑ wj Uj is W[1]-hard with respect to the number of different processing times in the input, as well as with respect to the number of different weights in the input. This, along with previous work, provides a complete picture for 1 || ∑ wj Uj from the perspective of parameterized complexity, as well as almost tight complexity bounds for the problem under the Exponential Time Hypothesis (ETH). Based on joint work with Danny Hermelin.
Single-machine scheduling · Parameterized complexity · Number of numbers
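For context, 1 || ∑ wj Uj is classically solvable in pseudo-polynomial time by the Lawler-Moore dynamic program, which processes jobs in earliest-due-date order and tracks the total processing time of the on-time set; the hardness above concerns parameters such as the number of distinct processing times or weights, not this DP. A sketch:

```python
def max_ontime_weight(jobs):
    """jobs = [(p, d, w)] with processing time p, due date d, weight w.
    Returns the maximum total weight of jobs that can finish on time on
    one machine; minimizing sum w_j U_j equals total weight minus this.
    Runs in O(n * sum p) time."""
    jobs = sorted(jobs, key=lambda j: j[1])         # earliest due date first
    T = sum(p for p, _, _ in jobs)
    NEG = float("-inf")
    dp = [0] + [NEG] * T   # dp[t]: best weight of an on-time set ending at t
    for p, d, w in jobs:
        for t in range(min(d, T), p - 1, -1):       # on time only if t <= d
            if dp[t - p] > NEG:
                dp[t] = max(dp[t], dp[t - p] + w)
    return max(dp)
```

With three jobs of processing time 2 and due dates 2, 3, 3, only one job can ever be on time, so the DP simply keeps the heaviest one.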
In a classical scheduling problem, we are given a set of n jobs of unit length along with precedence constraints, and the goal is to find a schedule of these jobs on m identical parallel machines that minimizes the makespan. In the standard 3-field notation, it is known as Pm|prec, p_j=1|Cmax. Settling the complexity of Pm|prec, p_j=1|Cmax even for m=3 machines is, together with Subgraph Isomorphism, one of the last two open problems from the book of Garey and Johnson [GJ79] for which the computational complexity remains a mystery. We present an algorithm for this problem that runs in (1 + n/m)^{O(√(nm))} time, which is subexponential when m = o(n). For m=3, this gives a runtime of 2^{O(√n log n)}.
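While the exact complexity for fixed m ≥ 3 is open, unit-length jobs with precedence constraints are commonly scheduled with Hu's level rule: in each unit time slot, run up to m ready jobs with the largest level (longest chain of remaining successors). This is optimal for in-tree precedence and a natural heuristic in general; a sketch:

```python
from collections import defaultdict

def hu_level_schedule(n, prec, m):
    """Makespan of the schedule built by Hu's level rule for
    P|prec, p_j=1|Cmax; prec holds pairs (a, b): a precedes b."""
    succ = defaultdict(list)
    indeg = [0] * n
    for a, b in prec:
        succ[a].append(b)
        indeg[b] += 1

    memo = {}
    def level(j):                      # longest chain starting at j
        if j not in memo:
            memo[j] = 1 + max((level(s) for s in succ[j]), default=0)
        return memo[j]

    done = [False] * n
    finished = t = 0
    while finished < n:
        ready = sorted((j for j in range(n) if indeg[j] == 0 and not done[j]),
                       key=level, reverse=True)
        for j in ready[:m]:            # fill one unit-length time slot
            done[j] = True
            finished += 1
            for s in succ[j]:
                indeg[s] -= 1
        t += 1
    return t
```

On a chain of three jobs plus two independent jobs with m = 2 machines, the rule schedules along the chain first and matches the chain-length lower bound of 3.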
The onset of the pandemic and the conflict in Europe have disrupted our society, supply chains, and economies. The effects of these global shocks are evident in every type of project, whether related to the energy transition, the renewal of transport infrastructure, or the construction industry, all impacted by the highest rates of inflation in more than 30 years and by a significant cash flow crisis. In this complex landscape, delays in payments have become a common risk factor for projects, since they create a time lag between the expenses incurred by the contractor and the progress payments received from the client. The challenges associated with obtaining continuous project finance often place undue financial strain on contractors, who may seek loans from financial institutions to maintain their daily operations. These loans must be repaid with interest, increasing financing costs and considerably lowering the Net Present Value. In this talk, we delve into cash flow and project scheduling strategies to mitigate the impact of late payments and enhance project resilience, presenting a distributionally robust risk-averse model that minimizes the financing cost by accurately estimating the amount and timing of the expenses and revenues throughout the project life cycle and by foreseeing possible cost overruns and cash flow fluctuations. Joint work with Oncu Hazir.
Scheduling under uncertainty and risk · Payment delays · Net Present Value · Cash flow analysis
This talk discusses the parameterized complexity of scheduling problems with precedence constraints, time windows, and typed-task resource constraints. We recall the usual parameters used for scheduling problems and focus on a parameter suitable for problems with time windows: the pathwidth of the underlying interval graph. We present three results involving this parameter. First, we show a fixed-parameter tractable (FPT) algorithm for a scheduling problem with unit processing times. Then, we give a para-NP-hardness result for arbitrary processing times. Finally, we outline an FPT algorithm for this problem that combines two parameters. Authors: Claire Hanen and Alix Munier-Kordon.
We study a single-machine scheduling problem with an external resource, which is rented for a non-interrupted period. Jobs that need this external resource can be executed only when the external resource is available. There is a cost associated with the scheduling of jobs and a cost associated with the duration of the renting period of the external resource. We look at four classes of problems with an external resource: a class of problems where the renting period is budgeted and the scheduling cost needs to be minimized, a class of problems where the scheduling cost is budgeted and the renting period needs to be minimized, a class of two-objective problems where both the renting period and the scheduling cost are to be minimized, and a class of problems where a linear combination of the scheduling cost and the renting period is minimized. We provide a thorough complexity analysis (NP-hardness proofs and (pseudo-)polynomial algorithms) for different members of these four classes.
Keywords: Scheduling, Single-machine scheduling, External resource, Complexity
In this talk, we focus on a single-machine scheduling problem in additive manufacturing, in which parts can be produced simultaneously in a batch subject to two-dimensional packing constraints, and the objective is to minimize the makespan. To solve this problem, we propose an approximation algorithm with a constant approximation ratio and develop an exact algorithm based on combinatorial Benders decomposition with various Benders cuts and acceleration strategies. We also conduct extensive computational experiments to test the performance of our proposed solution approaches.
Keywords: Scheduling, Additive manufacturing, Two-dimensional bin packing, Approximation algorithm
We consider a natural, yet challenging variant of the parallel machine scheduling problem in which each machine imposes a preferential order over the jobs and schedules the jobs accordingly once assigned to it. We study the setting in which a centralized authority assigns the jobs to machines, as well as the associated non-cooperative game in which jobs are controlled by selfish agents who independently choose which machine to use. In the talk, I will define the model, highlight the differences from other well-analyzed job-scheduling optimization problems and coordinated mechanisms, and present some of our results for several natural objectives (e.g., makespan, lateness, and total completion-time minimization). As we show, having machine-dependent priority lists dramatically affects both the computational complexity of the problem and the equilibrium inefficiency. Joint work with Vipin Ravindran Vijayalakshmi and Marc Schroder.
Scheduling of surgeries is a complex process that involves not only the simultaneous scheduling of several resources (staff, room, equipment, supplies, instruments), but also building flexibility into capacity-reservation policies to accommodate most types of patient classes. In the case of trauma centers this complexity increases even more due to the need for dynamic rescheduling of elective surgeries as emergency surgeries arrive randomly. In practice, these issues are tackled every day in a ‘non-optimal/heuristic’ way. Recent research in this area has shown the potential of implementing modified priority rules. In contrast to trauma centers, ambulatory surgery centers only perform elective surgeries and have a lower cost structure. Their profitability is therefore dependent upon efficient use of capacity. Recent research has modeled these centers as hybrid flow shops and solved the capacity planning problem using easy-to-implement heuristics. This talk will also discuss some new avenues of operating room scheduling that have not yet been researched by academics.
In this talk, we focus on a container dispatching and conflict-free yard crane routing problem that arises at a storage yard in an automated maritime container terminal. A storage yard serves as an intermediate buffer for import/export containers and exchanges containers between the waterside and landside of a maritime terminal. The problem is to decide in which order and by which crane the containers are transported in order to minimize the makespan and prevent crane interferences. First, we limit our attention to incoming containers only, which are positioned by twin cranes. Containers are assigned to the cranes according to different policies. We show that some cases are polynomially solvable; approximation algorithms with guaranteed absolute and relative deviations from the optimum are devised for others. The results translate to the case of outgoing containers. In the second part, we consider two rail-mounted gantry cranes of different sizes, with the possibility to cross each other, that perform inbound, outbound and housekeeping requests. We solve this problem to optimality by a branch-and-cut approach that decomposes the problem into two problem classes and connects them via logic-based Benders constraints.
Keywords: Logistics, Scheduling, Stacking cranes, Container terminal
Any sports competition needs a schedule, specifying when and where teams meet each other. Apart from a number of pioneering theoretical results, most sports timetabling contributions in the literature read as case studies, describing a single problem instance for which a tailored algorithm is developed and compared to a manual solution. While the reported problems are challenging, and the algorithms made an impact in practice, it is hard to assess algorithmic performance. Indeed, real-life problem instances are rarely shared, and few realistic benchmark instances are available. In this talk, we discuss our efforts to obtain insights into the strengths and weaknesses of several state-of-the-art sports scheduling algorithms, and to predict which algorithm to select for which type of problem. The story covers the development of a problem classification and a unifying data format, the generation of a set of diverse and realistic benchmark instances, the organization of a timetabling competition, and an instance space analysis for sports scheduling.
Keywords: Sports scheduling, Unified data format, International timetabling competition, Benchmarking
The resource-constrained project scheduling problem with flexible resource profiles (FRCPSP) is a generalization of the RCPSP in which a work content is given for each activity and has to be allocated between the start and the finish time of the activity. Hence, in addition to the start times of the activities, a schedule comprises decisions about their durations and the allocation of the work content between the start and the finish time of each activity. The FRCPSP was introduced in 2003 in the context of a real-world application to pharmaceutical research projects. Since then, different models and methods as well as applications have been proposed in the literature. In this talk we will present the FRCPSP, discuss available MIP formulations as well as heuristics, and present work on the use of the FRCPSP to solve different real-world optimization problems.
Keywords: Project scheduling, Flexible resource profiles, Work content, MIP formulations
The main disadvantage of calculations on real quantum computers is their non-determinism. For optimization problems, it is possible to get surprisingly good results using the Quantum Annealing approach, but without a guarantee of the optimality of the result: simply put, the quantum machine has not found anything better. In this presentation, an approach that provides such a guarantee of optimality is proposed. A solution that is optimal in the strict mathematical sense is generated, without probabilistic considerations. For this purpose, a D-Wave quantum machine is used, working as a sampler implementing quantum annealing -- an approach that can be considered a hardware metaheuristic -- to obtain upper and lower bounds on the value of the objective function of the problem under consideration. Then a Branch and Bound scheme, controlled by quantum annealing, is used, which allows us to obtain very quickly -- in constant time for the considered instances -- the bounds of the considered subproblems. The whole idea is an alternating combination of calculations realized on the QPU and the CPU, allowing us to generate optimal solutions to the NP-hard problems of task scheduling on a single machine under the total weighted tardiness and total number of tardy jobs criteria. The main result is the formulation of the lower bound in a "language" (i.e. a mathematical model) understandable by a quantum machine.
Cooperative game theory focuses on schemes that lead to a global collaboration among multiple independent decision makers. In cooperative game theory, one basic concept is the allocation in the core, which characterizes how the players shall share the cost/benefit in a way acceptable to all sub-coalitions. Unfortunately, it is well known that many cooperative games have an empty core, including games concerning scheduling problems. For such games the global collaboration will not be sustainable. We consider a situation where an outside party has the need to stabilize the grand coalition because, for example, the best social welfare can be achieved only when all players collaborate. We introduce a few economic treatments that can be used by the outside party, such as providing subsidy and charging penalty. These treatments, including their concepts and implementations, are demonstrated by games related to scheduling problems.
Manufacturing companies have recently shown a growing interest in using machine learning to improve scheduling. In this talk, we will present three real-life industrial scheduling problems, with a specific focus on the application of machine learning. First, in semiconductor manufacturing, multiple weighted dispatching rules are used to determine a sequence of jobs. Engineers assign these weights based on their previous experience. We propose a machine learning approach to determine the best weight set for all rules, especially when there is not enough time to derive it. Second, we propose a method that integrates machine learning and mathematical programming for scheduling problems in steel manufacturing. This approach reflects the engineers’ preferences and improves the performance of scheduling at the same time. Finally, we will present a hybrid flow shop scheduling problem for insulation manufacturing where machine learning combined with the NEH algorithm has been applied. We will also discuss the challenges of implementing machine learning or other heuristic algorithms in practical settings.
This talk is about the heuristic solution of scheduling problems by means of heuristics based on mathematical programming, well known under the name of matheuristics. Whether they are used to build or to improve a solution, they always take advantage of mathematical programming in order to efficiently solve some subproblems. Matheuristics have been introduced in the literature during the last decade, especially to solve routing problems. Little is known about their application to machine scheduling problems. I will introduce different forms of matheuristics, give feedback on the solution of some scheduling problems, and also make a brief excursion into the land of machine learning.
The aim of this talk is to present some new results on constructive and destructive bounds for the m-machine scheduling problem. Recently we have characterized mathematically the three main constructive bounds, namely the preemptive bound, the energetic bound and the JPPS makespan. These characterizations give insight into their similarities and differences, and explain why these bounds are generally equal in practice. Moreover, our characterization of the energetic bound introduced by Erschler, Lopez and Thuriot permits building an O(n α(n) log n) checker (α(n) being the inverse Ackermann function), the best one in the literature. We have compared it to the checker of Baptiste, Le Pape and Nuijten (O(n^2)) and to the checker of Ouellet and Quimper (O(n log^2 n)). The checker of Baptiste, Le Pape and Nuijten is based on an identification of useful intervals and on incremental evaluations of interval energies. Ouellet and Quimper prove that the energy matrix is a Monge matrix and evaluate the energy of an interval thanks to a pre-calculated data structure based on range trees. We characterize mathematically the useful intervals, then we use the data structures introduced by Ouellet and Quimper and a nice algorithm for partial Monge matrices. Our checker is also the best one in practice in the literature, as confirmed by the numerical results we report. Work in collaboration with Abderrahim Sahli, Antoine Jouglet and Eric Pinson.
Keywords: Scheduling, Cumulative resource, Lower bound, Algorithms
Flow time is one of the most natural metrics to optimize in scheduling, but algorithmically it can be notoriously difficult to handle. I will talk about some recent advances in this topic, focusing on two results by myself and co-authors: a PTAS for the sum of weighted flow times on a single machine and improved approximation guarantees for parallel unrelated machines. The first result is enabled by a study of structural properties of constraints in a natural ILP formulation and the second result relies on a novel connection to discrepancy theory.
Keywords: Scheduling, Flow time, Integer programming, Discrepancy
A synchronous flow shop is a variant of a non-preemptive permutation flow shop where transfers of jobs from one machine to the next take place at the same time. The processing is organized in synchronized cycles, which means that in a cycle all current jobs start at the same time on the corresponding machines. Then all jobs are processed and have to wait until the last one is finished. Afterwards, all jobs are moved to the next machine simultaneously. As a consequence, the processing time of a cycle is determined by the maximum processing time of the operations contained in it. Furthermore, only permutation schedules are feasible, i.e., the jobs have to be processed in the same order on all machines. The goal is to find a permutation of the jobs such that the makespan is minimized. Motivated by a practical application in production planning at a company assembling shelf boards for kitchen elements, we investigate different aspects of synchronous flow shop problems. In particular, we consider the situation of dominating machines, additional resources, setup times and leaving machines idle.
Keywords: Scheduling, Flow shop, Synchronized movement, Shelf board production
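The cycle structure described in the abstract determines the makespan of a permutation directly: with m machines and n jobs there are n + m - 1 cycles, and each cycle lasts as long as its longest operation. A minimal sketch (my own notation, not the speakers'):

```python
def sync_flowshop_makespan(perm, p):
    """Makespan of a permutation in a synchronous flow shop.

    p[i][j] is the processing time of job j on machine i. In cycle t,
    the job at position j of the permutation occupies machine t - j
    (if that index is a valid machine); all jobs then advance together,
    so the cycle's length is the maximum of its operations.
    """
    m, n = len(p), len(perm)
    makespan = 0
    for t in range(n + m - 1):                      # one term per cycle
        ops = [p[t - j][perm[j]] for j in range(n) if 0 <= t - j < m]
        if ops:
            makespan += max(ops)
    return makespan
```

For example, with two machines and processing times p = [[3, 1], [2, 4]], the permutation [1, 0] yields cycles of lengths 1, 4, 2, so reordering the jobs changes which long operations share a cycle.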
Typical scheduling problems deal with a set of activities (jobs) requiring various resources (machines) to be performed. In most scheduling scenarios, it is assumed that machines are continuously available (possibly except in some scheduled maintenance intervals), and this gives rise to problems in which the typical scheduling objectives (makespan, total weighted completion time, etc.) are pursued. A significantly different scenario arises if machines may actually fail, i.e., (i) while performing a job i, a machine becomes unavailable (e.g. breaks down) with probability π_i, and (ii) such failures are unrecoverable, in the sense that from then onwards the machine is lost and so are the jobs already allocated and not yet processed on that machine. If a job i is successfully completed, a reward r_i is attained. In this context, the basic problem is how to assign the jobs to the machines and how to sequence them so that the expected reward is maximized. In this talk we review the main results, discuss relationships with other sequencing problems and point out some open problems. We address the following scenarios. 1) m parallel (identical) machines. While the single-machine case is easy, for two or more machines the problem is hard and various approaches have been proposed to address it. For general m, list scheduling yields a 0.8531-approximate solution. The argument of the proof is similar to the one used by Schwiegelshohn to prove Kawaguchi and Kyan's bound for the minimization of total weighted completion time. 2) In order to hedge against machine failures, one can use job replication. In this case, copies of the same job can be scheduled on different machines, and the reward r_i is attained if at least one copy is successfully completed. Although this problem, too, is hard for m >= 2, relatively simple algorithms provide solutions which are provably close to optimality. 3) This class of sequencing problems is also related to testing problems, as follows.
A system consists of n components, each of which can be either functioning or not. The system is "up" only if all components are functioning. Component i is functioning with probability π_i, and testing it costs c_i. As soon as a component that is not functioning is detected, the testing stops (concluding that the system is "down"). The problem is to decide in which order the components should be tested in order to minimize the expected cost. While the single-tester problem is solved by a simple priority rule, various problem variants can be considered. In particular, if several testers operate in parallel, under time constraints, the problem gets more complicated. While it is NP-hard for three or more testers, its complexity with two testers is still open.
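The single-tester priority rule can be made concrete: a pairwise-interchange argument shows it is optimal to test components in nondecreasing order of c_i / (1 - π_i). The sketch below (illustrative, not from the talk) checks the rule against brute-force enumeration:

```python
import itertools
import random

def expected_cost(order, cost, p_work):
    """Expected testing cost: component i is tested only if every
    earlier component in the order turned out to be functioning."""
    total, prob_reached = 0.0, 1.0
    for i in order:
        total += prob_reached * cost[i]
        prob_reached *= p_work[i]
    return total

def optimal_order(cost, p_work):
    # Pairwise interchange: i before j is better iff
    # c_i * (1 - p_j) <= c_j * (1 - p_i), i.e. sort by c_i / (1 - p_i).
    return sorted(range(len(cost)), key=lambda i: cost[i] / (1.0 - p_work[i]))

random.seed(0)
n = 6
cost = [random.uniform(1, 10) for _ in range(n)]
p_work = [random.uniform(0.1, 0.9) for _ in range(n)]
rule = optimal_order(cost, p_work)
brute = min(expected_cost(o, cost, p_work)
            for o in itertools.permutations(range(n)))
assert abs(expected_cost(rule, cost, p_work) - brute) < 1e-9
```

The interchange argument breaks down with parallel testers under time constraints, which is why those variants are harder.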
Algorithms with predictions is a recent framework that has been used to overcome pessimistic worst-case bounds in incomplete-information settings. In the context of scheduling, very recent work has leveraged machine-learned predictions to design algorithms that achieve improved approximation ratios in settings where the processing times of the jobs are initially unknown. We study the speed-robust scheduling problem, where the speeds of the machines, instead of the processing times of the jobs, are unknown, and augment this problem with predictions. In this talk, we give an algorithm that simultaneously achieves, for any x < 1, a (1 + x)-approximation when the predictions are accurate and a (2 + 2/x)-approximation when the predictions are not accurate. We also study special cases and evaluate our algorithm's performance as a function of the error. Joint work with Eric Balkanski, TingTing Ou and Hao-Ting Wei, all at Columbia.
Local search-based algorithms have tended towards incorporating an ever-increasing number of heuristics for different problem classes, for example all sorts of vehicle routing generalizations. These heuristics range from all-purpose `swap' and `insert' to complicated made-to-measure operators. It has become a challenge to determine the impact of individual components on an algorithm's performance. In contrast to targeting generalizing problem extensions, it may be worthwhile to focus on a problem's core when designing a basic optimization heuristic. This talk introduces a recently published local search operator for vehicle routing problems: SISRs. This heuristic is unique insofar as it seeks to induce `spatial' and `capacity' slack during a ruin phase which may subsequently be exploited in an almost-greedy recreate phase. SISRs emerged after a dedicated attempt towards solving the vehicle routing problem's most basic special case, that is the `capacitated VRP'. SISRs' quality is validated by way of demonstrating its performance across a wide and diverse range of VRP generalizations. This confirms that the basic CVRP ruin & recreate heuristic is also effective when applied to more general vehicle routing problems, including fleet minimization, without the need to design additional problem-specific components.
Keywords: Vehicle routing, Local search operators, Ruin and recreate, Spatial slack
Synchronous Dataflow (SDF) graphs were introduced in 1987 by Lee and Messerschmitt to model data exchanges in embedded systems. An SDF graph is usually defined as a directed graph, where each node is associated with a task that may be executed infinitely often. Each arc represents a buffer between two tasks. Moreover, the number of data samples produced or consumed by each task on each execution is specified a priori. Nowadays, SDF graphs are considered by several scientific communities. They allow one, for example, to model the data exchanges in the design of digital signal processing systems, or in real-time applications running on complex architectures. The purpose of this talk is to present a set of mathematical results developed on SDF graphs, and to show how to use them to solve concrete problems posed by their users. We will also draw the link to classical cyclic scheduling problems by demonstrating that the buffers can be associated with usual precedence constraints between successive task executions. We will conclude with some challenging open questions.
Keywords: Synchronous DataFlow, Precedence constraints, Normalization, Liveness
The simultaneous consideration of appointment day (interday scheduling) and time of day (intraday scheduling) in dynamic scheduling decisions is a theoretical and practical problem that has remained open. We introduce a novel dynamic programming framework that jointly incorporates these scheduling decisions on two timescales. Our model is designed with the intention of bridging the two streams of literature on interday and intraday scheduling and leveraging their latest theoretical developments in tackling the joint problem. We establish theoretical connections between two recent studies by proving novel theoretical results in discrete convex analysis regarding constrained multimodular function minimization. Grounded in our theory, we develop a practically implementable and computationally tractable scheduling paradigm with performance guarantees. Numerical experiments demonstrate that the optimality gap is less than 1% for practical instances of the problem.
The formulation of scheduling problems as mathematical optimization problems is a useful step in deriving exact solutions, or approximate solutions with performance guarantees. We give a brief overview of polyhedral approaches, which aim to apply the power of linear and mixed-integer optimization to certain classes of scheduling problems, in particular those with min-sum type objectives, such as minimizing weighted sums of completion dates. The choice of decision variables is the prime determinant of such formulations. Constraints, such as facet-inducing inequalities for the corresponding polyhedra, are often needed in addition to those required for the validity of the initial formulation, in order to derive useful dual bounds and structural insights. Alternative formulations are based on various types of decision variables, such as: start date and completion date variables, which simply specify when a task is performed; linear ordering variables, which prescribe the relative order of pairs of tasks; traveling salesman variables, which capture immediate succession of tasks and changeovers; assignment and positional date variables, which specify the assignment of tasks to machines or to positions; and time-indexed variables, which rely on a discretization of the planning horizon, in particular machine switch-on and switch-off variables in production planning and unit commitment in power generation. We point out relationships between the various models, and emphasize the role of supermodular polyhedra and greedy algorithms.
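The simplest instance of the greedy-algorithm/supermodular-polyhedron connection mentioned here is Smith's WSPT rule, which solves the single-machine min-sum problem 1 || sum w_j C_j exactly. A small illustrative sketch (not from the talk), verified against brute force:

```python
import itertools

def total_weighted_completion(seq, p, w):
    """Sum of w_j * C_j when jobs run consecutively from time 0."""
    t, total = 0, 0
    for j in seq:
        t += p[j]
        total += w[j] * t
    return total

def smith_rule(p, w):
    # WSPT: sequence jobs in nonincreasing w_j / p_j, i.e. nondecreasing p_j / w_j.
    return sorted(range(len(p)), key=lambda j: p[j] / w[j])

p = [3, 1, 4, 2, 5]
w = [2, 1, 6, 3, 4]
best = min(total_weighted_completion(s, p, w)
           for s in itertools.permutations(range(len(p))))
assert total_weighted_completion(smith_rule(p, w), p, w) == best
```

In polyhedral terms, the feasible completion-time vectors form a supermodular polyhedron whose linear optimum is attained by exactly this greedy ordering.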
Online optimization refers to solving problems where an initially unknown input is revealed incrementally, and irrevocable decisions must be made not knowing future requests. The assumption of not having any prior knowledge about future requests seems overly pessimistic. Given the success of machine-learning methods and data-driven applications, one may expect to have access to predictions about future requests. However, simply trusting them might lead to very poor solutions, as these predictions come with no quality guarantee. In this talk we present recent developments in the young line of research that integrates such error-prone predictions into algorithm design to break through worst case barriers. We discuss different prediction models and algorithmic challenges with a focus on online scheduling and routing and give an outlook to network design problems.
Keywords: Scheduling under uncertainty, Algorithms with predictions, Learning-augmented algorithms, Non-clairvoyant scheduling
In this lecture, we will present a general landscape of time-dependent scheduling, which is one of the main research domains in modern scheduling theory. The lecture will be divided into three parts. In the first part, we will sketch the main dates in the development of time-dependent scheduling, specify the most important forms of time-dependent processing times and formulate the basic assumptions of time-dependent scheduling. Next, we will present the main results from the area, paying special attention to the applied proof techniques and the mutual relations between different time-dependent scheduling problems. Finally, we will discuss selected open problems in time-dependent scheduling, summarizing the known results for each open problem and indicating possible directions for further research.
Keywords: Time-dependent scheduling, Variable job processing times, Approximation algorithms, Approximation schemes
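To make one classic form of time-dependent processing times concrete: for linearly deteriorating jobs p_j(t) = a_j + b_j * t on a single machine, a pairwise-interchange argument shows that sequencing jobs in nondecreasing order of a_j / b_j minimizes the makespan. A small illustrative sketch (my example, not from the lecture), checked by brute force:

```python
import itertools

def makespan(seq, a, b):
    """Jobs with time-dependent processing times p_j(t) = a_j + b_j * t,
    processed consecutively starting at time 0."""
    t = 0.0
    for j in seq:
        t += a[j] + b[j] * t
    return t

def optimal_seq(a, b):
    # Interchange argument: j before k is better iff a_j / b_j <= a_k / b_k.
    return sorted(range(len(a)), key=lambda j: a[j] / b[j])

a = [3.0, 1.0, 4.0, 2.0]
b = [0.1, 0.5, 0.2, 0.3]
best = min(makespan(s, a, b) for s in itertools.permutations(range(len(a))))
assert abs(makespan(optimal_seq(a, b), a, b) - best) < 1e-9
```

Swapping two adjacent jobs changes the makespan by b_k * a_j - b_j * a_k, independently of the start time, which is what makes the simple ratio rule work.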
The impact of techniques from data science and machine learning on scheduling is investigated. We review a number of recently emerged applications of these techniques that can shed new light on combinatorial optimization in general, and give concrete examples for scheduling in particular. We distinguish on-line techniques, that is, data science techniques integrated into advanced algorithms; off-line techniques, which can be used to improve, select or construct algorithms; as well as techniques that consider the problem as living in a space whose dimensions are set by specific properties of its instances. We give examples of recent results obtained for specific problems in the scheduling domain. The interaction works in both directions: we give an example where a recent theoretical result for a combinatorial optimization problem provides new insights into the structures on which the data science techniques can operate. In this case, the theoretical result allowed us to locate a region of hard problem instances in the instance space.
Keywords: Data Science, Machine learning, Combinatorial optimization, Scheduling
Real-life industrial scheduling problems, especially in the process industries, are very complex as they contain many problem-specific features or rules that are sometimes even difficult to express mathematically. Nonetheless, the requirement to reach optimality or close-to-optimality is often critical for the competitiveness and the survival of the company. As a result, mixed-integer linear programming (MILP) has become the most common tool of choice and can be said to be the "backbone" for many practical scheduling problems. In this presentation, we take an engineering perspective on selected problems and discuss a few examples where energy plays a significant role. The role of energy is in fact growing, and most process industries will in the future be more closely integrated into the energy supply chain. We will discuss a few MILP formulations and methods to improve their performance in practical cases.
Keywords: Energy, MILP, Bilevel optimization, Real-world application
This talk reviews the state of the art in solving scheduling problems with constraint programming and examines novel research directions, with a focus on the integration of constraint programming and mathematical programming, and the use of machine learning. It reviews applications where constraint programming provides an integrated approach to complex problems, the hybridization of constraint programming and mathematical programming, learning-based constraint programming, and optimization proxies.
Mixed-integer linear programming (MILP) is one of the generic modeling and algorithmic solution frameworks for NP-hard scheduling problems, along with Constraint Programming (CP) and SAT solvers. The literature often reports poor results of MILP solvers for resource-constrained scheduling problems compared to CP- or SAT-based approaches such as Lazy Clause Generation. However, even if this is partly true because of the powerful dedicated scheduling algorithms embedded in constraint propagators, MILP approaches can reach very good results in terms of primal and dual bounds if the right formulation and specialized MILP components such as valid inequalities and column generation are chosen for the problem at hand. This talk first reviews the standard MILP formulations for resource-constrained scheduling problems and a few recent advances in the field. In particular, we focus on basic polyhedral results and on the relative relaxation strength of compact and extended formulations augmented with valid inequalities. Finally, we provide examples, including industrial ones, where MILP, possibly integrated in hybrid CP/SAT/MILP methods, appears as a technique of choice.
Keywords: Mixed-integer linear programming (MILP), Resource-constrained project scheduling problem (RCPSP), Solvers, Relaxation
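As a point of reference for the "standard MILP formulations" mentioned above, one classical compact model is the time-indexed (pulse) formulation of the RCPSP. The notation below is generic and not taken from the talk: x_{jt} = 1 iff activity j starts at time t, activity n+1 is a dummy sink, E is the set of precedence pairs, and r_{jk}, R_k are the demand and capacity of resource k.

```latex
\begin{align*}
\min\;& \textstyle\sum_{t} t\, x_{n+1,t} \\
\text{s.t.}\;& \textstyle\sum_{t} x_{jt} = 1 && \forall j \\
& \textstyle\sum_{t} t\, x_{jt} \;\ge\; \sum_{t} t\, x_{it} + p_i && \forall (i,j) \in E \\
& \textstyle\sum_{j} r_{jk} \sum_{\tau = t - p_j + 1}^{t} x_{j\tau} \;\le\; R_k && \forall k,\; \forall t \\
& x_{jt} \in \{0,1\} && \forall j,\; \forall t
\end{align*}
```

Its linear relaxation is typically stronger than that of compact continuous-time models, at the price of pseudo-polynomially many variables; valid inequalities and extended formulations of the kind discussed in the talk build on this trade-off.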
Semiconductors enable the systems and products that we use to work, communicate, travel, entertain, harness energy, treat illness, make new scientific discoveries, and more. Semiconductor manufacturing is among the most complex manufacturing systems in existence today. This complexity makes scheduling semiconductor manufacturing systems extremely difficult. In this two-part seminar, we discuss scheduling problems in this challenging domain. In the second part, we discuss batch scheduling problems, multiple orders per job scheduling problems, and scheduling problems that include time lags for the jobs. Moreover, scheduling problems related to cluster tools will be also briefly considered. Solution techniques, mainly based on problem decomposition and metaheuristics, will be also discussed. Finally, future research direction for scheduling semiconductor wafer fabrication facilities will be identified.
Semiconductors enable the systems and products that we use to work, communicate, travel, entertain, harness energy, treat illness, make new scientific discoveries, and more. Semiconductor manufacturing is among the most complex manufacturing systems in existence today. This complexity makes scheduling semiconductor manufacturing systems extremely difficult. In this two-part seminar, we discuss scheduling problems in this challenging domain. In the first part, we describe the manufacturing process and identify typical scheduling problems found in semiconductor manufacturing systems at the workstation, work area, factory, and supply chain levels. We also discuss approaches for scheduling problems with secondary resources and for factory level scheduling.
Steelmaking-Continuous Casting (SCC) is a bottleneck in the steel production process and its scheduling has become more challenging over time. In this work, we provide an extensive literature review that highlights challenges in SCC scheduling and compares existing solution methods. From the literature review, we collect the essential features of an SCC process, such as unrelated parallel machine environments, stage skipping, and maximum waiting time limits between successive stages. We consider an SCC scheduling problem whose objective is to minimise the weighted sum of cast break penalties, total waiting time, total earliness, and total tardiness. We formulate the problem as a mixed-integer linear programming model and develop an iterated greedy matheuristic that solves its subproblems to find a near-optimal solution. Through numerical experiments, we show that our algorithm outperforms two types of genetic algorithms when applied to test instances.
The public transit operations planning process commonly includes the following activities: network route design, service planning (frequency setting and timetabling) and scheduling (vehicle scheduling, crew scheduling and rostering). However, network route design is generally the only one widely recognized, whilst service planning and scheduling are often ignored in China. This leads to a lack of elaborate timetables and schedules; hence, transit operation is often in disorder, with high operating costs. To raise the service level and the utilization of resources, an applied study for three cities in China has been conducted, focusing on enhancing the recognition and execution of public transit planning and scheduling. A comprehensive framework of public transit planning is first proposed, which is composed of three traditional Chinese items (i.e. network route design, land use for depots, and deployment of vehicles) and the following newly added items: intelligent public transit system (iPTS) planning, service planning, and scheduling. This is pioneering work in China, during which an iPTS plan is conceived and a new vehicle scheduling approach based on AVL data is developed. Experiments during actual projects show that vehicle schedules with high on-time probability and low cost were compiled, while essential input parameters such as headways and trip times were set automatically. It is anticipated that the research results and practical experience obtained will be of great benefit in improving service and management levels and resource use in public transport in China and some other developing countries.
Public transit planning, Vehicle scheduling, Timetabling, AVL data
The main goal of parameterized complexity is to design algorithms that can solve (in reasonable time) hard problems in cases where some predefined problem parameters are of limited size. This theory was developed in the early 1990s and has contributed many new techniques to the area of algorithm design ever since. In this talk we survey the main aspects of parameterized complexity and highlight its applicability to the area of scheduling. We also discuss some challenges and open problems for future research.
Scheduling, NP-hard, Fixed-parameter tractability (FPT), Algorithmic design
Many scheduling problems are simply too hard to be solved exactly, especially for instances of medium or large size or when realistic constraints are present. As a result, the literature on heuristics and metaheuristics for scheduling is extensive. More often than not, metaheuristics are capable of generating solutions close to optimality, or to tight lower bounds, for instances of realistic size in a matter of minutes. Metaheuristics have been refined over the years, and hundreds of papers with applications to most domains are published every year in many different journals. Regrettably, some of these methods are complex in the sense that they have many parameters that affect performance and hence need careful calibration. Furthermore, published results are often hard to reproduce due to specific speed-ups or complicated software constructs. Such complex methods are difficult to transfer to industry for scheduling problems. Another important concern is the recently recognized "tsunami" of novel metaheuristics that mimic the most bizarre natural or human processes, for example intelligent water drops, harmony search, firefly algorithms and the like. See K. Sörensen, "Metaheuristics—The Metaphor Exposed" (2015), ITOR 22(1):3-18. In this presentation, we briefly review different flowshop problems and variants: from the basic flowshop problem with makespan minimization to other objectives like flowtime minimization, flowshops with sequence-dependent setup times, no-idle flowshops, all the way up to complex hybrid flexible flowline problems. We will show how simple Iterated Greedy (IG) algorithms often outperform much more complex approaches. IG methods are inherently simple, with very few parameters; they are easy to code and their results are easy to reproduce. We will show that, for all problems tested so far, they deliver state-of-the-art performance despite their simplicity. As a result, we defend the choice of simple, well-performing approaches over complicated metaphor-based algorithms.
Scheduling, Flowshop, Heuristics, Metaheuristics
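To make the Iterated Greedy scheme described above concrete, here is a minimal Python sketch for the permutation flowshop with makespan minimisation. The function names, the tiny destruction size and the accept-only-improving criterion are illustrative simplifications; published IG variants typically add a simulated-annealing-style acceptance and a local search phase.

```python
import random

def makespan(seq, p):
    """Completion time of the last job on the last machine for a
    permutation flowshop; p[job][machine] are processing times."""
    m = len(p[0])
    c = [0] * m
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def best_insertion(seq, job, p):
    """Insert `job` at the position of `seq` minimising the makespan."""
    best_pos, best_val = 0, None
    for i in range(len(seq) + 1):
        val = makespan(seq[:i] + [job] + seq[i:], p)
        if best_val is None or val < best_val:
            best_pos, best_val = i, val
    return seq[:best_pos] + [job] + seq[best_pos:]

def iterated_greedy(p, d=2, iters=200, seed=0):
    """Basic IG: NEH-style construction, then repeated destruction
    (remove d random jobs) and greedy reconstruction."""
    rng = random.Random(seed)
    n = len(p)
    # NEH construction: sort by total processing time, insert greedily.
    seq = []
    for j in sorted(range(n), key=lambda j: -sum(p[j])):
        seq = best_insertion(seq, j, p)
    best = seq[:]
    for _ in range(iters):
        partial = seq[:]
        removed = [partial.pop(rng.randrange(len(partial))) for _ in range(d)]
        for j in removed:
            partial = best_insertion(partial, j, p)
        # Simplest acceptance criterion: keep non-worsening solutions.
        if makespan(partial, p) <= makespan(seq, p):
            seq = partial
        if makespan(seq, p) < makespan(best, p):
            best = seq[:]
    return best, makespan(best, p)
```

Even this bare-bones version has only two parameters (destruction size d and the iteration budget), which is precisely the simplicity argument made in the talk.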
We consider a single-machine scheduling problem where every job has a processing time and a priority weight, and the objective is to minimize the total weighted sum of completion times. The novelty is that the job characteristics are initially revealed to the algorithm only imprecisely. Tests can be performed on chosen jobs to learn their precise values, allowing a better ordering of the jobs in the schedule. These tests, however, take time, delaying the subsequent schedule. The algorithm must produce a schedule consisting of executions of all jobs and tests of some jobs. We will present three different models that have been studied in this context, as well as the results obtained for each of them. The talk covers papers authored by Levi, Magnanti and Shaposhnik; by C.D., Thomas Erlebach, Nicole Megow and Julie Meißner; and by Fanny Dufossé, C.D., Noël Nadal, Denis Trystram and Óscar C. Vásquez.
Scheduling, Uncertainty, Sum of completion times, Processing time oracle
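For reference, when all processing times and weights are known exactly, the underlying problem (minimising the total weighted completion time on one machine) is solved by Smith's WSPT rule, which is the baseline the testing models extend. A minimal sketch, with illustrative identifiers and assuming strictly positive weights:

```python
def wspt_schedule(jobs):
    """Smith's rule: sequence jobs in non-decreasing p/w order; this is
    optimal for 1 || sum w_j C_j with exact data.
    `jobs` is a list of (processing_time, weight) pairs, weights > 0."""
    order = sorted(range(len(jobs)), key=lambda j: jobs[j][0] / jobs[j][1])
    t, total = 0, 0
    for j in order:
        t += jobs[j][0]          # completion time of job j
        total += jobs[j][1] * t  # its weighted contribution
    return order, total
```

When processing times are only known as intervals, this ordering can no longer be computed exactly, which is exactly where the testing decisions of the talk come in.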
Travel times inside cities often vary considerably during the day and significantly impact the duration of delivery routes. Some authors have proposed time-dependent (TD) variants of several vehicle routing problems (VRPs), including the VRP with time windows (VRPTW). In most papers, time-dependency is defined on customer-based graphs. Thus, a major effect of travel time variations is missed: in an urban environment, not only do travel times change, but so do the paths used to travel from one customer to another. To address this issue, we work directly with the road network and consider travel time (or travel speed) variations on each road segment. We present a solution approach, based on tabu search, for a TDVRPTW in which travel speeds are associated with segments of the road network. Computational results on instances with up to 200 nodes and 580 arcs are reported and assessed. (Joint work with Maha Gmira, Andrea Lodi, and Jean-Yves Potvin.)
Routing, Time windows, Time-dependent travel times, Road network
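A core ingredient of such segment-based models is computing travel times under piecewise-constant speeds in a way that respects the FIFO property (departing later never yields an earlier arrival), in the spirit of the standard construction used in the TD-VRP literature. The interface below is an illustrative sketch, not the authors' implementation:

```python
from bisect import bisect_right

def td_travel_time(depart, length, breakpoints, speeds):
    """Travel time over a road segment of `length` when speed is
    piecewise constant in time.  Period i starts at breakpoints[i] and
    has speed speeds[i]; the last period extends indefinitely.  The
    trip is advanced period by period, which preserves FIFO."""
    t = depart
    remaining = length
    i = max(bisect_right(breakpoints, t) - 1, 0)
    while True:
        v = speeds[i]
        end = breakpoints[i + 1] if i + 1 < len(breakpoints) else float("inf")
        if t + remaining / v <= end:      # trip ends within this period
            return t + remaining / v - depart
        remaining -= v * (end - t)        # drive until the speed changes
        t = end
        i += 1
```

For example, a 15 km segment with speed 1 km/min before time 10 and 2 km/min afterwards takes 12.5 minutes when leaving at time 0, but only 10 minutes when leaving at time 5, and the later departure still arrives later, as FIFO requires.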
Train scheduling is one of the most critical planning tasks required to run a railway, with most rail operators and managers having large departments devoted to this task. Depending on the time scale, we have two main scheduling problems. At the strategic and tactical levels, the train timetabling problem consists in finding feasible, robust schedules that are usable for months or years into the future. At the operational level, we have the train re-scheduling problem, where one wants to schedule trains in real time in order to tackle deviations from the original timetable, minimizing delays and knock-on effects. Both problems share a common core model, which is a job-shop scheduling model with no-wait and blocking constraints. The core problem can be modeled as a disjunctive program. After an illustration of the train scheduling application, I will present a basic MILP formulation for the disjunctive program. It turns out, however, that even small to medium size real-life instances cannot be solved by simply instantiating this formulation and invoking a state-of-the-art MILP solver. I will then go through two recent reformulations, which allow us to significantly increase the size of tractable instances. The first is obtained from the classical Benders' reformulation by strengthening its standard constraints. The second is often referred to as "Logic Benders' Reformulation" and exploits a natural, spatial decomposition of the railway network. I will then show the strong link between these reformulations. I will conclude the talk by presenting a practical application of the described approaches to a traffic management system controlling trains in the greater Oslo region network. The system is currently undergoing a field-test campaign at the Oslo control center.
Train scheduling, Job-shop scheduling, Combinatorial optimization, Integer programming
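As background, the disjunctive program mentioned above is usually written as a big-M mixed-integer program. The following is the generic textbook form (with $t_i$ the start time of operation $i$, $p_i$ its running time, $A$ the fixed precedences, $D$ the pairs of operations competing for the same resource, and $M$ a large constant), not necessarily the exact formulation used in the talk:

```latex
\begin{align*}
\min\;& C_{\max} \\
\text{s.t.}\;
  & t_j \ge t_i + p_i
      && \text{for each fixed precedence } (i,j) \in A,\\
  & t_i + p_i \le t_j + M\,(1 - y_{ij})
      && \text{for each conflicting pair } \{i,j\} \in D,\\
  & t_j + p_j \le t_i + M\,y_{ij}
      && \text{for each conflicting pair } \{i,j\} \in D,\\
  & C_{\max} \ge t_i + p_i
      && \text{for every operation } i,\\
  & y_{ij} \in \{0,1\}, \quad t_i \ge 0.
\end{align*}
```

Here $y_{ij} = 1$ means operation $i$ precedes $j$ on their shared resource (for trains, a track section); the no-wait and blocking features of the railway model additionally tie consecutive operations of the same train, e.g. forcing $t_j = t_i + p_i$ for a no-wait pair. The weak linear relaxation induced by the big-M constraints is one reason plain MILP solving scales poorly, motivating the Benders-style reformulations discussed in the talk.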
This talk will discuss a model for augmenting algorithms with useful predictions that go beyond worst-case bounds on algorithm performance. The model ensures predictions are formally learnable and instance robust. Learnability guarantees that predictions can be efficiently constructed from past data. Instance robustness formally ensures that a prediction remains useful under modest changes to the problem input. The talk will discuss predictions that satisfy these properties for scheduling and resource augmentation. The algorithms developed break through worst-case barriers when predictions are accurate and degrade gracefully in performance as the prediction error grows.
Scheduling, Learning augmented, Optimization, Sample complexity
We consider the P||Cmax scheduling problem, where the goal is to schedule n jobs on m identical parallel machines so as to minimize the makespan. We revisit the famous Longest Processing Time (LPT) rule proposed by Graham in 1969. LPT sorts the jobs in non-ascending order of processing times and then assigns one job at a time to the machine whose load is smallest so far. We provide new insights into LPT and discuss the approximation ratio of a modification of LPT that improves Graham's bound. We use linear programming to analyze the approximation ratio of our approach. This performance analysis can be seen as a valid alternative to formal proofs based on analytical derivation. We also derive from the proposed approach an O(n log n) time heuristic called SLACK. The heuristic splits the sorted job set into tuples of m consecutive jobs (1,...,m; m+1,...,2m; etc.) and sorts the tuples in non-increasing order of the difference (SLACK) between the largest and smallest job in each tuple. Given this new ordering of the job set, list scheduling is applied. This approach strongly outperforms LPT on benchmark instances from the literature and is competitive with more involved approaches such as COMBINE and LDM.
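The LPT and SLACK rules described above are simple enough to sketch directly; the following Python version follows the abstract's description (the function names and the tiny instances in the comments are illustrative, not from the paper):

```python
def list_schedule(order, p, m):
    """Assign jobs in the given order, one at a time, to the currently
    least-loaded of m identical machines; return the makespan."""
    loads = [0] * m
    for j in order:
        k = loads.index(min(loads))
        loads[k] += p[j]
    return max(loads)

def lpt(p, m):
    """Graham's Longest Processing Time rule."""
    return list_schedule(sorted(range(len(p)), key=lambda j: -p[j]), p, m)

def slack(p, m):
    """SLACK heuristic as sketched in the abstract: sort jobs by
    non-increasing processing time, split into tuples of m consecutive
    jobs, reorder tuples by non-increasing (largest - smallest) job
    difference, then list-schedule the concatenation."""
    order = sorted(range(len(p)), key=lambda j: -p[j])
    tuples = [order[i:i + m] for i in range(0, len(order), m)]
    tuples.sort(key=lambda t: -(p[t[0]] - p[t[-1]]))
    flat = [j for t in tuples for j in t]
    return list_schedule(flat, p, m)
```

On the classic bad instance for LPT with m = 2 and processing times (3, 3, 2, 2, 2), LPT yields makespan 7 against the optimum 6, illustrating Graham's 7/6 bound for two machines.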
Given an input of a scheduling problem, any non-preemptive solution for it can also be used as a preemptive solution. Thus, the optimal cost of a preemptive solution is no larger than that of an optimal non-preemptive solution. As preemption comes at a cost in real-life applications, it is of interest to find the worst-case ratio between the two costs. For a given problem, the supremum, over all possible inputs, of the ratio between the two costs (that of an optimal solution without preemption and that of an optimal solution that may use preemption) is called the power or benefit of preemption. While many scheduling variants can be studied with respect to this measure, we will focus on the cases of a single machine, parallel identical machines, and uniformly related machines, and we will discuss the objectives of makespan and total (weighted) completion time. We will exhibit how one can benefit from preemption and analyze the resulting worst-case ratios for several basic models.
Preemptive scheduling, Uniformly related machines, Total completion time, Makespan
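The makespan objective on identical machines gives a concrete feel for the measure: the optimal preemptive makespan is given by McNaughton's bound, while for tiny instances the non-preemptive optimum can be found by brute force. For this setting the worst-case ratio is known to be 2 - 2/(m+1), attained by m+1 unit jobs. A small sketch (identifiers are illustrative):

```python
from itertools import product

def preemptive_makespan(p, m):
    """McNaughton's bound: the optimal preemptive makespan on m
    identical machines is max(largest job, total work / m)."""
    return max(max(p), sum(p) / m)

def nonpreemptive_makespan(p, m):
    """Optimal non-preemptive makespan by brute-force enumeration of
    all machine assignments (tiny instances only)."""
    best = float("inf")
    for assign in product(range(m), repeat=len(p)):
        loads = [0] * m
        for j, k in enumerate(assign):
            loads[k] += p[j]
        best = min(best, max(loads))
    return best
```

For three unit jobs on two machines the preemptive optimum is 3/2 while the non-preemptive optimum is 2, giving ratio 4/3 = 2 - 2/3, the worst case for m = 2.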
We propose an alternative design for tournaments that use a preliminary stage, followed by several rounds of single elimination play. Most U.S. major sports, for example, are organized in this way. However, the conventional "bracket" design of these tournaments suffers from several deficiencies. First, top ranked players randomly incur unfortunate matchups against other players, which introduces an unnecessary element of luck. Second, as documented in the tournament design literature, various reasonable criteria such as stronger ranked players having a higher probability of winning, are not satisfied. Third, the probability that the top two players meet is not maximized. Fourth, there is the widely observed issue of shirking at the preliminary stage, where a player loses deliberately to obtain an easier path through the tournament. Finally, the use of a conventional fixed bracket fails to allow players to consider information that develops during the tournament, such as injuries to other players. To address all these issues, we allow higher ranked players at the single elimination stage to choose their next opponent at each round. We allow each player's ranking either to remain static, or to improve from beating a higher ranked player. Using data from 1,902 men's professional tennis tournaments from 2001-2016, we demonstrate the reasonableness of the results obtained. We also perform sensitivity analysis for the effect of increasing irregularity in the pairwise win probability matrix on three traditional performance measures. Finally, we show that our opponent choice design reduces shirking, and could have eliminated it in some notorious situations. In summary, compared with the conventional design, the opponent choice design provides higher probabilities that the best player wins and also that the two best players meet, reduces shirking, and performs well for preservation of ranking.
Multiple round sports tournament, Choice of opponent, Performance criteria, Professional tennis
Educational timetabling problems consist in scheduling a sequence of events (lectures, seminars, or exams) involving teachers and students within a predefined period of time, satisfying a set of constraints of various types. In this talk, we critically review different formulations, public datasets, and search methods. In particular, we illustrate local search methods, their parameter-tuning procedures, and their results. Finally, we discuss practical issues involved in the actual solution of timetabling problems.
Timetabling, Scheduling, Optimization, Metaheuristics
In the immortal words of Monty Python, “… and now for something completely different!” Over the past three decades, I have spent much of my time working on practical healthcare applications. Typically, the projects are done with healthcare collaborators. Virtually all of the scheduling problems are highly stochastic, and scheduling approaches focus on managing variability. In this talk, I will describe several healthcare applications including: diagnostic imaging, cancer treatment (chemotherapy and radiation), nurse/physician scheduling, surgical scheduling, 911 call centres, home care routing, medical resident scheduling and primary care appointments (e.g. advanced access). In each case, I will describe the underlying uncertainties and briefly review recent approaches.
Healthcare scheduling applications, Diagnostic imaging, Cancer treatment (chemotherapy and radiation), Nurse/physician scheduling
In the primary-secondary scheduling problem, we have a primary scheduling criterion and a secondary scheduling criterion. The goal is to find a schedule that minimizes the secondary criterion, subject to the restriction that the primary criterion is minimized. Lee and Vairaktarakis [LV1993] presented a comprehensive review of the computational complexity of single-machine primary-secondary scheduling problems in which all jobs are released at time zero. When both criteria are regular, more than twenty problems were posed as open in [LV1993]. This talk will report on the progress made on these open problems.

Jinjiang Yuan (J.J. Yuan) is a Professor in the School of Mathematics and Statistics, Zhengzhou University, Zhengzhou, Henan, The People's Republic of China. He received his doctorate in science from Sichuan University in 1995. His principal research interests include scheduling, graph theory, and combinatorial optimization. He has published more than 200 papers in Mathematical Programming, European Journal of Operational Research, Journal of Scheduling, Journal of Graph Theory, Operations Research Letters, and other journals. He has led 8 projects funded by the National Natural Science Foundation of China.
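As an illustration of the hierarchical objective (not one of the open problems themselves), one can enumerate all sequences of a tiny single-machine instance, taking for instance maximum lateness as the primary criterion and total completion time as the secondary. The setup below is a hypothetical sketch:

```python
from itertools import permutations

def completions(seq, jobs):
    """Completion time of each job; jobs[j] = (p_j, d_j)."""
    t, out = 0, {}
    for j in seq:
        t += jobs[j][0]
        out[j] = t
    return out

def max_lateness(seq, jobs):
    c = completions(seq, jobs)
    return max(c[j] - jobs[j][1] for j in c)

def total_completion(seq, jobs):
    return sum(completions(seq, jobs).values())

def lex_best(jobs, primary, secondary):
    """Among all sequences minimising the primary criterion, return one
    minimising the secondary (lexicographic comparison of the pair)."""
    best = None
    for seq in permutations(range(len(jobs))):
        key = (primary(seq, jobs), secondary(seq, jobs))
        if best is None or key < best[0]:
            best = (key, seq)
    return best
```

On the instance (p, d) = (3, 6), (1, 4), (2, 9), the lexicographically best sequence is 2-1-3 with maximum lateness -2 and total completion time 11; note that the SPT sequence would have a smaller total completion time (10) but a worse maximum lateness, which is exactly the tension the hierarchical model resolves.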
The majority of publications in the extensive literature on resource-constrained project scheduling focus on a static, deterministic setting for which a so-called baseline schedule is computed prior to project execution. In the real world, however, a project may be subject to considerable uncertainty. During the actual execution of a project, the baseline schedule may indeed suffer from disruptive events, causing the actually realized activity start times to deviate from the predicted baseline start times. This presentation focuses on robust project scheduling, in particular the development of effective and efficient proactive and reactive scheduling procedures. Proactive scheduling aims at generating robust baseline schedules that carry sufficient protection against possible schedule disruptions that may occur during project execution. Reactive scheduling procedures aim at repairing the baseline schedule when the built-in protection fails during the execution of the project. We discuss the fundamentals of state-of-the-art proactive/reactive project scheduling approaches and key directions for future research.
Project scheduling, Stochastic activity durations, Proactive/reactive scheduling, Markov decision process
Machine scheduling problems are among the first optimization problems for which approximation algorithms have been analyzed. An approximation algorithm is a polynomial-time algorithm which always finds a feasible solution whose objective function value is within an a priori known factor (performance ratio) of the optimum solution value. In this talk we focus on identical parallel machine scheduling with total weighted completion time objective. We present, among other things, a refined analysis of the performance ratio for the weighted shortest processing time first (WSPT) rule. This is joint work with Sven Jäger.
Machine scheduling, List scheduling, Approximation algorithm, Performance guarantee
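The WSPT rule analysed in the talk is straightforward to state as list scheduling: process jobs in non-decreasing ratio of processing time to weight, always assigning the next job to the currently least-loaded machine. A minimal sketch, assuming strictly positive weights (identifiers are illustrative):

```python
def wspt_parallel(jobs, m):
    """WSPT list scheduling on m identical parallel machines;
    returns the total weighted completion time.
    `jobs` is a list of (processing_time, weight) pairs, weights > 0."""
    order = sorted(range(len(jobs)), key=lambda j: jobs[j][0] / jobs[j][1])
    loads = [0] * m
    total = 0
    for j in order:
        k = loads.index(min(loads))
        loads[k] += jobs[j][0]          # completion time of job j
        total += jobs[j][1] * loads[k]  # its weighted contribution
    return total
```

The rule is optimal on a single machine; on parallel machines it is only an approximation, and sharpening its classically studied performance ratio is the subject of the refined analysis mentioned above.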
This talk discusses some interesting topics on scheduling and data analytics of production, logistics and energy in the steel industry, including: 1) production batching and scheduling in steelmaking/continuous casting, and hot/cold rolling operations; 2) logistics scheduling in storage/stowage, shuffling, transportation and (un)loading operations; 3) energy optimization including energy allocation, coordinated planning and scheduling of production and energy; 4) data based analytics, including dynamic analytics of BOF steelmaking process based on multi-stage modeling; temperature prediction of blast furnace; temperature prediction of molten iron in transportation process; energy analytics for estimation, prediction of generation and consumption, diagnosis and benchmarking; temperature prediction of reheat furnace based on mechanism and data; strip quality analytics of continuous annealing based on multi-objective ensemble learning; process monitoring and diagnosis of continuous annealing based on mechanism and data.

Professor Lixin Tang is the Vice President of Northeastern University, China, a member of Chinese Academy of Engineering, the Director of Key Laboratory of Data Analytics and Optimization for Smart Industry, Ministry of Education, China, the Head of Centre for Artificial Intelligence and Data Science, and the Chair Professor of Centre for Industrial Intelligence and Systems Optimization. He is also a member of the discipline review group of the State Council for Control Science & Engineering, a member of review group of National Natural Science Foundation, the Vice President of Operations Research Society of China (ORSC), and the Founding Director of Data Analytics and Optimization Society for Smart Industry for ORSC.
Production scheduling, Logistics scheduling, Energy scheduling, Data analytics
This talk considers stochastic scheduling, where job sizes and arrival times are drawn from a distribution. As empirical job size variability has skyrocketed, stochastic scheduling research has grown increasingly important. What scheduling policies should we use to keep response times low? How should we schedule when job sizes are unknown or only partially known? What scheduling policies should we use in a multi-server (M/G/k) setting, as compared with a single-server (M/G/1) setting? How can we analyze the response times of scheduling policies in single-server and multi-server settings? In this talk, we discuss recent breakthroughs over the last 3 years in the area of stochastic scheduling. These include: (1) The SOAP scheduling framework, which greatly expands the class of scheduling policies whose response times we can now analyze in the M/G/1 setting. (2) The first response time analysis of common scheduling policies in the M/G/k. (3) Asymptotically optimal scheduling in the M/G/k. Joint work with: Ziv Scully, Isaac Grosof
Stochastic scheduling, Job scheduling, M/G/1, M/G/k
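A touchstone for the response-time analyses discussed above is the single-server FCFS case, where the mean response time of the M/G/1 queue is given in closed form by the Pollaczek-Khinchine formula; richer policies (and the SOAP framework) generalise this kind of analysis. A small sketch (the function name is illustrative):

```python
def mg1_fcfs_response(lam, es, es2):
    """Mean response time of an M/G/1 queue under FCFS, via the
    Pollaczek-Khinchine formula:
        E[T] = E[S] + lam * E[S^2] / (2 * (1 - rho)),  rho = lam * E[S].
    lam: arrival rate; es, es2: first and second moments of the service
    time.  Requires rho < 1 for stability."""
    rho = lam * es
    if rho >= 1:
        raise ValueError("unstable queue: utilisation rho must be < 1")
    return es + lam * es2 / (2 * (1 - rho))
```

As a sanity check, with exponential service times of rate 1 (so E[S] = 1, E[S^2] = 2) and arrival rate 0.5, the formula reproduces the M/M/1 value 1/(mu - lam) = 2. The formula also makes the abstract's point about variability concrete: the E[S^2] term means heavier-tailed job sizes inflate response times even at fixed utilisation.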