List of Abstracts
The k-Sparsest Subgraph in (proper) Interval Graphs
Given a graph G=(V, E) and an integer k, the k-Sparsest Subgraph (k-SS) problem asks for a set of k vertices in G that induces the minimum number of edges. As a generalization of the classical Independent Set problem, k-SS is W[1]-hard as well as inapproximable in general graphs. In this talk we provide an FPT algorithm for interval graphs and a Polynomial-Time Approximation Scheme for proper interval graphs, both using dynamic programming.
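To make the objective concrete, here is a minimal brute-force sketch of k-SS (our own illustration, exponential in k; it is not the FPT or PTAS algorithm from the talk):

```python
from itertools import combinations

def sparsest_k_subgraph(vertices, edges, k):
    """Exhaustively find k vertices inducing the fewest edges.

    Exponential in k -- only meant to illustrate the problem on tiny
    instances, not to reproduce the algorithms from the talk.
    """
    edge_set = {frozenset(e) for e in edges}
    best, best_count = None, float("inf")
    for subset in combinations(vertices, k):
        induced = sum(1 for pair in combinations(subset, 2)
                      if frozenset(pair) in edge_set)
        if induced < best_count:
            best, best_count = subset, induced
    return best, best_count

# A path on 4 vertices: the two endpoints induce 0 edges.
best, count = sparsest_k_subgraph([1, 2, 3, 4],
                                  [(1, 2), (2, 3), (3, 4)], 2)
print(count)  # 0
```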
Deciding priority of rail mounted gantries when transport job sequences are given
Crane scheduling in container terminals is known as a difficult optimization problem that has become even more challenging in recent years with the proliferation of multi-gantry automated stacking cranes. In this paper we present an efficient algorithm solving a subproblem arising in this context, namely deciding the priority of cranes after transportation tasks have been assigned to cranes and sequenced. We tackle this problem for both the twin-crane and the crossover-crane settings, and develop graphical models and strongly polynomial algorithms accordingly.
A heuristic algorithm for partitioning networks of processes under chance constraints
Coauthors: Dritan Nace, Oana Stan, and Renaud Sirdey
In this talk, we study the problem of partitioning networks of processes under chance constraints. This problem arises in the field of compilation for multicore processors. The theoretical equivalent of the case we consider is Node Capacitated Graph Partitioning with uncertainty affecting the weights of the vertices. For solving this problem, we propose an approximate algorithm which takes advantage of the available experimental data through a sample-based approach combined with a randomized greedy heuristic, originally developed for the deterministic version. Our computational experiments illustrate the algorithm's ability to efficiently obtain robust solutions.
Throughput Optimization for Pipeline Workflow Scheduling with Setup Times
Coauthors: Anne Benoit, Jean-Marc Nicod, Laurent Philippe, and Veronika Rehn-Sonigo
We tackle pipeline workflow applications that are executed on a distributed platform with setup times. Several computation stages are interconnected as a linear application graph; each stage holds a buffer of limited size where intermediate results are stored, and a processor setup time occurs when passing from one stage to another. In this paper, we focus on interval mappings (consecutive stages mapped onto the same processor), and the objective is throughput optimization. Even when neglecting setup times, the problem is NP-hard on heterogeneous platforms, and we therefore restrict ourselves to homogeneous resources. We provide an optimal algorithm for configurations with identical buffer capacities. When buffer sizes are not fixed, we deal with the problem of allocating the buffers in shared memory and present a b/(b + 1)-approximation algorithm.
Optimizing Monotonic Functions using Pareto Approximation: Application to Fairness in Multi-User Scheduling Problem
Coauthors: Nikhil Bansal, Erik Saule, and Denis Trystram
To achieve the highest performance of modern computer systems, we usually need to solve combinatorial optimization problems using complex functions that combine different quantities. Each quantity is often easy to optimize by itself, but when they are put together to express a complex actual application, classical optimization techniques fail to deliver an efficient solution. In this work we present a generic method to optimize monotonic objective functions by approximating the Pareto set of the multi-objective problem in which each parameter is an objective function. Essentially, the method enumerates the frontier of best compromise solutions between these quantities and selects the solution that yields the best value for the function to optimize. Such an approach would be exponential in the general case. However, we show how classical tools from approximation theory help to address the problem from both theoretical and practical perspectives. We illustrate our technique on a multi-user scheduling problem by optimizing the fairness among users.
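The core idea, scanning the Pareto front and picking the point that minimizes the monotonic function, can be sketched as follows; the helper names and the naive quadratic front computation are our own illustration, not the authors' approximation scheme:

```python
def dominates(q, p):
    """q dominates p if q is no worse in every coordinate and
    strictly better in at least one (minimization)."""
    return (all(a <= b for a, b in zip(q, p))
            and any(a < b for a, b in zip(q, p)))

def pareto_front(points):
    """Keep only the non-dominated points (naive O(n^2) scan)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

def optimize_monotonic(points, f):
    """A monotonic objective attains its optimum on the Pareto front,
    so it suffices to scan the (possibly approximate) front."""
    return min(pareto_front(points), key=f)

pts = [(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]
# f = max of the two criteria, which is monotonic in both arguments.
print(optimize_monotonic(pts, lambda p: max(p)))  # (2, 2)
```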
Mind the Gap: A Study of Tube Tour
Coauthors: Dawid Kowalski, Jan Mizgajski, Dariusz Mokwa, and Grzegorz Pawlak
The problem considered can be expressed as a question: Is it possible to visit all Tube lines in a day? This is a new type of combinatorial optimization problem which generalizes classic problems like the TSP and set cover, and has ties with operations research applications. We call the graphs corresponding to city railway systems subway graphs. Examples and properties of such graphs are described. We show that a decision version of our problem is NP-complete. Algorithms solving the problem are proposed and their performance is studied both analytically and experimentally on the transportation networks of several big cities of the world.
Reliable Service Allocation in Clouds
We consider several reliability problems that arise when allocating applications to processing resources in a Cloud computing platform. More specifically, we assume on the one hand that each computing resource is associated with a capacity constraint and a probability of failure. On the other hand, we assume that each service runs as a set of independent instances of identical Virtual Machines, and that the Service Level Agreement between the Cloud provider and the client states that a minimal number of instances of the service should run with a given probability. In this context, given the capacity and failure probabilities of the machines, and the capacity and reliability demands of the services, the question for the cloud provider is to find an allocation of the instances of the services (possibly using replication) onto machines satisfying all types of constraints during a given time period. In this talk, we assess the impact of the reliability constraint on the complexity of resource allocation problems. We consider several variants of this problem, depending on the number of services and whether their reliability demand is individual or global. We prove several fundamental complexity results (#P-hardness and NP-completeness results) and we provide several optimal and approximation algorithms. In particular, we prove that a basic randomized allocation algorithm, which is easy to implement, provides optimal or quasi-optimal results in several contexts, and we show through simulations that it also achieves very good results in more general settings.
Bridging the Gap between Theory and Application in Multi-Criteria Scheduling
Coauthor: Joachim Lepping
Considering multiple criteria in scheduling usually leads to NP-hard problems when we strive for the Pareto set, i.e., the set of optimal compromise solutions regarding all criteria. Although NP-hardness itself is common in the domain of scheduling, we have a lot of good order-based rules (e.g., EDD, SPT, LPT, or more complex priority rules). Further, there exist theoretical insights into the problem structure that allow us to design algorithms to heuristically solve NP-hard single-criterion scheduling problems. In contrast, approaches to multi-criteria scheduling problems hardly ever benefit from the existing single-criterion findings. It is usually unknown how to combine the single-criterion knowledge to produce optimal compromises. In practical applications, this leads either to simplification of the original problem or to a complex and very specific adaptation of a generic algorithm. However, neither way fulfills the user requirements for completeness of results and ease of use. In this talk, we motivate the integration of available knowledge or expertise as a major challenge in bridging the gap between single-criterion scheduling theory and its application in multi-criteria scheduling. We highlight this gap by discussing the issues of standard approaches (e.g., epsilon constraint, zenith point method) and generic methods from evolutionary computation. As a solution to these issues, we propose an agent-based approach inspired by evolutionary computation that may lead to a rather flexible framework for integrating single-criterion expertise into a generic multi-criteria solution strategy. Finally, we present some promising results that emphasize the potential of this approach in practice and show perspectives for further efforts.
Evolutionary Scheduling of Parallel Tasks Graphs onto Homogeneous Clusters
Coauthor: Joachim Lepping
Parallel task graphs (PTGs) arise when parallel programs are combined into larger applications, e.g., scientific workflows. Scheduling these PTGs onto clusters is a challenging problem due to the additional degree of parallelism stemming from moldable tasks. Most algorithms are based on the assumption that the execution time of a parallel task is monotonically decreasing as the number of processors increases. But this assumption does not hold in practice, since parallel programs often perform better if the number of processors is a multiple of internally used block sizes. In this article, we introduce the Evolutionary Moldable Task Scheduling (EMTS) algorithm for scheduling static PTGs onto homogeneous clusters. We apply an evolutionary approach to determine the processor allocation of each task. The evolutionary strategy ensures that EMTS can be used with any underlying model for predicting the execution time of moldable tasks. With the purpose of finding solutions quickly, EMTS considers results of other heuristics (e.g., HCPA, MCPA) as starting solutions. The experimental results show that EMTS significantly reduces the makespan of PTGs compared to other heuristics for both non-monotonically and monotonically decreasing models.
Exact Algorithms for Inventory Constrained Scheduling on a Single Machine
Coauthors: Dirk Briskorn and Erwin Pesch
We consider a single machine scheduling problem subject to inventory constraints. Jobs add items to or remove items from the inventory. Jobs that remove items cannot be processed if the required number of items is not available. The objective is to minimize the total weighted completion time. We develop properties of optimal solutions and design a branch and bound algorithm and a dynamic programming algorithm with two extensions. We compare the approaches in our computational study and empirically derive parameter settings leading to instances that are hard to solve.
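A sketch of how a candidate sequence can be checked against the inventory constraint and scored; the job encoding below is our own illustration, not the authors' model:

```python
def evaluate_sequence(jobs, initial_inventory=0):
    """Check inventory feasibility of a job sequence and compute its
    total weighted completion time.

    Each job is (processing_time, weight, delta), where delta > 0 adds
    items to the inventory and delta < 0 removes them.  A removing job
    is infeasible when the stock would drop below zero.
    """
    inventory, time, total = initial_inventory, 0, 0
    for p, w, delta in jobs:
        if inventory + delta < 0:
            return None  # sequence violates the inventory constraint
        inventory += delta
        time += p
        total += w * time
    return total

# Deliver 2 items first, then consume 1 twice: feasible.
print(evaluate_sequence([(1, 1, 2), (2, 3, -1), (1, 2, -1)]))  # 18
# Consuming before any delivery is infeasible.
print(evaluate_sequence([(2, 3, -1), (1, 1, 2)]))  # None
```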
Weighted Completion Time Minimization on a Single-Machine with a Fixed Non-Availability Interval: Differential Approximability
Coauthor: Vangelis Th. Paschos
This work is the first successful attempt at a differential approximability study for a scheduling problem. The study considers weighted completion time minimization on a single machine with a fixed non-availability interval. The analysis shows that the Weighted Shortest Processing Time (WSPT) rule cannot yield a differential approximation for the studied problem in the general case. Nevertheless, a slight modification of this rule provides an approximation with a differential ratio of (3-√5)/2 ≈ 0.38.
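As a reminder, the plain WSPT rule orders jobs by increasing p_j/w_j. The sketch below shows the unmodified rule on an instance without the non-availability interval; as the talk explains, this plain rule fails to give a differential approximation once the interval is added:

```python
def wspt_order(jobs):
    """Weighted Shortest Processing Time: sort jobs by p_j / w_j.
    Each job is (processing_time, weight)."""
    return sorted(jobs, key=lambda j: j[0] / j[1])

def weighted_completion(jobs):
    """Total weighted completion time of a given sequence."""
    time = total = 0
    for p, w in jobs:
        time += p
        total += w * time
    return total

jobs = [(3, 1), (1, 2), (2, 2)]
# Ratios are 3, 0.5 and 1, so WSPT schedules (1,2), (2,2), (3,1).
print(weighted_completion(wspt_order(jobs)))  # 14
```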
Tabu Search and Lower Bounds for a Combined Production-Transportation Problem
Coauthors: Alessandro Condotta, Dimitri Meier, and Natalia V. Shakhlevich
In this talk we consider a combined production-transportation problem, where n jobs have to be processed on a single machine at a production site before they are delivered to a customer. At the production stage, a release date is given for each job; at the transportation stage, job delivery should be completed no later than a given due date. The transportation is done by m identical vehicles with limited capacity. It takes constant time to deliver a batch of jobs to the customer. The objective is to find a feasible schedule minimizing the maximum lateness. After formulating the considered problem as a mixed integer linear program, we propose different methods to calculate lower bounds. Then we describe a tabu search algorithm which enumerates promising partial solutions for the production stage. Each partial solution is complemented with an optimal transportation schedule (calculated in polynomial time), achieving a coordinated solution to the combined production-transportation problem. Finally, we present results of computational experiments on randomly generated data.
Optimal edge-coloring with edge rate constraints
Coauthors: D. Dereniowski, B. Ries, and Y. Zwols
We consider the problem of covering the edges of a graph by a sequence of matchings subject to the constraint that each edge e appears in at least a given fraction r(e) of the matchings. Although it can be determined in polynomial time whether such a sequence of matchings exists or not, we show that several questions about the length of the sequence are computationally intractable. A recent paper characterizes so-called OLOP (Overall Local Pooling) graphs, a class of graphs for which similar matching-related problems are tractable (namely, in an online distributed wireless network scheduling setting). We therefore focus on these graphs. In particular, we show that deciding whether a given OLOP graph has a matching sequence of length at most k can be done in linear time. In case the answer is affirmative, we show how to construct, in quadratic time, a matching sequence of length at most k. Finally, we prove that, for OLOP graphs, the length of a shortest sequence does not exceed the least common denominator of the fractions r(e), leading to a pseudopolynomial-time algorithm for minimizing the length of the sequence. Following Seymour's conjecture, we conjecture that, for general graphs, the length of a shortest sequence does not exceed twice the least common denominator of the fractions r(e). We then show that this conjecture holds for all graphs with at most 10 vertices.
On Scheduling Policies For Urgent Computing in Distributed High Performance Computing Systems
Urgent computing is a relatively new area of research addressing immediate access of submitted jobs to large compute environments such as Grids or Clouds. In many cases it is difficult or even impossible to predict whether and how urgent jobs will disturb scheduling policies and coexist with less critical waiting and running jobs. Some practical approaches have been proposed so far to deal with urgent computing, for instance based on preemption or high job priorities. Nevertheless, there is still a need to test and analyze the impact of urgent computing on existing local schedulers, especially using advance reservation features. The efficiency of scheduling policies should be presented in the light of different evaluation criteria, e.g., utilization of computing resources, waiting time, or slowdown. Therefore, we discuss preliminary results of various experiments and benchmarks for different scheduling strategies, showing the added value of new policies based on advance reservation applied to urgent jobs. Our experiments have been performed on a well-known real workload using the GSSIM simulator environment.
Fair Scheduling of Bags-of-Tasks Applications Using Distributed Lagrangian Optimization
Large-scale distributed systems typically comprise hundreds to millions of entities (applications, users, companies, universities) that have only a partial view of resources (computers, communication links). How to fairly and efficiently share such resources between entities in a distributed way has thus become a critical question. Although not all applications are suitable for execution on large-scale distributed computing platforms, Bag-of-Tasks (BoT) applications are ideal candidates. Hence a large fraction of the jobs in workloads imposed on Grids consists of sequential applications submitted in the form of BoTs. Until now, mainly simple mechanisms have been used to ensure a fair sharing of resources among these applications. Although these mechanisms have proved to be efficient for CPU-bound applications, they are known to be ineffective in the presence of network-bound applications. A possible answer resorts to Lagrangian optimization and distributed gradient descent. Under certain conditions, the resource-sharing problem can be formulated as a global optimization problem, which can be solved by a distributed self-stabilizing supply-and-demand algorithm. In the last decade, this technique has been applied to design network protocols (variants of TCP, multi-path network protocols, wireless network protocols) and even distributed algorithms for smart grids. In this talk, I will explain how to use this technique for fairly scheduling concurrent BoT applications with arbitrary communication-to-computation ratios on a Grid. Yet application heterogeneity raises severe convergence and stability issues that did not appear in previous contexts and need to be addressed by non-trivial modifications. The effectiveness of our proposal is assessed through an extensive set of complex and realistic simulations.
Backfilling with guarantees made as jobs arrive
In this talk, we present scheduling algorithms that simultaneously support guaranteed start times and favor jobs with system-desired traits. To achieve the first of these goals, our algorithms keep a profile with a potential start time for every unfinished job and never move these start times later, just as in Conservative Backfilling. To achieve the second, they exploit previously unrecognized flexibility in the handling of holes opened in this profile when jobs finish early. We find that, with one choice of job selection function, our algorithms consistently reduce average waiting time and average bounded slowdown relative to Conservative Backfilling while still providing a guaranteed start time to each job as it arrives. In fact, in most cases, the algorithms give better performance than the more aggressive EASY backfilling algorithm, which does not provide guaranteed start times. Alternatively, with a different choice of job selection function, our algorithms can focus the benefit on the widest submitted jobs, which are the reason parallel systems exist. In this case, these jobs experience significantly lower waiting times than under Conservative Backfilling, with minimal impact on other jobs.
On preemptive and non-preemptive speed-scaling scheduling
Coauthors: E. Bampis, D. Letsios, and I. Nemparis
We are given a set of jobs, each one specified by its release date, its deadline and its processing volume (work), and a single (or a set of) speed-scalable processor(s). We adopt the standard model in speed-scaling, in which a processor running at speed s consumes energy at a rate of s^α per time unit, where α > 1. Our goal is to find a schedule respecting the release dates and the deadlines of the jobs so that the total energy consumption is minimized. We present our results from two recent papers concerning the preemptive and non-preemptive cases. For the multiprocessor preemptive speed-scaling problem, we give a transformation to the convex cost flow problem and obtain a polynomial-time algorithm, simplifying the previously known algorithms for the problem. For the single-processor non-preemptive speed-scaling problem, we present an improved approximation algorithm. Finally, for the multiprocessor non-preemptive case, we propose improved approximation algorithms for particular families of instances.
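A small sketch of the energy model: since s^α is convex for α > 1, running a single job at the lowest constant speed its time window allows is best, so stretching a job over a longer interval saves energy. This is an illustration of the cost model only, not of the flow-based algorithm:

```python
def energy(work, duration, alpha=3.0):
    """Energy of running `work` units at constant speed over `duration`
    time units: speed s = work / duration, cost s**alpha per unit."""
    s = work / duration
    return (s ** alpha) * duration

# Same work, twice the time: with alpha = 3 the energy drops 4-fold.
print(energy(work=4, duration=2))  # speed 2, energy 2**3 * 2 = 16.0
print(energy(work=4, duration=4))  # speed 1, energy 1**3 * 4 = 4.0
```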
Task-graph scheduling to minimize memory
As the cost of floating-point operations decreases compared to the cost of storing and moving data, the classical makespan metric may become irrelevant, and new metrics such as memory cost or number of input/output (I/O) operations may arise. We revisit the problem of scheduling task graphs whose tasks require large I/O files. Our objective is to minimize the memory needed for their processing or, if the available memory is smaller than needed, to minimize the amount of I/O in an out-of-core execution. In this talk, we will review some previous studies that date back to the 1980s for the sequential processing of tree-shaped task graphs, as well as recent attempts to extend these results to other classes of graphs and to parallel processing.
The Kissing Problem: How to End a Gathering When Everyone Kisses Everyone Else Goodbye
Coauthors: Michael A. Bender, Ritwik Bose, and Rezaul Chowdhury
We introduce the kissing problem: given a rectangular room with n people in it, what is the most efficient way to schedule movements so that each pair of people kisses goodbye? The room is viewed as a set of pixels that form a subset of the integer grid. At most one person can stand on a pixel at once, and people move horizontally or vertically. In order to move into a pixel in time step t, the pixel must be empty in time step t - 1. We give one algorithm for kissing everyone goodbye.
(1) This algorithm is a (4 + o(1))-approximation algorithm in a crowded room (e.g., only one unoccupied pixel).
(2) It is a (10 + o(1))-approximation algorithm for kissing in a comfortable room (e.g., at most half the pixels are empty).
(3) It is a (25 + o(1))-approximation for kissing in a sparse room.
Instance-sensitive robustness guarantees for sequencing with an unknown packing/covering constraint
Sequencing problems with an unknown covering or packing constraint appear in various applications, e.g., in real-time computing environments with uncertain run-time availability. A sequence is called α-robust when, for any possible constraint, the maximal or minimal prefix of the sequence that satisfies the constraint is at most a factor α from an optimal packing or covering. It is known that the covering problem always admits a 4-robust solution, and there are instances for which this factor is tight. For the packing variant, no such constant robustness factor over all instances is possible. However, in both cases, many problem instances allow for a much better robustness guarantee than the pathological worst-case instances. In this work we aim for more meaningful, instance-sensitive robustness guarantees. We present an algorithm that constructs for each instance a solution with a robustness factor arbitrarily close to optimal. We hope that the idea of instance-sensitive performance guarantees inspires revisiting other optimization problems and designing algorithms tailored to perform well on each individual instance.
Scheduling Algorithms for Market Clearing Problems
Coauthor: Günther Schmidt
In online market clearing problems, the market-maker must allocate bid prices to ask prices before they expire. The objective is to maximize the profit generated by the allocation. We present and analyze scheduling algorithms to solve this problem. We distinguish between problems with known and unknown expiry dates, and between problems with constant and arbitrary bid price volumes.
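For intuition, in the offline case with unit volumes, matching the highest bids with the lowest asks while the spread stays positive maximizes total profit. The sketch below is our own baseline for that simplified setting and ignores the expiry dates and volumes handled by the talk's algorithms:

```python
def clear_market(bids, asks):
    """Offline unit-volume clearing: pair the highest remaining bid
    with the lowest remaining ask while the spread is positive."""
    bids = sorted(bids, reverse=True)
    asks = sorted(asks)
    profit = 0
    for b, a in zip(bids, asks):
        if b <= a:
            break  # no further profitable trade exists
        profit += b - a
    return profit

print(clear_market(bids=[10, 7, 3], asks=[2, 6, 9]))  # (10-2)+(7-6) = 9
```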
A 4/3-Approximation Algorithm for a Scheduling Problem on Multi-Cores with GPUs
Coauthors: Safia Kedad-Sidhoum, Gregory Mounié, and Denis Trystram
More and more multi-core architectures use accelerators like GPUs (Graphics Processing Units). We propose a new method for efficiently scheduling parallel applications on mixed CPU-GPU architectures, where some tasks can be processed either on a core (CPU) or on a GPU. This scheduling problem is NP-hard; thus, we look for efficient algorithms with small approximation ratios. We introduce the method for the relaxed problem of a computing platform with only one GPU and provide a dynamic programming scheme which achieves an approximation ratio of 4/3. Then the analysis is extended to any number of GPUs (denoted by k), with a guarantee of 4/3 + 1/(3k). Finally, we run simulations based on the Rodinia benchmark suite. They emphasize the good behaviour of the proposed method compared to a classical list scheduling algorithm.
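A classical list scheduling baseline of the kind such methods are compared against can be sketched as follows, for independent tasks with an earliest-completion-time rule; the task encoding (CPU time, GPU time) is our own illustration:

```python
import heapq

def list_schedule(tasks, n_cpus, n_gpus):
    """Greedy list scheduling for independent tasks that may run on a
    CPU or a GPU with different processing times: each task is placed
    on the resource that would complete it earliest."""
    cpus = [0.0] * n_cpus  # earliest ready time of each core
    gpus = [0.0] * n_gpus
    heapq.heapify(cpus)
    heapq.heapify(gpus)
    for cpu_t, gpu_t in tasks:
        if cpus[0] + cpu_t <= gpus[0] + gpu_t:
            heapq.heapreplace(cpus, cpus[0] + cpu_t)
        else:
            heapq.heapreplace(gpus, gpus[0] + gpu_t)
    return max(max(cpus), max(gpus))  # makespan

# Two GPU-friendly tasks go to the GPU, the CPU-friendly one to a core.
print(list_schedule([(4, 1), (4, 1), (2, 6)], n_cpus=2, n_gpus=1))  # 2.0
```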
Equivalence of two classical list scheduling algorithms for a dependent typed tasks system with release and due dates
Coauthors: Aurélien Carlier and Claire Hanen
This talk deals with the classical scheduling of a dependent typed task system (including parallel and dedicated processors) with unit durations and release and due dates. Deciding the existence of a feasible schedule is NP-hard, but for some special subcases the problem can be solved polynomially or approximately using the Garey/Johnson (GJ for short) or Leung/Palem/Pnueli (LPP for short) algorithms. Both algorithms rely on an iterative process that modifies the due dates until either a fixed point is reached (the due dates are then said to be consistent) or a contradiction is observed. These modified due dates are then used as priorities to build a feasible schedule using a list scheduling algorithm. The motivation of our talk is to analyze the relationship between these two iterative processes. We prove that although the GJ and LPP algorithms are different, they rely on the same notion of consistency (i.e., the modified due dates produced by one algorithm are consistent with respect to the other), and they converge to the same fixed point if it exists. The consequence is that all results obtained in the literature for one of these algorithms are also valid for the other.
Optimal Energy Consumption and Throughput for Workflow Applications on Distributed Architectures with Failures
Coauthors: Abdallah Ben Othman, Laurent Philippe, and Veronika Rehn-Sonigo
In this article, we study both the throughput and the energy optimization problems for a distributed system subject to failures that executes a workflow at different speed levels. The application is modeled as a directed acyclic graph composed of typed tasks linked by dependency constraints. A continuous flow, or a great number of application instances, has to be processed. Optimizing the collaborative system performance implies increasing the throughput (the number of application instances processed per time unit) or decreasing the period, the time needed to output one instance of the system. The system is designed as a collaborative platform of distributed machines. Each machine collaborates with the others by performing all the instances of at least one task of the DAG. The problem we tackle is to optimize the configuration of the platform. In this paper we propose two polynomial algorithms that optimize the two objectives of period (i.e., throughput) minimization and energy minimization. We prove that the proposed algorithms give optimal results. Our optimization approach is hierarchical in the sense that we either minimize the energy consumption for an optimal period or minimize the period for the optimal energy consumption. For the cases where one of the constraints is not compatible with the optimal values, we propose an implementation of the algorithms that provides the Pareto curve linking the two optimal values.
Performance-Energy Trade-off in Distributed High Performance Computing Systems
The recent trend toward many-core and heterogeneous architectures, as well as the rising energy demands of computing systems, has made energy efficiency a key issue. It matters at every level, from single servers, which are becoming ever denser with increasing numbers of computing cores, up to big data centers, which are limited by the available power supply and their cooling efficiency. For this reason, there is a need to design new resource management methods that take energy efficiency into account in addition to typical performance criteria. We present an analysis of the performance-energy trade-off for diverse classes of applications and hardware used in a real high performance computing center. We apply metrics to evaluate this trade-off, also taking into account the variability of load, the availability of servers, and thermal effects. We show how the results of this analysis can be applied in resource management policies to improve energy efficiency, and we demonstrate this through experiments conducted with the GSSIM simulation tool.
Scheduling tasks aspects in car production environment
The car production process is divided into three main production plants: the body shop, the paint shop, and the assembly line. Many groups of scheduling problems have been formulated for the car production process, ranging from the CSP (car sequencing problem) to assembly line balancing. The aim of this paper is to present the formulation of several combinatorial problems and optimization methods for each of these production phases, applying task scheduling theory. As far as the processes in the body shop are concerned, an example of a car segmentation problem will be considered. For the paint shop, optimizing the number of colours and maximizing blocks of cars of the same colour are taken into consideration, subject to the restrictions of the real production process. For the assembly line, an example of balancing the line while optimizing the workers' assignment will be presented. Even though these problems have previously been formulated in the literature, they do not take into consideration the practical issues and real conditions of a car manufacturing plant, which our solution proposals address.
Scheduling nuclear weapon stockpile transformation
Coauthors: Harvey Greenberg, Laura McNamara, Carol Meyers, and Andrew Wilson
The US has been steadily decreasing the size of its nuclear stockpile, both in response to international treaties and because of its stated policy to keep the smallest stockpile that meets national security needs. There is considerable work to perform over the next 30 years to maintain, refurbish, and dismantle weapons. This work must all be done with limited resources. We will describe the various competing constraints on quality multi-year schedules that transform the current stockpile into a goal stockpile 30 years from now. We will also describe a multi-disciplinary approach to assisting decision makers select a schedule. Our approach involves expert elicitation to determine alternative ways to evaluate the quality of a schedule and differing priorities for these objectives. The method also involves multi-objective analysis and visualization. Although the single objective problem can be solved efficiently, the multi-objective task is considerably more challenging.
The total adjustment cost problem: Applications, models, and solution algorithms
Resource leveling problems arise whenever it is expedient to reduce the fluctuations in resource utilization over time, while maintaining a prescribed project completion deadline. Several resource leveling objective functions may be defined, whose consideration results in well-balanced resource profiles. In this talk, we concentrate on a special objective function that determines the costs arising from increasing or decreasing the resource utilizations. The resulting total adjustment cost problem occurs, for example, in the construction industry and can be formulated using mixed-integer linear programming models. Apart from a discrete time-based formulation, two polynomial formulations, namely an event-based and a start-based model, that exploit structural properties of the problem are presented. In addition, a heuristic solution algorithm is proposed in order to generate start solutions for the problem. We use CPLEX 12.4 to solve medium-scale instances known from the literature. A computational performance analysis shows that the discrete time-based and the start-based model are suitable for practical applications.
Fairness Between Users in the Campaign Scheduling Problem
Coauthors: Vinicius Gama Pinheiro and Denis Trystram
A typical high performance computing center is shared by many users who compete for the common infrastructure. The scheduler must not only ensure efficiency of the whole system but also be fair to individual users, especially when the amount of work submitted by different users differs significantly. We present a scheduling algorithm that guarantees each user an upper bound on the jobs' maximum stretch. The bound is proportional to the user's load and to the number of active users in the system; thus the stretch does not depend on the total load in the system. The algorithm operates on campaigns of jobs rather than on individual jobs. Each campaign consists of several jobs submitted at the same time by the same user; all the jobs from a campaign must complete before the next campaign starts. Scheduling policies in HPC centers rely on various ad hoc mechanisms to ensure fairness. Our work suggests an alternative approach with guaranteed performance.
The preemptive project scheduling problem with generalized precedence relationships
We study a resource-constrained project scheduling problem where the activities can be interrupted at any point in time during their execution and generalized precedence relationships between the activities have to be taken into account. The precedence relationships define minimum and maximum time lags between points in time when given parts of the activities have been executed. After providing a formal problem statement, we reduce the problem to a canonical form only containing nonpositive completion-to-start minimum time lags. Next, we develop an MILP formulation that encodes a schedule as a sequence of time slices with associated sets of activities that are in progress during the respective slice. Moreover, we establish an upper bound on the number of slices needed to represent an optimal schedule. Finally, we report on the results of an experimental performance analysis comparing upper bounds obtained with the MILP model and a large neighborhood descent procedure against lower bounds that arise from solving a linear programming relaxation of the problem via column generation.
Towards Secure Cloud Computing: Load Balancing Through Coalition Formation
Coauthor: Jakub Gasior
Efficiency, in terms of load balancing and scheduling, and the security of both communication and computation processes are key issues in cloud computing. While these problems are usually solved separately, we propose an approach aimed at developing “soft security” mechanisms that provide both efficient and secure load balancing. The approach is based on the phenomenon of self-organization in a game-theoretical Spatially Generalized Prisoner’s Dilemma model defined in a two-dimensional cellular automaton space. The main concept of self-organization applied here is the formation of temporary coalitions of the participants (computational nodes) of the spatial game in the iterative process of load balancing. We present a general framework of the approach and preliminary experimental results of our study.
First Fit and Best Fit bin packing: A new analysis
Coauthor: György Dósa
In the bin packing problem, we are given an instance consisting of a sequence of items with sizes between 0 and 1. The objective is to pack these items into the smallest possible number of bins of unit size. The First Fit algorithm packs each item into the first bin where it fits, opening a new bin if the item does not fit into any currently open bin. The Best Fit algorithm packs each item into the fullest bin where it fits, again opening a new bin if necessary. In the early seventies, it was shown that the asymptotic approximation ratio of the First Fit and Best Fit bin packing algorithms is equal to 1.7. We give a new and simple analysis that proves the same asymptotic ratio. Furthermore, building on this method, we prove that the absolute approximation ratio for First Fit bin packing is also exactly 1.7. This means that if the optimum needs OPT bins, First Fit always uses at most ⌊1.7OPT⌋ bins. We also show matching lower bounds for a majority of values of OPT, i.e., we give instances on which First Fit uses exactly ⌊1.7OPT⌋ bins. Such matching upper and lower bounds were previously known only for finitely many small values of OPT. The previously published bound on the absolute approximation ratio of First Fit was 12/7 ≈ 1.7143; recently a bound of 101/59 ≈ 1.7119 was claimed.
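For illustration, the two heuristics can be sketched in a few lines of Python (our own rendering for exposition, not code from the paper):

```python
def first_fit(items, capacity=1.0):
    """Pack each item into the first open bin with enough room,
    opening a new bin when none fits. Returns the bin loads."""
    bins = []  # each entry is the total size packed into that bin
    for size in items:
        for i, load in enumerate(bins):
            if load + size <= capacity + 1e-9:  # tolerance for float sums
                bins[i] = load + size
                break
        else:  # no open bin fits: open a new one
            bins.append(size)
    return bins

def best_fit(items, capacity=1.0):
    """Pack each item into the fullest open bin where it still fits."""
    bins = []
    for size in items:
        best = -1
        for i, load in enumerate(bins):
            if load + size <= capacity + 1e-9 and (best < 0 or load > bins[best]):
                best = i
        if best >= 0:
            bins[best] += size
        else:
            bins.append(size)
    return bins
```

Both run online, i.e. each item is placed irrevocably as it arrives, which is the setting in which the 1.7 ratio applies.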
Optimised Medium-term Rail Planning at a Major Iron Ore Producer of Australia
In supply chains, medium-term plans (2 weeks to 2 years) are used to maximise throughput and identify bottlenecks. In a mineral supply chain, the plan not only specifies the number of trains to be sent to each mine in each period but is also essential for various other planning operations such as crew scheduling, production planning at mines, and scheduling of maintenance. The plan needs to observe constraints such as port and rail maintenance requirements, production plans, and the capacities of the fleet, dumping, loading, and stockyard. The plan also needs to consider grade qualities, which introduces non-linear constraints as the quality depends on the mixing ratio of ore from different sources. In this paper we present the MINLP model that is being used for such planning at a major iron ore producer of Australia. The model has also reduced the total planning time from 5-6 hours of manual planning to 30 minutes of CPU time.
Different approaches to optimally schedule task graphs with communication delay
Many NP-hard scheduling problems are so difficult that heuristics are employed even for small problem instances. Scheduling task graphs with communication delays on a set of processors, P|prec,cij|Cmax, is such a problem. Nevertheless, for time-critical systems or for the evaluation of heuristics, it is crucial to obtain optimal solutions. In this presentation, two approaches to optimally scheduling P|prec,cij|Cmax are studied. The first is based on an Integer Linear Programming (ILP) approach. To achieve a good ILP formulation, it is crucial to have an efficient conversion of the bilinear constraints that arise from the consideration of the communication delays. The second approach is based on A*, a best-first search algorithm. The discussion of the A* scheduling algorithm focuses on novel pruning techniques.
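For background, the standard conversion of such terms (a well-known linearization, not necessarily the one used in this work) replaces a bilinear product z = x·y of binary variables x, y ∈ {0, 1} by three linear constraints:

z ≤ x,  z ≤ y,  z ≥ x + y − 1,

which force z = 1 exactly when x = y = 1, so the product never appears explicitly in the model.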
Network delay-aware load balancing in selfish and cooperative distributed systems
When accessing remote services, the observed latency is a sum of communication delays and the time needed to handle the request on a server. The handling time depends on the server congestion, i.e. the total number of requests a server must handle. We analyze the problem of balancing the divisible load in a network of servers in order to minimize the total observed latency. We consider both cooperative and selfish servers (each server aiming to minimize the latency of the locally-produced requests). The problem can be generalized to task scheduling in a distributed cloud, or to content delivery in organizationally-distributed CDNs. In a cooperative network, we show that the problem is polynomially solvable. We also present a distributed algorithm iteratively balancing the load. We prove that the distributed algorithm converges; moreover, we show how to estimate the distance between the current solution and the optimum based on the amount of load exchanged by the algorithm. During the experimental evaluation, we show that the distributed algorithm is efficient; it can therefore be used in networks with dynamically changing loads. In a network of selfish servers, we prove that the price of anarchy (the loss of performance due to selfishness) is low when the network is homogeneous and the servers are loaded (the request handling time is high compared to the communication delay). After relaxing these assumptions, we assess the price of anarchy experimentally, showing that it remains low. Our results indicate that a network of servers handling requests can be efficiently managed by a distributed algorithm. Additionally, even if the network is organizationally distributed, with individual servers optimizing performance of their requests, the network remains efficient.
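As a generic illustration of iterative load balancing (a classic diffusion scheme under simplifying assumptions, not the authors' delay-aware algorithm), each round can shift a fraction of the load difference across every link:

```python
def diffusion_step(load, edges, alpha=0.25):
    """One balancing round on an undirected server network.

    load  -- current load of each server
    edges -- pairs (i, j) of directly connected servers
    alpha -- fraction of the load difference moved per round
    """
    delta = [0.0] * len(load)
    for i, j in edges:
        flow = alpha * (load[i] - load[j])  # positive: i sends to j
        delta[i] -= flow
        delta[j] += flow
    return [x + d for x, d in zip(load, delta)]

def balance(load, edges, rounds=100):
    """Iterating the step drives the loads toward the uniform average."""
    for _ in range(rounds):
        load = diffusion_step(load, edges)
    return load
```

For a small enough alpha the iteration converges, and the amount of load moved per round shrinks as the system approaches balance, which mirrors the abstract's idea of bounding the distance to the optimum by the exchanged load.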
A hybrid approach to large-scale short-term scheduling in make-and-pack production
Coauthor: Philipp Baumann
We investigate short-term scheduling of industrial make-and-pack production processes. The planning problem consists of minimizing the production makespan while meeting given end-product demands. A large number of operations, sequence-dependent changeover times, multi-purpose storage units with finite capacities, batch splitting, quarantine times, partial equipment connectivity, and material transfer times render the problem a challenging task. Known MILP formulations for such production processes can solve only small and medium-sized problem instances in reasonable CPU times. We present a hybrid heuristic approach to tackle large-scale instances. Under this approach, the set of batches is divided into several subsets, which are then scheduled iteratively using a MILP model. We enhance the performance of the heuristic by eliminating redundant constraints and variables after each iteration. The applicability of the proposed heuristic is demonstrated by means of a real-world production process.
Decentralized Throughput Scheduling
Coauthors: Jasper de Jong and Andreas Wombacher
We study models for throughput scheduling in a decentralized setting. In throughput scheduling, we have a set of jobs j with values wj, processing times pj, release dates rj, and deadlines dj, to be processed non-preemptively on a set of unrelated machines. The goal is to maximize the total value of jobs scheduled within their time window [rj, dj]. While several approximation algorithms with different performance guarantees exist for this and related models, we are interested in the situation where subsets of machines are governed by selfish players. We show that local α-approximation algorithms for each player yield Nash equilibria at most a factor (α+1) away from the global optimum, and this bound is tight. For identical machines, we can improve this bound to ≈ (α+1/2) when restricting to subgame perfect equilibria of sequential games. We also address several variations of the problem.
Scheduling 2D-grids with large communication delays
Coauthors: Mostafa El Daoudi and Denis Trystram
In this work, we are interested in scheduling precedence task graphs that have the special structure of two-dimensional grids. These graphs correspond to the parallel execution of a large class of linear algebra applications through operations on matrices. The general problem of scheduling an arbitrary graph with explicit large communication delays is known to be hard, and no efficient approximation algorithm exists for solving it. We consider the delay model under the 1-port assumption, where the communications between the machines are sequential. As the structure of the graph is restricted to regular 2D-grids, we exhibit a lower bound and a polynomial-time algorithm that reaches this bound for any number of machines.
A New Approach to Online Scheduling: Approximating the Optimal Competitive Ratio
We propose a new approach to competitive analysis in online scheduling by introducing the concept of online approximation schemes. Such a scheme algorithmically constructs an online algorithm with a competitive ratio arbitrarily close to the best possible competitive ratio for any online algorithm. Also, it provides algorithmic means to compute the actual value of the best possible competitive ratio up to an arbitrary accuracy. We study the problem of scheduling jobs online to minimize the weighted sum of completion times on parallel, related, and unrelated machines. By constructing online approximation schemes we derive both deterministic and randomized algorithms which are almost best possible among all online algorithms of the respective settings. We also generalize our techniques to arbitrary monomial cost functions and apply them to the makespan objective. Our method relies on an abstract characterization of online algorithms combined with various simplifications and transformations.
Approximation Algorithms for a Vehicle Routing Scheduling Problem
Coauthors: Mordecai Golin and Wei Yu
We consider a generalization of the Traveling Salesman Problem, in which each city is associated with a release time and a service time. The salesman has to visit each city at or after its release time. We present an approximation algorithm with a performance ratio better than 5/2 when the number of distinct release times is fixed. Then we analyze a natural class of algorithms and prove that no performance ratio better than 5/2 is possible unless the Metric TSP can be approximated with a ratio strictly less than 3/2. Finally, we consider a special case of VSP that has a heavy edge, and present an approximation algorithm with a performance ratio less than 5/2 as well.