December 5, 2019 
Marc Goerigk (University of Siegen)
Two-Stage Robust Combinatorial Problems
In this talk I focus on combinatorial optimization problems under uncertainty. In the most basic (robust) setting, we are given a set of possible cost vectors, and the question is to find one solution such that the largest of the possible objective values is as small as possible. A variant of this setting is two-stage robustness: here, we only need to make a partial decision in the first stage, under known first-stage costs. Then the uncertain second-stage cost vector is revealed, and we can complete our partial solution to a full solution. While one-stage robust combinatorial problems have been studied quite thoroughly over the past two decades, progress on two-stage problems is more recent. I give an accessible overview for those who are new to robust optimization, and also present some of these recent results to highlight where current research stands.
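As a toy illustration of the one-stage robust setting (my own sketch, not from the talk; the scenario set and the item-selection problem are illustrative assumptions), the following brute force finds a solution minimizing the worst-case cost over all given cost vectors:

```python
from itertools import combinations

def min_max_selection(scenarios, n, k):
    """One-stage robust selection by brute force: choose k of n items so
    that the worst-case total cost over all cost vectors is minimized."""
    best_sol, best_val = None, float("inf")
    for sol in combinations(range(n), k):
        # Evaluate this solution under every scenario; the adversary
        # picks the worst one.
        worst = max(sum(c[i] for i in sol) for c in scenarios)
        if worst < best_val:
            best_sol, best_val = sol, worst
    return best_sol, best_val

# Two cost scenarios over three items; any two items may be chosen.
sol, val = min_max_selection([[1, 4, 2], [3, 1, 2]], n=3, k=2)
```

Of course, real robust combinatorial problems replace the brute-force loop over solutions with the structure of the underlying problem (shortest path, spanning tree, etc.).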

November 14, 2019 
Sebastian Siebertz (University of Bremen)
An introduction to nowhere dense graph classes
The notion of nowhere denseness was introduced by Nešetřil and
Ossona de Mendez and
provides a robust concept of uniform sparseness of graph classes.
Nowhere dense graph classes generalize many familiar classes of sparse graphs such as the class of planar graphs and classes that exclude a fixed minor or topological minor.
They admit several seemingly unrelated
natural characterisations that lead to strong algorithmic applications.
In particular, the model-checking
problem for first-order logic is fixed-parameter tractable over
these classes.
In this tutorial I will give an introduction to the theory of nowhere
dense graph classes with a focus on different characterisations and
algorithmic applications.

November 7, 2019 
Lukas Nölke (University of Bremen)
Online Matching on the Line with Recourse
In the online matching problem on the line, \(n\) requests appear online and have to be matched, irrevocably and upon arrival, to a given set of servers, all on the real line. The goal is to minimize the sum of distances from the requests to their respective servers. Despite all research efforts, it remains an intriguing open problem whether there exists a constant-competitive online algorithm. The best known online algorithm, due to Raghvendra (SoCG 2018), achieves a competitive factor of \(O(\log n)\). There exists a matching lower bound of \(\Omega(\log n)\) (Antoniadis et al., LATIN 2018) that holds for a quite large class of online algorithms, including all known deterministic algorithms in the literature. The best known general lower bound is \(9+\varepsilon\) (Fuchs et al., TCS 2005).
In this work, we consider the increasingly popular online recourse model and show that a constant competitive factor is possible if we are allowed to revoke online decisions to some extent. More precisely, we derive an \(O(1)\)-competitive algorithm for online minimum matching on the line that rematches each of the \(n\) requests at most \(O(\log n)\) times. For special instances, where no more than one request appears between two servers, we obtain a substantially better result. We give a \((1+\varepsilon)\)-competitive algorithm with \(O_\varepsilon(1)\) recourse actions per request. Incidentally, the construction of the aforementioned \(\Omega(\log n)\) lower bound uses such instances.
This is joint work with Nicole Megow.
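For intuition about the objective, here is a small sketch (my own illustration, not the algorithm from the talk) of the offline benchmark: on the line, an optimal matching is non-crossing, so sorting both sides and running a simple dynamic program over prefixes gives the optimal total distance.

```python
def offline_matching_cost(requests, servers):
    """Optimal offline matching cost on the line via DP. dp[i][j] is the
    minimum cost of matching the first i requests using only the first j
    servers (both sorted); each server serves at most one request."""
    rs, ss = sorted(requests), sorted(servers)
    n, m = len(rs), len(ss)
    INF = float("inf")
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    for j in range(m + 1):
        dp[0][j] = 0.0  # no requests: zero cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Either skip server j, or match it to request i.
            dp[i][j] = min(dp[i][j - 1],
                           dp[i - 1][j - 1] + abs(rs[i - 1] - ss[j - 1]))
    return dp[n][m]
```

The online difficulty is that requests must be matched on arrival, whereas this DP sees all requests at once.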

October 30, 2019 
Franziska Eberle (University of Bremen)
Interval scheduling with a rechargeable resource
We consider a model for interval scheduling with a rechargeable resource. A set of interval jobs with start times and end times needs to be scheduled on a set of identical machines. The goal is to maximize the total weight of the scheduled jobs. In the beginning, each machine is given an identical amount of a rechargeable resource. During the processing of a job, a job-dependent amount of this resource is used. While no job is processed on a machine, its resource recharges at a linear rate. The resource of a machine is allowed neither to exceed the maximal capacity nor to become negative. This model generalizes scheduling with a (non-)renewable resource, as the linear charging rate allows for a trade-off between these two extremes. A typical application arises in the context of station-based car sharing: scheduling bookings on e-cars with a rechargeable battery.
For single-machine scheduling, we give an optimal polynomial-time algorithm for the unweighted case and an FPTAS in the presence of weights. The multiple-machine case turns out to be considerably harder, as the unit-weight case is already NP-hard for a constant number of machines. Based on the optimal single-machine algorithm, we develop an e/(e-1)-approximation if the number of machines is part of the input.
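To make the resource constraint concrete, here is a feasibility check for one machine (my own sketch: it assumes the jobs are already non-overlapping and that a job's consumption is charged at its start, a modeling choice for this illustration; the talk's model may account for consumption differently):

```python
def feasible_on_one_machine(jobs, capacity, rate):
    """Check a set of non-overlapping interval jobs against the
    rechargeable resource. Each job is (start, end, use): the resource
    starts full, drops by `use` when the job runs, and recharges at
    `rate` per unit of idle time, capped at `capacity`."""
    level, time = capacity, 0.0
    for start, end, use in sorted(jobs):
        # Recharge during the idle period before this job, up to capacity.
        level = min(capacity, level + rate * (start - time))
        level -= use  # consumption while the job is processed
        if level < 0:
            return False
        time = end
    return True
```

With rate 0 this degenerates to a non-renewable budget, and with a very large rate to an independent per-job capacity check, matching the two extremes mentioned above.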

October 24, 2019 
Nicole Megow (University of Bremen)
Optimization with Explorable Uncertainty
Explorable uncertainty refers to settings where parts of the input data
are initially unknown, but can be obtained at a certain cost using
queries. An algorithm can make queries one by one until it has obtained
sufficient information to solve a given problem. The challenge lies in
balancing the cost for querying and the impact on the solution quality.
In this talk, I give a short overview of recent work on explorable
uncertainty for combinatorial optimization problems. In particular, I
will speak about network design (minimum spanning tree) and scheduling.

September 9, 2019 
Miriam Schlöter (ETH Zurich)
Constructing lattice-free gradient polyhedra
Lattice-free gradient polyhedra are optimality certificates for mixed-integer convex optimization problems.
We consider how to construct these polyhedra for problems with two integer variables.
A classic result of Bell, Doignon, and Scarf implies that a lattice-free gradient polyhedron with at most four facets exists.
We show how to build a sequence of (not necessarily lattice-free) gradient polyhedra, each with at most four facets, that converges to a lattice-free one in finitely many steps.
This construction gives a gradient descent type algorithm for problems with two integer variables.
We also show that approximate gradient information is enough to guarantee convergence for strongly convex functions.

August 29, 2019 
Yeliz Sandikci (University of Bremen)
Extensions of Partial Graph Colorings: Algorithms and Complexity

August 21, 2019 
Jens Schlöter (University of Bremen)
Conditional Directed Acyclic Graphs: On the Complexity of Computing the Worst-Case Execution Time
Time-critical tasks that need to be scheduled and executed are often modeled as directed acyclic graphs (DAGs) in order to describe precedence constraints and potential parallelization between their subtasks. However, when modeling real-world applications, the execution of a subtask might depend on the evaluation of a condition, as for example in if-then-else statements of computer programs. Thus, only a subset of the subtasks will actually be executed. In order to model such control flow instructions, conditional DAGs can be used, which extend the original DAG model by allowing so-called conditional nodes; the semantics of a conditional node is that exactly one of its successors will be processed.
Using conditional DAGs leads to additional challenges regarding schedulability tests and the calculation of worst-case execution times. Even if the scheduling algorithm that will be used to schedule the task is fixed, the execution time still depends on which successors of the conditional nodes are chosen to be processed. In this talk, we consider the case where a conditional DAG task and the scheduling algorithm to be used (in the form of a priority order over all subtasks that potentially will be executed) are given, and the goal is to compute the worst-case execution time. We present complexity results as well as approximation algorithms for this problem.

May 7, 2019 
Victor Verdugo (London School of Economics)
Prophets and Optimal Auctions
In many situations, finding the optimal revenue pricing policy requires solving a hard optimisation problem. Posted price mechanisms are simple and efficiently implementable. In this talk I'll show the connection between this type of mechanism and optimal stopping rules for online selection problems, and how guarantees carry over from one problem to the other.

April 10, 2019 
Ulrich Pferschy (University of Graz)
On the Incremental Knapsack Problem
We consider the 0-1 Incremental Knapsack Problem (IKP), where the knapsack capacity grows over time. For each item, one has to decide in which time period it should be packed, or whether it remains unpacked. Once an item is placed in the knapsack in a certain time period, it cannot be removed afterwards. The contribution of a packed item in each time period depends on its profit as well as on a time factor that reflects the importance of the period in the objective function. The problem calls for maximizing the weighted sum of the profits over the whole time horizon.
In this work, we provide approximation results for IKP and its restricted variants. In some results, we rely on Linear Programming to derive approximation bounds. We also devise a Polynomial Time Approximation Scheme (PTAS) when the number of periods is constant. Under the mild and natural assumption that each item can be packed in the first time period, we discuss different approximation algorithms suited for any number of time periods and for the special cases with two and three periods.
Joint work with Federico Della Croce and Rosario Scatamacchia.

March 13, 2019 
Jens Schlöter (University of Bremen)
Computing the Worst-Case Execution Time of a Conditional DAG Task
Tasks that need to be scheduled and executed are often modeled as directed acyclic graphs (DAGs) in order to describe precedence constraints and potential parallelization between their subtasks. However, when modeling real-world tasks, the execution of a subtask might depend on the evaluation of a condition, as for example in if-then-else statements of computer programs, and thus only a subset of the subtasks will actually be executed.
In order to model such control flow instructions, conditional DAGs can be used, which extend the original DAG model by allowing so-called conditional nodes; the semantics of a conditional node is that exactly one of its successors will be processed.
Using conditional DAGs leads to additional challenges regarding schedulability tests and the calculation of the worst-case execution time of a modeled task. Even if the scheduling algorithm that will be used to schedule the task is fixed, the execution time still depends on which successors of the conditional nodes are chosen to be processed.
In this talk, we consider the case where a conditional DAG task and the scheduling algorithm to be used (in the form of a priority order over all subtasks that potentially will be executed) are given, and the goal is to calculate the worst-case execution time. We present complexity results as well as approximation algorithms for this problem.

March 6, 2019 
Ruben Hoeksma (University of Bremen)
Network congestion games are robust to variable demand
We consider a non-atomic network congestion game with incomplete information, in which the incomplete information comes from randomness in the demand. We model this as a multicommodity flow where, for each commodity, it is uncertain whether it will actually appear in the network. Wang, Doan, and Chen (2014), considering an equilibrium notion in which users evaluate their expected cost using full knowledge of the demand distribution, concluded that the price of anarchy of the game can be arbitrarily large. We instead model the problem as a Bayesian game, in which users evaluate their cost conditioned on the fact that they themselves are present in the network. In contrast to the conclusion of Wang, Doan, and Chen (2014), we find that the known bounds on the price of anarchy for the deterministic-demand game also apply to the Bayesian game, even if the probabilities of traveling for different commodities are arbitrarily correlated. Specifically, if all latency functions in the network are affine, the price of anarchy for the Bayesian game is 4/3. We also generalize these results to the class of smooth games.
This talk is based on work done in collaboration with José Correa and Marc Schröder.

February 27, 2019 
Łukasz Jeż (University of Wrocław)
A ϕ-Competitive Algorithm for Scheduling Packets with Deadlines
In the online packet scheduling problem with deadlines (PacketScheduling, for short), the goal is to schedule transmissions of packets that arrive over time in a network switch and need to be sent across a link. Each packet \(p\) has a deadline \(d_p\), representing its urgency, and a non-negative weight \(w_p\), representing its priority. Only one packet can be transmitted in any time slot, so, if the system is overloaded, some packets will inevitably miss their deadlines and be dropped. In this scenario, the natural objective is to compute a transmission schedule that maximizes the total weight of packets that are successfully transmitted. The problem is inherently online, with scheduling decisions made without knowledge of future packet arrivals. The central problem concerning PacketScheduling, which has been a subject of intensive study since 2001, is to determine the optimal competitive ratio of online algorithms, namely the worst-case ratio between the optimum total weight of a schedule (computed by an offline algorithm) and the weight of a schedule computed by a (deterministic) online algorithm.
We solve this open problem by presenting a ϕ-competitive online algorithm for PacketScheduling (where ϕ≈1.618 is the golden ratio), matching the previously established lower bound.
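The offline benchmark against which the competitive ratio is measured can be computed by a classic matroid greedy (a standard construction for job sequencing with deadlines, shown here as my own sketch, not as part of the new online algorithm). Deadlines are counted in slots, so a packet with deadline d must be sent in one of slots 1..d:

```python
def offline_optimum(packets):
    """Offline optimum for unit-slot packet scheduling: keep packets by
    decreasing weight, accepting one if the kept set stays feasible,
    i.e., at most t kept packets have deadline <= t for every t.
    Each packet is a (weight, deadline) pair."""
    chosen = []
    for w, d in sorted(packets, reverse=True):
        candidate = sorted(chosen + [(w, d)], key=lambda p: p[1])
        # EDF feasibility: the packet in (0-indexed) position i must
        # have deadline at least i + 1.
        if all(i < p[1] for i, p in enumerate(candidate)):
            chosen = candidate
    return sum(w for w, _ in chosen)
```

The online problem is harder precisely because this greedy needs the whole instance up front.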

February 20, 2019 
Bertrand Simon (University of Bremen)
Minimizing I/Os in Out-of-Core Task Tree Scheduling
Scientific applications are usually described as directed acyclic graphs, where nodes represent tasks and edges represent dependencies between tasks. For some applications, such as the multifrontal method of sparse matrix factorization, this graph is a tree: each task produces a single output, used by a single task (its parent in the tree). We focus on the case where the data manipulated by tasks are large, which is especially true in the multifrontal method. To process a task, both its inputs and its output must fit in the main memory. Moreover, output results of tasks have to be stored between their production and their use by the parent task. It may therefore happen, during an execution, that not all data fit together in memory. In particular, this is the case if the total available memory is smaller than the minimum memory required to process the whole tree. In such a case, some data have to be temporarily written to disk and read afterwards. These Input/Output (I/O) operations are very expensive; hence the need to minimize them. We revisit this open problem in this paper. Specifically, our goal is to minimize the total volume of I/O while processing a given task tree. We first formalize and generalize known results, then prove that existing solutions can be arbitrarily worse than optimal. Finally, we propose a novel heuristic algorithm, based on the optimal tree traversal for memory minimization. We demonstrate the good performance of this new heuristic through simulations on both synthetic trees and realistic trees built from actual sparse matrices.
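The memory model behind this problem can be illustrated with a small sketch (my own, under one common convention: every produced-but-unconsumed output stays resident, and a task needs its children's outputs plus its own output in memory simultaneously; the paper's exact model may differ in details):

```python
def peak_memory(children, out, order):
    """Peak memory of processing a task tree bottom-up in `order`.
    `children[v]` lists the children of v, `out[v]` is the size of v's
    output. Children outputs are freed once v completes; v's output
    then stays resident until v's parent runs."""
    stored, peak = 0, 0
    for v in order:
        # v's children outputs are already counted in `stored`; adding
        # out[v] gives the memory needed while v executes.
        peak = max(peak, stored + out[v])
        stored += out[v] - sum(out[c] for c in children[v])
    return peak
```

If this peak exceeds the available memory for every traversal order, some outputs must be written to disk, which is exactly the I/O volume the talk seeks to minimize.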

February 11, 2019 
Tobias Hahn (University of Bremen)
Algorithms for scheduling with mandatory suspensions: worst-case and empirical analysis

December 13, 2018 
Claus Gwiggner (Universität Hamburg)
A Priori Sequencing with Uncertain Release Dates
Scheduling with uncertain release dates has received little attention in the literature, although transportation, production and supply chain problems often exhibit such properties.
In this paper, a sequence that minimizes the expected total weighted completion time under uncertain release dates is determined. The approach is a two-stage model, where an a priori sequence is established in the first stage and a scheduling operation is performed in the second stage. The major difficulty is the exponential number of joint realizations of the release dates. This difficulty is resolved by a Markov model of the optimal solution of the second stage, which makes it possible to solve instances with millions of scenarios to optimality. Moreover, depending on the variance of the release dates, the optimal policy turns out to shift a single task to the beginning of a sequence that is built according to the Weighted Shortest Expected Processing Times rule.

October 25, 2018

Thomas Weise (Hefei University)
Automating Scientific Work in Optimization
In the fields of heuristic optimization and machine learning, experimentation is the way to assess the performance of an algorithm setup and the hardness of problems. Good experimentation is complicated. Most algorithms in the domain are anytime algorithms, meaning they can improve their approximation quality over time. This means that one algorithm may initially perform better than another but converge to worse solutions in the end. Instead of single final results, the whole runtime behavior of algorithms needs to be compared (and runtime may be measured in multiple ways). We do not just want to know which algorithm performs best and which problem is the hardest; a researcher wants to know why. We introduce a process which can 1) automatically model the progress of algorithm setups on different problem instances based on data collected in experiments, 2) use these models to discover clusters of algorithm (or problem instance) behaviors, and 3) propose reasons why a certain algorithm setup (or problem instance) belongs to a certain behavior cluster. These high-level conclusions are presented in the form of decision trees relating algorithm parameters (or instance features) to cluster ids. We emphasize the duality of analyzing algorithm setups and problem instances. Our process is implemented as open source software and tested in two case studies, on the Maximum Satisfiability Problem and the Traveling Salesman Problem. Besides its basic application to raw experimental data, yielding clusters and explanations of "quantitative" algorithm behavior, our process also allows for "qualitative" conclusions by feeding it with data which is normalized with problem features or algorithm parameters. It can also be applied recursively, e.g., to further investigate the behavior of the algorithms in the cluster with the best-performing setups on the problem instances belonging to the cluster of hardest instances. Both use cases are investigated in the case studies.

October 18, 2018 
Martin Böhm (University of Bremen)
Online Bin Stretching: Algorithms and Computer Lower Bounds
We investigate a problem in semi-online algorithm design, called Online Bin
Stretching. The problem can be understood as an online repacking problem: The
goal of the algorithm is to repack items of various sizes into m containers of
identical size R>1. The input items arrive one by one and the algorithm must
assign an item to a container before the next item arrives.
A special feature of this problem is that there is a specific guarantee made to the
algorithm: The algorithm learns at the start of the input that there exists a
packing of all input items into m containers of capacity 1.
Our goal is to design algorithms for this problem which successfully pack the
entire incoming sequence one by one while requiring the lowest container
capacity R possible.
In this talk, we discuss recent results about Online Bin Stretching which were
included in the speaker's PhD thesis:
First, we design an algorithm that is able to pack the entire input into m
containers of capacity 1.5 regardless of what the value of m will be. Second,
we show a specialized algorithm for the setting of just 3 containers; this
algorithm is able to pack into 3 bins of capacity 1.375. Finally, we design and
implement an involved search algorithm which is able to find lower bounds for
Online Bin Stretching; in fact, we show the best known lower bounds for
3 ≤ m ≤ 8.

September 11, 2018 
Leen Stougie (CWI Amsterdam)
A Decomposition Theory For Vertex Enumeration of Convex Polyhedra
In recent years the vertex enumeration problem for polyhedra has seen a revival in the study of metabolic networks, which has increased the demand for efficient vertex enumeration algorithms for high-dimensional polyhedra given by equalities and inequalities. The complexity of enumerating the vertices of polytopes (bounded polyhedra) is a famous open problem in discrete and computational geometry.
In this lecture we do not solve this problem, but present a type of fixed-parameter total polynomial time result.
We apply the concept of branch-decomposition to the vertex enumeration problem for polyhedra \(P = \{x : Ax = b; x \geq 0\}\). For that purpose, we introduce the concept of a \(k\)-module and show how it relates to the separators of the linear matroid generated by the columns of \(A\). This translates structural properties of the matroidal branch-decomposition to the context of polyhedra. We then use this to present a total polynomial time algorithm for enumerating all vertices of polytopes \(P\) for which the branch-width of the linear matroid generated by the columns of \(A\) is bounded by a constant \(k\). If time permits, we will show a truly fixed-parameter tractable result, with running time an exponential function of the parameters only times a polynomial function of the input/output size, if on top of bounded branch-width we also assume that all square submatrices of \(A\) have bounded determinant.
This paper is joint work with Arne Reimers.

September 10, 2018 
Klaus Heeger (University of Bonn)
A 10/7-Approximation for 2-Connected Spanning Subgraphs
We consider the 2-vertex-connected spanning subgraph problem. This problem asks, given a 2-vertex-connected graph, to find a 2-vertex-connected spanning subgraph with a minimum number of edges. This classical network optimization problem is known to be APX-hard. We give a 10/7-approximation algorithm for this problem, improving on the 3/2-approximation by Garg, Vempala, and Singla. The algorithm is similar to the 17/12-approximation algorithm for the related 2-edge-connected spanning subgraph problem, and works by carefully modifying an open ear-decomposition with a minimum number of even ears.

September 3, 2018 
Franziska Eberle (University of Bremen)
New Results on Throughput Maximization
We study a fundamental online job admission
problem where jobs with processing times and deadlines arrive online
over time at their release dates, and the task is to determine a
preemptive single-server schedule which maximizes the number of
completed jobs. To circumvent known impossibility results, we make
a standard slackness assumption by which the feasible time window
for scheduling a job is at least \(1+\varepsilon\) times its processing time,
for some \(\varepsilon>0\). We analyze the competitive ratio of algorithms
depending on the slackness parameter \(\varepsilon\).
In this paper we quantify the impact that different provider
commitment requirements have on the performance of online
algorithms. Studying this impact is important, for example, in modern
cloud services, where customers do not want last-minute rejections
of critical tasks and request an early-enough provider-side commitment to
completing admitted jobs. Our main contribution is
one universal algorithmic framework for online job admission both with and
without commitments. For scheduling without commitment, we prove that
our algorithm with a competitive ratio of \(O(\frac{1}{\varepsilon})\) is the
best possible (deterministic) for this problem. For different
commitment models, we give the first non-trivial performance bounds.
In the setting where commitment
decisions must be made before a job's slack becomes less than a \(\delta\)-fraction of its size, we prove
a competitive ratio of \(O(\varepsilon \cdot ((\varepsilon -
\delta)\delta^2)^{-1})\), for \(0<\delta<\varepsilon\). When a provider must commit
upon starting a job, our bound is \(O(\varepsilon^{-2})\). A stronger
commitment model, where a provider has to commit at job arrival, does
not admit any bounded competitive ratio, even for randomized
algorithms. Finally, we observe that for scheduling with
commitment the restriction to the "unweighted" throughput model is
essential; if jobs have individual weights, we rule out any bounded
competitive ratio (for deterministic algorithms).

July 17, 2018 
Adrian Fessel (University of Bremen)
Topological Dynamics of Physarum polycephalum Transport Networks
Network topology, as well as dynamic processes such as flow, synchronization of oscillators, or wave activity that take place on a given network topology, have been extensively studied. In self-organizing networks, topology and dynamic processes interact in a unique way: the network adapts its structure based on local activity, enabling the emergence of global dynamic properties on an optimized topology.
Working with Physarum polycephalum as an experimentally accessible model for an adaptive transportation system, we seek to formalize topological development into a sequence of discrete events intended to serve as a basis for studying the interaction of flow and topology. We focus on the reorganization of P. polycephalum networks after fragmentation, a process that exhibits a prominent percolation transition. Unlike most traditional percolation processes, the system size in P. polycephalum is not fixed, which requires a model that allows for dynamical changes in size.
The theoretical description follows a master equation with parameters obtained from statistical analysis of experimental data. Computational investigation of the model recreates the dynamics of the topological phase transition and enables characterization of finite size effects and critical exponents. Comparing the influences of system growth and fusion within the system reveals a significant shift of the transition when system growth dominates.

July 10, 2018 
Sigrid Knust (Osnabrück University)
Synchronous flow shop scheduling problems
A synchronous flow shop is a variant of a non-preemptive permutation flow
shop where transfers of jobs from one machine to the next take place at
the same time. The processing is organized in synchronized cycles which
means that in a cycle all current jobs start at the same time on the
corresponding machines. Then all jobs are processed and have to wait
until the last one is finished. Afterwards, all jobs are moved to the
next machine simultaneously. As a consequence, the processing time of a
cycle is determined by the maximum processing time of the operations
contained in it. Furthermore, only permutation schedules are feasible, i.e.,
the jobs have to be processed in the same order on all machines.
The goal is to find a permutation of the jobs such that the makespan is
minimized.
Motivated by a practical application, we investigate special cases where the
processing times of the cycles are determined by a subset of so-called
dominating machines. Furthermore, we consider problems with additional
resources and setup times.
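The cycle structure described above translates directly into a makespan formula; the following sketch (my own illustration of the model, with indexing conventions that are assumptions) evaluates a given permutation:

```python
def synchronous_makespan(p, perm):
    """Makespan of a synchronous flow shop for a job permutation.
    p[j][i] is the processing time of job j on machine i. In cycle c,
    machine i works on the (c - i)-th job of `perm` (if it exists);
    the cycle lasts as long as its longest operation, and the makespan
    is the sum over all n + m - 1 cycles."""
    n, m = len(perm), len(p[0])
    total = 0
    for c in range(n + m - 1):
        # Operations active in cycle c: machine i holds job perm[c - i].
        ops = [p[perm[c - i]][i] for i in range(m) if 0 <= c - i < n]
        total += max(ops)
    return total
```

Since only the permutation is free, the optimization problem is to choose `perm` minimizing this sum, which is what the talk studies under dominating-machine structure.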

May 15, 2018 
Bertrand Simon (ENS de Lyon)
Task graph scheduling on modern computing platforms
With the increasing complexity of modern computing platforms, many scientific
applications now rely on runtime schedulers. Most of these frameworks model an
application as a directed acyclic graph of tasks, where nodes represent tasks
and edges represent dependences between tasks. In my thesis, I focused on three
theoretical challenges concerning task graph scheduling: exploiting the inherent
parallelism of tasks, efficiently using accelerators, and coping with a limited
memory.
In this talk, I will detail two projects led in this context. The first one
targets a platform composed of two types of processors, such as CPUs and GPUs.
The processing time of each task depends on the type of processor it is
allocated to, and this allocation is one of the main scheduling decisions. We study the
makespan minimization problem in an online setting, where the graph is gradually
discovered, which increases the difficulty of the decision. In a second project,
we focus on reducing the memory consumption of an application in order to avoid
expensive memory transfers. Specifically, we aim at preventing dynamic
schedulers from running out of memory on a sharedmemory platform. These
schedulers generally start multiple tasks in parallel, which may lead to a
memory shortage. We study how to modify the task graph in order to restrict this
parallelism while maintaining performance.

April 24, 2018 
Viktor Bindewald (TU Dortmund)
Robust Bipartite Matchings: Hedging Against Edge Failures
The Bipartite Matching Problem is a well-studied combinatorial optimization problem with numerous practical applications. In real life, optimization problems are modeled using data based on forecasts and estimates, which leads to uncertain problem parameters. One way to take this fact into account is provided by the framework of robust optimization.
While most robust variants of the Bipartite Matching Problem considered in the literature address cost uncertainty, this talk deals with a new robust problem, in which the structure of the underlying bipartite graph is uncertain. In our problem, an additional part of the input is a list of failure scenarios, each described by a subset of edges of the graph that may fail simultaneously. If a scenario emerges, the corresponding edges are deleted from the graph. In the robust problem, the goal is to find a minimum-cost set of edges containing a perfect matching in every scenario, i.e., a robust solution survives any possible incident. We will discuss the computational complexity and present asymptotically tight approximation algorithms. Additionally, we address the following related question: Is it easier to harden an existing infrastructure than to design a new one from scratch? In our setting, this leads to the question of how to robustify a given perfect matching by adding new edges to the ambient graph. For this problem we will present hardness and algorithmic results.

April 10, 2018 
Franziska Eberle (University of Bremen)
On Index Policies in Stochastic Scheduling
We address a fundamental stochastic scheduling problem where jobs with uncertain processing times must be scheduled non-preemptively on \(m\) identical parallel machines. We are given a set \(J\) of \(n\) jobs, where each job \(j\in J\) is modeled by a random variable \(P_j\) with known distribution. The actual realization of a processing time becomes known only by executing a job. The goal is to find a policy \(\Pi\) that decides, at any point in time, which jobs to schedule so as to minimize the expected total completion time, \(\sum_{j\in J} \mathbb{E}[C_j^\Pi]\), where \(C_j^\Pi\) denotes the completion time of job \(j\) under policy \(\Pi\). The scheduling problem can be stated in the standard three-field notation as \(P\,|\,|\,\mathbb{E}[\sum_j C_j]\).
We consider so-called index policies that assign the same priority to jobs with the same probability distribution and schedule jobs one after the other on the first machine that becomes available. Job-based index policies do not take the number of jobs or the number of machines into account. We rule out distribution-independent approximation factors for job-based index policies. Somewhat surprisingly, this lower bound is obtained already for very simple instances with only two types of jobs: identical deterministic jobs and a set of stochastic jobs that all follow the same Bernoulli distribution. For this class of instances we also give a policy that is an \(\mathcal{O}(m)\)-approximation.
This talk is based on work together with Felix Fischer, Nicole Megow, and Jannik Matuschke.
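To make the notion of an index policy concrete, here is a minimal simulation of SEPT (shortest expected processing time first), a classic job-based index policy; the encoding of an instance as (expected time, sampler) pairs is an assumption for illustration, and this is not necessarily the policy analyzed in the talk.

```python
import heapq
import random


def sept_total_completion(jobs, m, rng):
    """Simulate SEPT: sort jobs by expected processing time and greedily
    assign each to the machine that becomes available first. Each job is a
    pair (expected time, sampler) where sampler(rng) draws one realization.
    Returns the total completion time for that realization."""
    order = sorted(jobs, key=lambda job: job[0])
    free = [0.0] * m          # heap of machine-available times
    heapq.heapify(free)
    total = 0.0
    for _, sample in order:
        start = heapq.heappop(free)
        finish = start + sample(rng)
        total += finish
        heapq.heappush(free, finish)
    return total
```

With deterministic jobs of sizes 1, 2, 3 this reproduces the familiar SPT schedule: total completion time 10 on one machine and 7 on two.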

March 15, 2018 
Katrin Casel (Trier University)
Resolving Conflicts for Lower-Bounded Clustering
Show abstract
For most clustering problems, the quality of a solution is usually assessed with respect to a given pairwise distance on the input objects. Approximate solutions for such tasks often rely on this distance being a metric. But what happens if this property does not hold? Especially for computing a clustering such that each cluster has a certain minimum cardinality \(k > 2\) (as required for anonymisation or balanced facility location problems), this becomes problematic. For example, with the objective to minimise the maximum radius or diameter of all clusters, there exists no polynomial-time constant-factor approximation if the triangle inequality is violated, while there exists a 2-approximation for metric distances. This talk introduces methods to resolve, or at least soften, this effect of non-metric distances by devising particular strategies to deal with violations of the triangle inequality (conflicts). Using parameterised algorithmics, it turns out that, if the number of such conflicts is not too large, constant-factor approximations can still be computed efficiently.

December 19, 2017 
Ruben Hoeksma (University of Bremen)
A PTAS for TSP on hyperplanes
Show abstract
TSP with neighborhoods is a long-studied problem in which we are asked to find the shortest tour visiting a given set of regions (neighborhoods).
When the neighborhoods are lines in two dimensions, the problem is efficiently solvable. For lines in three dimensions, the problem is NP-hard, and this remains true in higher dimensions for regions that are low-dimensional subspaces.
The question we try to answer in this talk concerns the setting where the regions are hyperplanes in three or more dimensions. The complexity of this problem is open (no hardness results are known) and, until now, only a recent \(2^d\)-approximation algorithm was known. In this talk, we make a step towards answering this sixteen-year-old open question by designing a PTAS.
At a high level, we will see the underlying ideas, including a (new) sparsification technique for polytopes and a linear program that computes the shortest tour.
This talk is based on work together with Antonios Antoniadis, Krzysztof Fleszar, and Kevin Schewior.

December 12, 2017

Syamantak Das (University of Bremen)
Surviving in Low-Treewidth Graphs
Show abstract
In the Survivable Network Design Problem (SNDP), we are given an undirected graph \(G\) together with costs on edges or vertices and a demand function on source-sink demand pairs \((s_i, t_i, k_i)\), where \(s_i\) and \(t_i\) are vertices in the graph and \(k_i\) is a natural number.
In the edge-connectivity setting (EC-SNDP), we are interested in a minimum-cost subgraph \(H\) of \(G\) in which every demand pair \((s_i, t_i)\) is connected via \(k_i\) edge-disjoint paths. These problems are natural generalizations of Steiner tree and Steiner forest, and capture fault-tolerance requirements in networks.
In general, these problems are NP-hard, and even restrictive special cases of SNDP pose non-trivial challenges.
As such, SNDP has drawn a lot of attention from researchers over the past decades.
Special cases of SNDP, such as Steiner tree, Steiner forest, and \(2\)-edge-connected subgraph, have been studied extensively on many special classes of graphs, e.g., planar graphs, Euclidean metrics, and bounded-treewidth graphs.
In this talk, we shall discuss some recent results on exact algorithms for the general SNDP parameterized by the treewidth of the graph. In particular, we shall demonstrate a fairly general dynamic programming framework based on the tree decomposition of the graph, which can be applied to solve a number of related network design problems.

December 5, 2017

Nicole Megow (University of Bremen)
Banff 2017: Approximation Algorithms and the Hardness of Approximation

November 29, 2017

Thomas Schneider (University of Bremen)
The Taming of the Expressive: Foundations for Efficient Reasoning in Complex Modal and Description Logics
Show abstract
Modal logic and its syntactic variants are widely used in knowledge representation and verification. Applications in those areas require modal logics with a careful balance of the rivalling requirements "high expressivity" and "low computational complexity". The talk will give an overview of our contributions to "taming" expressive temporal, description, and hybrid logics, which consist in identifying fragments with good computational complexity and studying modularity. These contributions fall into three groups: (1) systematic studies of the complexity of sub-Boolean fragments of expressive modal and description logics, (2) theoretic and practical studies of module extraction and modularisation of ontologies, and (3) a study of the complexity of lightweight combinations of temporal and description logics.
The talk will be accessible to a general audience with a background in (theoretical) computer science.

November 21, 2017 
Rolf Drechsler (University of Bremen)
Design and Verification of Cyber-Physical Systems – Challenges and Recent Developments
Show abstract
New challenges need to be faced during the modeling, design, implementation, verification, and test of Cyber-Physical Systems (CPS). For the design of hardware and software in this context, new approaches need to be developed that also take non-functional requirements into account. This talk gives an overview of challenges and recent developments, with a special focus on verification and correctness.

October 24, 2017 
Tobias Mömke (University of Bremen)
Maximum Scatter TSP in Doubling Metrics
Show abstract
We study the problem of finding a tour of \(n\) points in which every edge is long.
More precisely, we wish to find a tour that visits every point exactly once, maximizing the length of the shortest edge in the tour.
The problem is known as Maximum Scatter TSP, and was introduced by Arkin et al. (SODA 1997), motivated by applications in manufacturing and medical imaging.
Arkin et al. gave a \(0.5\)-approximation for the metric version of the problem and showed that this is the best ratio achievable in polynomial time (assuming \(P \neq NP\)).
Arkin et al. raised the question of whether a better approximation ratio can be obtained in the Euclidean plane.
We answer this question in the affirmative in a more general setting, by giving
a \((1-\varepsilon)\)-approximation algorithm for \(d\)-dimensional doubling metrics, with running time \( \tilde{O}(n^3 + 2^{O(K \log K)}) \), where \(K \leq (13/\varepsilon)^d\).
As a corollary we obtain (i) an efficient polynomialtime approximation scheme (EPTAS) for all constant dimensions \(d\), (ii) a polynomialtime approximation scheme (PTAS) for dimension \(d = (\log \log (n))/c\), for a sufficiently large constant \(c\), and (iii) a PTAS for constant \(d\) and \(\varepsilon = \Omega(1/\log \log (n))\). Furthermore, we show the dependence on \(d\) in our approximation scheme to be essentially optimal, unless Satisfiability can be solved in subexponential time.
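To make the objective concrete, here is a tiny brute-force reference implementation (exponential time, illustration only, not the talk's algorithm): over all tours, it maximizes the length of the shortest tour edge.

```python
from itertools import permutations


def dist(p, q):
    """Euclidean distance between two points given as coordinate tuples."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5


def max_scatter_bruteforce(points):
    """Exact Maximum Scatter TSP by enumerating all tours (tiny instances
    only): maximize, over tours, the length of the shortest tour edge."""
    n = len(points)
    best = 0.0
    for perm in permutations(range(1, n)):  # fix point 0 as the tour start
        tour = (0,) + perm
        bottleneck = min(
            dist(points[tour[i]], points[tour[(i + 1) % n]]) for i in range(n)
        )
        best = max(best, bottleneck)
    return best
```

On the four corners of the unit square the optimum is 1: every tour must use at least one side, since each vertex has only one incident diagonal.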

October 17, 2017 
Ruben Hoeksma (University of Bremen)
Optimal Threshold Strategies and Posted Price Mechanisms for Random Arrivals
Show abstract
In this talk, we discuss the following setting. A gambler is presented with a set of random variables that arrive in a random order. When a variable arrives, the gambler sees its value and decides to take it or to move on to the next one. If the gambler takes it, he receives an amount of money equal to the value of the variable and the game ends; otherwise, he is presented with the next random variable. The gambler's aim is to maximize the expected amount of money he receives. This setting is a classic example of an optimal stopping problem and is known as the Prophet Secretary Problem.
For this problem we analyze two different threshold strategies: a non-adaptive strategy, where the thresholds are chosen before any of the variables arrive and before the order is known, and an adaptive strategy, where thresholds are chosen based on the variables that have been rejected so far.
For the non-adaptive strategy we show that the gambler can, in expectation, get at least \(1-1/e\) times the expected maximum value among the random variables (i.e., the value a prophet would receive). We show that no non-adaptive strategy can do better, even for i.i.d. random variables. For the adaptive strategy, we analyze only the i.i.d. setting and show that adaptivity improves the gambler's expectation to \(0.745\) times the expected maximum. This result settles a conjecture by Kertz from 1986.
Finally, we discuss the implications for posted price mechanisms, a commonly used mechanism for selling goods. We show that the results for threshold strategies imply similar bounds for comparisons of posted price mechanisms with optimal auctions.
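To illustrate the non-adaptive side of the setting, the sketch below plays a single fixed threshold (the median of the maximum, a classic choice from prophet inequalities, not the thresholds of the talk) against i.i.d. uniform values and estimates the ratio to the prophet's expected maximum.

```python
import random


def threshold_play(values, thresholds):
    """Accept the first arriving value exceeding its threshold; 0 if none."""
    for v, t in zip(values, thresholds):
        if v > t:
            return v
    return 0.0


def empirical_ratio(n, trials, rng):
    """Compare a single fixed threshold, chosen before any arrivals and
    hence non-adaptive, against the prophet's maximum for i.i.d. U[0,1]."""
    t = 0.5 ** (1.0 / n)  # median of the maximum: P(max <= t) = 1/2
    got, prophet = 0.0, 0.0
    for _ in range(trials):
        vals = [rng.random() for _ in range(n)]
        rng.shuffle(vals)  # random arrival order
        got += threshold_play(vals, [t] * n)
        prophet += max(vals)
    return got / prophet
```

With this particular threshold the empirical ratio lands noticeably below the \(1-1/e\) bound of the talk's tailored thresholds, which is exactly why the choice of thresholds matters.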

October 10, 2017 
Dmitry FeichtnerKozlov (University of Bremen)
Cheeger Graphs
Show abstract
Motivated by higher-dimensional coboundary expansion,
we define a certain family of graphs called Cheeger graphs.
Little is known at present about these graphs. We will give a short,
self-contained introduction to the subject and formulate
several open, purely graph-theoretic problems.

August 29, 2017 
Antonios Antoniadis (Max-Planck-Institut für Informatik)
A Tight Lower Bound for Online Convex Optimization with Switching Costs
Show abstract
We investigate online convex optimization with switching costs (OCO; Lin et al., INFOCOM 2011), a natural online problem arising when right-sizing data centers: A server initially located at position \(p_0\) on the real line is presented with an online sequence of non-negative convex functions \(f_1,f_2,\ldots ,f_n\colon \mathbb{R}\rightarrow \mathbb{R}^+\). In response to each function \(f_i\), the server moves to a new position \(p_i\) on the real line, incurring cost \(|p_i-p_{i-1}| + f_i(p_i)\). The total cost is the sum of the costs of all steps. One is interested in designing competitive algorithms.
In this talk, we settle the problem in the classical sense: We give a lower bound of \(2\) on the competitive ratio of any, possibly randomized, online algorithm, matching the competitive ratio of previously known deterministic online algorithms (Andrew et al., COLT 2013/arXiv 2015; Bansal et al., APPROX 2015). It had previously been conjectured that \((2-\varepsilon)\)-competitive algorithms exist for some \(\varepsilon > 0\) (Bansal et al., APPROX 2015).
Joint work with Kevin Schewior, to appear in WAOA 2017.
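The step cost in this model is easy to state in code; the following sketch simply evaluates the total cost of an arbitrary response sequence (the instance and the positions in the example are made up for illustration).

```python
def oco_cost(p0, positions, funcs):
    """Total cost of responses p_1..p_n to convex functions f_1..f_n:
    movement |p_i - p_{i-1}| plus hit cost f_i(p_i) in each step."""
    total, prev = 0.0, p0
    for p, f in zip(positions, funcs):
        total += abs(p - prev) + f(p)
        prev = p
    return total
```

For example, starting at 0 and chasing the minimizers of \((x-1)^2\) and then \(x^2\) pays one unit of movement per step and no hit cost, for a total of 2.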

August 29, 2017 
Peter Kling (Universität Hamburg)
Multiprocessor Scheduling with a Shareable Resource
Show abstract
We consider \(n\) jobs that have to be scheduled on \(m\) processors. The processors share a common, continuously divisible resource. In each discrete time step, the scheduler can change the assignment of the resource to the processors. Jobs come with a processing volume and a resource requirement. When a processor currently has a share \(R\) of the resource and works on a job with processing volume \(p\) and resource requirement \(r\), exactly \(\min\{R/r, 1\}\) units of the job's processing volume are processed. That is, doubling the resource also doubles the speed at which a job is processed, but this speedup is capped. Migration and preemption are not allowed, and the goal is to minimize the makespan.
In this talk, I present a \((1 + O(1/m))\)-approximation algorithm if all jobs have unit work and a \((2 + O(1/m))\)-approximation algorithm for jobs with arbitrary workloads. The algorithms employ a sliding window technique and are quite simple and intuitive. This problem generalizes bin packing with cardinality constraints and splittable items, for which our results improve the approximation ratio achieved by a fast and simple algorithm from \(2 - O(1/m)\) to \(1 + O(1/m)\).
This is joint work with Alexander Mäcker, Sören Riechers, and Alexander Skopalik.
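A minimal simulation of the processing model with its capped rate \(\min\{R/r, 1\}\); the equal-share schedule below is a naive baseline written for illustration, not the sliding-window algorithm of the talk.

```python
def makespan_equal_share(jobs, m):
    """Run up to m jobs at a time, splitting one unit of shared resource
    equally among the active processors. A job with remaining volume p and
    requirement r, holding share R, advances at rate min(R / r, 1)."""
    remaining = list(jobs)  # list of (volume, requirement) pairs
    time = 0.0
    while remaining:
        active = remaining[:m]
        share = 1.0 / len(active)
        rates = [min(share / r, 1.0) for (_, r) in active]
        # advance until the first active job completes
        dt = min(p / rate for (p, _), rate in zip(active, rates))
        time += dt
        leftover = [(p - rate * dt, r)
                    for (p, r), rate in zip(active, rates)
                    if p - rate * dt > 1e-12]
        remaining = leftover + remaining[m:]
    return time
```

Note the capping effect: two unit jobs with requirement 1 on two processors each get share 1/2 and rate 1/2, so the makespan is 2, no better than running them one after the other on a single processor.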

June 28, 2017 
Jan Hackfeld (TU Berlin)
Space-optimal collaborative graph exploration
Show abstract
We consider the problem of exploring an undirected and initially unknown graph by a set of cooperative agents. The vertices of the graph are unlabeled, and the edges incident to a vertex have locally distinct labels. In this setting, it is known that for one agent \(\Theta(\log n)\) bits of memory are necessary and sufficient to explore any graph with at most \(n\) vertices. We show that \(O(\log \log n)\) cooperative agents with only constant memory can explore any graph with at most \(n\) vertices in polynomial time. We match this result with a lower bound showing that the number of agents needed for exploring any graph with at most \(n\) vertices is already \(\Omega(\log \log n)\) when we allow each agent to have at most \(O((\log n)^{1-\varepsilon})\) bits of memory for some \(\varepsilon > 0\). In particular, this implies that \(\Omega(\log \log n)\) agents with constant memory are necessary for exploration.

June 21, 2017 
Ilya Chernykh (Novosibirsk State University)
Open Problems in Open Shop Routing

June 20, 2017 
Markus Jablonka (University of Bremen)
Oblivious Deterministic Maximization of Submodular Functions
Show abstract
Submodular functions are set functions that capture the natural "diminishing returns" property and can be seen as a discrete analogue of convex functions. The problem of maximizing a submodular function is as follows: Given a set \(E\) of elements and a set function \(f: 2^E\rightarrow \mathbb{Q}\) (typically accessed through an oracle), we want to determine a subset \(S\) of \(E\) with maximum value \(f(S)\). This is an NP-hard problem for which several approximation algorithms are known. These algorithms are either randomized or adaptive, in the sense that they choose which function values to look up after having already seen some others, and in this way adapt to the function. In this talk we discuss a non-adaptive, or oblivious, deterministic approach: all sets for which the value of the function will be queried are chosen from the beginning as a list. The set on this list with the highest function value is then returned as an approximate solution. The focus of the talk will be on choosing such a list and determining the quality of this solution.
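A toy version of the oblivious idea (the query lists in the talk are more sophisticated): fix the list of queried sets before seeing any value of \(f\), evaluate \(f\) on each listed set, and return the best. The cut function below is a standard non-monotone submodular example.

```python
def cut_value(edges):
    """Graph cut function f(S) = number of edges crossing (S, complement);
    a standard example of a non-monotone submodular function."""
    def f(S):
        return sum(1 for u, v in edges if (u in S) != (v in S))
    return f


def oblivious_maximize(f, ground):
    """Oblivious deterministic maximization, toy variant: the query list
    (empty set, full ground set, and every singleton) is fixed before any
    value of f is seen; return the listed set with the largest value."""
    elems = sorted(ground)
    queries = [frozenset(), frozenset(elems)]
    queries += [frozenset([e]) for e in elems]
    return max(queries, key=f)
```

On the path graph 0–1–2, the best listed set is the singleton {1}, which cuts both edges; the interesting question, as in the talk, is how good such a fixed list can be in the worst case.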

May 31, 2017 
Tim Güneysu (University of Bremen)
Challenges of Modern Cryptography

May 24, 2017 
Franziska Eberle (University of Bremen)

May 10, 2017 
Syamantak Das (University of Bremen)
Online and Approximation Algorithms for Scheduling and Network Design
Show abstract
This will be a talk covering my general research interests: approximation and online algorithms for scheduling and network design.
First, I will speak about online scheduling problems. We propose a new model on top of classical competitive analysis that we believe is useful in dealing with uncertainty in the input data. We show that one can design simple online algorithms with good guarantees if one is allowed to reject, that is, not process, a tiny fraction of the input. We propose this in light of the fact that such guarantees are not achievable for certain classical scheduling objectives, such as makespan minimization or minimizing the (weighted) maximum flow time.
Joint work with: Anamitra Roy Choudhury, Naveen Garg and Amit Kumar.
In the second part, I shall talk about network design problems. We give improved approximation guarantees for a classical network design problem called the Group Steiner Tree, when the input graph has a special structure. In particular, we prove a generic reduction of a problem instance on such graphs to a problem instance on trees that allows us to carry out the desired improvement. The reduction may be of independent interest.
Joint work with: Parinya Chalermsook, Bundit Laekhanukit and Daniel Vaz.

May 3, 2017 
Nicole Megow (University of Bremen)
Online Resource Minimization
Show abstract
Minimizing resource usage is key to achieving economic, environmental, or societal goals. We consider the fundamental problem of minimizing the number of machines necessary for feasibly scheduling preemptive jobs with release dates and hard deadlines. We consider the online variant of this problem, in which every job becomes known to the online algorithm only at its release date. We present new competitive algorithms for this problem. We also discuss the power of job migration and show somewhat surprising, strong lower bounds on the best possible performance guarantee.
Joint work with: Lin Chen and Kevin Schewior.

March 1, 2017 
Marek Adamczyk (University of Bremen)
When the Optimum is also Blind: A New Perspective on Universal Optimization
Show abstract
Consider the following variant of the set cover problem. We are given a universe \(U=\{1,\ldots,n\}\) and a collection of subsets \(\mathcal{C} = \{S_1,\ldots,S_m\}\) with \(S_i \subseteq U\). For every element \(u \in U\) we need to fix a set \(\phi(u) \in \mathcal{C}\) such that \(u\in \phi(u)\). Once we construct and fix the mapping \(\phi: U \to \mathcal{C}\), a subset \(X \subseteq U\) of the universe is revealed, and we need to cover all elements of \(X\) with exactly \(\phi(X):=\bigcup_{u\in X} \phi(u)\). The goal is to find a mapping such that the cover \(\phi(X)\) is as cheap as possible.
This is an example of a universal problem where the solution has to be created before the actual instance to deal with is revealed.
Such problems appear naturally in settings where we need to optimize under uncertainty and it may be too expensive to start computing a good solution only once the input is revealed. A rich body of work has been devoted to investigating such problems under the regime of worst-case analysis, i.e., we measure how good the solution is by looking at the worst-case ratio: universal solution for a given instance vs. optimum solution for the same instance.
As the universal solution is significantly more constrained, it is typical that such a worst-case ratio is actually quite big. One way to make the problem less vulnerable to such extreme worst cases is to assume that the instance for which we will have to create a solution is drawn randomly from some probability distribution. In this case one wants to minimize the expected value of the ratio: universal solution vs. optimum solution. Here the bounds obtained are indeed smaller than in the worst-case setting.
But even in this case we still compare apples to oranges, as no universal solution is able to reproduce the optimum solution for every possible instance. What if we compared our approximate universal solution against an optimal universal solution that obeys the same rules as we do?
In the talk we will see that under this viewpoint, but still in the stochastic variant, we can indeed obtain better bounds than in the expected-ratio model. For the Set Cover problem, on which the main ideas and characteristics of the model will be presented, we will see how one can obtain an \(H_n\)-approximation, which matches the approximation ratio from the classic deterministic setup.
Moreover, this result holds for all possible probability distributions over \(U\) that have a polynomially large carrier, while all previous results pertained to a model in which elements were sampled independently. The result is based on rounding a proper configuration IP that captures the optimal universal solution, and using tools from submodular optimization.
Joint work with: Fabrizio Grandoni, Stefano Leonardi and Michał Włodarczyk.
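A naive universal rule for this set cover variant, written out for illustration only (it is not the \(H_n\)-approximation of the talk): map each element to its cheapest containing set; the cover for a revealed \(X\) is then forced by the pre-committed mapping.

```python
def cheapest_mapping(universe, sets, cost):
    """Universal rule fixed before X is revealed: phi(u) is the index of
    the cheapest set containing u."""
    phi = {}
    for u in universe:
        containing = [i for i, s in enumerate(sets) if u in s]
        phi[u] = min(containing, key=lambda i: cost[i])
    return phi


def universal_cover_cost(phi, cost, X):
    """Cost of the pre-committed cover phi(X): the union of the sets
    phi(u) over the revealed elements u in X, each paid for once."""
    return sum(cost[i] for i in {phi[u] for u in X})
```

Once \(\phi\) is fixed, any revealed \(X\) is covered without further optimization; comparing this cost to the best universal mapping, rather than to the offline optimum, is exactly the viewpoint discussed above.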

February 14, 2017 
Lukas Nölke (University of Paderborn)
Recent Developments in Nowhere-Zero Flow Problems

December 21, 2016 
Felix Fischer (University of Glasgow)
Truthful Outcomes from Non-Truthful Position Auctions
Show abstract
The area of mechanism design is concerned with the development of algorithms that produce good outcomes given inputs from selfinterested individuals. Much of mechanism design theory is dominated by the VickreyClarkeGroves (VCG) mechanism, which makes it optimal for each individual to provide its true input. In the real world the VCG mechanism is used with surprising rarity, and some reasons for this are known.
We identify another property of the mechanism that may be problematic in practice, a relative lack of robustness to inaccuracies in the choice of its parameters.
Joint work with Paul Dütting and David C. Parkes.

December 7, 2016 
Andreas Tönnis (University of Bonn)
Online Algorithms with Random Order

November 29, 2016 
Syamantak Das (University of Bremen)
A Colorful Routing Problem on Tree Networks
