December 12, 2017

Syamantak Das (University of Bremen)
Surviving in Low-Treewidth Graphs
In the Survivable Network Design Problem (SNDP), we are given an undirected graph \(G\) together with costs on edges or vertices and a demand function on source-sink pairs \((s_i, t_i, k_i)\), where \(s_i\) and \(t_i\) are vertices of the graph and \(k_i\) is a natural number.
In the edge-connectivity setting (EC-SNDP), we are interested in a minimum-cost subgraph \(H\) of \(G\) in which every demand pair \((s_i, t_i)\) is connected via \(k_i\) edge-disjoint paths. These problems are natural generalizations of Steiner tree and Steiner forest, and capture fault-tolerance requirements in networks.
In general, these problems are NP-hard, and even restrictive special cases of SNDP pose non-trivial challenges.
As such, SNDP has drawn a lot of attention from researchers over the past decades.
Special cases of SNDP, such as Steiner Tree, Steiner Forest, and the \(2\)-edge-connected subgraph problem, have been studied extensively on many special classes of graphs, e.g. planar graphs, Euclidean metrics, and bounded-treewidth graphs.
In this talk, we shall discuss some recent results on exact algorithms for the general SNDP parameterized by the treewidth of the graph. In particular, we shall demonstrate a fairly general dynamic programming framework based on a tree decomposition of the graph, which can be applied to solve a number of related network design problems.
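By Menger's theorem, a demand pair \((s_i, t_i)\) is connected by \(k_i\) edge-disjoint paths in a subgraph \(H\) exactly when a unit-capacity maximum flow from \(s_i\) to \(t_i\) in \(H\) has value at least \(k_i\). A minimal sketch of this feasibility check (this is not the treewidth-based DP from the talk; the function name is illustrative):

```python
from collections import defaultdict, deque

def edge_disjoint_paths(edges, s, t):
    """Count edge-disjoint s-t paths via unit-capacity max-flow (Edmonds-Karp)."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v in edges:  # model each undirected edge as two unit-capacity arcs
        cap[(u, v)] += 1
        cap[(v, u)] += 1
        adj[u].add(v)
        adj[v].add(u)
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        v = t
        while parent[v] is not None:  # push one unit of flow along the path
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1
```

On a 4-cycle, two edge-disjoint paths connect opposite vertices, so a demand \((s, t, 2)\) would be satisfied while \((s, t, 3)\) would not.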

December 5, 2017

Nicole Megow (University of Bremen)
Banff 2017: Approximation Algorithms and the Hardness of Approximation

November 29, 2017 Wednesday, 10:30

Thomas Schneider (University of Bremen)
The Taming of the Expressive: Foundations for Efficient Reasoning in Complex Modal and Description Logics
Modal logic and its syntactic variants are widely used in knowledge representation and verification. Applications in those areas require modal logics with a careful balance of the rivalling requirements "high expressivity" and "low computational complexity". The talk will give an overview of our contributions to "taming" expressive temporal, description, and hybrid logics, which consist in identifying fragments with good computational complexity and studying modularity. These contributions fall into three groups: (1) systematic studies of the complexity of sub-Boolean fragments of expressive modal and description logics, (2) theoretic and practical studies of module extraction and modularisation of ontologies, and (3) a study of the complexity of lightweight combinations of temporal and description logics.
The talk will be accessible to a general audience with a background in (theoretical) computer science.

November 21, 2017 
Rolf Drechsler (University of Bremen)
Design and Verification of Cyber-Physical Systems – Challenges and Recent Developments
New challenges need to be faced during the modeling, design, implementation, verification and test of Cyber-Physical Systems (CPS). For the design of hardware and software in this context, new approaches need to be developed that also take non-functional requirements into account. This talk gives an overview of challenges and recent developments, with a special focus on verification and correctness.

October 24, 2017 
Tobias Mömke (University of Bremen)
Maximum Scatter TSP in Doubling Metrics
We study the problem of finding a tour of \(n\) points in which every edge is long.
More precisely, we wish to find a tour that visits every point exactly once, maximizing the length of the shortest edge in the tour.
The problem is known as Maximum Scatter TSP, and was introduced by Arkin et al. (SODA 1997), motivated by applications in manufacturing and medical imaging.
Arkin et al. gave a \(0.5\)-approximation for the metric version of the problem and showed that this is the best possible ratio achievable in polynomial time (assuming \(P \neq NP\)).
Arkin et al. raised the question of whether a better approximation ratio can be obtained in the Euclidean plane.
We answer this question in the affirmative in a more general setting, by giving
a \((1-\varepsilon)\)-approximation algorithm for \(d\)-dimensional doubling metrics, with running time \( \tilde{O}(n^3 + 2^{O(K \log K)}) \), where \(K \leq (13/\varepsilon)^d\).
As a corollary we obtain (i) an efficient polynomial-time approximation scheme (EPTAS) for all constant dimensions \(d\), (ii) a polynomial-time approximation scheme (PTAS) for dimension \(d = (\log \log n)/c\), for a sufficiently large constant \(c\), and (iii) a PTAS for constant \(d\) and \(\varepsilon = \Omega(1/\log \log n)\). Furthermore, we show the dependence on \(d\) in our approximation scheme to be essentially optimal, unless Satisfiability can be solved in subexponential time.
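For intuition, the Maximum Scatter objective can be evaluated exactly on tiny instances by enumerating all tours; a brute-force sketch for illustration only (exponential in \(n\), unrelated to the algorithm from the talk):

```python
from itertools import permutations
from math import dist  # Euclidean distance, Python 3.8+

def max_scatter_tour(points):
    """Brute-force the tour maximizing the length of its shortest edge."""
    n = len(points)
    best_val, best_tour = -1.0, None
    # Fix point 0 first so rotations of the same tour are not re-counted.
    for perm in permutations(range(1, n)):
        tour = (0,) + perm
        bottleneck = min(dist(points[tour[i]], points[tour[(i + 1) % n]])
                         for i in range(n))
        if bottleneck > best_val:
            best_val, best_tour = bottleneck, tour
    return best_val, best_tour
```

On the four corners of a unit square every edge has length \(1\) or \(\sqrt{2}\), and since only two diagonals exist, every tour has bottleneck \(1\).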

October 17, 2017 
Ruben Hoeksma (University of Bremen)
Optimal Threshold Strategies and Posted Price Mechanisms for Random Arrivals
In this talk, we discuss the following setting. A gambler is presented with a set of random variables that arrive in random order. When a variable arrives, the gambler sees its value and decides to take it or move on to the next. If the gambler takes it, he receives an amount of money equal to the value of the variable and the game ends; otherwise he is presented with the next random variable. The gambler's aim is to maximize the expected amount of money he receives. This setting is a classic example of an optimal stopping problem and is known as the Prophet Secretary Problem.
For this problem we analyze two different threshold strategies: a non-adaptive strategy, where the thresholds are chosen before any of the variables arrive and before the order is known, and an adaptive strategy, where thresholds are chosen based on the variables rejected so far.
For the non-adaptive strategy we show that the gambler can, in expectation, get at least \(1-1/e\) times the expected maximum value among the random variables (i.e. the value a prophet would receive). We show that no non-adaptive strategy can do better, even for i.i.d. random variables. For the adaptive strategy, we analyze only the i.i.d. setting and show that adaptivity improves the gambler's expectation to \(0.745\) times the expected maximum. This result settles a conjecture by Kertz from 1986.
Finally, we discuss the implications for posted price mechanisms, a commonly used mechanism for selling goods. We show that the results for threshold strategies imply similar bounds for comparisons of posted price mechanisms with optimal auctions.
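A toy Monte Carlo experiment illustrates the simplest special case of a non-adaptive strategy: a single fixed threshold applied to i.i.d. uniform values. The parameters below (five values, threshold \(0.7\)) are purely illustrative and not from the talk:

```python
import random

def simulate(n=5, threshold=0.7, trials=20000, seed=1):
    """Monte Carlo ratio: fixed-threshold gambler vs. the prophet (max value)."""
    rng = random.Random(seed)
    gambler, prophet = 0.0, 0.0
    for _ in range(trials):
        values = [rng.random() for _ in range(n)]  # i.i.d. uniform on [0, 1]
        prophet += max(values)
        # Take the first value above the threshold, else end with nothing.
        gambler += next((v for v in values if v >= threshold), 0.0)
    return gambler / prophet

ratio = simulate()  # well above the worst-case guarantee 1 - 1/e ~ 0.632
```

For this easy uniform instance the observed ratio is around \(0.85\); the \(1-1/e\) bound is a guarantee against worst-case distributions.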

October 10, 2017 
Dmitry FeichtnerKozlov (University of Bremen)
Cheeger Graphs
Motivated by higher-dimensional coboundary expansion
we define a certain family of graphs which are called Cheeger graphs.
Little is known at present about these graphs. We will make a short
selfcontained introduction into the subject and formulate
several open purely graphtheoretic problems.

August 29, 2017 
Antonios Antoniadis (MaxPlanckInstitut für Informatik)
A Tight Lower Bound for Online Convex Optimization with Switching Costs
We investigate online convex optimization with switching costs (OCO; Lin et al., INFOCOM 2011), a natural online problem arising when right-sizing data centers: A server initially located at \(p_0\) on the real line is presented with an online sequence of nonnegative convex functions \(f_1, f_2, \ldots, f_n: \mathbb{R} \rightarrow \mathbb{R}^+\). In response to each function \(f_i\), the server moves to a new position \(p_i\) on the real line, resulting in cost \(|p_i - p_{i-1}| + f_i(p_i)\). The total cost is the sum of the costs of all steps. One is interested in designing competitive algorithms.
In this talk, we solve the problem in the classical sense: We give a lower bound of \(2\) on the competitive ratio of any, possibly randomized, online algorithm, matching the competitive ratio of previously known deterministic online algorithms (Andrew et al., COLT 2013/arXiv 2015; Bansal et al., APPROX 2015). It had previously been conjectured that \((2-\varepsilon)\)-competitive algorithms exist for some \(\varepsilon > 0\) (Bansal et al., APPROX 2015).
Joint work with Kevin Schewior, to appear in WAOA 2017.

August 29, 2017 
Peter Kling (Universität Hamburg)
Multiprocessor Scheduling with a Shareable Resource
We consider \(n\) jobs that have to be scheduled on \(m\) processors. The processors
share a common, continuously divisible resource. In each discrete time step, the
scheduler can change the assignment of the resource to the processors. Jobs come
with a processing volume and a resource requirement. When a processor currently
has a share \(R\) of the resource and works on a job with processing volume \(p\)
and resource requirement \(r\), exactly \(\min\{R/r, 1\}\) units of the job's
processing volume are processed. That is, doubling the resource also doubles the
speed a job is processed with, but this speedup is capped. Migration and
preemption are not allowed, and the goal is to minimize the makespan.
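The per-step rate rule above determines completion times directly; a one-line sketch, assuming the job's resource share \(R\) stays fixed while it runs:

```python
from math import ceil

def steps_to_finish(p, r, R):
    """Time steps for a processor with resource share R to finish a job with
    processing volume p and resource requirement r (rate capped at 1)."""
    rate = min(R / r, 1.0)  # min(R/r, 1) units of volume processed per step
    return ceil(p / rate)
```

For example, halving the share below the requirement doubles the finishing time, while any share beyond the requirement gives no further speedup.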
In this talk, I present a \((1 + O(1/m))\)-approximation algorithm if all jobs have
unit work and a \((2 + O(1/m))\)-approximation algorithm for jobs with arbitrary
workload. The algorithms employ a sliding-window technique and are quite simple
and intuitive. This problem generalizes bin packing with cardinality constraints
and splittable items, for which our results improve the approximation ratio
achieved by a fast and simple algorithm from previously \(2 - O(1/m)\) to \(1 + O(1/m)\).
This is joint work with Alexander Mäcker, Sören Riechers, and Alexander
Skopalik.

June 28, 2017 
Jan Hackfeld (TU Berlin)
Space-optimal collaborative graph exploration
We consider the problem of exploring an undirected and initially unknown graph by a set of cooperative agents. The vertices of the graph are unlabeled, and the edges incident to a vertex have locally distinct labels. In this setting, it is known that for one agent \(\Theta(\log n)\) bits of memory are necessary and sufficient to explore any graph with at most \(n\) vertices. We show that \(O(\log\log n)\) cooperative agents with only constant memory can explore any graph with at most \(n\) vertices in polynomial time. We match this result with a lower bound showing that the number of agents needed for exploring any graph with at most \(n\) vertices is already \(\Omega(\log\log n)\) when we allow each agent to have at most \(O(\log^{1-\varepsilon} n)\) bits of memory for some \(\varepsilon > 0\). In particular, this implies that \(\Omega(\log\log n)\) agents with constant memory are necessary for exploration.

June 21, 2017 
Ilya Chernykh (Novosibirsk State University)
Open Problems in Open Shop Routing

June 20, 2017 
Markus Jablonka (University of Bremen)
Oblivious Deterministic Maximization of Submodular Functions
Submodular functions are set functions that capture the natural "diminishing returns" property and can be seen as a discrete analogue of convex functions. The problem of maximizing a submodular function is as follows: Given a set \(E\) of elements and a set function \(f: 2^E \rightarrow \mathbb{Q}\) (typically accessed through an oracle), we want to determine a subset \(S\) of \(E\) with maximum value \(f(S)\). This is an NP-hard problem for which several approximation algorithms are known. These algorithms are either randomized or adaptive, in the sense that they choose which function values to look up after having already seen some others, and in this way adapt to the function. In this talk we discuss a non-adaptive, or oblivious, deterministic approach: all sets on which the function will be queried are chosen in advance, as a fixed list. The set on this list with the highest function value is then returned as an approximate solution. The focus of the talk is on choosing such a list and determining the quality of this solution.
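The oblivious scheme can be illustrated with a coverage function, a standard example of a monotone submodular function. The ground sets and candidate list below are arbitrary, purely for illustration:

```python
def coverage(ground_sets):
    """Build f(S) = number of elements covered by the chosen sets,
    a classic monotone submodular function."""
    def f(indices):
        covered = set()
        for i in indices:
            covered |= ground_sets[i]
        return len(covered)
    return f

def oblivious_maximize(f, query_list):
    """Query f only on a list of sets fixed in advance; return the best one."""
    return max(query_list, key=f)
```

Note that the query list is committed to before any function value is seen, which is exactly what makes the approach non-adaptive.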

May 31, 2017 
Tim Güneysu (University of Bremen)
Challenges of Modern Cryptography

May 24, 2017 
Franziska Eberle (University of Bremen)

May 10, 2017 
Syamantak Das (University of Bremen)
Online and Approximation Algorithms for Scheduling and Network Design
This will be a talk covering my general research interests, approximation and online algorithms for scheduling and network design.
First, I will speak about online scheduling problems. We propose a new model on top of the classical competitive analysis that we believe is useful in dealing with uncertainty in the input data. We show that one can design simple online algorithms with good guarantees if one is allowed to reject, that is, not process, a tiny fraction of the input. We propose this model in light of the fact that, without rejection, such guarantees are impossible for certain classical scheduling objectives, such as makespan minimization or minimizing the (weighted) maximum flow-time.
Joint work with: Anamitra Roy Choudhury, Naveen Garg and Amit Kumar.
In the second part, I shall talk about network design problems. We give improved approximation guarantees for a classical network design problem called the Group Steiner Tree, when the input graph has a special structure. In particular, we prove a generic reduction of a problem instance on such graphs to a problem instance on trees that allows us to carry out the desired improvement. The reduction may be of independent interest.
Joint work with: Parinya Chalermsook, Bundit Laekhanukit and Daniel Vaz.

May 3, 2017 
Nicole Megow (University of Bremen)
Online Resource Minimization
Minimizing resource usage is a key to achieving economic, environmental, or societal goals. We consider the fundamental problem of minimizing the number of machines that is necessary for feasibly scheduling preemptive jobs with release dates and hard deadlines. We consider the online variant of this problem in which every job becomes known to the online algorithm only at its release date. We present new competitive algorithms for this problem. We also discuss the power of job migration and show somewhat surprising strong lower bounds on the best possible performance guarantee.
Joint work with: Lin Chen and Kevin Schewior.

March 1, 2017 
Marek Adamczyk (University of Bremen)
When the Optimum is also Blind: A New Perspective on Universal Optimization
Consider the following variant of the set cover problem. We are given a universe \(U=\{1,\ldots,n\}\) and a collection of subsets \(\mathcal{C} = \{S_1,\ldots,S_m\}\), where \(S_i \subseteq U\). For every element \(u \in U\) we need to find a set \(\phi(u) \in \mathcal{C}\) such that \(u \in \phi(u)\). Once we construct and fix the mapping \(\phi: U \rightarrow \mathcal{C}\), a subset \(X \subseteq U\) of the universe is revealed, and we need to cover all elements of \(X\) with exactly \(\phi(X) := \bigcup_{u\in X} \phi(u)\). The goal is to find a mapping such that the cover \(\phi(X)\) is as cheap as possible.
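Evaluating a fixed mapping on a realized subset is straightforward; a minimal sketch with illustrative set names and costs (the hard part, choosing \(\phi\) well, is the subject of the talk):

```python
def universal_cover_cost(phi, costs, X):
    """Cost paid by the universal solution on realized subset X: each
    distinct set phi(u), for u in X, is bought exactly once."""
    used = {phi[u] for u in X}   # duplicates collapse, so shared sets pay once
    return sum(costs[s] for s in used)
```

Elements mapped to the same set share its cost, which is why a good mapping tries to route many likely-to-appear elements through few cheap sets.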
This is an example of a universal problem where the solution has to be created before the actual instance to deal with is revealed.
Such problems appear naturally in settings where we need to optimize under uncertainty and it may be too expensive to start computing a good solution only once the input is revealed. A rich body of work has been devoted to investigating such problems under the regime of worst-case analysis, i.e., when we measure how good the solution is by looking at the worst-case ratio: universal solution for a given instance vs. optimum solution for the same instance.
As the universal solution is significantly more constrained, such a worst-case ratio is typically quite large. One way to view the problem that is less vulnerable to such extreme worst cases is to assume that the instance for which we will have to create a solution is drawn randomly from some probability distribution. In this case one wants to minimize the expected value of the ratio: universal solution vs. optimum solution. Here the bounds obtained are indeed smaller than under the worst-case ratio.
But even in this case we still compare apples to oranges as no universal solution is able to construct the optimum solution for every possible instance. What if we would compare our approximate universal solution against an optimal universal solution that obeys the same rules as we do?
In the talk we will see that under this viewpoint, but still in the stochastic variant, we can indeed obtain better bounds than in the expected-ratio model. For the Set Cover problem, on which the main ideas and characteristics of the model will be presented, we will see how one can obtain an \(H_n\) approximation, which matches the approximation ratio from the classic deterministic setup.
Moreover, this result holds for all possible probability distributions over \(U\) that have a polynomially large carrier, while all previous results pertained to a model in which elements were sampled independently. The result is based on rounding a proper configuration IP that captures the optimal universal solution, and using tools from submodular optimization.
Joint work with: Fabrizio Grandoni, Stefano Leonardi and Michał Włodarczyk.

February 14, 2017 
Lukas Nölke (University of Paderborn)
Recent Developments in NowhereZero Flow Problems

December 21, 2016 
Felix Fischer (University of Glasgow)
Truthful Outcomes from Non-Truthful Position Auctions
The area of mechanism design is concerned with the development of algorithms that produce good outcomes given inputs from selfinterested individuals. Much of mechanism design theory is dominated by the VickreyClarkeGroves (VCG) mechanism, which makes it optimal for each individual to provide its true input. In the real world the VCG mechanism is used with surprising rarity, and some reasons for this are known.
We identify another property of the mechanism that may be problematic in practice, a relative lack of robustness to inaccuracies in the choice of its parameters.
Joint work with Paul Dütting and David C. Parkes.

December 7, 2016 
Andreas Tönnis (University of Bonn)
Online Algorithms with Random Order

November 29, 2016 
Syamantak Das (University of Bremen)
A Colorful Routing Problem on Tree Networks
