- Hugo Férée (University of Lille 3)
**Game Semantics Approach to Higher-Order Complexity**

ABSTRACT: Classical complexity theory provides a well-known notion of complexity for first-order functions (i.e. functions over integers or binary words), and even a less known but well understood notion of complexity for second-order functions (i.e. functions over first-order functions), together with a class of feasible functions analogous to the standard class of polynomial-time computable (first-order) functions. However, no satisfactory notion of complexity at higher types has been given yet. We propose here a definition of complexity for higher-order functions (more precisely, PCF functionals) based on game semantics, i.e. where functions are represented by strategies and the application operation corresponds to the confrontation of such strategies in a game. In particular, we define a complexity class which coincides with the usual notions of polynomial-time complexity at first and second order, and satisfies several properties one would expect from a higher-order class of feasible functions.

- Anupam Das (ENS, Lyon)
**Proof Complexity of Deep Inference: a Survey**

ABSTRACT: In this talk I survey the current state of research on the complexity of deep inference proofs in classical propositional logic, and introduce a recent programme of research to develop this subject further. Deep inference calculi, due to Guglielmi and others, differ from traditional systems by allowing proof rules to operate on *any* connective occurring in a formula, as opposed to just the main connective. Consequently such calculi are more fine-grained and admit smaller proofs of benchmark classes of tautologies. Beyond purely theoretical motivations, this is advantageous from the point of view of proof communication, allowing for compact and easy-to-check proof certificates. I present a graph-based formalism called 'atomic flows', due to Guglielmi and Gundersen and building on previous work by Buss, together with a rewriting system on them that models proof normalisation while preserving complexity properties. I show how this abstraction has been influential in work up to now, yielding surprisingly strong upper bounds and simulations in analytic deep inference. The biggest open question in the area is the relative complexity of the minimal system KS and a version that allows a form of sharing, KS+. To this end I will consider systems of bounded arithmetic to serve as uniform counterparts to these nonuniform propositional systems. Owing to the peculiarities of deep inference calculi, the search for corresponding arithmetic theories takes us into the largely unexplored territories of intuitionistic and substructural arithmetics, requiring novel interplays of tools at the interface of logic and complexity.

- Yevgeny Kazakov (University of Ulm)
**Towards Practical Algorithms with Optimal Complexity for Description Logics**

ABSTRACT: Description Logics are a family of languages used for knowledge representation and reasoning, notably as the basis of the Web Ontology Language OWL standardized by the W3C. These languages have been designed to achieve a balance between the expressivity required for applications and the computational complexity of the reasoning problems involved in such applications. When it comes to reasoning procedures for Description Logics, however, there is a significant gap between computationally optimal and practical algorithms. Algorithms that were described to establish optimal complexity bounds are often not practical because their typical-case performance is close to their worst-case performance. Practical algorithms, on the other hand, perform well on typical use cases, but their complexity on some pathological inputs can be far worse than optimal. In this talk I will present a new kind of 'consequence-based' reasoning procedures that aim at closing this gap.
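As a sketch (an illustration of the general style, not taken from the abstract), consequence-based procedures maintain a set of derived subsumptions and saturate it under inference rules; one rule in this style, familiar from completion-based reasoning in the EL family of Description Logics, derives a new subsumption from an existential restriction:

```latex
\frac{A \sqsubseteq \exists R.B \qquad B \sqsubseteq C \qquad \exists R.C \sqsubseteq D}
     {A \sqsubseteq D}
```

Saturation with such rules derives only consequences of the input ontology, which is what makes the typical-case behaviour predictable.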

- Emanuel Kieroński (University of Wrocław)
**Two-Variable Logics with Equivalence Relations: From NP to 2NExpTime and Undecidability**

ABSTRACT: Propositional modal logic, the two-variable guarded fragment, and the two-variable fragment of first-order logic are three prominent formalisms, well motivated in various fields of computer science. All of them, however, have important limitations in their expressive power. In particular, none of them can express that a relation is transitive or that it is an equivalence relation. In this talk I am going to survey the complexity results concerning the satisfiability problem for the above-mentioned logics extended with equivalence relations. The starting point will be the two-variable guarded fragment with equivalence relations in guards; this example lets me illustrate the arising issues in more detail. Both the satisfiability and the finite satisfiability problems are NExpTime-complete in this case. The upper bound for general satisfiability can be shown quite easily by exploiting a tree-like model property. The more challenging finite case can be solved using integer programming. I will then briefly explain how various extensions and restrictions of this example logic influence the decidability and the complexity of the satisfiability problem. The following complexity classes will appear: NP, PSpace, ExpTime, NExpTime, 2ExpTime, 2NExpTime.
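To see why transitivity lies beyond these formalisms, note that its standard first-order axiomatisation uses three distinct variables:

```latex
\forall x\, \forall y\, \forall z\, \bigl( R(x,y) \land R(y,z) \rightarrow R(x,z) \bigr)
```

It is known that no two-variable first-order formula is equivalent to this axiom, which is why equivalence relations must be added to these logics as a built-in feature rather than axiomatised.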