Pareo is a member of the Formal Methods department at Loria, which organizes a seminar every Tuesday at 2:00 pm. More information can be found here.

Here are the archives of the seminar, which the team organized until 2010.

- On the completeness of quantum computation models, Gilles Dowek (joint work with Pablo Arrighi)

- The notion of computability is stable (i.e. independent of the choice of an indexing) over infinite-dimensional vector spaces, provided they have a finite “tensorial dimension”. Such vector spaces with a finite tensorial dimension make it possible to define an absolute notion of completeness for quantum computation models and to give a precise meaning to the Church-Turing thesis in the framework of quantum theory.

- The structural lambda-calculus, Delia Kesner (Joint work with Beniamino Accattoli)

- Inspired by a recent graphical formalism for the lambda-calculus based on linear logic technology, due to Accattoli and Guerrini, we introduce an untyped structural lambda-calculus, called lambda_j, which combines actions at a distance with exponential rules used to specify erasure, duplication and linear substitution. We first prove some sanity properties of the calculus. Secondly, we use the lambda_j calculus to describe known notions of development and superdevelopment, as well as a new, more general one: XL-superdevelopment. We also show how to add on top of lambda_j a structural equivalence relating terms that have exactly the same representation in the graphical formalism. This enriched calculus is shown to preserve normalisation of lambda-terms.

- Finding Unity in Computational Logic, Dale Miller

- While logic was once developed to serve philosophers and mathematicians, it is increasingly serving the varied needs of computer scientists. In fact, recent decades have witnessed the creation of the new discipline of Computational Logic. While Computational Logic can claim involvement in diverse areas of computing, little has been done to systematize the foundations of this new discipline. Here, we envision a unity for Computational Logic organized around the proof theory of the sequent calculus: recent results in the area of focused proof systems will play a central role in developing this unity.

- Nominal Techniques in the Theorem Prover Isabelle or, How Not to be Intimidated by the Variable Convention, Christian Urban

- Bound variables play an important role in programming language research and in logic. Nearly all informal induction proofs involving bound variables make use of the variable convention. In the talk I will show how strong induction principles can be derived that have the variable convention already built in. However, I will also show that this convention is in general an unsound reasoning principle and requires restrictions in order to be safe. The aim of this work is to provide all the proving technology necessary for reasoning conveniently about programming languages and logical calculi.

- Representation and interaction of proofs in superdeduction modulo, Clément Houtmann

- Finding all satisfying assignments in recognizable theories, Tony Bourdier

- Frühwirth et al. showed that recognizable sets play an important role in computation in Horn clause theories. Indeed, they proposed a restriction of Horn clauses (called uniform clauses) ensuring that the least model consists of recognizable sets of terms and can thus be represented by a tree automaton. Moreover, it has been shown that order-sorted signatures can be viewed as regular tree languages, and a correspondence between monadic Prolog programs and tree automata has been formalized. Starting from these observations, we propose to describe some theories by tree automata, called “recognizable theories”, and, starting from the representation of a theory using tree automata, we show how to find all satisfying assignments (of which there may be infinitely many) of certain first-order formulae using a transformation system based on the semantics of operators over tree automata.
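To give a flavour of what “recognizable” means here (a minimal illustrative sketch; the signature and example set are invented, not the talk's construction): a recognizable set of terms is one accepted by a bottom-up tree automaton, such as the set of even naturals built from zero and succ.

```python
# Illustrative sketch: a bottom-up tree automaton over first-order terms
# encoded as nested tuples, e.g. ("succ", ("zero",)).  The signature
# {zero, succ} and the recognized set (even naturals) are invented.

def run(term, rules):
    """Run the automaton bottom-up; return the state reached, or None."""
    symbol, children = term[0], term[1:]
    child_states = tuple(run(c, rules) for c in children)
    if None in child_states:
        return None
    return rules.get((symbol, child_states))

def accepts(term, rules, final):
    """A term belongs to the recognizable set iff it reaches a final state."""
    return run(term, rules) in final

# Transition rules: (symbol, states of the children) -> state.
RULES = {
    ("zero", ()): "even",
    ("succ", ("even",)): "odd",
    ("succ", ("odd",)): "even",
}

two = ("succ", ("succ", ("zero",)))
three = ("succ", two)
print(accepts(two, RULES, {"even"}))    # True
print(accepts(three, RULES, {"even"}))  # False
```

A least model that happens to be recognizable can then be stored as such a rule table, and operations on models become operations on automata.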

- Dependent Types as Higher Order Dependency Pairs, Cody Roux

- The size-change principle allows us to carry out termination analysis on programs written in a higher-order typed programming language. However, this analysis can be costly, and it fails for some intuitively terminating programs. We give a dependent type system which, combined with the size-change principle, gives a finer termination criterion. In particular, this analysis allows us to disregard certain function calls, in a manner very similar to the Dependency Pair method, which works best on first-order programs. The natural lattice structure of base types in this framework is also very reminiscent of Abstract Interpretation.

- Separation Logic: Program Analysis's Swiss Knife, Clément Hurlin

- We present different uses of separation logic. These uses include “classical” program verification, discovery of erroneous specifications, and automatic parallelization and/or optimization. We discuss limitations of these techniques and envision future work.

- Traits and their evolution, Stéphane Ducasse

- We have recently been working on traits, a new language construct. A trait acts as a unit of reuse. We have been developing a simple compositional model for structuring object-oriented programs, which we call traits. Traits are essentially groups of methods that serve as building blocks for classes and are primitive units of code reuse. In this model, classes are composed from a set of traits by specifying glue code that connects the traits together and accesses the necessary state. Unlike mixins and multiple inheritance, traits do not employ inheritance as the composition operator. Instead, trait composition is based on a set of composition operators that are complementary to inheritance and result in better composition properties. We have implemented traits in Squeak, an open-source Smalltalk dialect, and have extended traits to introduce state and to control visibility. In this presentation, we will present stateless traits and their subsequent evolution, stateful traits.
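As a rough sketch of this composition model (illustrative Python, not the Squeak implementation; all names are invented), a trait can be modelled as a plain group of methods, with a symmetric composition operator that reports conflicts instead of resolving them by inheritance order, and a class as traits plus glue code providing the state.

```python
# Illustrative sketch of trait composition: traits are dicts of methods.

def compose(*traits):
    """Symmetric trait sum: a name defined differently in two traits conflicts."""
    methods = {}
    for trait in traits:
        for name, fn in trait.items():
            if name in methods and methods[name] is not fn:
                raise TypeError(f"trait conflict on {name!r}")
            methods[name] = fn
    return methods

def build_class(name, traits, glue):
    """A class is a set of traits plus glue code providing the required state."""
    return type(name, (object,), {**compose(*traits), **glue})

# Two reusable traits: comparison behaviour and printing behaviour.
TComparable = {"le": lambda self, other: self.value() <= other.value()}
TPrintable  = {"show": lambda self: f"<{self.value()}>"}

# Glue: the class supplies the state that the traits' methods rely on.
Cell = build_class("Cell", [TComparable, TPrintable], {
    "__init__": lambda self, v: setattr(self, "_v", v),
    "value": lambda self: self._v,
})

a, b = Cell(1), Cell(2)
print(a.le(b))   # True
print(b.show())  # <2>
```

Note that composition is flattening and order-independent, which is precisely what distinguishes traits from mixin linearization.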

- Design of a language dedicated to program analysis and transformation, Emilie Balland

- Software Assessment Research at SIG, Joost Visser

- Joost Visser is head of R&D at the Software Improvement Group, Amsterdam, The Netherlands. SIG provides IT management consultancy based on source code analysis to a wide variety of industrial and governmental customers. SIG started in 2000 as a spin-off company of the Centre for Mathematics and Informatics (CWI) in Amsterdam. Currently, SIG is independent, highly profitable, and the proud employer of a high percentage of scientifically educated staff members. In this talk, Joost will present the research activities of SIG. He will explain how these activities are structured, which topics are on the SIG research agenda, and which results have been achieved in the recent past. In particular, he will explain how SIG uses its unique insight into the software systems of scores of clients in order to validate and calibrate its software quality measurement model.

- Defunctionalized Interpreters for Higher-Order Programming Languages, Olivier Danvy

- This talk illustrates how functional implementations of formal semantics (structural operational semantics, reduction semantics, small-step abstract machines, big-step abstract machines, natural semantics, and denotational semantics) can be transformed into each other. These transformations were foreshadowed by John Reynolds in “Definitional Interpreters for Higher-Order Programming Languages” for functional implementations of denotational semantics, natural semantics, and big-step abstract machines using closure conversion, CPS transformation, and defunctionalization. Over the last few years, the author and his students have further observed that functional implementations of small-step and of big-step abstract machines are related using fusion by fixed-point promotion and that functional implementations of reduction semantics and of small-step abstract machines are related using refocusing and transition compression. It furthermore appears that functional implementations of structural operational semantics and of reduction semantics are related as well, also using CPS transformation and defunctionalization.
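The CPS-transformation/defunctionalization correspondence can be illustrated on a toy arithmetic language (a hedged sketch with invented names, not code from the talk): CPS-transforming a direct-style evaluator and then defunctionalizing its continuations mechanically yields a big-step abstract machine.

```python
# Expressions: ("lit", n) | ("add", e1, e2)

def eval_cps(e, k):
    """Evaluator in continuation-passing style: k receives the value of e."""
    if e[0] == "lit":
        return k(e[1])
    _, e1, e2 = e
    return eval_cps(e1, lambda v1: eval_cps(e2, lambda v2: k(v1 + v2)))

# Defunctionalization: each lambda above becomes a first-order constructor,
# and apply_k dispatches on them -- yielding a big-step abstract machine.

def eval_defun(e, k):
    if e[0] == "lit":
        return apply_k(k, e[1])
    _, e1, e2 = e
    return eval_defun(e1, ("add1", e2, k))

def apply_k(k, v):
    if k == "halt":
        return v
    if k[0] == "add1":            # was: lambda v1: eval_cps(e2, ...)
        _, e2, outer = k
        return eval_defun(e2, ("add2", v, outer))
    _, v1, outer = k              # "add2": was lambda v2: k(v1 + v2)
    return apply_k(outer, v1 + v)

expr = ("add", ("lit", 1), ("add", ("lit", 2), ("lit", 3)))
print(eval_cps(expr, lambda v: v))  # 6
print(eval_defun(expr, "halt"))     # 6
```

The tuples `("add1", ...)` and `("add2", ...)` are exactly the machine's evaluation contexts, which is the observation the interderivations above exploit.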

- Introduction to attribute grammars and their applications to static program analysis, Emilie Balland

- Attribute grammars were introduced by Donald Knuth in 1968 in order to define the semantics of programming languages. An attribute grammar associates values with the nodes of the AST, so it is very natural to specify program analyses as attributes. In this talk, we will introduce the basic concepts of this formalism and show several applications to program analysis. The examples will be presented using JastAdd (http://jastadd.org), an implementation of attribute grammars that allows Java code to be embedded.
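As a rough illustration of the idea (in Python rather than JastAdd, with invented class names): a synthesized attribute is defined in terms of a node's children, while an inherited attribute is defined in terms of its parent.

```python
# Illustrative sketch: attributes as methods on AST nodes.

class Node:
    def __init__(self, *children):
        self.children = children
        self.parent = None
        for c in children:
            c.parent = self

    def depth(self):                 # inherited attribute: flows downward
        return 0 if self.parent is None else self.parent.depth() + 1

class Num(Node):
    def __init__(self, n):
        super().__init__()
        self.n = n
    def value(self):                 # synthesized attribute: flows upward
        return self.n

class Add(Node):
    def value(self):
        left, right = self.children
        return left.value() + right.value()

# AST for (1 + 2) + 3
leaf = Num(3)
tree = Add(Add(Num(1), Num(2)), leaf)
print(tree.value())  # 6
print(leaf.depth())  # 1
```

Static analyses (types, name binding, data flow) can be phrased the same way, with the attribute equations doing the work a hand-written traversal would otherwise do.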

- Size-based Termination as Semantic Labelling, Cody Roux

- Termination of typed higher-order rewrite systems has attracted much attention, as it is the basis of termination analysis for a number of programming languages and is sufficient to guarantee consistency and decidable type-checking of type-based logical theories. Size-based termination is a technique which provides syntactic approximations to a semantics of terms (sizes and types) and uses these to prove strong normalisation via the famous reducibility technique. However, it is unclear what the power of this technique is, and every extension of it requires rebuilding the rather delicate proof. Semantic labelling is a powerful technique from first-order rewriting which allows the transformation of a rewrite system into a system for which termination is (hopefully) easier to prove. Unlike most first-order criteria, this technique generalizes to higher-order rewriting. By using it combined with the General Schema termination criterion, we hope to subsume the different proposals for size-based termination.

- Circular Coinduction-based Techniques for Proving Behavioral Properties, Dorel Lucanu

- Behavioral abstraction in algebraic specification appears under various names in the literature, such as hidden algebra (Goguen et al.), observational logic (Hennicker, Bidoit), swinging types (Padawitz), coherent hidden algebra (Diaconescu, Futatsugi), hidden logic (Rosu), and so on. Most of these approaches arose from a need to extend algebraic specifications to ease the process of specifying and verifying designs of systems, and also for various other reasons: to naturally handle infinite types, to give semantics to the object paradigm, to finitely specify otherwise infinitely axiomatizable abstract data types, etc. The main characteristic of these approaches is that sorts are split into visible (directly accessible) sorts for data and hidden (not directly accessible) sorts for states, and that equality is behavioral, in the sense that two states are behaviorally equivalent if and only if they appear to be the same under any visible experiment. Coalgebraic bisimulation and context induction are sound proof techniques for behavioral equivalence. Unfortunately, both need human intervention: coinduction to pick a “good” bisimulation relation, and context induction to invent and prove lemmas. Circular coinduction is an automatic proof technique for behavioral equivalence. CIRC is an automated circular coinductive prover for properties expressed as behavioral equivalences. CIRC is implemented as an extension of Maude (maude.cs.uiuc.edu) using the metalevel capabilities of rewriting logic. In this talk we present the CIRC tool together with the circular coinduction-based techniques that form the core of CIRC. CIRC is developed jointly with Grigore Rosu, University of Illinois at Urbana-Champaign. More details about CIRC can be found at the following web addresses: http://fsl.cs.uiuc.edu/index.php/Circ (this site includes an online demo) and http://circidei.info.uaic.ro/index-en.htm

- Verification of Graph Transformation Systems, Paolo Baldan

- With the advent of mobile and ubiquitous computing, modern software and computer systems are frequently characterised by a high level of dynamicity. Features such as flexible topologies, the dynamic creation and deletion of objects, and an infinite state space make them very hard to analyse and verify. In this context, graph transformation systems (GTSs) emerge as an expressive specification formalism for concurrent, distributed and mobile systems, generalising another classical model of concurrency, namely Petri nets. While the semantic theory of GTSs is relatively well understood, the problem of developing automatic verification techniques for GTSs is still a stimulating and promising research direction. In this presentation we outline a framework for the verification of infinite- and finite-state GTSs based on their unfolding semantics. For general, possibly infinite-state, GTSs one can construct finite under- and over-approximations of the (infinite) unfolding, with arbitrary accuracy. Such approximations can be used to check behavioural properties of a GTS, expressed in a suitable temporal graph logic. For finite-state GTSs, we propose a variant of McMillan's complete prefix approach (originally developed for Petri nets), discussing some issues related to the construction of the prefix and its use.

- Typed pattern calculus: beyond the Curry-Howard Isomorphism, Barry Jay

- Pattern calculus is able to express new sorts of polymorphism, with respect to paths and patterns, so that one may define generic operations for addition and equality, and generic queries, for selecting or updating within an arbitrary data structure. This talk shows how to type such calculi. The machinery required is not so complex, but first one must jettison some popular assumptions about types, in particular that the types determine the terms, or that terms correspond to proofs (of known logics). We begin by re-working System F to make it smaller and more functional, and then progress to type static patterns (written by the programmer) and finally dynamic patterns, which arise during computation.

- Programming with patterns in bondi, Barry Jay

- Why are there so many programming styles? Pattern calculus provides a common foundation for the most popular programming styles, including functional, imperative, query-based and object-oriented styles, and also some new ones. This is being realised in the programming language bondi, developed in Sydney. This talk will use bondi to introduce pattern calculus and illustrate its expressive power. The running example will be to solve the following problem: update the salaries of all employees of an organisation by applying a completely generic function to parameters for the company, the representation of employees and the formula for updating their salaries.
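A rough Python analogue of that running example (bondi has first-class patterns; here the “pattern” is just a predicate, and all names are invented) conveys the path-polymorphic flavour: one generic function updates every matching subterm, whatever the surrounding structure looks like.

```python
# Illustrative sketch of a generic update over arbitrary nested data.

def update(match, f, data):
    """Apply f to every subterm of `data` satisfying `match`,
    rebuilding the surrounding structure unchanged."""
    if match(data):
        return f(data)
    if isinstance(data, dict):
        return {k: update(match, f, v) for k, v in data.items()}
    if isinstance(data, (list, tuple)):
        return type(data)(update(match, f, v) for v in data)
    return data

company = {"name": "Acme",
           "departments": [
               {"employees": [{"name": "a", "salary": 100},
                              {"name": "b", "salary": 200}]}]}

is_employee = lambda t: isinstance(t, dict) and "salary" in t
raised = update(is_employee,
                lambda e: {**e, "salary": e["salary"] + 10},
                company)
print(raised["departments"][0]["employees"][0]["salary"])  # 110
```

The point is that `update` knows nothing about companies or departments; the same function works for any representation of employees, which is the genericity the abstract describes.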

- Java class generation for testing refactoring tools, Emilie Balland

- Refactoring is the process of improving the design of existing code by performing behaviour-preserving program transformations. It can be done manually, but most refactorings require tedious, error-prone manipulation of the code. Therefore, development environments now provide automated support for applying them, for instance for renaming or extracting a method. Implementing such transformations is complex, and even the most sophisticated IDEs contain bugs in their refactoring support. Testing such tools requires automatically generating random, correct programs, which in turn requires implementing name analysis as in a compiler. In Java, for example, the complexity of this analysis is due to the numerous visibility rules. In this talk, I will show how strategic rewriting has been used to develop a class generator for a Java subset, and in particular how Java name analysis has been implemented in a declarative and modular manner thanks to the strategy language.

- Alchemy: Transmuting Specifications into Implementations, Daniel J. Dougherty

- Alloy is a simple structural modeling language based on first-order logic (http://alloy.mit.edu). Alloy specifications are widely used to construct lightweight models of systems. A family of tools has evolved around this language to support activities such as analysis, theorem proving, and test generation. Unfortunately, these leave the developer no closer to a working program. We therefore present Alchemy, a tool for generating implementations from Alloy specifications. Alchemy compiles these specifications against persistent databases, such as those backing many Web services. In particular, it translates a subset of Alloy predicates into imperative update operations, and it converts facts into database integrity constraints that it attempts to maintain automatically in the face of these imperative actions. In addition to presenting the semantics and algorithm for this compilation, we present the tool and outline its application to a non-trivial specification. We also discuss the perspective we have gained about Alloy specifications from this exercise.

- Joint work with Kathi Fisler (WPI) and Shriram Krishnamurthi (Brown)

- A few remarks about developing safety- or security-critical systems within inductive formal systems, Thérèse Hardin

- Before their installation, critical systems must be assessed to ensure that software components comply with a set of requirements described in standards. To reach the highest Safety Integrity Levels or Assurance Levels, formal methods must be used. In this talk, based on some experiments with the FoCaL environment, we give some hints on developing a life cycle that eases the assessment and reuse of components. Then, we give a few remarks on how to improve confidence in formal developments by identifying some possible traps in such developments (illustrated with the B system), which should be considered during the assessment process.

- Semantic web policies for security and privacy, Piero A. Bonatti

- Today's most “interesting” systems are open to the internet, so they are vulnerable to a number of possible attacks against system security and their users' privacy. Traditional countermeasures generally conflict with usability. Trust negotiation (TN) comprises a family of new, appealing techniques for achieving a better tradeoff between (i) security and privacy protection, and (ii) simple and friendly data and service publication on computer networks. According to the current approaches, the peers involved in a transaction acquire a suitable level of trust in each other by negotiating and exchanging electronic credentials and other (digitally unsigned) information. This technique is very flexible, but it requires solving nontrivial problems.

Some of them concern usability and interoperability. Automated support for credential exchange is necessary to preserve a smooth navigation experience, but this requires heterogeneous peers to understand each other's requirements. Here is where lightweight semantic techniques come into play.

Moreover, it can be argued that no sophisticated framework for protecting security and/or privacy can reach its full potential unless end users can understand and control it. So a major open problem consists in designing effective automated explanation systems and friendly policy specification languages. The specificities of TN frameworks permit a specialized approach that may prove more effective than generic explanation approaches.

This talk will give a brief overview of the above research topics as they have been tackled - with a Semantic Web perspective - within the European network of excellence REWERSE (REasoning on the WEb with Rules and SEmantics) and implemented in the trust negotiation framework Protune.

- Rewriting and Modularity for Security Policies, Anderson Santana de Oliveira

- In this thesis, we focus on the modular specification and analysis of flexible, rule-based security policies. We introduce the use of the strategic rewriting formalism in this domain, so that our framework inherits the techniques, theorems and tools of rewriting theory. This allows us to easily state and verify important properties of security policies, such as the absence of conflicts. Furthermore, we develop methods based on term rewriting to verify more elaborate properties of policies, namely safety in access control and the detection of information flows in mandatory policies.

Moreover, we show that rewriting strategies are important for preserving policy properties under composition. The strategy languages available in systems such as Tom, Stratego, Maude, ASF+SDF and Elan allow us to define several kinds of policy combinators.

Finally, we also provide a methodology for making existing applications enforce rewrite-based policies via aspect-oriented programming. The policies are woven into the existing code, producing programs that implement reference monitors. Reuse of both policies and programs is improved, since they can be maintained independently.
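To convey the flavour of policy combinators (an illustrative Python sketch, not the thesis's strategic-rewriting formalism; all names are invented), a policy can be modelled as a partial function from requests to decisions, composed by operators reminiscent of strategy combinators such as left-biased choice.

```python
# Illustrative sketch: policies as partial functions request -> decision.

def first(*policies):
    """Try each sub-policy in order, like a left-biased choice strategy."""
    def combined(request):
        for p in policies:
            decision = p(request)
            if decision is not None:
                return decision
        return None                    # no rule applies
    return combined

def deny_overrides(*policies):
    """Deny wins over permit: a conflict-free composition."""
    def combined(request):
        decisions = [d for d in (p(request) for p in policies) if d is not None]
        if "deny" in decisions:
            return "deny"
        return "permit" if decisions else None
    return combined

admins_ok = lambda r: "permit" if r["role"] == "admin" else None
no_night_writes = lambda r: "deny" if r["action"] == "write" and r["night"] else None

policy = deny_overrides(admins_ok, no_night_writes)
print(policy({"role": "admin", "action": "write", "night": True}))   # deny
print(policy({"role": "admin", "action": "read", "night": False}))   # permit
```

Properties such as the absence of conflicts then become statements about the combinators rather than about each policy individually.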

- Verification of security policies, Yassine Guebbas

- Compilation of polygraphic programs, Aurélien Monot

- SAT modulo the theory of linear arithmetic: exact, inexact and commercial solvers, Germain Faure

- Germain Faure, visiting the team, will informally present the work he is carrying out in the Barcelogic group (led by R. Nieuwenhuis).

- Types, logic and implicit coercions, Cody Roux

- Subtyping, besides being linked to deep questions in computer science, is an important issue for the formalization of mathematics in type theory. We present a coercion inference system for the simply typed lambda calculus and show its decidability, then discuss related topics.

- Zenon: a practical theoretical introduction to (classical) tableaux methods, Richard Bonichon

- Tableaux methods have been used since the 1930s to define, study and implement automated theorem provers (ATPs) for various logics. Although today's top-of-the-shelf ATP systems are more often resolution-based, using tableaux still offers some advantages, even in the context of classical logic, as we will try to show. In this talk, we will first recall the basic theoretical notions and notations regarding tableaux methods. We will try to highlight the deep links between tableaux, sequent calculus and cut elimination. Then we will present the current state of Zenon, a tableau-based automated theorem prover. Its main claim to fame is to produce checkable proofs for first-order classical logic with equality. We will show some of its internals and how one goes from writing a proof input in the Focal Proof Language to checking its output with Coq.
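To recall the basic mechanism (a naive propositional sketch with invented names, far from Zenon, which handles first-order logic with equality): a tableau extends a branch on conjunctions, splits it on disjunctions, and closes a branch containing a formula together with its negation; a formula is valid when the tableau for its negation closes.

```python
# Illustrative sketch of analytic tableaux for propositional logic.
# Formulas: ("atom", p) | ("not", f) | ("and", a, b) | ("or", a, b)

def satisfiable(branch):
    """Return True iff some branch of the tableau for `branch` stays open."""
    for i, f in enumerate(branch):
        rest = branch[:i] + branch[i + 1:]
        if f[0] == "and":                        # alpha rule: extend the branch
            return satisfiable(rest + [f[1], f[2]])
        if f[0] == "or":                         # beta rule: split the branch
            return satisfiable(rest + [f[1]]) or satisfiable(rest + [f[2]])
        if f[0] == "not" and f[1][0] == "not":   # double negation
            return satisfiable(rest + [f[1][1]])
        if f[0] == "not" and f[1][0] == "and":   # negation pushed inward
            return satisfiable(rest + [("or", ("not", f[1][1]), ("not", f[1][2]))])
        if f[0] == "not" and f[1][0] == "or":
            return satisfiable(rest + [("and", ("not", f[1][1]), ("not", f[1][2]))])
    # Only literals remain: the branch is open unless it has both p and not p.
    atoms = {f[1] for f in branch if f[0] == "atom"}
    negs = {f[1][1] for f in branch if f[0] == "not"}
    return not (atoms & negs)

def valid(f):
    """f is valid iff the tableau for its negation closes."""
    return not satisfiable([("not", f)])

p = ("atom", "p")
print(valid(("or", p, ("not", p))))  # True  (excluded middle)
print(valid(p))                      # False
```

Each closed branch corresponds to an axiom of a cut-free sequent proof, which is the tableaux/sequent-calculus link the talk mentions.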