Systems of Dependent Types
Dependent type systems introduce the notion of a family of types, that is, types parameterized by objects of previously given types. They make it possible to express refined data structures with complex explicit invariants. As a consequence, type checking can guarantee strong properties of constructions, such as the total correctness of programs with respect to complete functional specifications, or the correctness of derivations with respect to given systems of proof rules. We are working on:
- Dependently typed programming, with a medium-term goal of completing a systematic treatise. Within this framework, we are undertaking:
- The systematic study of the design of data structures that help alleviate the burden of explicit formal mathematics associated with certified programming.
- The design of minimal programming languages and systems.
- The development of methods of quantitative analysis of algorithms and data structures.
- Formal meta-theory and infrastructure, i.e. the formal Mathematics of programming languages and their implementations, as well as of formal systems of Logic. Currently, we are working on:
- Meta-theory of the Lambda Calculus in Constructive Type Theory.
- The comparative study of techniques for dealing with bound names and substitution.
- Certified type checking.
- Logical Frameworks as abstractions of the concept of formal system, and the design and implementation of generalised proof assistants. Specifically, the development of a system of dependent types for Nominal Abstract Syntax, with the aim of providing a logical framework simpler and more basic than those based on the Lambda Calculus.
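As an illustration of what a family of types makes possible, the following sketch (in Lean 4, used here only as one example of a dependently typed language; the definitions are ours, not part of the work described above) defines vectors indexed by their length. The index lets the type checker enforce invariants that would otherwise need runtime checks or external proofs:

```lean
-- A family of types: Vec α n is the type of vectors of length n.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- Appending vectors: the result type states that the lengths add up,
-- and the type checker verifies this invariant once and for all.
def Vec.append : Vec α m → Vec α n → Vec α (n + m)
  | .nil,       ys => ys
  | .cons x xs, ys => .cons x (xs.append ys)

-- head is only defined on non-empty vectors: the index n + 1 rules out
-- the empty case at compile time, so no runtime check is needed.
def Vec.head : Vec α (n + 1) → α
  | .cons x _ => x
```

Here the "complex explicit invariant" (the length of the vector) is part of the type itself, so a program that, say, took the head of a possibly empty vector would simply fail to type check.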
Automatic generation of parallel code
Until not long ago, improvements in the performance of computer processors, obtained from increases in clock frequency, transistor count, and instructions per cycle, had a direct impact on the performance of software, because performance deficiencies of the latter could be mitigated by sophisticated and effective compilation techniques.
Now a number of barriers have appeared that limit further improvement of hardware performance, such as the impossibility of further increasing clock frequency, the higher probability of error in circuits below 65 nm, and the relative slowness of memory.
Because of these limitations, the hardware industry has turned to multiprocessor architectures and, as a consequence, performance improvements are no longer obtained by clever compilation of sequential software, but by the design of explicitly concurrent software.
This change, though in principle a technological one, has raised a number of scientific problems that need to be addressed. From the perspective of software development, these problems are tied, though not exclusively, to the proliferation of paradigms, architectures and models of concurrent programming.
The goal of our research is to investigate rigorous techniques for the design, implementation and analysis of parallel software, taking into account properties relating to correctness, performance and productivity. In particular, we seek to validate the thesis that the automatic generation of code from formal high-level specifications, by means of formal transformations, is an appropriate tool for developing high-quality parallel software.
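The kind of derivation involved can be illustrated with a minimal sketch (in Python; the function names are ours and purely illustrative, not part of any system described above). A sequential specification of a reduction is transformed into a divide-and-conquer version; the correctness of the transformation rests on the associativity of the operator, which is exactly the sort of property a formal development would have to establish:

```python
from functools import reduce

def spec_reduce(op, e, xs):
    """Sequential specification: a plain fold over the list."""
    return reduce(op, xs, e)

def par_reduce(op, e, xs):
    """Divide-and-conquer version derived from the specification.

    The transformation is justified by the associativity of `op`
    (with unit `e`): reducing xs ++ ys equals combining the
    reductions of xs and ys. The two recursive calls are
    independent, so they could be scheduled on different
    processors; here they run sequentially for illustration.
    """
    if len(xs) == 0:
        return e
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return op(par_reduce(op, e, xs[:mid]),
              par_reduce(op, e, xs[mid:]))
```

For an associative operator both versions agree on every input, which is what licenses replacing the sequential program by the parallel one.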