LIP6 » Research » Projects


Team : ALSOC

  • TSAR - Tera-Scale ARchitecture

    Project leader : Alain Greiner
    More details here

Team : APR

  • MOPSA - Modular Open Platform for Static Analysis

    The Mopsa project aims at creating methods and tools to make computer software more reliable. Programming errors are pervasive, with consequences ranging from user frustration to huge economic or human losses. Traditional test-based methods are insufficient to eliminate all errors. The project develops static analyses able to detect whole classes of program defects at compile time, leveraging the theory of abstract interpretation to design analyses that are both approximate (to scale up to large programs) and sound (no defect is missed). Static analysis has enjoyed recent successes: Astrée, an industrial analyzer, was able to prove the absence of run-time errors in Airbus software. But such results are limited to the specific, well-controlled context of critical embedded systems. We wish to bring static analysis to the next level: target larger, more complex and heterogeneous software, and make it usable by engineers to improve general-purpose software. We focus on analyzing open-source software, which is readily available, complex, widespread, and important from an economic standpoint (it is used in many infrastructures and companies) as well as societal and educational ones (promoting the development of verified software for and by citizens). A major target is the set of technologies at the core of the Internet, to which static analysis could be applied to ensure a safer Internet. The scientific challenges we must overcome include designing scalable analyses that produce relevant information, supporting popular novel languages (such as Python), and analyzing properties better adapted to the continuous development style common in open source. At the core of the project is the construction of an open-source static analysis platform. It will serve not only to implement and evaluate the results of the project, but also to create momentum encouraging research in static analysis and to hasten its adoption in open-source development communities.

    Project leader : Antoine MINÉ
    More details here
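The interval domain gives a flavour of the approximate-but-sound analyses mentioned above. The following is a minimal, self-contained sketch (illustrative only; it is not Mopsa's code or API):

```python
# Toy interval abstract domain: each variable is over-approximated by [lo, hi].
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Sound abstraction of addition: every concrete sum is covered.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def join(self, other):
        # Least upper bound: used to merge the two sides of a branch.
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Abstractly analyse: x in [0, 10]; if cond: x += 1 else: x += 5
x = Interval(0, 10)
then_branch = x + Interval(1, 1)
else_branch = x + Interval(5, 5)
merged = then_branch.join(else_branch)
print(merged)  # [1, 15]: approximate (some values impossible) but sound
```

The result `[1, 15]` over-approximates the true set of outcomes: no real behaviour is missed (soundness), at the price of including some values that cannot occur (approximation).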
  • COVERIF - Combining abstract interpretation and constraint programming to verify critical properties of embedded programs with floating point computations

    Verifying the correctness and robustness of programs and systems is a major challenge in a society that relies more and more on safety-critical systems controlled by embedded software. The issue is even more critical when the computations involve floating-point arithmetic, an arithmetic known for its quite unusual behaviors and increasingly used in embedded software. Consider, for example, the "catastrophic cancellation" phenomenon, where most of the significant digits of a result are cancelled, or numerical sequences whose limits over the real numbers and over the floating-point numbers are very different. An even harder problem arises when we want to analyse the relationship between floating-point computations and the "idealized" computation that would be carried out over the real numbers, the reference in the design of the program: for some input values, the control flow over the real numbers can go through one conditional branch while it goes through another one over the floating-point numbers. Certifying that a program, despite such control-flow divergences, computes what it is actually expected to compute within a minimal error bound is the subject of robustness (or continuity) analysis. Providing a set of techniques and tools for verifying the accuracy, correctness and robustness of critical embedded software is a major challenge. The aim of this project is to address this challenge by exploring new methods based on a tight collaboration between abstract interpretation (AI) and constraint programming (CP). In other words, the goal is to push the limits of these two techniques to improve accuracy analysis, to enable a more complete verification of programs using floating-point computations, and thus to make critical decisions more robust.
    The cornerstone of this project is the combination of the two approaches to increase the accuracy of robustness proofs using CP techniques and, where appropriate, to generate non-robust test cases. The goal is to benefit from the strengths of both techniques: CP provides powerful but computationally expensive algorithms to reduce domains to an arbitrary given precision, whereas AI does not provide fine control over domain precision but has developed many abstract domains that quickly capture program invariants of various forms. Incorporating some CP mechanisms (search trees, heuristics) into abstract domains would make it possible, in the presence of false alarms, to refine the abstract domain to a better accuracy. The first problem to solve is to set the theoretical foundations of an analyser based on two substantially different paradigms. Once the interactions between CP and AI are well formalized, the next issue is to handle constraints of general forms and potentially non-linear abstract domains. Last but not least, an important issue concerns the robustness analysis of systems more general than programs, such as the hybrid systems that model control-command programs. Research results will be evaluated on realistic benchmarks coming from industrial companies, in order to determine their benefits and relevance. Using realistic examples is a key point for the explored approaches, since the proposed techniques often behave acceptably only on given subclasses of problems (considering worst-case computational complexity, all these problems are intractable). That is why many solutions are closely tied to the target problems.

    Project leader : Eric GOUBAULT
    More details here
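The floating-point pitfalls mentioned above are easy to reproduce. The snippet below (illustrative, not project code) shows how adding a small value to a large one loses all of its significant digits in IEEE-754 binary64, and how this makes a branch diverge from the idealized real-number behaviour:

```python
# Over the reals, (1e16 + 1) - 1e16 equals exactly 1; in binary64 the ulp
# at 1e16 is 2, so the added 1 is absorbed by rounding and the result is 0.
x = (1e16 + 1.0) - 1e16
print(x)  # 0.0

# Control-flow divergence: over the reals this condition is true for any
# a > 0, but in floating point the addition can leave the large value
# unchanged, so the program takes the other branch.
def branch(a):
    if a + 1e16 > 1e16:
        return "then"
    return "else"

print(branch(1.0))  # "else", although a > 0
```

Robustness analysis, as described above, is precisely about bounding the error caused by such divergences between the real-number and floating-point executions.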

Team : BD

  • EPIQUE - Reconstructing the long-term evolution of science through large-scale analysis of science productions

    The evolution of scientific knowledge is directly related to the history of humanity. Document archives and bibliographic sources like the “Web of Science” or PubMed are a valuable resource for analyzing and reconstructing this evolution. The project takes as its starting point the contributions of D. Chavalarias and J.-P. Cointet on the analysis of the dynamics of evolving corpora and the automatic construction of “phylomemetic” topic lattices (by analogy with genealogical trees of natural species). Currently existing tools are limited to the processing of medium-sized collections and to non-interactive usage. The expected project outcome sits at the crossroads between computer science and the social sciences. Our goal is to develop new high-performance tools for building phylomemetic maps of science by exploiting recent technologies for parallelizing tasks and algorithms over complex and voluminous data. These tools are designed and validated in collaboration with experts in the philosophy and history of science, over large scientific archives. Keywords: quantitative epistemology, evolution of science, topic detection, temporal alignment, big data processing, data science, big data

    Project leader : Bernd AMANN
    More details here
  • experimaestro - Computer science experiment scheduler and manager

    Experimaestro is an experiment manager based on a server that contains a job scheduler (job dependencies, locking mechanisms) and a framework to describe experiments in JavaScript or Java.

    Project leader : Benjamin PIWOWARSKI
    More details here
  • SPARQL on Spark - SPARQL query processing with Apache Spark

    A common way to achieve scalability when processing SPARQL queries over large RDF data sets is to use map-reduce frameworks like Hadoop or Spark. Processing complex SPARQL queries that generate large join plans over distributed data partitions is a major challenge in these shared-nothing architectures. This project is particularly interested in two representative distributed join algorithms, the partitioned join and the broadcast join, as deployed in map-reduce frameworks for the evaluation of complex distributed graph-pattern join plans. We compare five SPARQL graph-pattern evaluation implementations on top of Apache Spark to illustrate the importance of carefully choosing the physical data storage layer, and of being able to use both join algorithms to take account of existing predefined data partitionings. Our experiments with different SPARQL benchmarks over real-world and synthetic workloads show that hybrid join plans are more flexible and often achieve better performance than join plans using a single kind of join implementation.

    Project leader : Hubert NAACKE
    More details here
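The two join strategies compared above can be sketched without Spark. The toy code below simulates them in plain Python over a handful of RDF triples (data, relation names and partition count are invented for illustration):

```python
from collections import defaultdict

triples = [("a", "knows", "b"), ("b", "knows", "c"), ("a", "type", "Person")]
names = [("a", "Alice"), ("b", "Bob")]  # a small relation keyed by subject

def broadcast_join(big, small):
    # The small side is replicated ("broadcast") to every worker, so the big
    # side never moves: each partition joins locally against its own copy.
    lookup = dict(small)
    return [(s, p, o, lookup[s]) for s, p, o in big if s in lookup]

def partitioned_join(big, small, n_parts=2):
    # Both sides are shuffled by join key so that matching keys land in the
    # same partition; each partition is then joined independently.
    parts = defaultdict(lambda: ([], []))
    for s, p, o in big:
        parts[hash(s) % n_parts][0].append((s, p, o))
    for k, v in small:
        parts[hash(k) % n_parts][1].append((k, v))
    out = []
    for big_part, small_part in parts.values():
        lookup = dict(small_part)
        out += [(s, p, o, lookup[s]) for s, p, o in big_part if s in lookup]
    return out

assert sorted(broadcast_join(triples, names)) == sorted(partitioned_join(triples, names))
```

The trade-off the project exploits is visible even here: the broadcast join avoids shuffling the large side but only works when the small side fits in memory everywhere, while the partitioned join shuffles both sides but scales to two large inputs.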
  • BOM - Block-o-Matic!

    Block-o-Matic is a web page segmentation algorithm based on a hybrid approach combining scanned-document segmentation and visual content segmentation. A web page is associated with three structures: the DOM tree, the content structure and the logical structure. The DOM tree represents the HTML elements of a page; the geometric structure organizes the content based on a category and its geometry; finally, the logical structure is the result of mapping the content structure onto the human-perceptible meaning that forms the blocks. The segmentation process is divided into three phases: analysis, understanding and reconstruction of a web page. An evaluation method is proposed to assess web page segmentations against a ground truth of 400 pages classified into 16 categories, with a set of metrics based on geometric properties of blocks. Satisfactory results are obtained when comparing with other algorithms following the same approach.

    Project leader : Andrès SANOJA
    More details here

Team : CIAN

  • CORIOLIS - Platform for physical synthesis of integrated circuits

    Coriolis is an experimental integrated platform for the research, development and evaluation of new back-end VLSI design flows. Interconnect scaling to nanometer processes presents many difficult challenges to CAD flows. Academic research on the back-end currently tends to address specific algorithmic issues separately, although one key issue is the cooperation of multiple algorithmic tools. CORIOLIS is based on an integrated C++ database around which all tools consistently interact and collaborate. The platform currently includes a timing-driven global place-and-route flow.

    Project leader : Jean-Paul CHAPUT
    More details here
  • CAIRO - Analog IP Design

    Our purpose is to provide a language for designing generators of analog functions that can be easily ported to new sets of specifications and new technology processes. We are currently developing such a language, called CAIRO+. The CAIRO+ language supports the four steps of a design flow based on net-list and layout templates: net-list and layout template capture; design space exploration (managing electrical constraints); shape function computation (managing geometrical constraints); and layout generation (place and route). The language is intended to help designers capture their knowledge, thus creating a library of layout-aware analog functions. It is based on the C++ language. CAIRO+ allows creating complex hierarchical analog function generators from existing generators of simpler functions. It is an answer to the problem of analog and mixed-signal IPs. As a demonstration of CAIRO+'s capabilities, we are developing analog-to-digital converters, especially Sigma-Delta converters.

    Project leader : Marie-Minerve LOUËRAT


  • Lemon - library for easily modeling and operating on networks

    Lemon is a GUI toolkit written in C++ for manipulating graphical models. It relies on aGrUM both for graph theoretic algorithms and for computations within the graphical models (e.g., inference, learning).

    Project leader : Christophe GONZALES

    More details here
  • aGrUM - a Graphical Unified Model

    aGrUM is a C++ library designed for easily manipulating graphical models. Its range of applications is quite large as it is designed, e.g., for performing learning tasks (for instance, learning Bayes nets from data), planning tasks (FMDPs) and inference (Bayes nets, GAI-nets, influence diagrams).

    Project leader : Christophe GONZALES & Pierre-Henri WUILLEMIN

    More details here
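As a flavour of what "inference in a Bayes net" computes, here is brute-force enumeration on a two-node network (aGrUM itself uses far more efficient algorithms, and the network and probabilities below are invented for illustration):

```python
# Two-node Bayes net: Rain -> WetGrass, with made-up probability tables.
P_rain = {True: 0.2, False: 0.8}
P_wet_given_rain = {True: {True: 0.9, False: 0.1},
                    False: {True: 0.1, False: 0.9}}

def joint(rain, wet):
    # Chain rule: P(Rain, Wet) = P(Rain) * P(Wet | Rain)
    return P_rain[rain] * P_wet_given_rain[rain][wet]

# Query P(Rain=True | WetGrass=True) by enumerating the joint distribution.
num = joint(True, True)
den = sum(joint(r, True) for r in (True, False))
posterior = num / den
print(round(posterior, 3))  # 0.692: wet grass makes rain much more likely
```

Exact inference by enumeration is exponential in the number of variables, which is why libraries like aGrUM implement structured algorithms (junction trees, variable elimination) instead.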

Team : MOCAH

  • Adaptiv’Math - Adaptiv’Math

    Awarded under the Partenariat d'Innovation Intelligence Artificielle (P2IA) of the French Ministry of National Education and led by the startup EvidenceB, this project involves several companies (Nathan, Daesign, Schoolab, Isograd, BlueFrog), two laboratories (LIP6 and Inria Bordeaux), the APMEP (the French association of mathematics teachers), as well as researchers in cognitive psychology (E. Sander) and in neuroscience (A. Knopf). It aims to build a pedagogical assistant for Cycle 2 mathematics (CP, CE1, CE2) relying on AI algorithms and on a set of exercises designed from advances in cognitive science. We work on an AI component that proposes groupings of students (clustering) learned over all classes from criteria of mastery of mathematical skills. This clustering is then applied class by class at regular intervals to offer the teacher a view of how their groups of students evolve, in order to facilitate differentiated teaching strategies.

    Project leader : François Bouchet
    More details here
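The clustering idea can be sketched with a toy k-means over skill-mastery vectors. Everything below (the data, the two-skill representation, and the choice of k-means itself) is an illustrative assumption, not the project's actual algorithm:

```python
# Minimal k-means: group students by their skill-mastery vectors in [0, 1].
def kmeans(points, k, iters=10):
    centers = points[:k]                         # naive initialisation
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:                         # assign to nearest center
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[i].append(p)
        # recompute each center as the mean of its group (keep old if empty)
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return groups

# Each vector = mastery scores on two maths skills.
students = [(0.1, 0.2), (0.15, 0.1), (0.8, 0.9), (0.9, 0.85)]
groups = kmeans(students, k=2)
print(groups)  # one low-mastery group and one high-mastery group
```

Run at regular intervals over a class, such groupings give the teacher a picture of how student groups evolve, which is the use described above.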
  • SmashMedicine - SmashMedicine

    An EIT Health Campus project led by Oxford, on analyzing medical databases to help generate good distractors for multiple-choice training questions for medical students.

    Project leader : François Bouchet
  • MindMath - MindMath

    MindMath (funded by BPI and the Île-de-France region) is a project to build a gamified, adaptive platform for learning mathematics in middle school. It involves several industrial partners (Cabrilog, Tralalere, Domoscio and Bayard) and academic partners (LDAR - Université Paris Diderot and LIP6 - Sorbonne Université). We develop algorithms that decide, based on students' activity within mathematical problem-solving tasks, which feedback is best suited to help them progress in their learning. The decision relies both on an ontology built with experts in mathematics didactics and on machine-learning approaches. The search for optimal feedback is done by reinforcement learning, with a reward signal based on students' success in the activities. These proposals are being tested in various school and extracurricular contexts.

    Project leader : Amel Yessad
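As a hedged illustration of reinforcement-learning-based feedback selection, the sketch below uses an epsilon-greedy bandit whose reward is simulated student success. The feedback names and success rates are invented; this is not the project's implementation:

```python
import random

random.seed(0)
feedbacks = ["hint", "worked_example", "reformulation"]
true_success = {"hint": 0.3, "worked_example": 0.7, "reformulation": 0.5}
value = {f: 0.0 for f in feedbacks}    # estimated success rate per feedback
count = {f: 0 for f in feedbacks}

for _ in range(2000):
    if random.random() < 0.1:                    # explore a random feedback
        f = random.choice(feedbacks)
    else:                                        # exploit the best estimate
        f = max(feedbacks, key=value.get)
    reward = 1 if random.random() < true_success[f] else 0  # simulated outcome
    count[f] += 1
    value[f] += (reward - value[f]) / count[f]   # incremental mean update

print({f: round(v, 2) for f, v in value.items()})  # typically highest for "worked_example"
```

The reward here plays the role of "student success in the activities" mentioned above; a real system would estimate it from observed learner traces rather than a simulator.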

  • IECARE - IECARE

    IECARE is a research project funded by the ANR. It aims to produce fundamental and operational knowledge about computer science, its teaching and its learning, in compulsory education. This multidisciplinary project brings together researchers in computer science and in the humanities and social sciences (education science, didactics, psychology of learning, sociology). The research follows three structuring themes: analyzing the representations and practices of teachers and students; modeling, designing and modifying pedagogical scenarios and resources to support teaching and learning practices; and studying the support frameworks set up by and for teachers and computer science educators.

    Project leader : Mathieu Muratet
  • MAGAM - Multi-Aspect Generic Adaptation Model

    MAGAM is a Multi-Aspect (didactic, pedagogic, affective and motivational, gaming, etc.) Generic Adaptation Model based on matrix calculation that aims to adapt learning activities.

    Project leader : Vanda LUENGO et Baptiste MONTERRAT
    More details here
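One possible reading of "matrix-based adaptation" is a weighted aspect-by-activity score. The sketch below is purely illustrative: the aspect names, weights and scores are invented and do not describe MAGAM's actual model:

```python
# Score each candidate activity by combining several adaptation aspects.
aspects    = ["didactic", "motivational", "gaming"]
activities = ["quiz", "puzzle", "video"]

# relevance[aspect][activity]: suitability of the activity for this learner
relevance = {
    "didactic":     {"quiz": 0.9, "puzzle": 0.4, "video": 0.6},
    "motivational": {"quiz": 0.2, "puzzle": 0.8, "video": 0.7},
    "gaming":       {"quiz": 0.1, "puzzle": 0.9, "video": 0.3},
}
weights = {"didactic": 0.5, "motivational": 0.3, "gaming": 0.2}  # learner profile

def score(activity):
    # Weighted combination of aspects: one row-by-column matrix product.
    return sum(weights[a] * relevance[a][activity] for a in aspects)

best = max(activities, key=score)
print(best, round(score(best), 2))  # puzzle 0.62
```

Changing the weight vector changes which aspect dominates, which is how a per-learner profile can steer the adaptation without changing the activity data.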
  • LEA4PA - LEarning Analytics for Adaptation and Personalisation

    This project aims to build a platform to assist teachers in adapting learning activities. Multiple indicators (cognitive, pedagogical, temporal, etc.) will be inferred from data traces recorded and generated, automatically or manually, from learner activities. Visualization systems will be proposed to assist teachers in their adaptation process.

    Project leader : Vanda LUENGO et Amel YESSAD
  • NeoPraEval - New tools for new practices in evaluating and teaching mathematics

    NeoPraEval is an ANR-funded research project which aims at providing tools for teachers to deal with heterogeneity in learning, through automatic diagnostic and evaluation systems that they can use in their classes, as well as appropriate resources to answer the needs identified in students.

    Project leader : François BOUCHET
    More details here
  • RecoMOOC - Recommending people to people in MOOCs

    RecoMOOC aims at reducing attrition and improving the learning experience for MOOC learners through interaction with other students they can talk to and work with, identified automatically through a peer recommender system analyzing each learner's individual factors and progression.

    Project leader : François BOUCHET
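A standard way to implement such a peer recommender is profile similarity. The sketch below uses cosine similarity over invented progression features; this is an assumption for illustration, not necessarily RecoMOOC's method:

```python
from math import sqrt

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

# Invented progression vectors: (videos watched %, quizzes passed %, forum posts)
learners = {"ana": (0.9, 0.8, 5), "bob": (0.85, 0.75, 4), "eve": (0.1, 0.2, 0)}

def recommend_peer(me):
    # Recommend the other learner whose progression is most similar.
    others = [p for p in learners if p != me]
    return max(others, key=lambda p: cosine(learners[me], learners[p]))

print(recommend_peer("ana"))  # bob: closest progression profile
```

In the project's terms, the feature vector would be built from each learner's individual factors and progression, and the most similar active peer is the one suggested for interaction.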

Team : MoVe

  • PNML Framework

    PNML Framework is a prototype implementation of ISO/IEC 15909-2, the International Standard on the Petri Net Markup Language. The primary purpose of PNML is to enable interoperability among Petri net tools, and PNML Framework has been designed to support the Standard. It enables Petri net tool developers to seamlessly integrate PNML support into their tools, and provides an extensive and comprehensible API to create, save, load and browse PNML models.

    Project leader : Fabrice KORDON
    More details here
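To make the interchange format concrete, here is a minimal hand-written PNML-style document parsed with Python's standard library. The document and namespace follow the 2009 revision of the standard as far as the author recalls; PNML Framework itself provides a full API for this, so treat the snippet as illustration only:

```python
import xml.etree.ElementTree as ET

NS = "{http://www.pnml.org/version-2009/grammar/pnml}"

# A tiny net: one place, one transition, one arc between them.
pnml = """<pnml xmlns="http://www.pnml.org/version-2009/grammar/pnml">
  <net id="n1" type="http://www.pnml.org/version-2009/grammar/ptnet">
    <page id="pg0">
      <place id="p1"/>
      <transition id="t1"/>
      <arc id="a1" source="p1" target="t1"/>
    </page>
  </net>
</pnml>"""

root = ET.fromstring(pnml)
places = [p.get("id") for p in root.iter(NS + "place")]
arcs = [(a.get("source"), a.get("target")) for a in root.iter(NS + "arc")]
print(places, arcs)  # ['p1'] [('p1', 't1')]
```

Because every conforming tool reads and writes this same XML vocabulary, a net drawn in one editor can be verified or simulated in another, which is the interoperability goal described above.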

  • CPN-AMI - Petri Net based CASE environment

    CPN-AMI is a Petri Net based CASE environment. It offers a set of services to perform specification, validation, formal verification, model checking, computation of structural properties (invariants, traps, siphons, etc.), simulation and code generation. These services have been implemented either by members of our team or by university partners (Technical University of Helsinki, University of Torino, Technical University of Munich, Bell Laboratories). The second generation of CPN-AMI, built on top of FrameKit, has been available on the Internet since March 1997.

    Project leader : Fabrice KORDON
    More details here
  • SPOT - Spot Produces Our Traces

    SPOT (Spot Produces Our Traces) is an easily extensible model-checking library. Unlike existing model checkers, whose mode of operation is fixed, SPOT provides building blocks that users can combine to create a model checker matching their own needs. Such modularity makes it easy to experiment with different combinations and eases the development of new algorithms. Moreover, the library is centered on a particular kind of automaton that expresses the properties to verify more compactly, and that had not been used in any tool before.

    Project leader : Denis POITRENAUD

    More details here

Team : NPA

  • F-Interop - Remote interoperability testing services for IoT devices

    Remote interoperability testing services for IoT devices

    Project leader : Serge FDIDA
    More details here


  • CADNA - Control of Accuracy and Debugging for Numerical Applications

    CADNA is a library for performing scientific computations with estimation and control of round-off error propagation.

    Project leader : Jean-Marie CHESNEAUX
    More details here
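The idea behind CADNA's stochastic approach can be caricatured in a few lines: run the same computation several times under tiny random perturbations and look at the spread of the results, which reveals how many digits are reliable. This is a simplification for illustration, not CADNA's actual implementation (which perturbs rounding at the hardware level):

```python
import random

random.seed(1)

def perturb(x, rel=1e-15):
    # Randomly nudge the last bits of x, mimicking random rounding.
    return x * (1 + random.uniform(-rel, rel))

def runs(f, n=3):
    # Run the same computation n times under independent perturbations.
    return [f() for _ in range(n)]

# An ill-conditioned expression: (1e8 + 0.1) - 1e8 (exact value: 0.1).
results = runs(lambda: (perturb(1e8) + perturb(0.1)) - perturb(1e8))
spread = max(results) - min(results)
print(results, spread)  # agreement across runs estimates the reliable digits
```

When the runs agree to k digits, roughly k digits of the result can be trusted; a spread as large as the result itself signals that round-off has destroyed all significance.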