Research

Organizers.

  • Hugo Ayats (SemLIS)
  • Johanne Bakalara (Lacodam)
  • Francesco Bariatti (SemLIS)
  • Arnaud Belcour (Dyliss)
  • Kevin Da Silva (Genscale)
  • Ludivine Duroyon (Shaman)
  • Marie Le Roic (administrative assistants)
  • Anne Siegel (Dyliss)
  • Constance Thierry (Druid)

Tias Guns (KU Leuven)

Learning from user and environment in combinatorial optimisation (20/05/2021).

Industry and society are increasingly automating processes, which requires solving constrained optimisation problems. This includes vehicle routing problems in transportation, the scheduling of tasks in electricity demand-response programs, the rostering of personnel, and more. However, the solutions produced by state-of-the-art constraint programming solvers often fail to match the expectations of domain experts, which reduces the potential of the technology.
One key direction to overcome this is to automatically learn from the user and environment over time. This includes learning preferences, implicit constraints, and the impact of external factors. It requires combining machine learning techniques with constraint solving techniques, and hence hybrid learning and reasoning systems.
In this talk I will provide an overview of three different ways in which part of the problem specification can be learned from data: preference learning of objective functions, perception-based reasoning, and end-to-end decision-focused learning. I will highlight recent evolutions and advances we have been making in our lab.
I will end by sharing how I used early results in these directions as motivation for an ERC grant due to start in 2021, including the research challenges identified and how they range from feasible to potentially high-impact.
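
As a flavour of the first direction, preference learning of objective functions, here is a minimal sketch (an illustration, not code from the talk): a perceptron-style update that learns the weights of a linear objective from pairs in which the user overrode the solver's proposal. The feature map and feedback data are invented.

```python
import numpy as np

def features(solution):
    """Map a candidate solution to a feature vector (toy identity here)."""
    return np.asarray(solution, dtype=float)

def learn_objective_weights(feedback, n_features, epochs=10, lr=0.1):
    """Structured-perceptron-style preference learning: `feedback` is a
    list of (user_preferred, solver_proposed) solution pairs."""
    w = np.zeros(n_features)
    for _ in range(epochs):
        for preferred, proposed in feedback:
            # If the current weights rank the solver's proposal at least
            # as high as the user's choice, nudge w toward the user.
            if w @ features(proposed) >= w @ features(preferred):
                w += lr * (features(preferred) - features(proposed))
    return w

# Invented feedback: the user consistently prefers solutions with a
# larger second component, and the learned weights come to reflect that.
feedback = [((1.0, 3.0), (2.0, 1.0)), ((0.0, 2.0), (1.0, 0.0))]
print(learn_objective_weights(feedback, n_features=2))
```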


Bruno Crémilleux (Greyc, Univ Caen)

Pattern mining: from exhaustive search to user-centred pattern discovery (29/04/2021).

Pattern mining is now a well-established field of data mining. For a long time, its methods favoured complete approaches, with substantial effort devoted to algorithmic aspects. In this talk, we show the successes and practical limits of these approaches, which have motivated the field's evolution towards the use of preferences and user-centred methods. We illustrate this evolution by showing how explicit preferences can be integrated into pattern mining. Since users often have only a vague idea of the information they are looking for, we then introduce interactive mining and its challenges; interactive mining has the advantage of eliciting user preferences so as to provide more useful patterns.
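
To make the starting point concrete, here is a minimal sketch (mine, not the speaker's code) of the exhaustive, levelwise style of itemset mining the talk begins from, with a user-supplied predicate standing in for an explicit preference; data and names are invented.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support, interesting=lambda s: True):
    """Levelwise (Apriori-style) exhaustive search for frequent itemsets,
    filtered by a user-supplied predicate acting as an explicit preference."""
    items = {i for t in transactions for i in t}
    level = [frozenset([i]) for i in items]
    result = {}
    while level:
        # Count the support of each candidate in one pass over the data.
        counts = {c: sum(c <= t for t in transactions) for c in level}
        frequent = [c for c, n in counts.items() if n >= min_support]
        result.update((c, counts[c]) for c in frequent if interesting(c))
        # Next level: unions of frequent sets that are one item larger.
        level = list({a | b for a, b in combinations(frequent, 2)
                      if len(a | b) == len(a) + 1})
    return result

transactions = [frozenset("abc"), frozenset("abd"), frozenset("ab"),
                frozenset("cd")]
# Explicit preference: only report patterns containing item "a".
print(frequent_itemsets(transactions, min_support=2,
                        interesting=lambda s: "a" in s))
```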

Ioanna Manolescu (Inria, Saclay)

What do the Sources Say? Exploring Heterogeneous Journalistic Data As a Graph (16/03/2021). (slides: SourcesSay-IRISA-032021)

Professional journalism is of utmost importance nowadays. It is a key feature distinguishing dictatorships from democracies, and a mirror sorely needed by society to look upon itself and understand its functioning. In turn, understanding is necessary for making informed decisions, such as political choices.

With the world turning increasingly digital, journalists need to analyze very large amounts of data, while having no control over the structure, organization, and format of the data. Since 2013, my team has been working to understand data journalism and computational fact-checking use cases, to identify and develop tools adapted for this challenging setting. I will describe our SourcesSay project (2020-2024), in which extremely heterogeneous data sources are integrated as graphs, on top of which journalistic applications can be supported through flexible graph queries. I will explain the data source integration module, the role played by Information Extraction and Entity Disambiguation, as well as novel techniques to explore and simplify these graphs.

This is joint work with Angelos Anadiotis, Oana Balalau, Helena Galhardas, Tayeb Merabti, Emmanuel Pietriga, and many other colleagues. Project Web site: https://sourcessay.inria.fr
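
To make the graph view concrete, here is a small hypothetical sketch (not SourcesSay code; all names invented) in which records from heterogeneous sources become labelled edges of a single graph, and a typical journalistic question, how are two entities connected, is answered by a breadth-first search.

```python
from collections import defaultdict, deque

# Integrate heterogeneous records as labelled edges of a single graph.
graph = defaultdict(list)
def add_edge(source, label, target):
    graph[source].append((label, target))
    graph[target].append((label, source))  # treat edges as undirected here

# Hypothetical extracts from two sources (a registry and an article).
add_edge("CompanyX", "director", "PersonA")
add_edge("PersonA", "mentioned_with", "PersonB")
add_edge("PersonB", "works_for", "MinistryY")

def connection(a, b):
    """Shortest chain of labelled edges between entities a and b (BFS)."""
    queue, seen = deque([(a, [])]), {a}
    while queue:
        node, path = queue.popleft()
        if node == b:
            return path
        for label, nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(node, label, nxt)]))
    return None

print(connection("CompanyX", "MinistryY"))
```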

Michel Dumontier (Maastricht University).

Accelerating biomedical discovery science with an Internet of FAIR data and services (23/01/2021). (slides)

Abstract:
Biomedicine has always been a fertile and challenging domain for computational discovery science. Indeed, the existence of millions of scientific articles, thousands of databases, and hundreds of ontologies offers exciting opportunities to reuse our collective knowledge, were we not stymied by incompatible formats, overlapping and incomplete vocabularies, unclear licensing, and heterogeneous access points. In this talk, I will discuss our work to create computational standards, platforms, and methods to wrangle knowledge into simple but effective representations, based on semantic web technologies, that are maximally FAIR – Findable, Accessible, Interoperable, and Reusable – and to further use these for biomedical knowledge discovery. Only with additional crucial developments, however, will this emerging Internet of FAIR data and services enable automated scientific discovery on a global scale.
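
As a toy illustration of the semantic-web representations the talk builds on (my sketch, not the speaker's platform), facts can be stored as subject-predicate-object triples and queried by pattern matching, in the spirit of RDF and SPARQL basic graph patterns; the facts below are invented examples.

```python
# A minimal triple store: facts as (subject, predicate, object) tuples.
triples = {
    ("aspirin", "treats", "headache"),
    ("aspirin", "interacts_with", "warfarin"),
    ("ibuprofen", "treats", "headache"),
}

def match(pattern):
    """Return triples matching a pattern; None acts as a wildcard,
    like a variable in a SPARQL basic graph pattern."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "Which drugs treat headache?"
print([s for s, _, _ in match((None, "treats", "headache"))])
```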

Bio:
Dr. Michel Dumontier is the Distinguished Professor of Data Science at Maastricht University and co-founder of the FAIR (Findable, Accessible, Interoperable and Reusable) data principles. His research focuses on the development of computational methods for scalable and responsible discovery science. Dr. Dumontier obtained his BSc (Biochemistry) in 1998 from the University of Manitoba, and his PhD (Bioinformatics) in 2005 from the University of Toronto. Previously a faculty member at Carleton University in Ottawa and Stanford University in Palo Alto, Dr. Dumontier founded and directs the interfaculty Institute of Data Science at Maastricht University to develop sociotechnological systems for responsible data science by design. His work is supported through the Dutch National Research Agenda, the Netherlands Organisation for Scientific Research, Horizon 2020, the European Open Science Cloud, the US National Institutes of Health and a Marie-Curie Innovative Training Network. He is the editor-in-chief for the journal Data Science and is internationally recognized for his contributions in bioinformatics, biomedical informatics, and semantic technologies including ontologies and linked data.


Thomas Schiex (Inrae, Toulouse).

AI for Computational Protein Design: bridging symbolic and numerical AI (12/11/2020).

As someone working on automated reasoning, logic, and constraints in Artificial Intelligence, but also as a computational biologist at INRAE, I quickly realized that logic alone was not ideal for modelling and solving the computational problems I faced as an INRAE researcher. For this reason, our team worked on the AI side to extend “propositional automated reasoning” techniques, which reason on absolutely certain statements, to more flexible formulations using “weighted” information. This led to several dedicated fundamental algorithms and to their implementation in our dedicated solver toulbar2. Toulbar2 is now one of the most efficient solvers in its area, capable of rigorously mixing logical and numerically weighted information and of solving complex puzzles that combine pure logical knowledge with vaguer information, including probabilistic statements.

I will show how, over the last 8 years, we have put toulbar2 to the task of designing new proteins targeting a predefined biochemical/biological function, or more precisely a predefined 3-dimensional structure, standing on the shoulders of the giants who designed biophysical force fields and rotamer libraries, capturing slowly acquired and continually improved biophysical and statistical knowledge about proteins. This protein design problem, using a rigid backbone target, a catalog of discrete conformations for amino-acid side chains, and a pairwise-decomposable force field such as the AMBER, CHARMM, or Rosetta score functions, is known to be NP-hard.

For reasons we do not really understand, toulbar2 shines on these problems. Unlike the usual Monte Carlo-based methods, it can find a solution and prove it optimal for the force field used, on problems of non-trivial size. It also makes it possible to rigorously satisfy non-trivial designer constraints. It outperforms the other guaranteed optimization tools we have tried, and was even able to expose the limits of an optimized Monte Carlo method in Rosetta (IPD, Univ. Washington). Recent comparisons of toulbar2 with D-Wave quantum annealing hardware (by Rosetta team members) also show its good relative performance. Thanks to this, we have shown that it can also deal with backbone flexibility, at least when the aim is to design a protein sequence that should fit several “states”. We have put all these techniques to work in practice with structural biologist colleagues, designing new self-assembling proteins, antibodies, and enzymes.

As I will show, for a few years now we have augmented this approach, based purely on a force field and design-target constraints, with machine-learned information extracted from multiple sequence alignments. This allows the force field to be refined for a given design structure, bridging the gap between data, machine-learned, thermodynamic, and logical information, which perfectly fits the usual “designer” situation. Going back to AI, we have shown that this approach can also learn to play Sudoku, without knowing the rules, just from images of solved grids, better than “deep neural net”-friendly approaches, while providing understandable and customizable learned rules.
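
To fix ideas, here is a tiny invented instance of the optimization problem described above (a sketch, not toulbar2): choose one rotamer per position so as to minimize a pairwise-decomposable energy. The sketch simply enumerates all assignments, which is exactly the exponential blow-up that guaranteed solvers such as toulbar2 tame.

```python
from itertools import product

# Invented toy instance: 3 positions, 2 candidate rotamers each.
rotamers = [["A1", "A2"], ["B1", "B2"], ["C1", "C2"]]
unary = {"A1": 0.0, "A2": 1.5, "B1": 0.5, "B2": 0.2, "C1": 1.0, "C2": 0.0}
pairwise = {("A1", "B1"): -0.8, ("A2", "B2"): -0.3, ("B2", "C2"): -1.0}

def energy(assignment):
    """Pairwise-decomposable score: a sum of unary and pairwise terms."""
    e = sum(unary[r] for r in assignment)
    for i in range(len(assignment)):
        for j in range(i + 1, len(assignment)):
            e += pairwise.get((assignment[i], assignment[j]), 0.0)
    return e

# Exhaustive search over all 2**3 assignments; the problem is NP-hard
# in general, which is why dedicated solvers matter.
best = min(product(*rotamers), key=energy)
print(best, energy(best))
```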

François Charoy (Loria, Nancy)

Trust and information sharing at large scale (22/10/2020).

Large-scale cooperation between organisations and people is nowadays facilitated by tools and platforms that enable information to be shared and produced. Since Ellis's early work in the 1980s, various techniques have been developed to make such sharing possible at Web scale. However, the mere existence of this possibility is not enough to ensure a usage consistent with the constraints of the domains concerned. Through a presentation of several lines of work carried out in the Coast team at LORIA, we will show the factors that can affect trust between partners when information has to be shared. The first factor is the human factor, which we will introduce through a study conducted in the field of civil protection. The second factor concerns the safety of sharing, in particular the platforms and algorithms that ensure safe and efficient large-scale peer-to-peer data sharing. The last factor concerns the security of sharing: we will explain the risks of attacks against a peer-to-peer sharing platform and how one can protect against them.
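
As one concrete example of the kind of algorithm behind safe peer-to-peer sharing, here is a minimal sketch (illustrative only, not the Coast team's code) of a state-based grow-only counter CRDT: replicas update independently and converge by merging states, without central coordination.

```python
class GCounter:
    """State-based grow-only counter CRDT: one slot per replica."""
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def merge(self, other):
        # Pointwise maximum: commutative, associative, and idempotent,
        # so replicas converge regardless of message order or duplication.
        for rid, n in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), n)

    def value(self):
        return sum(self.counts.values())

a, b = GCounter("a"), GCounter("b")
a.increment(3); b.increment(2)
a.merge(b); b.merge(a)
assert a.value() == b.value() == 5
```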


Sarah Cohen-Boulakia (LRI).

Computational reproducibility in the life sciences and research in computer science: round trip (23/01/2020).

With the development of new experimental technologies, biologists face an avalanche of data that must be computationally analyzed for scientific advancements and discoveries to emerge. Given the complexity of analysis pipelines, the large number of computational tools, and the enormous amount of data to manage, there is compelling evidence that many (if not most) scientific discoveries will not stand the test of time. Increasing the reproducibility of computed results is therefore of paramount importance. While several elements of a solution are currently available, ensuring reproducible analyses relies on progress in several areas of computer science research, including fundamental aspects: analysis pipelines may form very complex graphs to be compared and mined, and they are also pieces of software with dependencies on their computational environment, which makes maintaining them highly challenging. After an introduction to the problem of computational reproducibility, we discuss the challenges posed by this domain and describe the remaining research opportunities in computer science.
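
One small ingredient of reproducible analyses, sketched here with invented names and commands (not the speaker's tools): fingerprint each pipeline step's code and inputs, so that a later re-run can detect that something in the pipeline or its data has changed.

```python
import hashlib, json

def fingerprint(step_name, code, inputs):
    """Hash of a pipeline step's code and inputs; if it differs between
    runs, the step's cached result can no longer be trusted."""
    payload = json.dumps({"step": step_name, "code": code, "inputs": inputs},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

run1 = fingerprint("align", "bwa mem -t 4", {"reads": "sample1.fastq"})
run2 = fingerprint("align", "bwa mem -t 8", {"reads": "sample1.fastq"})
print(run1 == run2)  # False: the command changed, so results may differ
```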

Laurent Simon (LaBRI, Bordeaux).

SAT: solving one hard problem to solve them all (28/11/2020).

Progress in the practical solving of SAT, the canonical NP-complete problem, has been spectacular in some application domains. Even though strong limits remain on a few highly combinatorial problems, in this talk we will present some key applications that have benefited from this progress. We will also show how propositional logic, at the heart of SAT, can be used to model and solve reasoning problems well beyond this initial formalism. The talk will conclude with a presentation of recent progress in knowledge compilation, a powerful, general, and elegant formalism for reasoning. (slides)
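
For readers unfamiliar with SAT solving, here is a bare-bones sketch of the classic DPLL procedure on CNF formulas; modern CDCL solvers, which drive the applications above, add clause learning, heuristics, and restarts on top of this skeleton.

```python
def dpll(clauses, assignment=None):
    """Clauses: lists of nonzero ints (a negative literal is a negation).
    Returns a satisfying assignment {var: bool} or None."""
    if assignment is None:
        assignment = {}
    # Simplify: drop satisfied clauses, remove falsified literals.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause already satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None  # clause falsified: conflict
        simplified.append(rest)
    if not simplified:
        return assignment  # all clauses satisfied
    # Unit propagation: a one-literal clause forces its variable.
    for clause in simplified:
        if len(clause) == 1:
            l = clause[0]
            return dpll(simplified, {**assignment, abs(l): l > 0})
    # Otherwise branch on the first unassigned variable.
    v = abs(simplified[0][0])
    return (dpll(simplified, {**assignment, v: True})
            or dpll(simplified, {**assignment, v: False}))

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([[1, 2], [-1, 3], [-2, -3]]))
```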

Antoine Amarilli (Telecom Paris).

Enumerating pattern matches in texts and trees. (24/10/2020).

We study the data management task of extracting structured information from unstructured documents, e.g., raw text or HTML pages. We use the framework of document spanners, where the pattern to extract is specified declaratively by the user (as a regular expression with capture variables) and is translated to an automaton that is then evaluated on the document to compute the pattern matches. We focus on the last step of this pipeline: our goal is to efficiently find the matches of the automaton on the document, even when there can be many of them. We do this by proposing an enumeration algorithm, which first preprocesses the automaton and document, and then enumerates all matches with a small delay between consecutive matches. Unlike previous work, our algorithm achieves the best possible bounds in the input document (namely, linear preprocessing and constant delay), while remaining tractable in the automaton. The guiding principle of the algorithm is to compute a factorized representation of all matches as a product of the automaton and document, and to design efficient indexes based on the structure of this representation. We also present our ongoing follow-up work, e.g., how to extend our algorithm to the case of tree-shaped documents by efficiently enumerating the matches of tree automata, how to efficiently update the enumeration results when the input document changes, and other open research directions.
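
As a point of contrast, here is the naive way to enumerate matches, written as an illustrative sketch with invented names (for simplicity the pattern is a plain word rather than a regular expression with capture variables): restart the matching automaton at every start position. This takes quadratic time in the document, whereas the algorithm in the talk achieves linear preprocessing and constant delay between matches.

```python
def enumerate_matches(document, word):
    """Yield all (start, end) spans where `word` occurs in `document`.
    Naive enumeration: rerun the matcher from every position, O(n*m);
    the enumeration algorithms in the talk avoid this blow-up."""
    for start in range(len(document)):
        state = 0  # number of characters of `word` matched so far
        for pos in range(start, len(document)):
            if document[pos] != word[state]:
                break
            state += 1
            if state == len(word):
                yield (start, pos + 1)
                break

print(list(enumerate_matches("abracadabra", "abra")))  # [(0, 4), (7, 11)]
```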

