Research

Current organizers:

  • Mathieu Chambe (Druid)
  • Wafaa Elhusseini (Shaman)
  • Victor Epain (Genscale)
  • Julien Delaunay (Lacodam)
  • Kerian Thuillier (Dyliss)

Sihem Amer-Yahia (CNRS, Univ. Grenoble Alpes)

Fairness on Online Labor Markets  (17/11/2022). https://lig-membres.imag.fr/amery/

Online labor markets are increasingly becoming a destination for work. These marketplaces include freelancing platforms such as Qapa and MisterTemp’ in France, and TaskRabbit and Fiverr in the USA. On those platforms, workers can find temporary jobs in the physical world such as moving furniture, or in the form of virtual micro-gigs such as helping with designing a website. I will present the results of a study of fairness on those platforms, and discuss the design of a model to enforce fairness in the Future of Work.
Bio: Sihem Amer-Yahia is a Silver Medal CNRS Research Director and Deputy Director of the Lab of Informatics of Grenoble. She works on exploratory data analysis and fairness in job marketplaces. Before joining CNRS, she was Principal Scientist at QCRI, Senior Scientist at Yahoo! Research and Member of Technical Staff at AT&T Labs. Sihem is PC chair for SIGMOD 2023 and Vice President of the VLDB Endowment. She currently leads the Diversity & Inclusion initiative for the database community.

Rémy Eyraud (LabHC CNRS, Université Jean Monnet, Saint-Etienne)

Hankel Matrix & Weighted Automata: From Spectral Learning to Spectral Distillation  (08/09/2022). https://perso.univ-st-etienne.fr/er101405/index.html

This talk focuses on the Hankel matrix and weighted automata. After introducing these two notions and the main algorithm that links them, I will detail the spectral learning algorithm, a machine learning approach that comes with theoretical guarantees and interesting practical results. Extensions will be discussed, with particular insight into the extraction of weighted automata from any black box trained on (categorical) sequential data. If time allows, a use case of an easy-to-use toolbox will be shown.
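To make the link between the Hankel matrix and spectral learning concrete, here is a minimal numpy sketch of the classical SVD-based construction. The two-state target automaton and the prefix/suffix basis are toy choices for illustration, not taken from the talk.

```python
# Minimal sketch of spectral learning of a weighted automaton (WFA) from its
# Hankel matrix, using the classical truncated-SVD construction.
import itertools
import numpy as np

alphabet = "ab"
# Toy 2-state WFA: f(w) = alpha0 . A_{w1} ... A_{wk} . alpha_inf
alpha0 = np.array([1.0, 0.0])
alpha_inf = np.array([0.2, 0.5])
A = {"a": np.array([[0.5, 0.2], [0.0, 0.4]]),
     "b": np.array([[0.1, 0.3], [0.2, 0.1]])}

def f(word):
    v = alpha0
    for c in word:
        v = v @ A[c]
    return v @ alpha_inf

# Prefix/suffix basis: all strings of length <= 2 (the empty string comes first)
basis = [""] + ["".join(t) for n in (1, 2) for t in itertools.product(alphabet, repeat=n)]
H = np.array([[f(p + s) for s in basis] for p in basis])                    # Hankel matrix
H_sigma = {c: np.array([[f(p + c + s) for s in basis] for p in basis]) for c in alphabet}

# Rank-n truncated SVD of H, then recover the operators of an equivalent WFA
n = np.linalg.matrix_rank(H)
U, s, Vt = np.linalg.svd(H)
U, D, V = U[:, :n], np.diag(s[:n]), Vt[:n, :].T
a0 = H[0, :] @ V                          # row of the empty prefix
a_inf = np.linalg.inv(D) @ U.T @ H[:, 0]  # column of the empty suffix
B = {c: np.linalg.inv(D) @ U.T @ H_sigma[c] @ V for c in alphabet}

def f_hat(word):
    v = a0
    for c in word:
        v = v @ B[c]
    return v @ a_inf

for w in ["", "a", "ab", "bba", "aabb"]:
    print(w, f(w), f_hat(w))   # the two values agree up to numerical error
```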

Marie-Christine Rousset (SLIDE, LIG, Grenoble)

Some ongoing work on building interpretable explanations for AI algorithms on tabular or graph data  (09/06/2022). https://lig-membres.imag.fr/rousset/

AI systems use sophisticated algorithms that apply to personal data to develop more and more decision-making applications that directly impact humans. Both for social acceptability and for ethical purposes, it is of utmost importance to make the decisions of AI systems interpretable by humans and also to provide guarantees of privacy protection. In this talk, I will present some ongoing work that we are conducting in the Grenoble MIAI chair “Explainable and Responsible AI” on building interpretable explanations of AI algorithms. In the first part, I will summarize experimental results that we have obtained on building local and global explanations for predictions of microcredit default learned by black-box models from a tabular dataset. In the second part, I will present our ongoing work on explaining privacy risks detected by a graph-based reasoning algorithm used to check incompatibility between privacy and utility policies expressed as queries. In this setting, queries are interpreted as logical formulas over a common schema and the explanation is based on the construction of a small synthetic data graph illustrating a possible entailment between graph patterns.
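As an illustration of what a local explanation of a black-box model on tabular data can look like, here is a generic LIME-style sketch: a black-box classifier is probed around one instance and approximated by a weighted linear surrogate. This is a standard baseline written only for illustration, on a synthetic dataset; it is not the method developed in the MIAI chair.

```python
# Generic LIME-style local surrogate for a tabular black-box classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)   # synthetic data
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

x0 = X[0]                                                  # instance to explain
Z = x0 + rng.normal(scale=0.5, size=(5000, X.shape[1]))    # perturbations around x0
p = black_box.predict_proba(Z)[:, 1]                       # black-box outputs on the perturbations
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 2.0)           # proximity kernel

surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)    # weighted linear fit around x0
for i, c in enumerate(surrogate.coef_):
    print(f"feature {i}: local weight {c:+.3f}")           # local feature importances
```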

Charlotte Truchet (TASC, Université de Nantes)

Randomization of solutions in constraint programming  (24/03/2022). https://www.univ-nantes.fr/charlotte-truchet

Constraint programming provides generic techniques to efficiently solve combinatorial problems. In this talk, I will present ongoing work on constraint sampling. The question is: is it possible to sample the solutions of combinatorial problems in a generic way, using a constraint solver, and without making the computation time explode? I will present an algorithm, inspired by Meel’s method for SAT, that adds randomly chosen hashing constraints to the problem in order to split the search space into small cells of solutions. By sampling the solutions in a cell, it randomly generates solutions without revamping the model of the problem. We ensure the randomness by introducing a new family of hashing constraints: randomly generated tables. We implemented this solving method using the constraint solver Choco-solver. The quality of the randomness and the running time of our approach are experimentally compared to a random branching strategy. We show that our approach improves the randomness while staying within the same order of magnitude in running time.
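The hashing idea can be illustrated with a toy, solver-free sketch: randomly generated table constraints progressively cut the solution set into small cells, and a solution is drawn uniformly inside the final cell. The toy model and constraints below are assumptions for illustration only; the actual method pushes the hashing constraints into Choco-solver instead of enumerating solutions explicitly.

```python
# Toy illustration of hashing-based constraint sampling with random table constraints.
import itertools
import random

n, dom = 6, range(4)                    # 6 variables with domain {0,1,2,3}
def satisfies(sol):                     # toy model, standing in for a real CP model
    return sol[0] != sol[1] and sum(sol) % 3 == 0

solutions = [s for s in itertools.product(dom, repeat=n) if satisfies(s)]

def random_table_constraint(keep_ratio=0.5):
    """A randomly generated table constraint on a random pair of variables."""
    i, j = random.sample(range(n), 2)
    allowed = {t for t in itertools.product(dom, repeat=2) if random.random() < keep_ratio}
    return lambda sol: (sol[i], sol[j]) in allowed

def sample_solution(cell_size=8):
    while True:
        cell = solutions
        while len(cell) > cell_size:                # split the search space with hashing constraints
            constraint = random_table_constraint()
            cell = [s for s in cell if constraint(s)]
        if cell:                                    # restart if the cell became empty
            return random.choice(cell)              # uniform draw inside the final small cell

print(sample_solution())
```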

Farouk Toumani (LIMOS, Clermont Ferrand)

Reasoning in description logics with variables: beyond matching and unification  (20/01/2022). https://perso.limos.fr/~fatouman/

Description logics (DLs) are a family of logic-based knowledge representation languages used to represent knowledge of an application domain in a structured and well-understood way, while providing inference procedures for reasoning about the represented knowledge. DLs form a very active research area, spanning the last 40 years, which has led to a rich variety of description formalisms with differing expressive power, employed in various application domains (e.g., information systems and databases, natural language processing, the semantic web, etc.). Standard inference problems (e.g., subsumption, satisfiability and the instance problem) as well as their computational properties (decidability, connection between complexity and expressivity) are now well understood. After a brief introduction to non-standard inferences in description logics, this talk will be devoted to description logics augmented with (concept or role) variables. Concepts with variables (also called patterns) have been studied in description logics since the mid-nineties and have led to a highly interesting research stream on so-called non-standard reasoning, specifically matching and unification. The talk will discuss new semantics for variables and highlight new inference problems that can be viewed as generalizations of matching and unification.
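For readers unfamiliar with these non-standard inferences, here is a textbook-style example, not taken from the talk, of a matching problem with a concept variable X, and of how unification generalizes it.

```latex
% Matching: find a substitution \sigma for the variable X that makes the
% pattern equivalent to the given (variable-free) concept.
\[
  \underbrace{\exists \mathit{hasChild}.(X \sqcap \mathit{Tall})}_{\text{pattern } D}
  \quad \stackrel{?}{\equiv} \quad
  \underbrace{\exists \mathit{hasChild}.(\mathit{Doctor} \sqcap \mathit{Tall})}_{\text{concept } C}
  \qquad\Longrightarrow\qquad
  \sigma(X) = \mathit{Doctor}.
\]
\[
  \text{Unification allows variables on both sides, e.g. }
  \exists r.(X \sqcap A) \stackrel{?}{\equiv} \exists r.(B \sqcap Y)
  \text{ with unifier } \sigma(X)=B,\ \sigma(Y)=A .
\]
```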

Franck Varenne (ERIAC, Rouen)

Machine learning and explainability: a view from the comparative epistemology of models  (16/12/2021). http://franck.varenne.perso.monsite-orange.fr/

So-called machine learning techniques are increasingly used to design mathematical and computational models whose function is either prediction or decision. In contexts of medical diagnosis or legal decision-making in particular, it is required that these models, or the associated algorithms, or their results, be, as the phrase goes, explainable. In parallel, in the sub-field of philosophy of science known as the "epistemology of models", it has become customary to distinguish the models used in science according to their different epistemic functions: whether they describe, predict, help to understand, explain, etc. In this talk, which will present part of the work I have started in collaboration with Christophe Denis (LIP6, ACASA team), I will show that it is possible and useful to apply some analyses from this classificatory epistemology of models to the particular case of machine learning models. By returning to what should be understood behind this demand for explanation, to the difference between explaining the model and explaining with the model, and finally to the relationship between explanation and causality, I will try to shed some new light on a controversial subject. The subject is controversial because it mixes mathematical and computational thinking not only with AI, but also with the epistemology of models and the epistemology of applied mathematics.

William Ritchie (IGH, Montpellier)

Sequencing in Health, a decade of disappointment. Possible AI solutions.  (18/11/2021). https://www.igh.cnrs.fr/en/research/departments/genome-dynamics/intelligence-artificielle

The recent increase in the amount of sequencing data has produced a remarkably low number of clinically actionable discoveries. The main issues in translating these data into outcomes come from the fact that many diseases are multifactorial, requiring complex models and large sample sizes to understand them. However, sequencing data are large, sensitive and heterogeneous. In this talk, I will propose some solutions to address multiple issues pertaining to the use and interpretation of health-related sequencing data. Specifically, how can we explore sequencing data without a reference genome? How can we realistically preserve patient identity? My talk will also introduce some basic deep learning architectures for an audience without a background in machine learning.
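As a hint of what reference-free exploration can mean in practice, here is a small sketch that summarizes raw reads by their k-mer content, a representation that needs no reference genome. The reads and the choice of k are toy assumptions and this is not necessarily the speaker's pipeline.

```python
# Represent a set of sequencing reads by its k-mer counts (no reference genome needed).
from collections import Counter

def kmer_profile(reads, k=4):
    """Count every k-length substring across all reads."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

reads = ["ACGTACGTGG", "TTACGTACGA"]    # toy reads; real data would come from FASTQ files
profile = kmer_profile(reads, k=4)
print(profile.most_common(5))           # e.g. [('ACGT', 3), ('TACG', 3), ...]
```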

Thomas Schiex (INRAE, Toulouse)

AI for Computational Protein Design: bridging symbolic and numerical AI (12/11/2021). https://www.bioss-cnrs.fr/manif/biossia_202011/slides/Schiex.pdf

As someone working on automated reasoning, logic and constraints in Artificial Intelligence, but also as a computational biologist at INRAE, I quickly realized that logic alone was not ideal to model and solve the computational problems I faced as an INRAE researcher. For this reason, our team worked early on the AI side to extend “propositional automated reasoning” techniques, which reason on absolutely certain statements, to more flexible formulations using “weighted” information. This led to several dedicated fundamental algorithms and their implementation in our dedicated solver “toulbar2”. Toulbar2 is now one of the most efficient solvers in its area, and is capable of rigorously mixing logical and numerically weighted information, solving complex puzzles that combine pure logical knowledge with vaguer information, including probabilistic statements.

I will show how, in the last 8 years, we have put toulbar2 to the task of designing new proteins targeting a predefined biochemical/biological function, or more precisely a predefined 3-dimensional structure, standing on the shoulders of the giants who designed biophysical force-fields and rotamer libraries, capturing slowly acquired and continuously improved biophysical and statistical knowledge about proteins. This protein design problem, using a rigid backbone target, a catalog of discrete conformations for amino acid side-chains and a pairwise decomposable force-field such as the AMBER, CHARMM or Rosetta score functions, is known to be NP-hard.
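Concretely, the design problem can be stated as the minimization of a pairwise decomposable energy over one discrete rotamer choice per position. The brute-force sketch below uses random toy energy tables only to state that formulation; toulbar2 solves the same cost-function-network model with optimality proofs at much larger scales.

```python
# Pairwise decomposable design energy:
#   E(r_1..r_n) = sum_i E1_i(r_i) + sum_{i<j} E2_ij(r_i, r_j)
# minimized over one rotamer choice r_i per position (toy random energies).
import itertools
import random

random.seed(0)
n_positions, n_rotamers = 5, 4
E1 = [[random.uniform(0, 2) for _ in range(n_rotamers)] for _ in range(n_positions)]
E2 = {(i, j): [[random.uniform(0, 1) for _ in range(n_rotamers)] for _ in range(n_rotamers)]
      for i in range(n_positions) for j in range(i + 1, n_positions)}

def energy(assignment):
    e = sum(E1[i][r] for i, r in enumerate(assignment))
    e += sum(E2[i, j][assignment[i]][assignment[j]]
             for i in range(n_positions) for j in range(i + 1, n_positions))
    return e

best = min(itertools.product(range(n_rotamers), repeat=n_positions), key=energy)
print("optimal rotamer choice:", best, "energy:", round(energy(best), 3))
```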

For reasons we do not fully understand, toulbar2 shines on these problems. Unlike the usual Monte Carlo-based methods, it is able to find a solution and prove that it is optimal for the force-field used, on problems of non-trivial size. It also makes it possible to rigorously satisfy non-trivial designer constraints. It outperforms the other guaranteed optimization tools we have tried and was even able to show the limits of an optimized Monte Carlo method in Rosetta (IPD, Univ. Washington). Recent comparisons of toulbar2 with D-Wave quantum annealing hardware (by Rosetta team members) also show its good relative performance. Thanks to this, we have shown that it is also capable of dealing with backbone flexibility, at least when the aim is to design a protein sequence that should fit several “states”. We have put all these techniques to work in practice with structural biologist colleagues, designing new self-assembling proteins, antibodies or enzymes.

As I will show, for a few years now we have augmented this purely force-field- and design-constraint-based approach with machine-learned information extracted from multiple sequence alignments, allowing us to refine the force-field for a given design structure and bridging the gap between data, machine-learned, thermodynamic and logical information, which perfectly fits the usual “designer” situation. Going back to AI, we have shown that this approach is also able to learn how to play Sudoku, without knowing the rules, just from images of solved grids, better than “Deep Neural Net”-friendly approaches, while providing understandable and customizable learned rules.

François Charoy (LORIA, Nancy)

Trust and large-scale information sharing  (22/10/2021). https://members.loria.fr/fcharoy/

Large-scale cooperation between organizations and people is now facilitated by tools and platforms that enable information to be shared and produced. Since Ellis's early work in the 1980s, various techniques have been developed to enable this sharing at the scale of the Web. However, the existence of such a possibility is not enough to guarantee a use consistent with the constraints of the domains concerned. Through a presentation of several works carried out in the Coast team at LORIA, we will show the factors that can impact trust between partners when it comes to sharing information. The first factor is the human factor, which we will introduce through a study conducted in the field of civil protection. The second factor concerns the safety of sharing, in particular the platforms and algorithms that ensure safe and efficient large-scale peer-to-peer data sharing. The last factor concerns the security of sharing: we will explain the risks of attacks against a peer-to-peer sharing platform and how to protect against them.

Blaise Hanczar (Prof Université Paris-Saclay, Laboratoire IBISC)

Deep learning for phenotype prediction from transcriptomic data  (14/10/2021). https://pepi-ibis.inrae.fr/sites/default/files/AG2019/presentation_PEPI_vf.pdf

Transcriptomic data are an estimate of the amount of RNA "produced" by each gene; they can be seen as the activity of the genes in a sample. The analysis of these data plays a major role in the understanding of molecular biology and in the development of personalized medicine. Indeed, it is theoretically possible to predict many phenotypes from a patient's expression profile. While many classical machine learning methods have already been applied to these data, there is still little work on the use of deep learning.

We focus on deep learning for phenotype prediction from transcriptomic data, and in particular on two problems. The first is the small size of transcriptomic datasets, whereas training deep models requires a lot of data. To address this, we use transfer learning approaches. The second problem comes from the lack of transparency of neural networks: it is very difficult to understand and interpret the predictions of a deep model, even though this is essential for medical applications. We propose two approaches to interpret these models. The first is a method that identifies the important parts of an already trained model; the second proposes a new architecture that is interpretable by construction. In both cases, the interpretation is obtained by identifying the biological functions most heavily used by the model.
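To give an idea of what "interpretable by construction" can mean here, the sketch below wires a first hidden layer with one neuron per biological function (pathway), connected only to the genes annotated to it, so hidden activities can be read as pathway activities. The gene and pathway memberships, and the untrained weights, are toy assumptions rather than the architecture actually proposed in the talk.

```python
# Sketch of a pathway-structured first layer: each hidden neuron only sees its member genes.
import numpy as np

genes = ["g1", "g2", "g3", "g4", "g5"]
pathways = {"cell_cycle": ["g1", "g2"], "apoptosis": ["g2", "g3", "g4"], "immune": ["g5"]}

# Binary mask: a (gene -> pathway neuron) weight is allowed only for member genes
mask = np.array([[1.0 if g in members else 0.0 for g in genes]
                 for members in pathways.values()])      # shape (n_pathways, n_genes)

rng = np.random.default_rng(0)
W = rng.normal(size=mask.shape) * mask    # masked weights (trained in practice, random here)
V = rng.normal(size=(2, len(pathways)))   # pathway activities -> phenotype scores

def predict(expression):                  # expression: vector of per-gene activity
    pathway_activity = np.tanh(W @ expression)
    return V @ pathway_activity, pathway_activity

scores, activity = predict(np.array([0.5, 1.2, -0.3, 0.0, 2.1]))
for name, a in zip(pathways, activity):
    print(f"{name}: activity {a:+.2f}")   # per-pathway activity is the interpretation
```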

Olivier Collin (IRISA, Rennes)

Presentation of the GenOuest platform  (23/09/2021). https://www.genouest.org/

Operational since 2001, the GenOuest platform offers a complete environment initially specialized in bioinformatics.
It serves a large community (more than 850 accounts opened). Thanks to a strategy of successive integration into various regional and national networks, it is now part of the Institut Français de Bioinformatique (IFB), which allows it to take part in the European Elixir infrastructure.
Over the years, the service offering has grown to provide a range of environments adapted to users' various needs, with virtualized environments and a cloud in addition to a compute cluster. For the past few years, GenOuest has also focused on the management of scientific data through various projects.
This talk will present the GenOuest platform with an overview of the resources and services it offers.

Tias Guns (Vrije Universiteit Brussel, Belgium)

Learning from user and environment in combinatorial optimisation.  (20/05/2021).  https://people.cs.kuleuven.be/~tias.guns/files/tias_learn_combinatorial-may21.pdf

Industry and society are increasingly automating processes, which requires solving constrained optimisation problems. This includes vehicle routing problems in transportation, scheduling of tasks in electricity demand-response programs, rostering of personnel and more. However, the solutions found by state-of-the-art constraint programming solvers are often found to not match the expectations of domain experts, which reduces the potential of the technology.
One key direction to overcome this is to automatically learn from the user and environment over time. This includes learning preferences, implicit constraints and impacts of external factors. It requires combining machine learning techniques with constraint solving techniques, and hence hybrid learning and reasoning systems.
In this talk I will provide an overview of three different ways in which part of the problem specification can be learned from data. This includes preference learning of objective functions, perception-based reasoning and end-to-end decision-focused learning, where I will highlight recent evolutions and advances we have been making in our lab.
I will end by sharing how I used early results in these directions as the motivation for an ERC grant due to start in 2021, including the research challenges identified and how they range from feasible to potentially high-impact.
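As a toy illustration of the first of these directions, preference learning of objective functions, the sketch below learns the weights of a linear objective from the solutions a simulated user picks, with a structured-perceptron style update. The candidate solutions and the hidden preferences are synthetic assumptions meant only to show the learning loop.

```python
# Learn the weights of a linear objective from observed user choices.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([3.0, -1.0, 2.0])                 # hidden user preferences
learned_w = np.zeros(3)

for step in range(200):
    candidates = rng.uniform(0, 1, size=(20, 3))    # feasible solutions of one problem instance
    user_pick = candidates[np.argmax(candidates @ true_w)]       # what the user would choose
    solver_pick = candidates[np.argmax(candidates @ learned_w)]  # what we would propose
    learned_w += 0.1 * (user_pick - solver_pick)    # move the objective toward the user's choice

print("learned direction:", np.round(learned_w / np.linalg.norm(learned_w), 2))
print("true direction:   ", np.round(true_w / np.linalg.norm(true_w), 2))
```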


Bruno Crémilleux (Greyc, Univ Caen)

Pattern mining: from exhaustive search to user-centered pattern discovery (29/04/2021).

Pattern mining is now a well-established field of data mining. For a long time, methods favored complete approaches, with significant effort devoted to algorithmic aspects. In this talk, we show the successes and the practical limits of these approaches, which have motivated the evolution of the field toward the use of preferences and user-centered methods. We illustrate this evolution by showing how explicit preferences can be integrated into pattern mining. Since users often have only a vague idea of the information they are looking for, we then introduce interactive mining and its challenges, interactive mining having the advantage of eliciting user preferences in order to provide more useful patterns.
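For readers outside the field, the sketch below shows the exhaustive side of pattern mining on a toy dataset: an Apriori-style level-wise enumeration of all frequent itemsets. Even at this scale the output grows quickly, which is exactly what motivates the preference-based and interactive approaches discussed in the talk.

```python
# Exhaustive frequent-itemset mining (Apriori-style level-wise search) on toy transactions.
from itertools import combinations

transactions = [{"bread", "milk"}, {"bread", "butter", "milk"},
                {"butter", "milk"}, {"bread", "butter"}, {"bread", "milk", "jam"}]
min_support = 2
items = sorted(set().union(*transactions))

def support(itemset):
    return sum(itemset <= t for t in transactions)

frequent, level = [], [frozenset([i]) for i in items]
while level:
    level = [s for s in level if support(s) >= min_support]     # keep frequent candidates
    frequent.extend(level)
    # next level: join pairs of frequent itemsets that differ by exactly one item
    level = list({a | b for a, b in combinations(level, 2) if len(a | b) == len(a) + 1})

for s in frequent:
    print(sorted(s), "support =", support(s))
```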

Ioanna Manolescu (Inria, Saclay)

What do the Sources Say? Exploring Heterogeneous Journalistic Data As a Graph  (16/03/2021). (slides: SourcesSay-IRISA-032021)

Professional journalism is of utmost importance nowadays. It is a main feature distinguishing dictatorships from democracies, and a mirror sorely needed by society to look upon itself and understand its functioning. In turn, understanding is necessary for making informed decisions, such as political choices.

With the world turning increasingly digital, journalists need to analyze very large amounts of data, while having no control over the structure, organization, and format of the data. Since 2013, my team has been working to understand data journalism and computational fact-checking use cases, to identify and develop tools adapted for this challenging setting. I will describe our SourcesSay project (2020-2024), in which extremely heterogeneous data sources are integrated as graphs, on top of which journalistic applications can be supported through flexible graph queries. I will explain the data source integration module, the role played by Information Extraction and Entity Disambiguation, as well as novel techniques to explore and simplify these graphs.
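A toy sketch of the integration idea: heterogeneous sources and the entities they mention become nodes of one graph, a naive alias table plays the role of entity disambiguation, and a journalistic question becomes a graph query. The records, the alias table and the query are invented for illustration and do not reflect the actual SourcesSay pipeline.

```python
# Heterogeneous sources integrated as one graph, then queried for connections.
import networkx as nx

sources = [
    {"id": "tweet:1",   "mentions": ["A. Dupont", "Acme Corp"]},
    {"id": "article:7", "mentions": ["Alice Dupont", "Acme Corp", "Ministry of X"]},
]

def disambiguate(mention):
    """Naive entity disambiguation: map known aliases to a canonical entity."""
    aliases = {"a. dupont": "Alice Dupont", "alice dupont": "Alice Dupont"}
    return aliases.get(mention.lower(), mention)

G = nx.Graph()
for src in sources:
    G.add_node(src["id"], kind="document")
    for m in src["mentions"]:
        entity = disambiguate(m)
        G.add_node(entity, kind="entity")
        G.add_edge(src["id"], entity, relation="mentions")

# A simple "what connects these two entities?" query
print(nx.shortest_path(G, "Alice Dupont", "Ministry of X"))
# -> ['Alice Dupont', 'article:7', 'Ministry of X']
```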

This work is joint with Angelos Anadiotis, Oana Balalau, Helena Galhardas, Tayeb Merabti, Emmanuel Pietriga, and many other colleagues.  Project Web site: https://sourcessay.inria.fr

Michel Dumontier (Maastricht University).

Accelerating biomedical discovery science with an Internet of FAIR data and services (23/01/2021). (slides)

Abstract:
Biomedicine has always been a fertile and challenging domain for computational discovery science. Indeed, the existence of millions of scientific articles, thousands of databases, and hundreds of ontologies offers exciting opportunities to reuse our collective knowledge, were we not stymied by incompatible formats, overlapping and incomplete vocabularies, unclear licensing, and heterogeneous access points. In this talk, I will discuss our work to create computational standards, platforms, and methods to wrangle knowledge into simple but effective representations based on semantic web technologies that are maximally FAIR – Findable, Accessible, Interoperable, and Reusable – and to further use these for biomedical knowledge discovery. But only with additional crucial developments will this emerging Internet of FAIR data and services enable automated scientific discovery on a global scale.
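To make the kind of representation alluded to here concrete, the snippet below expresses a few facts as RDF triples with global identifiers and queries them with SPARQL (using rdflib). The vocabulary and facts are invented for illustration and are not from the speaker's platforms.

```python
# A few facts as RDF triples with resolvable identifiers, queried with SPARQL.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF

EX = Namespace("http://example.org/biomed/")
g = Graph()
g.add((EX.aspirin, RDF.type, EX.Drug))
g.add((EX.aspirin, EX.treats, EX.headache))
g.add((EX.aspirin, EX.label, Literal("acetylsalicylic acid")))

# Globally identified resources plus a standard query language: the FAIR spirit in miniature
results = g.query("""
    PREFIX ex: <http://example.org/biomed/>
    SELECT ?drug ?condition WHERE { ?drug a ex:Drug ; ex:treats ?condition . }
""")
for drug, condition in results:
    print(drug, "treats", condition)
```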

Bio:
Dr. Michel Dumontier is the Distinguished Professor of Data Science at Maastricht University and co-founder of the FAIR (Findable, Accessible, Interoperable and Reusable) data principles. His research focuses on the development of computational methods for scalable and responsible discovery science. Dr. Dumontier obtained his BSc (Biochemistry) in 1998 from the University of Manitoba, and his PhD (Bioinformatics) in 2005 from the University of Toronto. Previously a faculty member at Carleton University in Ottawa and Stanford University in Palo Alto, Dr. Dumontier founded and directs the interfaculty Institute of Data Science at Maastricht University to develop sociotechnological systems for responsible data science by design. His work is supported through the Dutch National Research Agenda, the Netherlands Organisation for Scientific Research, Horizon 2020, the European Open Science Cloud, the US National Institutes of Health and a Marie-Curie Innovative Training Network. He is the editor-in-chief for the journal Data Science and is internationally recognized for his contributions in bioinformatics, biomedical informatics, and semantic technologies including ontologies and linked data.




Sarah Cohen (LRI).

Computational reproducibility in the life sciences and research in computer science: round trip (23/01/2020).
With the development of new experimental technologies, biologists are faced with an avalanche of data to be computationally analyzed for scientific advancements and discoveries to emerge. Faced with the complexity of analysis pipelines, the large number of computational tools, and the enormous amount of data to manage, there is compelling evidence that many (if not most) scientific discoveries will not stand the test of time. Increasing the reproducibility of computed results is of paramount importance. While several elements of solutions are currently available, ensuring reproducible analyses relies on progress in several areas of computer science research, including fundamental aspects: analysis pipelines may form very complex graphs to be compared and mined, and they are also pieces of software with dependencies on their computational environment whose maintenance is highly challenging. After an introduction to the problem of computational reproducibility, we go on to discuss the challenges posed by this domain and describe the remaining research opportunities in computer science.

Laurent Simon (LaBRI, Bordeaux).

SAT: solving one hard problem in order to solve them all. (28/11/2020).

Progress in the practical solving of SAT, the canonical NP-complete problem, has been spectacular in some application areas. Even though strong limits remain on some highly combinatorial problems, in this talk we will present a few key applications that have benefited from this progress. We will also show how propositional logic, at the heart of SAT, makes it possible to model and solve reasoning problems well beyond this initial formalism. The talk will conclude with a presentation of recent progress in knowledge compilation, a powerful, general and elegant formalism for reasoning. (slides)
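To make the object of the talk concrete, here is a propositional formula in CNF and a tiny DPLL-style search over it. Modern CDCL solvers add clause learning, heuristics and much more, but the problem they solve is exactly this one; the formula below is a toy example.

```python
# Tiny DPLL-style SAT search over a CNF formula.
def dpll(clauses, assignment=None):
    """clauses: list of clauses; a clause is a list of literals +v / -v over variables v."""
    assignment = assignment or {}
    # drop clauses already satisfied by the current partial assignment
    clauses = [c for c in clauses if not any(assignment.get(abs(l)) == (l > 0) for l in c)]
    if not clauses:
        return assignment                           # every clause is satisfied
    if any(all(abs(l) in assignment for l in c) for c in clauses):
        return None                                 # a clause has all its literals falsified
    v = next(abs(l) for c in clauses for l in c if abs(l) not in assignment)
    for value in (True, False):                     # branch on an unassigned variable
        result = dpll(clauses, {**assignment, v: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
formula = [[1, 2], [-1, 3], [-2, -3]]
print(dpll(formula))    # a satisfying assignment, e.g. {1: True, 3: True, 2: False}
```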

Antoine Amarilli (Telecom Paris).

Enumerating pattern matches in texts and trees. (24/10/2020).
We study the data management task of extracting structured information from unstructured documents, e.g., raw text or HTML pages. We use the framework of document spanners, where the pattern to extract is specified declaratively by the user (as a regular expression with capture variables) and is translated to an automaton that then is evaluated on the document to compute the pattern matches. We focus on the last step of this pipeline: our goal is to efficiently find the matches of the automaton on the document, even when there can be many of them. We do this by proposing an enumeration algorithm, which first preprocesses the automaton and document, and then enumerates all matches with a small delay between consecutive matches. Unlike previous work, our algorithm achieves the best possible bounds in the input document (namely, linear preprocessing and constant delay), while remaining tractable in the automaton. The guiding principle of the algorithm is to compute a factorized representation of all matches as a product of the automaton and document, and design efficient indexes based on the structure of this representation. We also present our ongoing follow-up work, e.g., how to extend our algorithm to the case of tree-shaped documents by efficiently enumerating the matches of tree automata, how to efficiently update the enumeration results when the input document changes, and other open research directions.
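As a minimal illustration of what "all pattern matches" means in this setting, the sketch below restarts a regular expression with a capture variable at every position of a toy document and collects the resulting spans, including overlapping ones. It is far from the linear-preprocessing, constant-delay guarantees of the algorithm presented in the talk; it only makes the extraction task concrete.

```python
# Naive enumeration of the spans matched by a pattern with a capture variable:
# restart the regex at every position and keep the (single, greedy) match found there.
import re

document = "aabab"
pattern = re.compile(r"(?P<x>a+b)")     # capture variable x

matches = []
for start in range(len(document)):
    m = pattern.match(document, start)
    if m:
        matches.append((m.start("x"), m.end("x"), m.group("x")))

for span in matches:
    print(span)     # (0, 3, 'aab'), (1, 3, 'ab'), (3, 5, 'ab')
```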

