Extracting Context-Free Grammars from Recurrent Neural Networks using Tree-Automata Learning and A* Search.

Benoît Barbot, Benedikt Bollig, Alain Finkel, Serge Haddad, Igor Khmelnitsky, Martin Leucker, Daniel Neider, Rajarshi Roy, Lina Ye

Abstract

This paper (i) presents an active learning algorithm for visibly pushdown grammars and (ii) shows its applicability to learning surrogate models of recurrent neural networks (RNNs) trained on context-free languages. Such surrogate models may be used for verification or explainability. Our learning algorithm exploits the close relationship between visibly pushdown languages and regular tree languages and builds on an existing learning algorithm for regular tree languages. Equivalence tests between a given RNN and a hypothesis grammar rely on a mixture of A* search and random sampling. An evaluation of our approach on a set of RNNs from the literature shows good preliminary results.
Original language: English
Pages: 113-129
Publication status: Published - 2021

Research Areas and Centers

  • Centers: Center for Artificial Intelligence Luebeck (ZKIL)
