Learning Transparent Data Automata

Normann Decker, Peter Habermehl, Martin Leucker, Daniel Thoma

Abstract

This paper studies the problem of learning data automata (DA), a recently introduced model for defining languages of data words, i.e., finite sequences of pairs of letters taken from a finite and an infinite alphabet, respectively. The model of DA is closely related to general Petri nets, for which no active learning algorithms have been introduced so far. This paper defines transparent data automata (tDA) as a strict subclass of deterministic DA. Yet, it is shown that every language accepted by DA can be obtained as the projection of the language of some tDA. The model of class memory automata (CMA) is known to be equally expressive as DA. However, deterministic DA are shown to be strictly less expressive than deterministic CMA. For the latter, and hence for tDA, equivalence is shown to be decidable. On these grounds, we develop an active learning algorithm for tDA in the spirit of Angluin's L* algorithm. Data automata are incomparable to register automata and their variants, for which learning algorithms were given recently.
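To make the notion of a data word concrete, the following Python sketch (not part of the publication; all names such as DataWord, Teacher and OpenCloseTeacher are illustrative assumptions, not the authors' implementation) models data words as sequences of (letter, data value) pairs and outlines the teacher interface of membership and equivalence queries that an L*-style active learner, like the one developed in the paper, relies on.

# Illustrative sketch (assumption, not the authors' code): a data word is a
# finite sequence of pairs (a, d), with a from a finite alphabet and d from
# an infinite domain (here: arbitrary integers).
from typing import List, Optional, Protocol, Tuple

DataWord = List[Tuple[str, int]]  # e.g. [("open", 1), ("open", 2), ("close", 1)]

class Teacher(Protocol):
    """Minimal teacher interface assumed by an L*-style active learner."""
    def membership(self, w: DataWord) -> bool:
        """Is the data word w in the target language?"""
        ...
    def equivalence(self, hypothesis: object) -> Optional[DataWord]:
        """Return None if the hypothesis is correct, else a counterexample."""
        ...

# Hypothetical target language: every data value is "open"ed before it is "close"d.
class OpenCloseTeacher:
    def membership(self, w: DataWord) -> bool:
        opened = set()
        for letter, d in w:
            if letter == "open":
                opened.add(d)
            elif letter == "close" and d not in opened:
                return False
        return True

    def equivalence(self, hypothesis: object) -> Optional[DataWord]:
        # In a full learner this would compare the hypothesis tDA against the
        # target language; omitted here.
        raise NotImplementedError

if __name__ == "__main__":
    t = OpenCloseTeacher()
    print(t.membership([("open", 1), ("open", 2), ("close", 1)]))  # True
    print(t.membership([("close", 7)]))                            # False

The example language constrains each data value independently, which is the kind of per-value bookkeeping that class memory automata and data automata are designed to capture.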

Original language: English
Title of host publication: Application and Theory of Petri Nets and Concurrency
Number of pages: 20
Volume: 8489 LNCS
Place of Publication: Cham
Publisher: Springer Verlag
Publication date: 01.06.2014
Pages: 130-149
ISBN (Print): 978-3-319-07733-8
ISBN (Electronic): 978-3-319-07734-5
Publication status: Published - 01.06.2014
Event: 35th International Conference on Application and Theory of Petri Nets and Concurrency, Tunis, Tunisia
Duration: 23.06.2014 to 27.06.2014
Conference number: 106194
