Article

Answer Set Programming for Regular Inference

1
Department of Computer Science and Automatics, University of Bielsko-Biala, Willowa 2, 43-309 Bielsko-Biala, Poland
2
Department of Algorithmics and Software, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
3
Department of Computer Engineering, Wroclaw University of Science and Technology, Wyb. Wyspianskiego 27, 50-370 Wroclaw, Poland
*
Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(21), 7700; https://doi.org/10.3390/app10217700
Submission received: 6 October 2020 / Revised: 24 October 2020 / Accepted: 25 October 2020 / Published: 30 October 2020
(This article belongs to the Special Issue Applied Machine Learning)

Abstract:
We propose an approach to non-deterministic finite automaton (NFA) inductive synthesis that is based on answer set programming (ASP) solvers. To that end, we explain how an NFA and its response to input samples can be encoded as rules in a logic program. We then ask an ASP solver to find an answer set for the program, which we use to extract the automaton of the required size. We conduct a series of experiments on some benchmark sets, using the implementation of our approach. The results show that our method outperforms, in terms of CPU time, a SAT approach and other exact algorithms on all benchmarks.

1. Introduction

The main problem investigated in this paper is as follows. Given a finite alphabet Σ, two finite subsets S+, S− ⊆ Σ*, and an integer k > 0, find a k-state NFA A that recognizes a language L ⊆ Σ* such that S+ ⊆ L and S− ⊆ Σ* ∖ L. In other words, we are dealing with the process of learning a finite state machine based on a set of labeled strings, thus building a model reflecting the characteristics of the observations. Machine learning of automata and grammars has a wide range of applications in such fields as syntactic pattern recognition, computational biology, systems modeling, natural language acquisition, and knowledge discovery (see [1,2,3,4,5]).
It is well known that NFA or regular expression minimization is computationally hard: it is PSPACE-complete [6]. Moreover, even if we specify the regular language by a deterministic finite automaton (DFA), the problem remains PSPACE-complete [7]. Angluin [8] showed that there is no polynomial-time algorithm for finding the shortest compatible regular expression for arbitrary given data (if P ≠ NP). We therefore conjecture that the complexity of inferring a minimum-size NFA that matches a labeled set of input strings is exponential.
For the deterministic case, the problem is NP-complete [9]. Moreover, in contrast to NFAs, for a given regular language there is always exactly one minimum-size DFA (i.e., there is no other non-isomorphic DFA with the same minimal number of states). Is NFA induction, therefore, harder than DFA induction? To answer this, let us compare the problem search space sizes expressed by the number of automata with a fixed number of states. Let c be the size of the alphabet and k the number of automaton states. The number of pairwise non-isomorphic minimal k-state DFAs over a c-letter alphabet is of order k·2^(k−1)·k^((c−1)k). The number of NFAs such that every state is reachable from the start state is of order 2^(ck²) [10]. Thus, switching from determinism to non-determinism increases the search space enormously. On the other hand, it is well known that NFAs are more compact: a DFA can even be exponentially larger than a corresponding NFA for a given language.
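To get a feel for these orders of magnitude, the two formulas can be evaluated directly for small c and k. This is a quick illustrative computation only, not part of the paper's toolchain:

```python
# Compare the order-of-magnitude search-space formulas quoted above:
# k * 2**(k-1) * k**((c-1)*k) minimal k-state DFAs versus 2**(c*k*k) NFAs,
# over a c-letter alphabet.
def dfa_count_order(k: int, c: int) -> int:
    return k * 2 ** (k - 1) * k ** ((c - 1) * k)

def nfa_count_order(k: int, c: int) -> int:
    return 2 ** (c * k * k)

# For a 2-letter alphabet and a few small state counts:
for k in (3, 5, 7):
    print(k, dfa_count_order(k, 2), nfa_count_order(k, 2))
```

Already for k = 7 and c = 2 the NFA space dwarfs the DFA space, which is the point made in the text.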
The purpose of the present proposal is twofold. The first objective is to devise an algorithm for the smallest non-deterministic automaton problem. It entails preparing logical rules (this set of rules will be called an AnsProlog program) before starting the search process. The second objective is to show how ASP solvers help to tackle the regular inference problem for large-size instances and to compare our approach with the existing ones. In particular, we will refer to the following exact NFA identification methods [11]:
  • A randomized algorithm using Parallelization Scheme 1 (RA-PS1), with deg, mmex and mmcex variable ordering methods chosen at random,
  • A randomized algorithm using Parallelization Scheme 2 (RA-PS2), with final state combinations chosen at random,
  • An ordered algorithm using Parallelization Scheme 1 (OA-PS1), with deg, mmex and mmcex variable ordering methods chosen according to the given order in a round robin fashion,
  • An ordered algorithm using Parallelization Scheme 2 (OA-PS2), with final state combinations ordered by the number of final states.
We will also refer to a SAT encoding given in [5]. All four above-mentioned methods and a SAT encoding are thoroughly described in Section 4.2. To enable comparisons with other methods in the future, the Python implementation of our approach is made available via GitHub. The Python scripting language is used only for generating the appropriate AnsProlog facts and running Clingo, an ASP solver.
Another line of research concerns the induction of DFAs. The original idea of SAT encoding in this context comes from the work of Heule and Verwer [12]. Their work, in turn, was based on the idea of transforming DFA identification into graph coloring, which was proposed by Coste and Nicolas [13]. Zakirzyanov et al. [14] proposed BFS-based symmetry breaking predicates, instead of the original max-clique predicates, which improved the translation-to-SAT technique. The improvement was demonstrated with experiments on randomly generated input data. The core idea is as follows. Consider a graph G whose vertices are the states of an initial automaton, with edges between vertices that cannot be merged. Finding a minimum-size DFA is then equivalent to coloring this graph with a minimum number of colors. The graph coloring constraints, in turn, can be efficiently encoded into SAT according to Walsh [15].
In a more recent approach, satisfiability modulo theories (SMT) are explored. Suppose that A = (Σ, Q = {0, 1, …, K−1}, s = 0, F, δ) is a target automaton and P is the set of all prefixes of S+ ∪ S−. An SMT encoding proposed by Smetsers et al. [16] uses four functions: δ: Q × Σ → Q, m: P → Q, λ_A: Q → {⊥, ⊤}, λ_T: S+ ∪ S− → {⊥, ⊤}, where {⊥, ⊤} represents logical {false, true}, and the following five constraints:
m(ε) = 0,
∀ x ∈ S+ : λ_T(x) = ⊤,
∀ xa ∈ P, x ∈ Σ*, a ∈ Σ : δ(m(x), a) = m(xa),
∀ x ∈ S+ ∪ S− : λ_A(m(x)) = λ_T(x),
∀ q ∈ Q ∀ a ∈ Σ ∃ r ∈ Q : δ(q, a) = r.
They implemented the encodings using Z3Py, the Python front-end of an efficient SMT solver Z3.
This paper is organized into five sections. In Section 2, we present necessary definitions and facts originating from automata, formal languages, and declarative problem-solving. Section 3 describes our inference algorithm based on solving an AnsProlog program. Section 4 shows the experimental results of our approach and describes in detail all reference methods. Concluding comments are made in Section 5.

2. Preliminaries

We assume the reader is familiar with basic regular language and automata theory (see, for example, [17]), so we introduce only the notations and notions used later in the paper.

2.1. Words and Languages

An alphabet Σ is a finite, non-empty set of symbols. A word w is a finite sequence of symbols chosen from an alphabet. The length of a word w is denoted by |w|. The empty word ε is the word with zero length. Let x and y be words. Then xy denotes the concatenation of x and y, that is, the word formed by making a copy of x and following it by a copy of y. As usual, Σ* denotes the set of words over Σ. A word w is called a prefix of a word u if there is a word x such that u = wx. It is a proper prefix if x ≠ ε. A set of words taken from some Σ*, where Σ is a particular alphabet, is called a language.
A sample S is an ordered pair S = (S+, S−), where S+ and S− are finite languages with an empty intersection (i.e., having no common word). S+ is called the positive part of S (examples), and S− the negative part of S (counter-examples).

2.2. Non-Deterministic Finite Automata

A non-deterministic finite automaton (NFA) is a five-tuple A = (Σ, Q, s, F, δ), where Σ is an alphabet, Q is a finite set of states, s ∈ Q is the initial state, F ⊆ Q is a set of final states, and δ is a relation from Q × Σ to Q. Members of δ are called transitions. A transition ((q, a), r) ∈ δ, with q, r ∈ Q and a ∈ Σ, is usually written as r ∈ δ(q, a). The relation δ specifies the moves: the meaning of r ∈ δ(q, a) is that automaton A in the current state q reads a and can move to state r. If for given q and a there is no r such that ((q, a), r) ∈ δ, the automaton stops and we can assume it enters the rejecting state. Moving into a state that is not final is also regarded as rejecting, but it may be just an intermediate state.
It is convenient to define δ̄ as a relation from Q × Σ* to Q by the following recursion: ((q, ya), r) ∈ δ̄ if ((q, y), p) ∈ δ̄ and ((p, a), r) ∈ δ, where a ∈ Σ and y ∈ Σ*, requiring ((t, ε), t) ∈ δ̄ for every state t ∈ Q. The language accepted by an automaton A is then
L(A) = { x ∈ Σ* ∣ there is q ∈ F such that ((s, x), q) ∈ δ̄ }.
Two automata are equivalent if they accept the same language.
Let A = (Σ, Q, s, F, δ) be an NFA. Then we will say that x ∈ Σ* is: (a) recognized by accepting (or accepted) if there is q ∈ F such that ((s, x), q) ∈ δ̄, (b) recognized by rejecting if there is q ∈ Q ∖ F such that ((s, x), q) ∈ δ̄, and (c) rejected if it is not accepted.
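The acceptance semantics above can be sketched in a few lines of Python. This is an illustrative simulation, not the paper's implementation; the automaton at the end is a hypothetical 2-state NFA accepting words that end with c:

```python
# A minimal simulation of the NFA semantics defined above: delta maps
# (state, symbol) to a set of successor states, and a word is accepted
# iff some sequence of moves from the start state ends in a final state.
def accepts(delta, start, finals, word):
    current = {start}  # states reachable by the prefix read so far
    for symbol in word:
        current = {r for q in current for r in delta.get((q, symbol), set())}
        if not current:  # no applicable transition: the automaton rejects
            return False
    return bool(current & finals)

# Hypothetical 2-state NFA over {a, b, c} accepting words ending with 'c':
delta = {(0, 'a'): {0}, (0, 'b'): {0}, (0, 'c'): {0, 1}}
print(accepts(delta, 0, {1}, 'abc'))  # a word ending with 'c' is accepted
```

The set-of-states construction mirrors the relation δ̄: after reading a prefix, `current` holds exactly the states reachable by that prefix.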

2.3. Answer Set Programming

Let us shortly introduce the idea of answer set programming (ASP). The readers interested in the details of ASP, alternative definitions, and the formal specification of AnsProlog are referred to handbooks [18,19,20].
Let A be a set of atoms. A rule is of the form:
a ← b₁, …, b_k, ∼c₁, …, ∼c_m.
where a, the b_i's, and the c_i's are atoms and k, m ≥ 0. The head of the rule, a, may be absent. The part on the right of '←' is called the body of the rule. The symbol ∼ is called default negation and, by analogy to database systems, in logic programming it refers to the absence of information. Informally, a ← ∼b means: if there is no evidence for b, then a should be included in a solution. A program Π is a finite set of rules.
Let R be the set of rules of the form:
a ← b₁, …, b_k.
and A be the set of atoms occurring in R. The model of a set R of rules without negated atoms is a subset M ⊆ A which fulfills the following conditions:
  • if a ←. is in R, then a ∈ M;
  • if ← b₁, …, b_k. is in R, then at least one b_i (1 ≤ i ≤ k) is in A ∖ M; for k = 0 (i.e., the head and the body of a rule r ∈ R are simultaneously absent) this condition cannot hold, so no model exists;
  • for rules a ← b₁, …, b_k. with a non-empty head and k > 0, b₁, …, b_k ∈ M implies a ∈ M.
Alternatively, if all atoms were treated as Boolean variables (i.e., presence is true, absence is false), M would be a model of R exactly when all rules (i.e., clauses) are satisfied.
The semantics of a program is defined by an answer set as follows. The reduct Π^X of a program Π relative to a set X of atoms is defined by
Π^X = { a ← b₁, …, b_k.  ∣  a ← b₁, …, b_k, ∼c₁, …, ∼c_m. ∈ Π and {c₁, …, c_m} ∩ X = ∅ }.
The ⊆-smallest model of Π^X is denoted by Cn(Π^X). A set X of atoms is an answer set of Π if X = Cn(Π^X).
For the sake of simplicity, AnsProlog programs are written using variables (by convention, variables start with uppercase letters). Such programs are then grounded, i.e., transformed to programs with no variables, by applying a Herbrand substitution. Note, however, that clever grounding discards rules that are redundant, i.e., that can never apply, because some atoms in their bodies have no possibility to be derived [19]. For example, the program:
  • el(a) ←.
  • el(b) ←.
  • equal(L, L) ← el(L).
  • neq(L, Y) ← el(L), el(Y), ∼equal(L, Y).
can be transformed to Π :
  • el(a) ←.
  • el(b) ←.
  • equal(a, a) ← el(a).
  • equal(b, b) ← el(b).
  • neq(a, a) ← el(a), el(a), ∼equal(a, a).
  • neq(a, b) ← el(a), el(b), ∼equal(a, b).
  • neq(b, a) ← el(b), el(a), ∼equal(b, a).
  • neq(b, b) ← el(b), el(b), ∼equal(b, b).
which has a single answer set: X = {equal(a, a), equal(b, b), el(a), el(b), neq(b, a), neq(a, b)}. The reduct Π^X becomes:
  • el(a) ←.
  • el(b) ←.
  • equal(a, a) ← el(a).
  • equal(b, b) ← el(b).
  • neq(a, b) ← el(a), el(b).
  • neq(b, a) ← el(b), el(a).
Its minimal model Cn(Π^X) is just X. In other words, a set X of atoms is an answer set of a logic program Π if: (i) X is a classical model of Π and (ii) all atoms in X are justified by some rule in Π.
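For the ground program above, the reduct construction and the least-model computation can be replayed mechanically. The following sketch is our own illustration (the rule encoding and function names are ours, not Clingo's); it confirms that X is indeed an answer set:

```python
# A rule is a triple (head, positive_body, negative_body); head is None for
# headless constraints. The reduct relative to X keeps a rule (with its
# negative body dropped) iff none of its negated atoms is in X.
def reduct(rules, X):
    return [(h, pos) for (h, pos, neg) in rules if not (set(neg) & X)]

# Least fixpoint of a negation-free program: repeatedly fire applicable rules.
def minimal_model(positive_rules):
    M = set()
    changed = True
    while changed:
        changed = False
        for h, pos in positive_rules:
            if h is not None and set(pos) <= M and h not in M:
                M.add(h)
                changed = True
    return M

# The grounded program from the text (el/equal/neq over constants a, b):
rules = [('el(a)', [], []), ('el(b)', [], []),
         ('equal(a,a)', ['el(a)'], []), ('equal(b,b)', ['el(b)'], []),
         ('neq(a,a)', ['el(a)', 'el(a)'], ['equal(a,a)']),
         ('neq(a,b)', ['el(a)', 'el(b)'], ['equal(a,b)']),
         ('neq(b,a)', ['el(b)', 'el(a)'], ['equal(b,a)']),
         ('neq(b,b)', ['el(b)', 'el(b)'], ['equal(b,b)'])]
X = {'el(a)', 'el(b)', 'equal(a,a)', 'equal(b,b)', 'neq(a,b)', 'neq(b,a)'}
# X is an answer set because the least model of the reduct is X itself.
```

Running `minimal_model(reduct(rules, X))` reproduces exactly the reduct and the model Cn(Π^X) worked out in the text.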
Recently, Answer Set Programming has emerged as a declarative problem-solving paradigm. This particular way of programming in AnsProlog is well-suited for modeling and solving problems that involve common sense reasoning. It has been fruitfully used in a range of applications.
Early ASP solvers used backtracking to find solutions. With the evolution of Boolean SAT solvers, several ASP solvers were built on top of them. The approach taken by these solvers was to convert the ASP formula into SAT propositions, apply the SAT solver, and then convert the solutions back to ASP form. Newer systems, such as Clasp (which is a part of the Clingo solver, https://potassco.org/clasp/), take advantage of conflict-driven algorithms inspired by SAT, without the complete conversion into a Boolean-logic form. These approaches improve performance significantly, often by an order of magnitude, over earlier backtracking algorithms [21].

3. Proposed Encoding for the Induction of NFA

Our translation reduces NFA identification to solving an AnsProlog program. Suppose we are given a sample S over an alphabet Σ and a positive integer k. We want to find a k-state NFA A = (Σ, {q₀, q₁, …, q_{k−1}}, q₀, F, δ) such that every w ∈ S+ is recognized by accepting and every w ∈ S− is recognized by rejecting. The parameter k can be regarded as the degree of data generalization. The smallest k, say k₀, for which our logic program has an answer set will give the most general automaton. As k increases, we obtain a set of nested languages, the largest for k₀ and the smallest for some k_m > k₀. Usually, the running time for k > k₀ is shorter than for k₀.
Let Pref(S) be the set of all prefixes of S+ ∪ S−. The relationship between an automaton A and a sample S in terms of ASP is constrained as shown below in seven groups of rules. In rules (5)–(24) the following convention for naming variables is used: P stands for a prefix, N stands for a number (state index), I, J, and M also represent state indexes, C stands for a character (an element of the alphabet), W stands for a word (which is also a prefix), and U represents another prefix.
  • We have the following domain specification, i.e., our AnsProlog facts.
    q(i). for all i ∈ {0, 1, …, k−1}.
    symbol(a). for all a ∈ Σ.
    prefix(p). for all p ∈ Pref(S).
    positive(s). for all s ∈ S+.
    negative(s). for all s ∈ S−.
    join(u, a, v). for all u, v ∈ Pref(S) and a ∈ Σ such that ua = v.
    Facts (5) and (6) define the set of states Q and the input alphabet Σ , while facts (7)–(9) describe the input sample. In particular, they define the prefixes as well as words to be recognized by accepting and rejecting, respectively.
    Finally, fact (10) defines the concatenation operation, which given prefix u Pref ( S ) and symbol a Σ produces prefix v Pref ( S ) .
  • The next rules ensure that in an automaton A every prefix goes to at least one state and every state is final or not.
    x(P, N) ← prefix(P), q(N), ∼not_x(P, N).
    not_x(P, N) ← prefix(P), q(N), ∼x(P, N).
    has_state(P) ← prefix(P), q(N), x(P, N).
    ← prefix(P), ∼has_state(P).
    final(N) ← q(N), ∼not_final(N).
    not_final(N) ← q(N), ∼final(N).
    Rules (11) and (12) describe the reachability of states q ∈ Q by prefixes p ∈ Pref(S). State q is reachable by prefix p iff the prefix can be read by following a series of transitions from state q₀ to state q (this series of transitions builds a path for prefix p). The unreachable states are described by the default negation rule not_x. Clearly, for every prefix p ∈ Pref(S) and every state q ∈ Q, either (11) or (12) holds. Here P (a prefix) and N (a number, a state index) are variables, which means that during grounding they will be substituted for, respectively, every p ∈ Pref(S), because of the atom prefix(P) in the body of the rule, and every i ∈ {0, 1, …, k−1}, because of the atom q(N) in the body of the rule. Notice that for every p ∈ Pref(S) we already have the fact prefix(p), and for every i ∈ {0, 1, …, k−1} we already have the fact q(i); these facts are the sources of this substitution.
    Rules (13) and (14) declare that for every prefix p ∈ Pref(S) there has to be some reachable state q ∈ Q. These rules follow from the fact that the members of the sets S+ and S− have to be recognized by accepting or rejecting, respectively. In other words, for each w ∈ S+ ∪ S− there has to be at least one path in the inferred NFA.
    Finally, rules (15) and (16) ensure that each state q ∈ Q is either accepting (final) or rejecting (not final). Rule pairs such as (15) and (16) are recommended in ASP textbooks to specify that each element either is/has something or is/has not (refer, for example, to Chapter 4 of Chitta Baral's book [18]).
  • For encoding transitions we will use the predicate delta.
    delta(I, C, J) ← q(I), symbol(C), q(J), ∼not_delta(I, C, J).
    not_delta(I, C, J) ← q(I), symbol(C), q(J), ∼delta(I, C, J).
    Rule (17) says that if there exists a transition between a pair of states q_i, q_j ∈ Q marked with a symbol c ∈ Σ, then delta(I, C, J) is in the model. Otherwise, the default negation rule not_delta applies (rule (18)).
  • Without sacrificing the generality, we can assume that q 0 is the initial state.
    x(ε, 0) ←.
    ← x(ε, N), q(N), N ≠ 0.
    Rules (19) and (20) mean that only state q 0 is reachable by the empty word ε .
  • Every counter-example has to be recognized by rejecting.
    ← q(N), x(W, N), final(N), negative(W).
    Recall that for a headless rule, at least one atom present in its body cannot be satisfied. Hence, rule (21) means that there is no final state that is reachable by any word w ∈ S−.
  • Every example has to be recognized by accepting. In this rule we use an extension of the ASP syntax, a choice (counting) construction. Here, it means that the number of final states q_n for which ((q₀, w), q_n) ∈ δ̄ cannot be equal to 0 for any example w.
    ← positive(W), {final(N) : q(N), x(W, N)} = 0.
  • Finally, there are mutual constraints between x and delta predicates.
    x(W, M) ← q(I), q(M), join(U, C, W), x(U, I), delta(I, C, M).
    ← join(U, C, W), q(N), x(W, N), {delta(J, C, N) : q(J), x(U, J)} = 0.
    Rule (23) says that a state q_m is reachable by a word w = uc if some state q_i is reachable by the word u and there is a transition from q_i to q_m with symbol c.
    Similarly, rule (24) says that if a word w = uc leads to some state q_n ∈ Q, then the number of transitions with symbol c that lead to q_n and go out from states reachable by the word u cannot be zero.
Example 1.
Let us see an example. Suppose we are given S+ = {abc, c}, S− = {a, ab}, and k = 2. Rules (5) to (10) concretize into:
  • q(0) ←.    q(1) ←.
  • symbol(a) ←.    symbol(b) ←.    symbol(c) ←.
  • prefix(ε) ←.    prefix(a) ←.    prefix(ab) ←.    prefix(c) ←.    prefix(abc) ←.
  • positive(c) ←.    positive(abc) ←.
  • negative(a) ←.    negative(ab) ←.
  • join(ε, a, a) ←.    join(ε, c, c) ←.    join(a, b, ab) ←.    join(ab, c, abc) ←.
Rules (11) to (24) always remain unchanged. This program has an answer set {q(0), …, delta( 1 , b , 1 ), final(0)}. In order to construct an associated NFA it is enough to take all final and delta predicates, which define, respectively, final states and transitions of the resultant automaton. So we have obtained an NFA depicted in Figure 1.
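The facts of Example 1 can be generated mechanically from the sample. The sketch below is illustrative: `generate_facts` and the `eps` placeholder for ε are our own naming choices, not the published implementation:

```python
# Generate the AnsProlog facts (5)-(10) from a sample (S+, S-) and a state
# count k. Prefixes are all prefixes of S+ union S-, including the empty word,
# which is printed as 'eps' here for readability.
def generate_facts(k, sigma, s_plus, s_minus):
    prefixes = {w[:i] for w in s_plus | s_minus for i in range(len(w) + 1)}
    facts = [f"q({i})." for i in range(k)]
    facts += [f"symbol({a})." for a in sorted(sigma)]
    facts += [f"prefix({p or 'eps'})." for p in sorted(prefixes)]
    facts += [f"positive({w})." for w in sorted(s_plus)]
    facts += [f"negative({w})." for w in sorted(s_minus)]
    # join(u, a, v) facts: one for every prefix v = ua that is itself a prefix
    facts += [f"join({u or 'eps'},{a},{u + a})."
              for u in sorted(prefixes) for a in sorted(sigma)
              if u + a in prefixes]
    return facts

# The sample of Example 1: S+ = {abc, c}, S- = {a, ab}, k = 2.
facts = generate_facts(2, {'a', 'b', 'c'}, {'abc', 'c'}, {'a', 'ab'})
```

Concatenated with the fixed rules (11)–(24), such a fact base is exactly what gets handed to Clingo.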
Additionally, in Appendix A there is a description of how answer sets are determined. In Appendix B a larger illustration is given.

4. Experimental Results

In this section, we describe some experiments comparing the performance of our approach (the program can be found at https://gitlab.com/wojtek3dan/asp4nfa) with the methods mentioned in the introductory section and described in more detail in Section 4.2. We used an ASP solver, Clingo, which can be executed sequentially or in parallel [22]. While comparing our approach with RA-PS1, RA-PS2, OA-PS1, and OA-PS2, all programs ran on an 8-core processor. ASP vs. SAT comparison was performed using a single core. For these experiments, we used a set of 40 samples (the samples can be found at https://gitlab.com/wojtek3dan/asp4nfa/-/tree/master/samples) based on randomly generated regular expressions.

4.1. Benchmarks

As far as we know, all standard benchmarks are too hard to be solved by pure exact algorithms. Thus, we generated problem instances using our own algorithm. This algorithm builds a set of words with the following parameters: the size |E| of a regular expression to be generated, the alphabet size |Σ|, the number |S| of words actually generated, and their minimum, d_min, and maximum, d_max, lengths. The algorithm is arranged as follows. First, construct a random regular expression E. Next, obtain the corresponding minimum-state DFA M. Then, as long as a sample S is not symmetrically structurally complete (refer to Chapter 6 of [3] for the formal definition of this concept) with respect to M, repeat the following steps: (a) using the Xeger library (https://pypi.org/project/xeger/) for generating random strings from a regular expression, get two words u and w; (b) truncate as few symbols from the end of w as possible in order to obtain a counter-example w̄; if this succeeds, add u to S+ and w̄ to S−. Finally, accept S = (S+, S−) as a valid sample if it is not too small, too large, or highly imbalanced. To ensure this, the conditions |S+| ≥ 8, |S−| ≥ 8, and |S+| + |S−| ≤ 1000 hold for all our samples. In generating a random word from a regex or from an automaton we encounter a problem with, respectively, the star operator and self-loops. Theoretically, there are infinitely many words matched to these fragments, so we have to bound the number of repetitions. We set this parameter to four.
In this manner we produced 40 sets with |E| ∈ [27, 46], |Σ| ∈ {2, 4, 6, 8}, |S| ∈ [27, 958], d_min = 0, and d_max = 305. The file names of the samples have the form 'a|Σ|words|E|.txt'. To give the reader a hint of the variability of the resulting automata, we show in Table 1 the numbers of states and transitions in each of the 40 NFAs found using our approach. We also show there the size of M and the size of the minimal DFA D compatible with the sample data. Example solutions from each group of problems, defined by the size of the input alphabet |Σ|, are also shown in Figure 2.

4.2. Compared Algorithms

As already mentioned, our algorithm was compared with a SAT-based algorithm and several exact parallel algorithms. To make the paper self-contained let us briefly describe these algorithms.
The SAT-based algorithm defines three types of binary variables, x_{wq}, y_{apq}, and z_q, for w ∈ Pref(S), a ∈ Σ, and p, q ∈ Q. Variable x_{wq} = 1 iff state q is reachable by prefix w; otherwise x_{wq} = 0. Variable y_{apq} = 1 iff there exists a transition from state p to state q with symbol a; otherwise y_{apq} = 0. Finally, z_q = 1 iff state q is final, and z_q = 0 otherwise. The constraints involving these variables are as follows:
  • All examples have to be accepted, while none of the counter-examples should be, which is described by
    ∀ w ∈ S+ ∖ {ε}:  ∑_{q∈Q} x_{wq} z_q ≥ 1,
    ∀ w ∈ S− ∖ {ε}:  ∑_{q∈Q} x_{wq} z_q = 0.
  • All prefixes w = a, w ∈ Pref(S), a ∈ Σ, result from the transitions outgoing from state q₀:
    ∀ w = a:  x_{wq} − y_{a q₀ q} = 0.
  • For all states q Q reachable by prefixes w = v a , v , w Pref ( S ) , a Σ , there has to be some state r reachable by prefix v, and there has to be an outgoing transition from r to q with symbol a. By symmetry, if there exists a path for prefix v ending in some state r and there exists a transition from r to q with symbol a then there exists a path to state q with prefix w = v a . These conditions are expressed as
    ∀ w = va:  −x_{wq} + ∑_{r∈Q} x_{vr} y_{arq} ≥ 0,
    ∀ q, r ∈ Q:  x_{wq} − x_{vr} y_{arq} ≥ 0.
Additionally, it holds that z_{q₀} = 1 when ε ∈ S+, z_{q₀} = 0 when ε ∈ S−, and z_{q₀} is not predefined when ε ∉ S+ ∪ S−. The solution to the presented problem formulation is sought by a SAT solver.
Example 2.
Let us consider Example 1 again. In the SAT-based formulation we have the following variables: x_{a,q₀}, x_{a,q₁}, …, x_{abc,q₁}, y_{a,q₀,q₀}, y_{a,q₀,q₁}, …, y_{c,q₁,q₁}, z_{q₀}, and z_{q₁}. Constraints (25)–(29) remain unchanged. A set of assignments satisfying the constraints at hand is as follows: x_{a,q₁} = 1, x_{ab,q₁} = 1, x_{c,q₀} = 1, x_{abc,q₀} = 1, y_{a,q₀,q₁} = 1, y_{b,q₁,q₁} = 1, y_{c,q₀,q₀} = 1, y_{c,q₁,q₀} = 1, z_{q₀} = 1. All remaining variables are zero. The resulting NFA is shown in Figure 3. Note that even though the set of transitions in Figure 3 is smaller than in Figure 1, both solutions are valid.
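The assignment above can be checked mechanically against constraints (25)–(29). The following sketch is our own illustration, not the reference implementation; it encodes only the nonzero variables as dictionaries and asserts each constraint group:

```python
# Nonzero variables of the Example 2 assignment; anything absent is 0.
Q = [0, 1]
x = {('a', 1): 1, ('ab', 1): 1, ('c', 0): 1, ('abc', 0): 1}   # x[w, q]
y = {('a', 0, 1): 1, ('b', 1, 1): 1, ('c', 0, 0): 1, ('c', 1, 0): 1}  # y[a, p, q]
z = {0: 1, 1: 0}                                              # z[q]

def xv(w, q): return x.get((w, q), 0)
def yv(a, p, q): return y.get((a, p, q), 0)

# (25)/(26): examples reach a final state, counter-examples do not.
for w in ('c', 'abc'):
    assert sum(xv(w, q) * z[q] for q in Q) >= 1
for w in ('a', 'ab'):
    assert sum(xv(w, q) * z[q] for q in Q) == 0
# (27): one-letter prefixes come from transitions leaving q0.
for a in ('a', 'c'):
    assert all(xv(a, q) == yv(a, 0, q) for q in Q)
# (28)/(29): reachability of longer prefixes agrees with the transitions.
for v, a in (('a', 'b'), ('ab', 'c')):
    w = v + a
    for q in Q:
        assert xv(w, q) <= sum(xv(v, r) * yv(a, r, q) for r in Q)
        assert all(xv(w, q) >= xv(v, r) * yv(a, r, q) for r in Q)
```

All assertions pass, confirming that the stated assignment is a model of the SAT formulation for the sample of Example 1.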
Identification of a k-state NFA by means of the exact algorithms RA-PS1, RA-PS2, OA-PS1, and OA-PS2 is based on the SAT formulation given above. Assuming k is fixed, we only need to determine the set of final states F and the transition function δ. Let us recall that a set of final states is feasible iff the following conditions are satisfied: (i) F ≠ ∅, (ii) q₀ ∈ F if ε ∈ S+, (iii) q₀ ∉ F if ε ∈ S−. Clearly, an NFA without final states cannot accept any word, and if the empty word ε is in S+ (resp. S−), the initial state q₀ has to be final (resp. not final). Since every feasible set F may lead to an NFA consistent with the sample S (as the NFAs need not be unique), we distribute the different sets F among processes and try to identify the δ function by means of a backtracking algorithm.
While searching for the values of y a p q variables, we apply different search orders. This is so, because there is no universal ordering method assuring fast convergence to the solution. The orderings used in the analyzed algorithms are deg, mmex, and mmcex. The deg ordering is a static ordering method based on a variable degree, i.e., the number of constraints the variables are involved in. The ordering does not change as the algorithm progresses. The mmex and mmcex orderings change dynamically while the algorithm runs. They aim at satisfying first the equations related to examples, or counter-examples, respectively.
The Parallelization Scheme 1 (PS1) maximizes the number of sets F processed simultaneously. If the number of available processes is greater than the number of sets F to be analyzed, we assign multiple variable orderings (VOs) to each set. In the RA-PS1 algorithm this assignment is performed randomly, while in the OA-PS1 algorithm, the deg, mmex, and mmcex methods are ordered by their complexity and chosen in a round robin fashion.
The Parallelization Scheme 2 (PS2) maximizes the number of variable orderings applied to the same set F. This way we shorten the time needed to obtain an answer whether an NFA exists for the given set F. If the number of available processes is smaller than the product of the number of sets F and the number of variable orderings used, we need to choose the sets F to be processed first. In the RA-PS2 algorithm we choose them at random, while in the OA-PS2 algorithm we analyze first the sets for which the size of F is smaller.
Example 3.
Let us consider the problem given in Example 1. Since k = 2 and ε ∉ S+ ∪ S−, the following sets F can be defined: F₁ = {q₀}, F₂ = {q₁}, and F₃ = {q₀, q₁}. Let us also assume that we can use the three VOs discussed before. Finally, let the number of processes p = 3 (denoted by p_i, for i = 0, 1, 2). We can have the following example configurations of algorithms RA-PS1, RA-PS2, OA-PS1, OA-PS2:
1.
Algorithm RA-PS1—process p 0 gets ( F 1 , V O 3 ) ; process p 1 gets ( F 2 , V O 2 ) ; process p 2 gets ( F 3 , V O 3 ) . Each process uses a single VO to analyze one of the possible sets F i , i = 1 , 2 , 3 . There is no guarantee that all VOs are used at least once.
2.
Algorithm RA-PS2—process p 0 gets ( F 2 , V O 1 ) ; process p 1 gets ( F 2 , V O 2 ) ; process p 2 gets ( F 2 , V O 3 ) . Each process uses a different VO to analyze just one set F at a time (chosen randomly). If there is no solution for set F 2 , we need to repeat the above assignments but this time for the set F 1 or F 3 (again chosen randomly). We repeat the above procedure until the solution is found.
3.
Algorithm OA-PS1—process p 0 gets ( F 1 , V O 1 ) ; process p 1 gets ( F 2 , V O 2 ) ; process p 2 gets ( F 3 , V O 3 ) . Each process uses a single VO to analyze one of the possible sets F i , i = 1 , 2 , 3 , but this time the VOs are assigned according to a predefined order.
4.
Algorithm OA-PS2—process p 0 gets ( F 1 , V O 1 ) ; process p 1 gets ( F 1 , V O 2 ) ; process p 2 gets ( F 1 , V O 3 ) . Each process uses a different VO to analyze just one set F at a time, but this time we start with F 1 followed by F 2 and F 3 (unless the solution is found at some stage).
Note that in Parallelization Scheme 2, obtaining a negative answer, i.e., that an NFA does not exist for the given set F_i, by means of one VO allows us to stop the execution of the other VOs and move on to another set F_j, i ≠ j.

4.3. Performance Comparison

In all experiments, we used an Intel (Santa Clara, CA, USA) Xeon CPU E5-2650 v2, 2.6 GHz (8 cores, 16 threads), under the Ubuntu 18.04 operating system with 190 GB RAM. The time limit (TL) was set to 1000 s. The results are listed in Table 2. In order to determine whether the observed mean difference between ASP and the remaining methods is a real CPU time decrease, we used a paired-samples t-test [23] (pp. 1560–1565) for ASP vs. SAT, ASP vs. RA-PS1, ASP vs. RA-PS2, ASP vs. OA-PS1, and ASP vs. OA-PS2. As we can see from Table 3, the p-value is low in all cases, so we can conclude that our results did not occur by chance and that using our ASP encoding is likely to improve CPU time performance on the prepared benchmarks.
Let us explain how the mean values were computed. All TL cells were substituted by 1000. Notice that this procedure does not violate the significance of the statistical tests, because our program completed computations within the time limit for all problems (files). Thus, determining all running times would even strengthen our hypothesis.
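The paired-samples t statistic used for Table 3 can be computed as sketched below; the timing vectors here are hypothetical placeholders, not the measured results from Table 2:

```python
import math

# Paired-samples t statistic: mean of per-sample differences divided by the
# standard error of those differences (n - 1 degrees of freedom).
def paired_t(a, b):
    d = [u - v for u, v in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n)

# Hypothetical per-sample CPU times in seconds (the real test pairs the
# 40 benchmark runs of ASP against each competing method):
asp_times = [1.2, 0.8, 2.5, 3.1, 0.9]
sat_times = [4.0, 3.2, 9.8, 12.5, 2.7]
t = paired_t(asp_times, sat_times)   # negative t: ASP is faster on average
```

A strongly negative t over 40 pairs corresponds to the low p-values reported in Table 3.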
To make the advantage of the ASP-based approach over the exact parallel algorithms even more convincing let us analyze the largest sizes of automata analyzed by the algorithms within the time limit TL = 1000 s. The summary of obtained sizes is given in Table 4. Note that the table includes only the problems for which TL entries exist in Table 2. The entries marked with * denote executions in which the algorithms started running for the given k but were terminated due to the time limit, without producing the final NFA.

5. Conclusions

We have experimented with a model learning approach based on ASP solvers. The approach is very flexible, as demonstrated by its successful adaptation to learning NFAs, implemented in the provided open-source tool. Experiments indicate that our approach clearly outperforms the current state-of-the-art satisfiability-based method and all backtracking algorithms proposed in the literature. The approach scales well (as far as non-deterministic acceptors are concerned): we have shown that it can be used for learning models from up to a thousand words. In the future, we wish to develop more efficient encodings that will make the approach scale even better. We hope this paper encourages more interest in ASP-based problem solving, since the presented approach has several benefits over traditional model learning algorithms: the ASP encoding is more readable than the SAT encoding, and the resulting program is much faster than its backtracking counterparts.

Author Contributions

Conceptualization, W.W.; methodology, O.U. and W.W.; software, T.J. and W.W.; validation, T.J. and O.U.; formal analysis, O.U.; investigation, W.W. and T.J.; resources, W.W.; writing—original draft preparation, W.W. and T.J.; writing—review and editing, O.U. and T.J.; supervision, O.U.; project administration, O.U.; funding acquisition, O.U. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Science Center (Poland), grant number 2016/21/B/ST6/02158.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. How Answer Sets Are Computed

Let Π be a grounded program, and let A be the set of atoms occurring in Π. Assume that the atom z is not in A. Observe that every grounded program Π that has rules
← b1, …, bk, ∼c1, …, ∼cm.
with an empty head can be transformed into a program without such rules by inserting z and ∼z in this manner:
z ← b1, …, bk, ∼c1, …, ∼cm, ∼z.
A grounded program without empty-headed rules will be called normal.
A rule r of the form
a ← b1, …, bk.
where there is no default negation in the body and the head is not empty, will be called positive. A program that contains only positive rules will be called positive too. We will denote by head(r) the set {a}, and by body(r) the set {b1, …, bk}. Now, let us define how a positive program P can act on a set of atoms X ⊆ A (here A is the set of atoms occurring in P):
X ⋅ P = {head(r) : r ∈ P and body(r) ⊆ X}.
This operation can be iterated, and we define:
X ⋅ P¹ = X ⋅ P and X ⋅ Pⁱ = (X ⋅ Pⁱ⁻¹) ⋅ P.
It is easy to see that Cn(P) = ⋃_{i ≥ 1} ∅ ⋅ Pⁱ. Because for a certain i the equation ∅ ⋅ Pⁱ = ∅ ⋅ Pⁱ⁺¹ holds, determining Cn(P) is straightforward and fast.
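A minimal sketch of this fixpoint computation, with each positive rule represented as a (head, body-set) pair (a representation chosen here purely for convenience):

```python
# Computing Cn(P) for a positive program by iterating the action X · P
# until the fixpoint X · P^i = X · P^(i+1) is reached.

def act(x, program):
    """X · P: heads of all rules whose body is contained in X."""
    return {head for head, body in program if body <= x}

def cn(program):
    x = set()
    while True:
        nxt = act(x, program)   # facts (empty body) are always included
        if nxt == x:
            return x
        x = nxt

# a.   b :- a.   c :- b.   d :- e.
rules = [("a", set()), ("b", {"a"}), ("c", {"b"}), ("d", {"e"})]
print(sorted(cn(rules)))  # → ['a', 'b', 'c']
```

Since the action is monotone and bounded by the finite atom set, the loop always terminates.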
Consider any normal program Π. We recall from Section 2.3 that a set X ⊆ A is an answer set of Π if X = Cn(Π^X) (please do not confuse the reduct with the program's action on a set of atoms). Take two sets, L and U, such that L ⊆ X ⊆ U for an answer set X of Π. Observe that (i) X ⊆ Cn(Π^L), and (ii) Cn(Π^U) ⊆ X. Thus we get:
L ∪ Cn(Π^U) ⊆ X ⊆ U ∩ Cn(Π^L).
The last property is a recipe for expanding the lower bound L and cutting down the upper bound U. The procedure in which we replace L by L ∪ Cn(Π^U) and then U by U ∩ Cn(Π^L), for as long as L or U change, will be called narrowing. At some point we may get L = U = X. When we start from L = ∅ and U = A, there are also two more possibilities: L ⊈ U (there is no answer set), and L ⊊ U. In the latter case we can take any a ∈ U ∖ L and check out two paths: a is included in L, or a is excluded from U. This leads to Algorithm A1 [19]:
Algorithm A1: Final algorithm
Solve(Π, L, U)
  (L, U) ← narrowing(Π, L, U)
  if L ⊈ U then return
  if L = U then output L
  else
    choose a ∈ U ∖ L
    Solve(Π, L ∪ {a}, U)
    Solve(Π, L, U ∖ {a})
The algorithm outputs all answer sets of a program Π, provided that it has been invoked as Solve(Π, ∅, A). The pessimistic time complexity of this algorithm can be assessed by the recurrence relation T(n) = 2T(n − 1) + n², where n = |U| − |L|, which gives the exponential bound T(n) = O(2ⁿ).
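Assuming a normal rule a ← b1, …, bk, ∼c1, …, ∼cm is represented as the triple (a, {b1, …, bk}, {c1, …, cm}), the reduct, Cn, narrowing, and Algorithm A1 can be sketched as follows:

```python
# Sketch of Algorithm A1 for normal programs. A rule a :- b1,...,bk,
# not c1,...,not cm is the triple ("a", {b1,...,bk}, {c1,...,cm}).

def reduct(rules, x):
    """Π^X: drop rules with a default-negated atom in X; strip negations."""
    return [(h, pos) for h, pos, neg in rules if not (neg & x)]

def cn(positive_rules):
    """Least model of a positive program, by fixpoint iteration."""
    x = set()
    while True:
        nxt = {h for h, pos in positive_rules if pos <= x}
        if nxt == x:
            return x
        x = nxt

def narrowing(rules, low, up):
    """Grow L by Cn(Π^U) and shrink U by Cn(Π^L) until nothing changes."""
    while True:
        new_low = low | cn(reduct(rules, up))
        new_up = up & cn(reduct(rules, low))
        if (new_low, new_up) == (low, up):
            return low, up
        low, up = new_low, new_up

def solve(rules, low, up, out):
    """Enumerate all answer sets X with L ⊆ X ⊆ U (Algorithm A1)."""
    low, up = narrowing(rules, low, up)
    if not low <= up:
        return                          # no answer set on this branch
    if low == up:
        out.append(frozenset(low))      # L = U = X is an answer set
        return
    a = min(up - low)                   # choose a branching atom
    solve(rules, low | {a}, up, out)    # path 1: a included in L
    solve(rules, low, up - {a}, out)    # path 2: a excluded from U

# p :- not q.   q :- not p.   (two answer sets: {p} and {q})
rules = [("p", set(), {"q"}), ("q", set(), {"p"})]
out = []
solve(rules, set(), {"p", "q"}, out)
print(sorted(map(sorted, out)))  # → [['p'], ['q']]
```

At a narrowing fixpoint with L = U = X, the two update equations give Cn(Π^X) ⊆ X and X ⊆ Cn(Π^X), so every set output by the sketch is indeed an answer set; production solvers such as Clasp refine this scheme with conflict-driven learning [21].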

Appendix B. The Complete Example of an ASP Program for NFA Induction

Suppose we are given Σ = {a, b}, S+ = {a}, S− = {b}, and k = 2. After grounding rules (5)–(24), we get a program Π in Clasp format (the symbol :- denotes the left arrow ←, not denotes default negation ∼, and lambda denotes the empty word ε):
symbol(a).
symbol(b).
prefix(lambda).
prefix(a).
prefix(b).
positive(a).
negative(b).
join(lambda,a,a).
join(lambda,b,b).
q(0).
q(1).
delta(0,a,0):-not not_delta(0,a,0).
delta(1,a,0):-not not_delta(1,a,0).
delta(0,b,0):-not not_delta(0,b,0).
delta(1,b,0):-not not_delta(1,b,0).
delta(0,a,1):-not not_delta(0,a,1).
delta(1,a,1):-not not_delta(1,a,1).
delta(0,b,1):-not not_delta(0,b,1).
delta(1,b,1):-not not_delta(1,b,1).
not_delta(0,a,0):-not delta(0,a,0).
not_delta(1,a,0):-not delta(1,a,0).
not_delta(0,b,0):-not delta(0,b,0).
not_delta(1,b,0):-not delta(1,b,0).
not_delta(0,a,1):-not delta(0,a,1).
not_delta(1,a,1):-not delta(1,a,1).
not_delta(0,b,1):-not delta(0,b,1).
not_delta(1,b,1):-not delta(1,b,1).
x(lambda,0):-not not_x(lambda,0).
x(a,0):-not not_x(a,0).
x(b,0):-not not_x(b,0).
x(lambda,1):-not not_x(lambda,1).
x(a,1):-not not_x(a,1).
x(b,1):-not not_x(b,1).
not_x(lambda,0):-not x(lambda,0).
not_x(a,0):-not x(a,0).
not_x(b,0):-not x(b,0).
not_x(lambda,1):-not x(lambda,1).
not_x(a,1):-not x(a,1).
not_x(b,1):-not x(b,1).
x(a,0):-delta(1,a,0),x(lambda,1).
x(a,1):-delta(1,a,1),x(lambda,1).
x(b,0):-delta(1,b,0),x(lambda,1).
x(b,1):-delta(1,b,1),x(lambda,1).
x(a,0):-delta(0,a,0),x(lambda,0).
x(a,1):-delta(0,a,1),x(lambda,0).
x(b,0):-delta(0,b,0),x(lambda,0).
x(b,1):-delta(0,b,1),x(lambda,0).
:-x(a,0),0>=#count{0,delta(0,a,0):x(lambda,0),delta(0,a,0);
                   0,delta(1,a,0):delta(1,a,0),x(lambda,1)}.
:-x(a,1),0>=#count{0,delta(0,a,1):x(lambda,0),delta(0,a,1);
                   0,delta(1,a,1):x(lambda,1),delta(1,a,1)}.
:-x(b,0),0>=#count{0,delta(0,b,0):x(lambda,0),delta(0,b,0);
                   0,delta(1,b,0):x(lambda,1),delta(1,b,0)}.
:-x(b,1),0>=#count{0,delta(0,b,1):x(lambda,0),delta(0,b,1);
                   0,delta(1,b,1):x(lambda,1),delta(1,b,1)}.
final(0):-not not_final(0).
final(1):-not not_final(1).
not_final(0):-not final(0).
not_final(1):-not final(1).
:-0>=#count{0,final(0):final(0),x(a,0);0,final(1):final(1),x(a,1)}.
:-final(0),x(b,0).
:-final(1),x(b,1).
:-x(lambda,1).
:-not x(lambda,0).
has_state(lambda):-x(lambda,0).
has_state(a):-x(a,0).
has_state(b):-x(b,0).
has_state(lambda):-x(lambda,1).
has_state(a):-x(a,1).
has_state(b):-x(b,1).
:-not has_state(lambda).
:-not has_state(a).
:-not has_state(b).
        
One of the answer sets X is:
q(0) q(1) prefix(lambda) prefix(a) prefix(b) symbol(a) symbol(b)
negative(b) positive(a) join(lambda,a,a) join(lambda,b,b) x(lambda,0)
not_x(lambda,1) has_state(lambda) not_delta(0,a,0) not_delta(1,a,0)
delta(0,b,0) not_delta(1,b,0) delta(0,a,1) not_delta(1,a,1)
not_delta(0,b,1) not_delta(1,b,1) not_x(a,0) x(b,0) x(a,1)
not_x(b,1) not_final(0) final(1) has_state(a) has_state(b)
		
Note that the above answer set corresponds to a 2-state automaton with non-final state q0 and final state q1 (see the predicates not_final(0) and final(1)), and the transitions q0 ∈ δ(q0, b) and q1 ∈ δ(q0, a) (defined by the predicates delta(0,b,0) and delta(0,a,1)).
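This reading can be checked mechanically; the dictionary encoding below is just one convenient way to write the extracted automaton down:

```python
# The NFA read off the answer set: start state 0 (from x(lambda,0)), final
# state 1, and the transitions delta(0,b,0) and delta(0,a,1).
delta = {(0, "b"): {0}, (0, "a"): {1}}
final = {1}

def accepts(word, start=(0,)):
    states = set(start)
    for ch in word:
        states = set().union(*(delta.get((q, ch), set()) for q in states))
    return bool(states & final)

print(accepts("a"), accepts("b"))  # → True False
```

As required, the word from S+ = {a} is accepted and the word from S− = {b} is rejected.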
It can easily be verified that the reduct Π^X = P is the following positive program:
symbol(a).
symbol(b).
prefix(lambda).
prefix(a).
prefix(b).
positive(a).
negative(b).
join(lambda,a,a).
join(lambda,b,b).
q(0).
q(1).
delta(0,b,0).
delta(0,a,1).
not_delta(0,a,0).
not_delta(1,a,0).
not_delta(1,b,0).
not_delta(1,a,1).
not_delta(0,b,1).
not_delta(1,b,1).
x(lambda,0).
x(b,0).
x(a,1).
not_x(a,0).
not_x(lambda,1).
not_x(b,1).
x(a,0):-delta(1,a,0),x(lambda,1).
x(a,1):-delta(1,a,1),x(lambda,1).
x(b,0):-delta(1,b,0),x(lambda,1).
x(b,1):-delta(1,b,1),x(lambda,1).
x(a,0):-delta(0,a,0),x(lambda,0).
x(a,1):-delta(0,a,1),x(lambda,0).
x(b,0):-delta(0,b,0),x(lambda,0).
x(b,1):-delta(0,b,1),x(lambda,0).
final(1).
not_final(0).
has_state(lambda):-x(lambda,0).
has_state(a):-x(a,0).
has_state(b):-x(b,0).
has_state(lambda):-x(lambda,1).
has_state(a):-x(a,1).
has_state(b):-x(b,1).
		
Cn(P) = X, since ⋃_{i ≥ 1} ∅ ⋅ Pⁱ = ∅ ⋅ P ∪ (∅ ⋅ P) ⋅ P = X. Further action of P on X does not change it.

References

  1. Heinz, J.; de la Higuera, C.; van Zaanen, M. Grammatical Inference for Computational Linguistics; Morgan & Claypool Publishers: San Rafael, CA, USA, 2015; Volume 8. [Google Scholar]
  2. Heinz, J.; Sempere, J. (Eds.) Topics in Grammatical Inference; Springer-Verlag: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
  3. de la Higuera, C. Grammatical Inference: Learning Automata and Grammars; Cambridge University Press: New York, NY, USA, 2010. [Google Scholar]
  4. Miclet, L. Grammatical inference. In Syntactic and Structural Pattern Recognition: Theory and Applications; Series in Computer, Science; Bunke, H., Sanfeliu, A., Eds.; World Scientific: Singapore, 1990; Volume 7, pp. 237–290. [Google Scholar]
  5. Wieczorek, W. Grammatical Inference: Algorithms, Routines and Applications; Studies in Computational Intelligence; Springer: New York, NY, USA, 2017; Volume 673. [Google Scholar]
  6. Meyer, A.R.; Stockmeyer, L.J. The equivalence problem for regular expressions with squaring requires exponential space. In Proceedings of the 13th Annual Symposium on Switching and Automata Theory, College Park, MD, USA, 25–27 October 1972; pp. 125–129. [Google Scholar]
  7. Jiang, T.; Ravikumar, B. Minimal NFA problems are hard. SIAM J. Comput. 1993, 22, 1117–1141. [Google Scholar] [CrossRef]
  8. Angluin, D. An Application of the Theory of Computational Complexity to the Study of Inductive Inference. Ph.D. Thesis, University of California, Berkeley, CA, USA, 1969. [Google Scholar]
  9. Gold, E.M. Complexity of automaton identification from given data. Inf. Control. 1978, 37, 302–320. [Google Scholar] [CrossRef] [Green Version]
  10. Domaratzki, M.; Kisman, D.; Shallit, J. On the number of distinct languages accepted by finite automata with n states. J. Autom. Lang. Comb. 2002, 7, 469–486. [Google Scholar]
  11. Jastrzab, T. Two Parallelization Schemes for the Induction of Nondeterministic Finite Automata on PCs. In PPAM 2017: Parallel Processing and Applied Mathematics; Wyrzykowski, R., Dongarra, J., Deelman, E., Karczewski, K., Eds.; Springer: Cham, Switzerland, 2018; Volume 10777, pp. 279–289. [Google Scholar]
  12. Heule, M.; Verwer, S. Exact DFA Identification Using SAT Solvers. In Lecture Notes in Computer Science; Springer: New York, NY, USA, 2010; Volume 6339, pp. 66–79. [Google Scholar]
  13. Coste, F.; Nicolas, J. Regular Inference as a graph coloring problem. In Proceedings of the Workshop on Grammar Inference, Automata Induction, and Language Acquisition, Nashville, TN, USA, 12 July 1997. [Google Scholar]
  14. Zakirzyanov, I.; Shalyto, A.; Ulyantsev, V. Finding All Minimum-Size DFA Consistent with Given Examples: SAT-Based Approach. In Software Engineering and Formal Methods; Cerone, A., Roveri, M., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 117–131. [Google Scholar]
  15. Walsh, T. SAT v CSP. In Principles and Practice of Constraint Programming—CP 2000; Dechter, R., Ed.; Springer: Berlin/Heidelberg, Germany, 2000; pp. 441–456. [Google Scholar]
  16. Smetsers, R.; Fiterau-Brostean, P.; Vaandrager, F.W. Model Learning as a Satisfiability Modulo Theories Problem. In Proceedings of the Language and Automata Theory and Applications—12th International Conference, Ramat Gan, Israel, 9–11 April 2018; pp. 182–194. [Google Scholar]
  17. Hopcroft, J.E.; Motwani, R.; Ullman, J.D. Introduction to Automata Theory, Languages, and Computation, 2nd ed.; Addison-Wesley: Boston, MA, USA, 2001. [Google Scholar]
  18. Baral, C. Knowledge Representation, Reasoning, and Declarative Problem Solving; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
  19. Gebser, M.; Kaminski, R.; Kaufmann, B.; Schaub, T. Answer Set Solving in Practice; Morgan & Claypool Publishers: San Rafael, CA, USA, 2012. [Google Scholar]
  20. Lifschitz, V. Answer Set Programming; Springer: New York, NY, USA, 2019. [Google Scholar]
  21. Gebser, M.; Kaufmann, B.; Schaub, T. Conflict-driven answer set solving: From theory to practice. Artif. Intell. 2012, 187, 52–89. [Google Scholar] [CrossRef] [Green Version]
  22. Gebser, M.; Kaufmann, B.; Schaub, T. Multi-threaded ASP solving with clasp. arXiv 2012, arXiv:1210.3265. [Google Scholar] [CrossRef] [Green Version]
  23. Salkind, N.J. Encyclopedia of Research Design; SAGE Publications, Inc.: Newbury Park, CA, USA, 2010. [Google Scholar]
Figure 1. An inferred non-deterministic finite automaton (NFA).
Figure 2. Example solutions found by the ASP solver for problems a2words33, a4words28, a6words31, and a8words37. (a) Problem a2words33; (b) Problem a4words28; (c) Problem a6words31; (d) Problem a8words37.
Figure 3. An inferred NFA.
Table 1. Sizes of NFAs found by the answer set programming (ASP) solver ( k 0 —number of states, t—number of transitions, | M | —number of states in deterministic finite automaton (DFA) M, | D | —number of states in minimal DFA D compatible with sample data).
Problem      k0   t  |M|  |D|    Problem      k0   t  |M|  |D|
a2words27     8  32   20   15    a6words27     3  23    9    3
a2words28     3   6    7    3    a6words28     4  50   15    4
a2words29     5  22   10    6    a6words29     3  24    6    3
a2words30     3   7    6    3    a6words30     2  14    8    2
a2words31     6  13   12    7    a6words31     2  19   11    2
a2words32     5  10   10    5    a6words32     2  11    6    2
a2words33     7  19   17    8    a6words33     7  97   17   10
a2words34     4  13    6    5    a6words34     5  58   16    6
a2words35     3   6    6    3    a6words35     2  14    6    2
a2words36     5  23    9    8    a6words36     4  41   15    5
a4words27     4  32   10    4    a8words37     2  19    8    3
a4words28     3  16   14    3    a8words38     3  34    6    3
a4words29     5  34    6    6    a8words39     2  27   11    2
a4words30     3  15    7    4    a8words40     3  30   13    3
a4words31     4  23   12    4    a8words41     5  79   11    6
a4words32     3  18    6    3    a8words42     4  65   21    5
a4words33     4  24   12    4    a8words43     4  60   15    4
a4words34     4  21   16    4    a8words44     2  23    7    2
a4words35     4  28   23    4    a8words45     5  66   15    7
a4words36     2   9    5    3    a8words46     3  27    9    3
Table 2. Execution times of exact solving NFA identification in seconds.
Table 2. Execution times of exact solving NFA identification in seconds.
Problem      Execution
             Sequential          Parallel
             SAT       ASP       RA-PS1    RA-PS2    OA-PS1    OA-PS2    ASP
a2words27    148.11    77.46     TL        TL        TL        TL        10.69
a2words28    1.86      0.18      1.07      1.15      1.28      1.19      0.25
a2words29    6.40      0.49      TL        TL        TL        TL        0.40
a2words30    2.78      0.39      9.14      8.52      9.55      9.08      0.25
a2words31    2.08      0.52      TL        TL        TL        TL        0.37
a2words32    791.19    4.50      TL        TL        TL        TL        3.50
a2words33    49.72     1.99      TL        TL        TL        TL        0.93
a2words34    10.05     0.57      20.59     20.92     19.88     19.02     0.59
a2words35    0.47      0.09      0.53      0.38      0.38      0.38      0.10
a2words36    526.07    2.57      TL        TL        TL        TL        3.47
a4words27    1.32      0.15      TL        TL        TL        TL        0.17
a4words28    0.13      0.04      6.45      6.89      7.23      4.49      0.05
a4words29    78.93     0.96      TL        TL        TL        TL        1.15
a4words30    0.20      0.05      0.48      0.50      0.48      0.44      0.06
a4words31    21.78     0.70      TL        TL        TL        TL        1.08
a4words32    0.83      0.13      1.39      2.10      1.24      1.23      0.13
a4words33    7.19      0.48      729.40    719.59    738.89    733.21    0.50
a4words34    20.33     0.81      TL        TL        TL        TL        0.82
a4words35    21.00     0.65      TL        TL        TL        TL        0.71
a4words36    0.77      0.16      0.76      0.86      0.93      0.90      0.13
a6words27    1.13      0.15      1.59      2.24      2.89      1.88      0.16
a6words28    4.62      0.37      TL        TL        TL        TL        0.33
a6words29    1.05      0.13      3.15      2.38      2.48      2.63      0.16
a6words30    1.97      0.20      0.85      0.52      1.11      1.06      0.21
a6words31    1.43      0.18      0.86      0.81      0.94      0.83      0.22
a6words32    1.15      0.16      2.10      1.97      2.13      2.16      0.16
a6words33    441.95    4.14      TL        TL        TL        TL        2.03
a6words34    272.27    1.88      TL        TL        TL        TL        1.86
a6words35    0.20      0.06      0.85      0.73      0.82      0.82      0.06
a6words36    15.97     0.86      TL        TL        TL        TL        0.81
a8words37    0.20      0.06      0.46      0.50      0.37      0.41      0.06
a8words38    63.71     1.40      15.73     16.15     16.09     18.08     1.24
a8words39    1.44      0.19      1.55      1.55      1.78      1.48      0.20
a8words40    7.63      0.44      TL        TL        TL        TL        0.66
a8words41    324.29    1.94      TL        TL        TL        TL        2.61
a8words42    5.86      0.35      TL        TL        TL        TL        0.42
a8words43    9.14      0.45      TL        TL        TL        TL        0.69
a8words44    1.23      0.21      1.52      1.34      1.25      1.22      0.38
a8words45    279.74    2.07      TL        TL        TL        TL        2.32
a8words46    5.20      0.33      TL        TL        TL        TL        0.58
Mean         78.29     2.71      544.96    544.73    545.24    545.01    1.01
Table 3. Obtained p values from the paired samples t test.
ASP vs. SAT    ASP vs. RA-PS1    ASP vs. RA-PS2    ASP vs. OA-PS1    ASP vs. OA-PS2
0.00773        2.72 × 10⁻⁸       2.73 × 10⁻⁸       2.70 × 10⁻⁸       2.72 × 10⁻⁸
Table 4. Sizes of NFAs reached by the parallel algorithms. The sign * means that the time limit was exceeded.
Problem      ASP    RA-PS1    RA-PS2    OA-PS1    OA-PS2
a2words27    8      6         6         6         6
a2words29    5      5 *       5 *       5 *       5 *
a2words31    6      5         5         5         5
a2words32    5      4         4         4         4
a2words33    7      5         5         5         5
a2words36    5      4         4         4         4
a4words27    4      4 *       4 *       4 *       4 *
a4words29    5      4         4         4         4
a4words31    4      4 *       4 *       4 *       4 *
a4words34    4      3         3         3         3
a4words35    4      4 *       4 *       4 *       4 *
a6words28    4      3         3         3         3
a6words33    7      3         3         3         3
a6words34    5      3         3         3         3
a6words36    4      3         3         3         3
a8words40    3      3 *       3 *       3 *       3 *
a8words41    5      3         3         3         3
a8words42    4      4 *       4 *       4 *       4 *
a8words43    4      3         3         3         3
a8words45    5      4         4         4         4
a8words46    3      3 *       3 *       3 *       3 *
