Finite Automata with Generalized Acceptance Criteria

We examine the power of nondeterministic finite automata with acceptance of an input word defined by a leaf language, i. e., a condition on the string formed by the leaves of the computation tree.


Introduction
Let M be a nondeterministic finite automaton and w an input word. Usually w is said to be accepted by M if and only if there is at least one possible computation path of M which accepts w. In this paper we look at the tree T M (w) of all computations that automaton M on input w can possibly perform. A node v in this tree is labelled by a configuration C of M at a certain point during its computation on input w, where such a configuration is given by the state of M and the portion of the input which is still unscanned. The children of v in the computation tree are associated with the successor configurations of C, i. e., if the transition function of M has several entries for this particular C, then each of these will lead to a successor configuration and a child of v in the computation tree. The leaves of the tree are associated with those configurations that M reaches when all input symbols are consumed. Now the acceptance criterion of nondeterministic automata can be rephrased as follows: An input word w is accepted by M if and only if in the computation tree of M on w there is at least one leaf labelled with an accepting state.
Using the concept of computation trees, we will study modified acceptance criteria in this paper. Consider for example the following question: If we say that a word is accepted by M if and only if the number of accepting leaves in the computation tree is divisible by a fixed prime number p, can non-regular languages be recognized in this way? The acceptance here is thus given by a more complicated condition on the cardinality of the set of accepting paths in the computation tree. (For the definition of the class REG we just require that this cardinality is non-zero.) But we do not only consider such cardinality conditions in the present paper.
If we attach certain symbols to the leaves in T M (w), e. g., the symbol 1 to an accepting leaf and 0 to a non-accepting leaf, then the computation tree of M on input w defines a word, which we get by concatenating the symbols attached to the leaves, read from left to right (in a natural order of the paths of T M (w) that we define below). We call this string the leaf word of M on w. Observe that the length of the leaf word can be exponential in the length of w. Generally, an acceptance criterion is nothing other than the set of those leaf words that make M accept its input; that is, such a criterion is defined by a so-called leaf language L over the alphabet of the leaf symbols. By definition a word w is accepted by M if and only if the leaf word of M on input w is in L. In the example above we used as leaf language the set L of all binary words with a number of 1's divisible by p.
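To make the leaf-word construction concrete, here is a minimal sketch (not from the paper; the toy NFA and its transition table are invented for illustration) that computes the leaf word by a left-to-right traversal of the computation tree:

```python
def leafstring(delta, accepting, state, word):
    """Concatenate, left to right, the symbols attached to the leaves of
    the computation tree: '1' for an accepting leaf, '0' otherwise.
    delta maps (state, symbol) to an ordered tuple of successor states."""
    if not word:                       # input consumed: we are at a leaf
        return "1" if state in accepting else "0"
    succs = delta.get((state, word[0]), ())
    # a dead end contributes no leaf (the path simply vanishes)
    return "".join(leafstring(delta, accepting, q, word[1:]) for q in succs)

# Toy NFA over {a, b} (invented): on 'a', state s branches into s and t.
delta = {("s", "a"): ("s", "t"), ("s", "b"): ("s",), ("t", "b"): ("t",)}
w = leafstring(delta, {"t"}, "s", "ab")      # leaf word of the NFA on "ab"
accepted = "1" in w                          # classical NFA acceptance
```

Classical NFA acceptance is then exactly the criterion "the leaf word contains at least one 1".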
We now ask what class of languages such automata can accept, given a particular class of leaf languages. For example, if we allow all regular languages as acceptance criteria, can non-regular languages be recognized? The main result of this paper is a negative answer to this question. As another example, if the criterion is given by a context-free language, then we will see that non-regular, even non-context-free, languages can be recognized. To mention a final example, if we allow leaf languages from the circuit class NC 1 (a class whose power is captured in a sense by the regular languages, since there are regular languages complete for NC 1 under uniform projections, a very strict reducibility [BIS90]), then we obtain that even PSPACE-complete languages can be accepted by such finite automata.
In this paper we study in a systematic way the power of acceptance criteria given by leaf languages which are (1) taken from a (complexity) class defined via space or time restrictions for Turing machines, or (2) taken from a (formal language) class of the Chomsky hierarchy.
The power of nondeterministic Turing machines whose acceptance is given by a leaf language is well-studied, see, e. g., [BCS92, Ver93, HLS + 93, JMT96]. More recently the model has also been applied to Boolean circuits, see [CMTV98]; formally, in this latter model so-called programs over automata were used as leaf string generators; in the case of language decision these programs are known to yield exactly the power of the class NC 1 [Bar89]. Programs over automata consist of (uniform) projections whose outputs are fed into nondeterministic finite automata. The power of finite automata per se, probably the most basic type of machine, with acceptance defined by a leaf language, has not been considered so far in the literature. The present paper closes this gap. In general, as was to be expected, our results differ quite a lot from those obtained in the papers cited above. However, in the context of leaf languages taken from a complexity class we will see that finite automata as the underlying model are essentially as good as polynomial-time Turing machines.

Preliminaries
We assume the reader is familiar with the basic automata and machine models from formal language theory and complexity theory, see, e. g., [HU79, BDG95, BC94, Pap94]. For more background on the models we use, we refer the reader to the different chapters in [RS97].
Our Turing machines are standard multi-tape machines, see [HU79]. For the definition of sublinear time classes we use indexing machines, introduced in [Sip83]. These machines cannot directly access their input tape, but instead have to write down a number in binary on a so-called index tape. When they enter a specified read state with bin(i) on the index tape, they are supplied with the ith input symbol (or a particular blank symbol, if i exceeds the input length) in unit time. We use the so-called standard (or, proviso U) model, which does not delete its index tape after a read operation, see [CC95, RV97]. This means that, even with a logarithmic time bound, such a machine may access logarithmically many bits of its input; this fact allows it to determine the length of the input using one-sided binary search.
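The one-sided binary search can be sketched as follows (a hypothetical `probe` predicate stands in for the indexing machine's read operation; `True` means position i holds an input symbol, `False` means the machine sees the blank symbol): the machine doubles the index until it falls off the input, then bisects, using O(log n) probes in total.

```python
def input_length(probe):
    """Determine the input length of an indexing machine by one-sided
    binary search.  probe(i) stands in for one read operation: True if
    the (1-based) position i holds an input symbol, False otherwise."""
    if not probe(1):
        return 0
    lo, hi = 1, 2                 # lo is always a real position
    while probe(hi):              # doubling phase: run off the input
        lo, hi = hi, 2 * hi
    while hi - lo > 1:            # bisection phase: length lies in (lo, hi)
        mid = (lo + hi) // 2
        if probe(mid):
            lo = mid
        else:
            hi = mid
    return lo                     # O(log n) probes in total

x = "abcdefgh" * 3 + "xyz"        # an arbitrary input of length 27
n = input_length(lambda i: i <= len(x))
```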
In our main proof we make use of a generalized model of automata, known as alternating finite automata (AFA). They were introduced by Chandra, Kozen, and Stockmeyer in [CKS81] and work like the better known alternating Turing machines. Although the model at first sight seems to be more powerful than deterministic automata, it was shown that the class of languages they accept is exactly REG [CKS81].
The following, somewhat intuitive, exposition is basically from [Yu97]; for a more precise definition of the model, refer to [CKS81].
Let B = {0, 1} and let Q be a finite set. Then B Q is the set of all mappings from Q into B. Note that u ∈ B Q can be considered as a |Q|-dimensional vector with entries in B.
An alternating finite automaton (AFA) is a quintuple A = (Q, Σ, s, F, g), where Q is the finite set of states, Σ is the input alphabet, s ∈ Q is the starting state, F ⊆ Q is the set of final states, and g is a function from Q into the set of all functions from Σ × B Q into B. Note that g(q), for q ∈ Q, is thus a function from Σ × B Q into B, denoted below by g q .
How does an AFA work? Inductively, we define the language accepted by a state q ∈ Q as follows: A state q ∈ Q accepts the empty word λ if and only if q ∈ F. On a nontrivial input x = ay, a ∈ Σ, y ∈ Σ * , q reads the first letter a and calls all states to work on the rest y of the input. The states working on y will accept or reject, and those results can be described by a vector u ∈ B Q . Now the value g q (a, u) ∈ B shows whether q accepts or rejects. An AFA A accepts an input if and only if the initial state s accepts it.
One way to represent an alternating finite automaton A = (Q, Σ, s, F, g) is to give a system of equations, for all q ∈ Q, of the form X q = a 1 • g q (a 1 , X) + • • • + a k • g q (a k , X) + ε q for Σ = {a 1 , . . ., a k }, where X q represents the state q ∈ Q and X is the vector of all variables X q . The final summand ε q is used to denote whether q is accepting: If ε q = λ then q is an accepting state; otherwise we set ε q = 0. The reader may think of the symbol 0 as used here in these equations by convention for "rejection"; we do not want to imply anything else from its use; in particular, 0 need not be an alphabet letter. (This does not say, of course, that this is an arbitrary convention; that there is good reason to use 0 here can be seen from the extensive treatment of the equation calculus in [Yu97].) In the equation X q = a • X r + b • (X r ∧ ¬X s ) + c • 0, for example, q is not an accepting state. In this state there is a deterministic transition into r when reading an a. State q definitely rejects when reading a c. If a b is read, then q will accept if and only if r accepts the rest of the input and s rejects it.
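The example equation can be turned into a small executable sketch. Only the equation for q is from the text (read with "s rejects the rest", as the prose says); the equations for r and s below are invented to close the system (r accepts every word, s rejects every word):

```python
# Invented closure of the system: r accepts everything, s rejects everything.
FINAL = {"r"}                # epsilon_r = lambda; epsilon_q = epsilon_s = 0

def g(q, a, u):
    """g_q(a, u), where u[p] says whether state p accepts the rest."""
    if q == "q":
        if a == "a":
            return u["r"]                  # a . X_r
        if a == "b":
            return u["r"] and not u["s"]   # b . (X_r and not X_s)
        return False                       # c . 0 : q definitely rejects
    return u[q]                            # r and s loop on every letter

def accepts(q, word):
    if not word:             # empty word: accept iff q is a final state
        return q in FINAL
    u = {p: accepts(p, word[1:]) for p in ("q", "r", "s")}
    return g(q, word[0], u)
```

For instance, starting in q, the word "b" is accepted (r accepts the empty rest, s rejects it) while any word starting with c is rejected.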
It is clear that one obtains a nondeterministic finite automaton from a system of equations in which only the ∨ function occurs. A more detailed elaboration of this topic, and a proof that alternating finite automata accept exactly the regular languages, is given in [CKS81, Yu97].

Leaf Automata
In this section we will formally define finite automata with generalized acceptance criterion.
The basic model we use is that of nondeterministic finite automata. On an input word w such a device defines a tree of possible computations. We want to consider this tree, but with a natural order on the leaves. Therefore we make the following definition: A finite leaf automaton (leaf automaton for short) is a tuple M = (Σ, Q, δ, s, Γ, v), where • Σ is an alphabet, the input alphabet; • Q is the finite set of states; • δ : Q × Σ → Q + is the transition function; • s ∈ Q is the initial state; • Γ is an alphabet, the leaf alphabet; • v : Q → Γ is the leaf evaluation function. If we contrast this with the definition of nondeterministic finite automata, where we have that δ(q, a) is a set of states, we here additionally fix an ordering on the possible successor states by arranging them in a string from Q + . We explicitly remark that in leaf automata we allow the same state to appear more than once as a successor in δ(q, a); an example is the automaton N in the proof of Theorem 4.1.
Let M be as above. The computation tree T M (w) of M on input w is a labeled, directed, rooted tree defined as follows:
1. The root of T M (w) is labeled (s, w).
2. Let i be a node in T M (w) labeled (q, ay), where a ∈ Σ and y ∈ Σ * . If δ(q, a) = q 1 q 2 • • • q k , then i has exactly k children, and these are labeled by (q 1 , y), (q 2 , y), . . ., (q k , y) in this order.
If we attach the symbol v(q) to each leaf of T M (w) with label (q, ε), then leafstring M (w) is the string of symbols attached to the leaves, read from left to right in the order induced by δ.
As an example, suppose M = (Σ, Q, δ, s, F) is a usual nondeterministic finite automaton, where F ⊆ Q is the set of accepting states. Define a leaf automaton M ′ = (Σ, Q, δ ′ , s, {0, 1}, v), where v(q) = 1 if q ∈ F and v(q) = 0 otherwise, and δ ′ (q, a) is the concatenation of the elements of the set δ(q, a), ordered arbitrarily. Then obviously M accepts input w if and only if leafstring M ′ (w) contains at least one letter 1, i. e., leafstring M ′ (w) ∈ 0 * 1(0 + 1) * . Conversely, every leaf automaton with Γ = {0, 1} may be thought of as a nondeterministic finite automaton.
In the above example we used the language 0 * 1(0 + 1) * as acceptance criterion. We want to use arbitrary languages below. Therefore we define: Let M = (Σ, Q, δ, s, Γ, v) be a leaf automaton, and let A ⊆ Γ * . The language Leaf M (A) = def { w ∈ Σ * | leafstring M (w) ∈ A } is the language accepted by M with acceptance criterion A. The class Leaf FA (A) consists of all languages B ⊆ Σ * for which there is a leaf automaton M with input alphabet Σ and leaf alphabet Γ such that B = Leaf M (A). If C is a class of languages, then Leaf FA (C ) is the union of the classes Leaf FA (A) over all A ∈ C . Our example above shows that Leaf FA (0 * 1(0 + 1) * ) = REG. It will be our aim in the upcoming section to identify Leaf FA (C ) for different classes C .
We will also consider a more restricted form of leaf automata, defined as follows: Let M = (Σ, Q, δ, s, Γ, v) be such that |δ(q, a)| ≤ 2 for all q ∈ Q and a ∈ Σ; that is, in every step M has at most two possible successor states. In terms of the computation tree T M (x) this means that leaves trivially have no successors and inner nodes have either one or two successors. Observe that on an input x of length n all paths have length exactly n. Thus a path is given by a word p ∈ {L, R} n describing how one has to move from the root to the leaf (L stands for left, R for right). Since there may be inner nodes in T M (x) with only one successor (which, by definition, is then considered the left successor), there may be words q ∈ {L, R} n with no corresponding path. In this case we say that the path q is missing. We say that the computation tree T M (x) is balanced if the following holds: There is a path p ∈ {L, R} n in T M (x) such that to the left of p no path is missing, and to the right of p all paths are missing. Thus p is the rightmost path in T M (x), and T M (x) informally is a complete binary tree with a part missing on the lower right.
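The balancedness condition amounts to saying that the paths present in T M (x) form a prefix of the left-to-right order on {L, R} n . A brute-force check (an invented helper for illustration only; it enumerates all 2 n words, so it is exponential in n):

```python
from itertools import product

def is_balanced(paths, n):
    """paths: the set of words over {'L','R'} of length n that occur as
    root-to-leaf paths in T_M(x).  Balanced means no path is missing to
    the left of the rightmost present path, i.e. the present paths are a
    prefix of the lexicographic (L < R) order."""
    all_words = ["".join(w) for w in product("LR", repeat=n)]
    return len(paths) > 0 and sorted(paths) == all_words[: len(paths)]
```

For n = 2 the tree with paths {LL, LR, RL} is balanced (only RR, on the far right, is missing), while {LL, RL} is not, since LR is missing to the left of RL.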
For A ⊆ Γ * , the class BLeaf FA (A) consists of all languages B ⊆ Σ * for which there is a leaf automaton M with input alphabet Σ and leaf alphabet Γ such that 1. for all input words w ∈ Σ * , the computation tree T M (w) is balanced; and 2. B = Leaf M (A).
We will compare the classes (B)Leaf FA (C ) with (B)Leaf P (C ), (B)Leaf L (C ), (B)Leaf LOGT (C ) (the classes of languages definable with leaf languages taken from C as acceptance criterion for (balanced) nondeterministic Turing machines operating, respectively, in polynomial time, logarithmic space, and logarithmic time), and (B)Leaf NC 1 (C ) (languages definable with leaf languages taken from C as acceptance criterion for so-called programs over automata, a model which corresponds to the circuit class NC 1 [BIS90]; our (B)Leaf FA model can be obtained from this latter Leaf NC 1 model by omitting the programs and taking only finite automata). For background on these models, we refer the reader to [HLS + 93, JMT96, CMTV98].

Acceptance Criteria Given by a Complexity Class
We first turn to leaf languages defined by time- or space-bounded Turing machines.

Theorem 4.1 Let t(n) ≥ log n. Then BLeaf FA ATIME(t(n)) = ATIME t(2 n ) .

Proof. The proof uses standard padding arguments.
(⊇): Let A ∈ ATIME t(2 n ) via machine M. Define a leaf automaton N with state set {s} ∪ { q a | a ∈ Σ }, initial state s, leaf evaluation v(q a ) = a for all a ∈ Σ and v(s) = $ for a fresh symbol $, and δ given as follows: δ(s, a) = s q a for all a ∈ Σ, and δ(q b , a) = q b q b for all a, b ∈ Σ.
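A plausible reading of the padding automaton N (an assumption, since its definition is partly garbled in the text) is: states s and q b for b ∈ Σ, v(q b ) = b, δ(s, a) = s q a , δ(q b , a) = q b q b . Its leaf string can then be simulated directly; the sketch below also checks the claimed position property:

```python
def leafstring_N(x, pad="$"):
    """Leaf string of the (reconstructed) padding automaton on input x:
    the pad symbol, then x_n once, x_{n-1} twice, ..., x_1 2^(n-1) times."""
    s = pad                              # leaf of the surviving s-path
    for i, a in enumerate(reversed(x)):
        s += a * (2 ** i)                # block contributed by one q_a subtree
    return s

x = "abc"
w = leafstring_N(x)                      # "$cbbaaaa", of length 2^3
n = len(x)
# the i-th input symbol sits at position 2^(n+1-i) of the leaf string
ok = all(w[2 ** (n + 1 - i) - 1] == x[i - 1] for i in range(1, n + 1))
```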
The reader may check that, given input x = x 1 • • • x n , automaton N produces a leaf string of length 2 n in which the ith symbol of x is equal to the 2 n+1−i th symbol of leafstring N (x). It is clear that the computation tree of N is always balanced. Now define the indexing machine M ′ operating essentially as M, but when M reads its ith input symbol then M ′ reads its 2 n+1−i th input symbol. To simulate M's read operations, M ′ on input of length 2 m (corresponding to input x 1 • • • x m of machine M) first initializes its index tape with the string 10 m . Now head movements of M can easily be simulated by adjusting the index tape (movements to the right correspond to deleting a 0, movements to the left to adding a 0). M ′ thus accepts leafstring N (x) if and only if M accepts x; hence A ∈ BLeaf FA ATIME(t(n)) .

(⊆): Let A ∈ BLeaf FA ATIME(t(n)) ; let N be the corresponding leaf automaton, and let M be the Turing machine accepting the leaf language in time t. Define M ′ as follows: M ′ works as M, but when M reads its ith input symbol, M ′ guesses the input bit and then branches universally. On one of these branches the simulation of M is continued; on the other branch N on its ith path is simulated deterministically. This is possible since the computation tree is balanced, and hence the number i, written down in binary, immediately gives the nondeterministic choices of N on the ith path. The time requirements for this simulation are given by the time bound of machine M (i. e., t(2 n ) for an input of M ′ of length n) plus time O(n) for a single simulation of N. ❑

At this point, two remarks are in order. First, observe that, for the left-to-right inclusion, to obtain the time bound t(2 n ) for machine M ′ we made essential use of its ability to branch existentially and universally; hence this works only for alternating machines. Second, for the above simulation it is necessary that the computation tree produced by N is balanced, because we have to find the ith path of N given only the number i. Next we want to examine what happens when these requirements no longer hold.
Let us first address the case of deterministic machines, i. e., we no longer have the power of alternation, used above to guess and verify. We will see that, nevertheless, a statement similar to the one above can be proved if the class of resource bounds is closed under addition and multiplication.

Theorem 4.2 Let t(n) ≥ log n. Then BLeaf FA DTIME t(n) O(1) = DTIME t(2 n ) O(1) .
Proof. The proof is similar to the previous one. For the right-to-left inclusion, just replace ATIME by DTIME in the above argument.
For the left-to-right inclusion, let A ∈ BLeaf FA DTIME t(n) O(1) via the leaf automaton N and Turing machine M accepting the leaf language. As in the proof of Theorem 4.1 we define M ′ to work as M, but this time, when M reads its ith input symbol, we interrupt the simulation of M, simulate N on its ith path to compute the input symbol of M, and then resume the simulation of M. These subprograms for the simulations of N lead to an extra factor of t(n) in the time requirements of M ′ , which poses no problem here. Note that above, in Theorem 4.1, we did not have this extra time available; hence the other form of simulation there, making use of the power of alternation. ❑

Next we turn to non-balanced computation trees. Knowing the position of a symbol of the leaf string no longer enables us to immediately follow the corresponding path in the leaf automaton, because the relation between the position and the nondeterministic choices on the corresponding path no longer holds. However, as soon as t is at least linear, this is no longer needed, as we observe next.
Let us also consider the case of unbalanced trees where the automata we consider have the property that |δ(q, a)| ≤ 2 for all q ∈ Q and a ∈ Σ.Our notation for the obtained classes is Leaf FA 2 (•).
Theorem 4.3 Let t(n) ≥ n. Then we have:
1. Leaf FA 2 DTIME(t(n)) = DTIME t(2 n ) ;
2. Leaf FA DTIME(t(n)) = DTIME t(2 O(n) ) .
Proof. Similar to the above.
To prove the left-to-right inclusions, it is sufficient to observe that we now have enough time to first compute the whole leaf word (for statement 1 in time 2 n , since we have at most binary branches in the finite automaton; for statement 2 in time 2 O(n) ) and then simulate M. The reverse inclusions are proven exactly as before. For the inclusion DTIME t(2 O(n) ) ⊆ Leaf FA DTIME(t(n)) we also proceed along the same lines, but, if necessary, we replace automaton N from Theorem 4.1 by an automaton with a higher branching degree to pad a length n input to a length 2 cn string for suitable c ∈ N. ❑

In fact the above result can be generalized to many familiar complexity measures. In particular, let Φ be one of the measures DTIME, NTIME, DSPACE, NSPACE, Σ k TIME, ATIME, . . .. Let t(n) ≥ n in the case of a time restriction, and t(n) ≥ log n in the case of a space restriction. The proof given for Theorem 4.3 remains valid for these measures and bounds; hence we conclude that Leaf FA Φ(t(n)) = Φ t(2 O(n) ) . More generally, using Hertrampf's locally definable acceptance types [Her92, Her94], the analogous statement holds for any locally definable acceptance type F .
Hence we obtain in particular corresponding results for the standard time and space complexity classes. The above proofs make use of fairly standard padding techniques. The main point is the definition of an automaton which pads a given word of length n into a word of length 2 n (or 2 O(n) in the unbalanced case). Turing machines and Boolean circuits can pad up to length 2 n O(1) ; therefore similar proofs show that, e. g., the classes Leaf P (NC 1 ), Leaf L (NC 1 ), Leaf NC 1 (NC 1 ), Leaf P (POLYLOGSPACE), Leaf L (POLYLOGSPACE), and Leaf NC 1 (POLYLOGSPACE) coincide with ATIME(n O(1) ) = PSPACE, see [HLS + 93, JMT96, CMTV98]. Hence we see that here, in the context of complexity classes as leaf languages, the ability to pad is the central point, and Turing machines, Boolean circuits, and finite automata behave quite similarly.

Acceptance Criteria Given by a Formal Language Class
We now consider in turn the different classes that make up the Chomsky hierarchy of formal languages.

Regular Languages
One can easily see that REG is defined by the regular leaf language 0 * 1(0 + 1) * ; but, as the following proof shows, already the language {1} over B = {0, 1} defines REG. Furthermore, we show next in our main result that a regular leaf language cannot define a class containing non-regular languages.

Theorem 5.1 BLeaf FA (REG) = Leaf FA (REG) = REG.
Proof. The inclusion BLeaf FA (REG) ⊆ Leaf FA (REG) is trivial. To show REG ⊆ BLeaf FA (REG) we use the leaf language B = {1} ∈ REG over B = {0, 1}. Let A ∈ REG be given. Then there exists a DFA N which accepts A. We use N as the leaf automaton, producing the leaf string 1 or 0 when accepting or rejecting. Thus we have x ∈ A ⇐⇒ leafstring N (x) = 1 ⇐⇒ leafstring N (x) ∈ B. Of course, the computation tree of N is always balanced, since N is deterministic and the tree consists of a single path.
Finally we have to show Leaf FA (REG) ⊆ REG. Let A ∈ Leaf FA (REG) be a language over the alphabet Σ. Then there exist a DFA M and a leaf automaton N with the following property: x ∈ A ⇐⇒ M accepts leafstring N (x). Let the automata N and M be given by N = (Σ, Q N , δ N , s N , Γ, v) and M = (Γ, Q M , δ M , s M , F M ). For q ∈ Q N and a ∈ Σ we denote the branching degree by r(q, a) = |δ N (q, a)| and write δ N (q, a) = δ N,1 (q, a) • • • δ N,r(q,a) (q, a).
We construct an AFA M̂ = (Σ, Q M̂ , s M̂ , F M̂ , g) which accepts A. The set of states is defined by Q M̂ = {s M̂ } ∪ (Q M × Q M × Q N ). In the sequel we will denote a state q M̂ ∈ Q M̂ \ {s M̂ } by a triple, e. g., q M̂ = (q 0 , q e , q N ), with the following intuition: When starting in q M̂ , M̂ will accept if and only if the leaf string produced by the leaf automaton N starting in q N leads M from q 0 to q e . M̂ follows the computation of N, while it guesses an accepting sequence of states of M. At the end M̂ checks whether this sequence coincides with the sequence of states one gets when following M working on the leaf string. This is done by using the principle of "divide and conquer." We define the function g as well as the set of final states F M̂ by systems of equations as described in Sect. 2 (note that '+' and '•' are parts of the equation formalism, while '∨' and '∧' are used to specify Boolean functions). Note that the branching degree r depends on the state and the letter of the input, so the value of r might differ for different x in g s M̂ ,x . Remember that s M̂ ∈ F M̂ ⇐⇒ ε s M̂ = λ. The "divide and conquer" approach is directly reflected by the syntactic shape of the Boolean functions g s M̂ ,x (and g q M̂ ,x below): Similar to, e. g., the proof of Savitch's Theorem [BDG95, Theorem 2.27] or the proof of the PSPACE-completeness of QBF [BDG95, Theorem 3.29], the disjunctive normal form expresses that there are "intermediate states" q 1 , . . ., q r−1 such that for all these states the corresponding subcomputations are valid.
Again, r depends on q N and x, and we have q M̂ ∈ F M̂ ⇐⇒ ε q M̂ = λ. Now we must show that the alternating automaton M̂ accepts the language L(M̂) = A. The state q M̂ = (q 0 , q e , q N ) ∈ Q M̂ has the following intuitive meaning: Starting N in state q N on the input y, we obtain a leaf string w. Starting M̂ in q M̂ , the input y will be accepted if and only if this leaf string leads M from state q 0 to q e , i. e., if δ * M (q 0 , w) = q e . We prove this by induction on |y|. |y| = 0: For y = λ the leaf string is the single letter v(q N ). Starting in q M̂ = (q 0 , q e , q N ), y = λ will be accepted if and only if ε q M̂ = λ. This is the case if and only if δ M (q 0 , v(q N )) = q e , i. e., the leaf string v(q N ) leads M from q 0 to q e . Assuming the claim to be correct for all y ∈ Σ * , |y| < n, we now consider the case |y| = n: Let q M̂ = (q 0 , q e , q N ) be the current state of M̂ and y = y 1 • • • y n . In state q N , N branches into r = r(q N , y 1 ) = |δ N (q N , y 1 )| subtrees when it reads y 1 . According to the equation for g q M̂ ,y 1 , M̂ in state q M̂ accepts y if and only if there exists a sequence of states q 1 , . . ., q r−1 ∈ Q M with the following property: In each subtree i, i = 1, . . ., r, the word y 2 • • • y n is accepted when starting in state (q i−1 , q i , δ N,i (q N , y 1 )) (in state (q r−1 , q e , δ N,r (q N , y 1 )) for i = r). By the induction hypothesis this is true if and only if in each subtree M is led from q i−1 to q i (from q r−1 to q e , resp.) by the corresponding leaf string. Thus M̂ accepts y starting in q M̂ if and only if M is led from q 0 to q e by the whole leaf string.
Analogously, starting from s M̂ , M̂ accepts an input y, |y| > 0, if and only if, in addition to the states q i ∈ Q M , there is an accepting state q r ∈ F M such that δ * M (s M , leafstring N (y)) = q r . If y = λ then N produces the single-letter leaf string v(s N ), and we have λ ∈ A ⇐⇒ δ M (s M , v(s N )) ∈ F M ⇐⇒ ε s M̂ = λ ⇐⇒ λ ∈ L(M̂).

Thus we have L(M̂) = A. ❑
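The "divide and conquer" idea can be prototyped directly: the recursive predicate below guesses the intermediate states of M at the borders between N's subtrees, exactly as the AFA does; all concrete automata (delta_N, delta_M, v) are invented toy data.

```python
from itertools import product

# Invented toy data: leaf automaton N (ordered successors), leaf values v,
# and a DFA M over the leaf alphabet {0, 1} tracking the parity of 1s.
delta_N = {("p", "a"): ("p", "r"), ("p", "b"): ("p",),
           ("r", "a"): ("r",), ("r", "b"): ("r", "r")}
v = {"p": "0", "r": "1"}
delta_M = {("e", "0"): "e", ("e", "1"): "o",
           ("o", "0"): "o", ("o", "1"): "e"}
QM = ("e", "o")

def leafstr(qN, y):                      # leaf string of N started in qN
    if not y:
        return v[qN]
    return "".join(leafstr(q, y[1:]) for q in delta_N[(qN, y[0])])

def run_M(q, w):                         # direct run of the DFA M
    for c in w:
        q = delta_M[(q, c)]
    return q

def accepts(q0, qe, qN, y):
    """True iff the leaf string of N from qN on y leads M from q0 to qe,
    checked divide-and-conquer style: existentially guess the states of M
    between N's subtrees, then verify every piece recursively."""
    if not y:
        return delta_M[(q0, v[qN])] == qe
    succs = delta_N[(qN, y[0])]
    r = len(succs)
    for mids in product(QM, repeat=r - 1):
        seq = (q0,) + mids + (qe,)
        if all(accepts(seq[i], seq[i + 1], succs[i], y[1:]) for i in range(r)):
            return True
    return False
```

Since M is deterministic, the guessed chain can only succeed when it coincides with the actual run of M on the leaf string, which is the invariant proved by induction above.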
The result just given is dramatically different from corresponding results for other models: It is known that Leaf P (REG) = PSPACE and Leaf L (REG) = P.

Context-free Languages
We found REG to be closed under the Leaf FA operator; but it is well known that REG is closed under many operations. What about the other classes in the Chomsky hierarchy, e. g., CFL? First we show in Lemma 5.3 that every class defined via leaf languages is closed under intersection if the class of leaf languages is closed under a certain type of concatenation. Then it will be easy to see that CFL is not closed under the Leaf FA operator. Furthermore we give some arguments for an upper bound on Leaf FA (CFL).
First, however, we observe the following:

Proof. It is known that for every L ∈ CFL over some alphabet Σ and every $ ∉ Σ, the language L $ , obtained from L by inserting arbitrarily many occurrences of $ at arbitrary positions, is again context-free. By an easy modification of automaton N from the proof of Theorem 4.1 we obtain a leaf automaton M that, given an input a 1 • • • a n , produces a full binary computation tree whose leaf string is of the form $ * a 1 $ * • • • $ * a n $ * . Hence, M with leaf language L $ accepts L. ❑

In the proof of Lemma 5.3, the new state m is in the middle of these three successor states according to the order of the computation tree. In all other states M ′ works just like M A or M B , respectively. For all x ≠ λ we obtain leafstring M ′ (x) = leafstring M A (x)#leafstring M B (x). For the special case of x = λ we adjust the leaf languages, using property 2: we define L ′ A to contain λ if and only if λ ∈ A; analogously we define L ′ B , and we get λ ∈ A ∩ B ⇐⇒ leafstring M ′ (λ) ∈ L ′ A #L ′ B .

Our last result, that the leaf language class RE defines RE in the balanced as well as in the unbalanced case, is not surprising. We first show Leaf FA (RE) ⊆ RE: Let A = Leaf M (B), where B ∈ RE is given by the recursive and onto function f : N → B. The set of all inputs x for which M produces a given leaf string w is also enumerable: Simulate M on every x ∈ Σ * and, if leafstring M (x) = w, then output x. Let g : N × Γ * → Σ * be the corresponding recursive enumeration function. Now we use Cantor's dovetailing method to enumerate A, i. e., we calculate in this order g(1, f (1)), g(2, f (1)), g(1, f (2)), g(3, f (1)), g(2, f (2)), g(1, f (3)), . . . Finally, since RE is closed under padding with a neutral letter $, the proof of RE ⊆ BLeaf FA (RE) is the same as for context-free languages. ❑
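The dovetailing order used in the enumeration can be sketched as follows (`f` and `g` are stand-ins for the enumeration functions of the text; the concrete toy functions below are invented):

```python
def dovetail(g, f, limit):
    """Enumerate the values g(i, f(j)) along anti-diagonals i + j = d
    (Cantor's dovetailing), in the order given in the text:
    g(1,f(1)), g(2,f(1)), g(1,f(2)), g(3,f(1)), g(2,f(2)), g(1,f(3)), ...
    Stops after `limit` values (the real enumeration is infinite)."""
    out = []
    d = 2
    while True:
        for i in range(d - 1, 0, -1):   # i descends, j = d - i ascends
            out.append(g(i, f(d - i)))
            if len(out) == limit:
                return out
        d += 1

# Toy stand-ins (invented): f enumerates leaf words, g records which pair
# (index, leaf word) was visited at each step.
visited = dovetail(lambda i, w: (i, w), lambda j: "w%d" % j, 6)
```

Every pair (i, j) lies on some finite anti-diagonal, so every candidate is eventually visited even though each row and column is infinite.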

Conclusion
We examined the acceptance power of nondeterministic finite automata with different kinds of leaf languages. Comparing our results with those known for nondeterministic Turing machines with leaf language acceptance, we saw that if the leaf language class is a formal language class then we obtain a huge difference in computational power, but in the case of a resource-bounded leaf language class the difference between finite automata, Boolean circuits, and Turing machines (almost) disappears. This is due to the fact that in all three cases the central point is only the power of the devices to pad out their given input to a long leaf string.
It is known that the operator Leaf LOGT (•), i. e., leaf languages for nondeterministic logarithmic time-bounded machines, is a closure operator: Leaf LOGT (C ) coincides with the closure of the class C under DLOGTIME reductions [JMT96]. Initially we had hoped to show that the operator Leaf FA (•) is also some form of closure operator. However, the results from Sect. 4 prove that this is not the case. If C is a sufficiently large complexity class, then Leaf FA (C ) ⊊ Leaf FA Leaf FA (C ) ; hence the operator Leaf FA (•) lacks the property of being a closure operator. In this sense, the Leaf FA model is even more complicated than the Leaf LOGT model.
The main remaining open question, of course, is whether the upper and lower bounds obtained in this paper for Leaf FA (CFL) can be strengthened. Our results here leave a lot of room for improvement, and certainly one would expect to be able to give stronger bounds. Nevertheless, we have so far been unable to do so. An idea would be to follow the proof of Theorem 5.1: For each language A ∈ Leaf FA (CFL) one can construct an alternating pushdown automaton which accepts A. But unfortunately this yields no more than Leaf FA (CFL) ⊆ E, because in [CKS81] Chandra, Kozen, and Stockmeyer showed that the class ALT-PDA of all languages accepted by such automata equals E. One might hope that the lower bound PSPACE = Leaf NC 1 (CFL) could be transferred to our context; after all, there is a very strong connection between the class NC 1 and finite automata, since there are regular languages complete for NC 1 under very strict reductions such as uniform projections, see [BIS90]. However, our Theorem 5.4 shows that this hope is not justified; we have PSPACE ⊆ Leaf FA (CFL) only if PSPACE ⊆ E.
1. L A , L B ∈ C =⇒ L A #L B ∈ C , where # is a new symbol, and 2. L ∈ C =⇒ L ∪ {λ} ∈ C . Then Leaf FA (C ) is closed under intersection.

Proof. Let A = Leaf M A (L A ) and B = Leaf M B (L B ) with leaf automata M A , M B (where, w. l. o. g., we assume that the state sets of M A and M B are disjoint) and leaf languages L A , L B ∈ C over the alphabets Σ A , Σ B . Construct a leaf automaton M ′ with leafstring M ′ (x) = leafstring M A (x)#leafstring M B (x) for all x ≠ λ and the new symbol # ∉ Σ A ∪ Σ B in the following way: The set of states of M ′ consists of the states of M A and M B , a new initial state s with the value v(s) = #, and a new state m producing the leaf string # on every input. In state s there is a nondeterministic transition into the initial state of M A , into m, and into the initial state of M B .