Karp-Miller Trees for a Branching Extension of VASS

We study BVASS (Branching VASS) which extend VASS (Vector Addition Systems with States) by allowing addition transitions that merge two configurations. Runs in BVASS are tree-like structures instead of linear ones as for VASS. We show that the construction of Karp-Miller trees for VASS can be extended to BVASS. This entails that the coverability set for BVASS is computable. This allows us to obtain decidability results for certain classes of equational tree automata with an associative-commutative symbol. Recent independent work by de Groote et al. implies that decidability of reachability in BVASS is equivalent to decidability of provability in MELL (multiplicative exponential linear logic), which is still an open problem. Hence our results are also a step towards answering this question in the affirmative.


Introduction
The purpose of this paper is to study Branching VASS (BVASS), a natural extension of both vector addition systems with states (VASS) and Parikh images of context-free grammars, and to show that emptiness, coverability and boundedness are decidable for this common extension, by extending the usual Karp-Miller tree construction. This allows us to obtain decidability results for certain classes of two-way equational tree automata modulo the theory AC of one associative-commutative symbol [Ver03a], which arise naturally from the study of certain cryptographic protocols, and which were the initial motivation behind our extension. However, recent independent work by de Groote et al. [dGGS04] also implies that decidability of reachability of configurations in BVASS is equivalent to decidability of provability in MELL (multiplicative exponential linear logic), which is still an open problem. Hence our results are a step towards a positive answer to this question.
For the time being, let us introduce semilinear sets, VASS, Petri nets, and Parikh images of context-free grammars, so as to explain what our extension is about. We apologize in advance for the length of the exposition, but we feel it is better to understand the concepts before we build on them.

† Work done while a PhD student at LSV, and partially supported by the ACI VERNAM, the RNTL project EVA and the ACI jeunes chercheurs "Sécurité informatique, protocoles cryptographiques et détection d'intrusions".

Notation. We use a Prolog-like notation: although this is not standard, it will have several advantages, one being uniformity of notation. Fix a natural number p. We shall consider definite clauses of the form P(t) ⇐ P₁(t₁), …, Pₙ(tₙ), where n ∈ N, P, P₁, …, Pₙ are so-called predicates (a.k.a. states), and t, t₁, …, tₙ are terms built from constants ν ∈ N^p, variables x, y, z, … (denoting p-tuples of natural numbers), and the symbol + denoting componentwise addition of p-tuples of natural numbers. An instance of such a clause is obtained by replacing all variables by actual p-tuples of natural numbers and doing the obvious simplifications. An integer program P is any finite set of definite clauses of the above format. A fact is any atom P(ν) with ν ∈ N^p. Derivations (of P(ν)) from P are inductively defined so that, given any instance P(ν) ⇐ P₁(ν₁), …, Pₙ(νₙ) of a definite clause in P, and given any derivations ∆ᵢ of Pᵢ(νᵢ) from P, 1 ≤ i ≤ n, the tree with root labeled P(ν) and immediate subtrees ∆₁, …, ∆ₙ is a derivation. (The base cases are when n = 0, in which case the chosen instance is the fact P(ν).) A fact is derivable from P iff there is a derivation of it from P. The language L_P(P) of the predicate (or state) P in P is the set of p-tuples ν ∈ N^p such that P(ν) is derivable from P. We say that ν ∈ N^p is recognized at P in P iff ν ∈ L_P(P).
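To make the notation concrete, here is a minimal Python sketch (our own encoding, not from the paper: clause tags "fact", "period" and "add" are hypothetical names) of integer programs and of the facts derivable by derivations of bounded depth:

```python
from itertools import product

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def derivable(program, bound):
    """Facts (P, nu) derivable using derivations of depth <= bound.
    Clause encodings (hypothetical): ("fact", P, nu) for P(nu);
    ("period", P, nu, P1) for P(x + nu) <= P1(x);
    ("add", P, P1, P2) for P(x + y) <= P1(x), P2(y)."""
    facts = set()
    for _ in range(bound):
        new = set(facts)
        for c in program:
            if c[0] == "fact":
                new.add((c[1], c[2]))
            elif c[0] == "period":
                for (q, nu) in facts:
                    if q == c[3]:
                        new.add((c[1], add(nu, c[2])))
            elif c[0] == "add":
                for (q1, n1), (q2, n2) in product(facts, facts):
                    if q1 == c[2] and q2 == c[3]:
                        new.add((c[1], add(n1, n2)))
        facts = new
    return facts
```

For instance, the program {P(1, 0); P(x + (0, 1)) ⇐ P(x)} recognizes at P exactly the tuples (1, n), and `derivable` enumerates the finite approximations of this language.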
Semilinear sets. Recall that a linear set L of p-tuples of natural numbers is any set of the form L_{ν₀,B} = {ν₀ + ν₁ + … + νₖ | k ∈ N and ν₁, …, νₖ ∈ B} for some finite set B ⊆ N^p. A semilinear set is any finite union of linear sets. Semilinear sets are one of the fundamental tools of automated verification, and are closed under union, intersection, complement, and projection; they are exactly the Presburger-definable subsets of N^p [GS66]. Now, given ν₀ ∈ N^p and some finite B ⊆ N^p, consider the following set of Horn clauses:

    P(ν₀)                                                    (1)
    P(x + ν) ⇐ P(x)    (one clause for each ν ∈ B)           (3)

It is easy to see that the set of p-tuples recognized at P in this program is exactly L_{ν₀,B}. Given any semilinear set L, written as a union ⋃ⁿᵢ₌₁ L_{ν₀ⁱ,Bⁱ}, we can as easily write the set of all clauses Pᵢ(ν₀ⁱ), 1 ≤ i ≤ n, and Pᵢ(x + ν) ⇐ Pᵢ(x), 1 ≤ i ≤ n, ν ∈ Bⁱ, where P₁, …, Pₙ are pairwise distinct predicates. The union of the sets of tuples recognized at each Pᵢ is L. In particular, any semilinear set can be represented as L_P(R) for some R and some finite set P of definite clauses of the form (1) or (3). Conversely, for every finite set P of what we shall call base/period clauses (of the form (1) or (3)), the languages L_P(P) are semilinear, for every predicate P; this is a consequence of Parikh's Theorem, to be stated below. Note that derivations from such integer programs are just sequences of applications of clauses (3) ending in one clause (1).
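The definition of L_{ν₀,B} can be sketched directly. The following Python fragment (ours, not the paper's) enumerates the elements of a linear set built from at most k period vectors, matching the derivations of bounded length in the base/period program above:

```python
from itertools import combinations_with_replacement

def linear_set(nu0, B, k):
    """Elements of L_{nu0,B} obtained from at most k period vectors of B.
    Each element is nu0 plus a multiset of vectors drawn from B."""
    out = set()
    for r in range(k + 1):
        for combo in combinations_with_replacement(B, r):
            v = list(nu0)
            for b in combo:
                v = [a + c for a, c in zip(v, b)]
            out.add(tuple(v))
    return out
```

For example, with base (1, 0) and periods {(0, 1), (2, 0)}, at most two periods yield the six tuples (1, 0), (1, 1), (1, 2), (3, 0), (3, 1), (5, 0).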
Parikh images. What about allowing for more complex clause formats? One possibility is to replace clauses (3) by the more general addition clauses

    P(x + y) ⇐ Q(x), R(y)    (x and y being distinct variables)    (4)

and keep the above clauses (1). (Clauses (3) are easily seen to be implementable through these two.) Addition clauses state that given any p-tuple recognized at Q, and given any p-tuple recognized at R, their sum is recognized at P. It turns out that these clause formats naturally encode Parikh images of context-free languages; this has been used in one form or another by several authors, and we take our presentation from [Ver03c]. Recall that the Parikh image of a set L of words over the finite alphabet A = {a₁, …, aₚ} is the set of all p-tuples |w| = (|w|₁, …, |w|ₚ), where w ranges over L, and |w|ᵢ is by convention the number of occurrences of aᵢ in the word w. The construction goes as follows. Take any context-free grammar in Chomsky normal form, i.e., with productions of the form P → aᵢ, or P → ε, or P → QR (where P, Q, R are non-terminals, and ε denotes the empty word). For each production P → aᵢ, generate the clause P((0, …, 0, 1, 0, …, 0)), where the only '1' is at position i; for each production P → ε, generate the clause P((0, …, 0)); for each production P → QR, generate P(x + y) ⇐ Q(x), R(y). Then the language of P is the Parikh image of the language generated by the grammar with start symbol P [Ver03c]. Parikh's Theorem [Par66], once recast in our setting (see [Ver03c]), states that given any program P consisting of clauses of the form (1), (3) and (4), the languages L_P(P) are all semilinear sets, and effectively so. So there is a procedure that computes a set of base/period clauses (1), (3) from any such program P, in such a way that the languages of P are preserved, for each P in P. (A nice, generalized version of this appears in [AÉI02].) Note that, while derivations in base/period programs are just sequences, derivations in the presence of addition clauses (4) exhibit a branching behavior. In a sense, Parikh's Theorem states that branching can be eliminated while preserving languages.
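The grammar-to-clauses translation above can be exercised with a small fixpoint computation. This Python sketch (our encoding; the grammar S → ε | A T, T → S B, A → a, B → b generating aⁿbⁿ is our example, not the paper's) applies addition clauses to an initial set of facts:

```python
def parikh_fixpoint(facts0, adds, rounds):
    """facts0: set of (P, vector) from productions P -> a_i and P -> eps;
    adds: addition clauses P(x + y) <= Q(x), R(y), encoded as (P, Q, R).
    Applies the clauses `rounds` times and returns all facts produced."""
    facts = set(facts0)
    for _ in range(rounds):
        for (p, q, r) in adds:
            for (q1, v1) in list(facts):
                for (q2, v2) in list(facts):
                    if q1 == q and q2 == r:
                        facts.add((p, tuple(a + b for a, b in zip(v1, v2))))
    return facts

# Grammar S -> eps, S -> A T, T -> S B, A -> a, B -> b  (generates a^n b^n)
facts0 = {("A", (1, 0)), ("B", (0, 1)), ("S", (0, 0))}
adds = [("S", "A", "T"), ("T", "S", "B")]
```

Every tuple recognized at S then has equal components, as expected for the Parikh image of {aⁿbⁿ | n ≥ 0}.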
Petri nets and VASS. If, instead of allowing addition clauses (4), we allow two-way clauses of the form

    P(x + ν•) ⇐ Q(x + •ν)    (5)

as another extension of base/period clauses, where ν• and •ν are elements of N^p, then we get so-called vector addition systems with states (VASS) [HP79]‡. The languages L_P(P) are called reachability sets (for state P) in this context. To simplify matters, we shall assume that min(ν•, •ν) = 0, meaning that for every index i, either the ith component of ν• or the ith component of •ν is zero. This entails no loss of generality as far as reachability, or coverability, or boundedness, is concerned [Reu89]: e.g., any clause (5) can be split, for those purposes, into one clause that only subtracts •ν and one that only adds ν•, with R a fresh state in between. In this case, there is no loss of generality either in abbreviating (5) as

    P(x + δ) ⇐ Q(x)    (6)

where δ ∈ Z^p is a vector of integers, negative or positive, equal to ν• − •ν; such a clause applies to a fact Q(ν) exactly when ν + δ ≥ 0. Branching VASS (BVASS) are then obtained by additionally allowing addition clauses (4), so that derivations branch, as for Parikh images. BVASS are clearly at least as expressive as Petri nets and VASS. At the moment, it is unknown whether we can effectively transform any BVASS into a VASS with the same reachability sets. (I.e., can we eliminate branching?) In fact, we do not know whether BVASS are strictly more expressive or just as expressive as VASS. An analogue of Parikh's Theorem would be needed here, but all proof techniques for the latter that we know of fail on BVASS. Another extension of Petri nets that has been studied in the literature is ground rewrite systems modulo AC [MR98], which builds on the aforementioned decidability results for reachability in Petri nets. The latter do not seem to bear any relationship with BVASS.
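Under the assumption min(ν•, •ν) = 0, firing the abbreviated clause (6) amounts to adding δ and checking that the result stays in N^p. A one-function Python sketch (ours):

```python
def fire(nu, delta):
    """Apply the abbreviated two-way clause P(x + delta) <= Q(x) to a fact
    Q(nu): derive P(nu + delta), defined only if the result stays in N^p.
    Under the min(nu_out, nu_in) = 0 assumption, nonnegativity of the
    result is exactly the firability condition."""
    out = tuple(a + d for a, d in zip(nu, delta))
    return out if all(c >= 0 for c in out) else None
```

E.g., fire((2, 1), (-1, 3)) yields (1, 4), while fire((0, 1), (-1, 0)) is undefined because the first component would go negative.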
Outline. Instead, we concentrate on generalizing the Karp-Miller construction to BVASS. While most of our arguments will look like the usual Karp-Miller construction, there is one difference. Remember that derivations in Petri nets are sequences of rule applications. The usual Karp-Miller construction organizes (approximants of) finite and infinite derivations, which we call the covering derivations, into a tree branching downwards, by sharing common prefixes; this Karp-Miller tree is finite by König's Lemma. With BVASS, derivations are trees branching upwards. There is little hope of organizing such trees into a common structure (with both upward and downward branches, should it be called a jungle?). In particular, we lose the ability to use König's Lemma and conclude anything interesting this way. We show in this paper how König's Lemma can still be used, by building a special forest of covering derivations instead of burrowing our way through a jungle, to show that there are only finitely many covering derivations. The construction of covering derivations and their properties is set out in Section 2, the termination argument in Section 3.
We then apply this result to show that the emptiness, coverability and boundedness problems for branching VASS are decidable, just like they are for VASS. Another consequence, which we briefly explain in Section 4, is that standard two-way constant-only AC tree automata (extending the constant-only restriction of [Ver03c, Ver03b] with so-called standard + push clauses) have a decidable intersection-emptiness problem. We currently know of no alternative proof of this result, which is a justification of the usefulness of BVASS. We present a further extension of BVASS in Section 5, about which we know essentially nothing, motivated by a more general notion of two-way constant-only AC tree automata. We conclude in Section 6.
Related work. While equational tree automata may have been the initial motivation behind this study, the interest of BVASS is not limited to the narrow realm of equational tree automata. Recently, de Groote et al. have independently defined vector addition tree automata (VATA), which are essentially BVASS, and shown that decidability of provability in MELL (multiplicative exponential linear logic) is equivalent to decidability of reachability in VATA (see [dGGS04] for details)§. In other words, decidability of provability in MELL is equivalent to decidability of reachability in BVASS. No decidability questions are answered in [dGGS04]. Hence our Karp-Miller construction for BVASS can be seen as a step towards a positive answer to this open question. The fact that BVASS are a natural common generalization of two already well-established tools in computer science (Parikh images of context-free languages, and VASS), and that they are useful in domains as diverse as equational tree automata and linear logic, confirms that BVASS are interesting objects to study. This connection between MELL and VATA (hence BVASS) generalizes the already known connection between ordinary VASS and the !-Horn fragment of MELL, which was used to obtain a decidability result for the latter [Kan95]. See also [Kan94, Kan96] for connections between different fragments of linear logic and VASS.
For related work on equational tree automata, see [Ohs01, Lug03]. While these deal mainly with one-way variants, we have introduced two-way variants in order to deal with cryptographic protocols [GLRV05, GLV02, Ver03c, Ver03b, Ver03a]. Our study of BVASS was initially prompted by certain classes of these automata.

Covering Derivations
Covering derivations for branching VASS are defined in much the same way as for VASS. Definition 1 below should therefore not surprise the cognoscenti. The only new item of the definition, compared to VASS, is item 3, which is due to the presence of addition clauses. If this were removed, we would get a definition of something very similar to the individual (finite prefixes of) paths in Karp-Miller trees of ordinary VASS. As we have said earlier, defining the Karp-Miller tree (jungle?) would be impossible, or at least obscure, in our extended setting.
We check all needed properties of covering derivations here. The challenge will be to show that there are only finitely many covering derivations, see Section 3.

§ However, 'BVASS' is not a new name that we have invented; it appears already in [VGL04]. This extension of VASS first appears in print in [Ver03a], where it is simply called extended VASS.
Definition 1 (Covering Derivation) A generalized fact is any atom P(ν), where ν is in (N ∪ {∞})^p. Addition and comparison are defined componentwise, with the convention that n + ∞ = ∞ + n = ∞ + ∞ = ∞ and n < ∞ for every n ∈ N. Assume fixed a branching VASS V. A covering derivation ∆ is a finite tree, each of whose nodes is labeled with a generalized fact and a clause, constructed using the following rules:

1. Whenever P(ν) is a clause in V, the one-node tree labeled with P(ν) and this clause is a covering derivation.

2. For every covering derivation ∆₁ of a generalized fact P₁(ν′₁), such that P₁(ν′₁) only occurs once (namely, at the bottom) in ∆₁, for every transition P(x + δ) ⇐ P₁(x) in V with ν′₁ + δ ≥ 0, the tree obtained from ∆₁ by appending a new root labeled with P(ν′) and this clause is a covering derivation, where ν′ is defined as the vector whose ith component is: ∞ if some generalized fact P(ν′′) with ν′′ ≤ ν′₁ + δ and ν′′[i] < ν′₁[i] + δ[i] occurs in ∆₁, and (ν′₁ + δ)[i] otherwise.

3. For all covering derivations ∆₁ of P₁(ν′₁) and ∆₂ of P₂(ν′₂), such that P₁(ν′₁) only occurs once in ∆₁ and P₂(ν′₂) only occurs once in ∆₂, for every addition clause P(x + y) ⇐ P₁(x), P₂(y) in V, the tree obtained from ∆₁ and ∆₂ by appending a new common root labeled with P(ν′) and this clause is a covering derivation, where ν′ is defined as the vector whose ith component is: ∞ if some generalized fact P(ν′′) with ν′′ ≤ ν′₁ + ν′₂ and ν′′[i] < ν′₁[i] + ν′₂[i] occurs in ∆₁ or in ∆₂, and (ν′₁ + ν′₂)[i] otherwise.

Intuitively, covering derivations compute "limits" of facts derivable in a branching VASS. This is made precise by Propositions 1 and 2 below.

Example 1 Consider a branching VASS with a set of clauses C₁–C₅. Some facts derivable in this branching VASS are P₁(2, 5), P₂(3, 4), P₃(5, 9), P₁(2 + n, 5), P₃(5 + n, 9), P₂(3 + 4n, 4) for all n ≥ 0. Figure 1 shows an example of a covering derivation for this branching VASS. This covering derivation cannot be extended further, because the final fact P₁(∞, 5) also occurs higher in the derivation, so items 2 and 3 of the definition do not apply. Intuitively, the meaning of P₁(∞, 5) is that P₁(n, 5) is derivable from C₁–C₅ for arbitrarily high values of n ∈ N. This will be made precise in Proposition 2.
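The acceleration step in items 2 and 3 can be sketched in isolation. The following Python fragment (our sketch, based on our reading of Definition 1) computes the vector ν′ from the candidate vector and the generalized facts with the same predicate occurring in the premise derivations:

```python
INF = float("inf")

def accelerate(same_predicate_facts, nu):
    """Karp-Miller acceleration (sketch): nu is the candidate vector for
    the new root; same_predicate_facts are the vectors of generalized
    facts P(nu'') with the same predicate occurring in the premise
    derivation(s). Component i becomes infinity when some nu'' <= nu
    has nu''[i] < nu[i]."""
    out = list(nu)
    for mu in same_predicate_facts:
        if all(a <= b for a, b in zip(mu, nu)):
            for i in range(len(nu)):
                if mu[i] < nu[i]:
                    out[i] = INF
    return tuple(out)
```

With the numbers of Example 1: from an occurrence of P₁(1, 5) below a candidate (2, 5) we get (∞, 5), whereas an incomparable occurrence such as (3, 5) leaves (2, 5) unchanged.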
Proposition 1 Let V be a branching VASS. If a fact P(ν) is derivable, then there is a covering derivation ∆ of some generalized fact P(ν′) such that ν ≤ ν′.
Proof: We do induction on the size of the derivation of P(ν). We have the following cases: (i) If P(ν) is derivable using the clause P(ν), then use Rule 1 of Definition 1; this satisfies the requirements.
(ii) Suppose P(ν₁ + δ) is derivable from a derivation of P₁(ν₁) using a clause P(x + δ) ⇐ P₁(x). By induction hypothesis we have a covering derivation ∆₁ of some generalized fact P₁(ν′₁) with ν₁ ≤ ν′₁. We pick a minimal such ∆₁. Consequently P₁(ν′₁) does not occur except as conclusion in ∆₁. Clearly we have ν₁ + δ ≥ 0 and hence ν′₁ + δ ≥ 0. By using Rule 2 of Definition 1, we get a covering derivation ∆ with root labeled by a generalized fact P(ν′) with ν₁ + δ ≤ ν′. Hence ∆ is the required covering derivation.
(iii) Suppose P(ν₁ + ν₂) is derivable from the derivations of P₁(ν₁) and P₂(ν₂) using the clause P(x + y) ⇐ P₁(x), P₂(y). By induction hypothesis, we have covering derivations ∆₁ and ∆₂ of P₁(ν′₁) and P₂(ν′₂) with ν₁ ≤ ν′₁ and ν₂ ≤ ν′₂. As in the previous case we may assume that P₁(ν′₁) only occurs in ∆₁ as the conclusion, and P₂(ν′₂) only occurs in ∆₂ as the conclusion. By using Rule 3 of Definition 1, we get a covering derivation ∆ of some P(ν′) with ν₁ + ν₂ ≤ ν′. ✷

Given a branching VASS V, a linear path π of V from P₁ to Pₙ (n ≥ 1) is a sequence of states interleaved with edges e₁, …, e_{n−1}, where for each i:
• either eᵢ is a two-way clause P_{i+1}(x + δ) ⇐ Pᵢ(x);
• or eᵢ is a pair C; Q(ν) of an addition clause C = P_{i+1}(x + y) ⇐ Pᵢ(x), Q(y), and of a fact Q(ν) that is derivable from V. Here it is understood that the order of atoms in the body of an addition clause is irrelevant.
We also say that π is a linear path from P₁ to Pₙ. The elements eᵢ are called edges. In the first case, the valuation v(eᵢ) of eᵢ is δ; in the second case, it is ν. The valuation v(π) of π is the sum of the valuations of its edges. For instance, a path with edge valuations (−2, −4) and (50, 5), a suffix of the path of Example 2, has valuation (−2, −4) + (50, 5) = (48, 1), but is admissible for ν only when ν ≥ (2, 4).
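Admissibility (Definition 3) only depends on the edge valuations, so it can be checked mechanically. A small Python sketch (ours), where a path is given by its list of valuations in Z^p and I is an optional set of indices:

```python
def admissible(nu, valuations, I=None):
    """Definition 3 (sketch): nu + v(prefix) must stay >= 0 after every
    prefix of the path, either on all components (I is None) or only on
    the components listed in I."""
    idx = range(len(nu)) if I is None else I
    cur = list(nu)
    for val in valuations:
        cur = [a + b for a, b in zip(cur, val)]
        if any(cur[i] < 0 for i in idx):
            return False
    return True
```

With the valuations (−2, −4) and (50, 5) above, the path is admissible for (2, 4) but not for (1, 4), although it is admissible for (1, 4) with respect to the second component alone.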
We require the following auxiliary lemma to prove Proposition 2, which is the most crucial result of our discussion on branching VASS.
Lemma 1 Let V be a branching VASS. Let ∆ be a covering derivation with the property that, given any generalized fact P(ν′) occurring in this derivation, we can find a (non-generalized) fact P(ν) derivable from V such that ν[i] = ν′[i] for every i with ν′[i] finite. Suppose that ∆₂ is a subderivation of P₂(ν′₂) in ∆ containing a (not necessarily proper) subderivation ∆₁ of P₁(ν′₁), and let J be a set of indices on which ν′₂ is finite. Then there is a linear path π from P₁ to P₂ which is admissible for ν′₁ with respect to J, and such that ν′₁[J] + v(π)[J] = ν′₂[J].
Proof: By induction on ∆₂. If ∆₂ = ∆₁, then the empty linear path from P₁ to P₁ satisfies the requirements. Otherwise, look at the last rule used to build ∆₂.
Suppose first that ∆₂ was built using Rule 2 of Definition 1 from a subderivation of some P₃(ν′₃) containing ∆₁, and a two-way clause P₂(x + δ) ⇐ P₃(x). By induction hypothesis there is a linear path π′ from P₁ to P₃ which is admissible for ν′₁ with respect to J, and such that ν′₁[J] + v(π′)[J] = ν′₃[J]. Let the required linear path π be the concatenation of π′ with the edge P₃ → P₂ given by this clause. We have ν′₁[J] + v(π)[J] = ν′₃[J] + δ[J] = ν′₂[J] ≥ 0, hence π is admissible for ν′₁ with respect to J.
Suppose now that ∆₂ was built using Rule 3 of Definition 1 from subderivations of P₃(ν′₃) and P₄(ν′₄) and an addition clause P₂(x + y) ⇐ P₃(x), P₄(y), where, without loss of generality, ∆₁ is contained in the subderivation leading to the left premise. By induction hypothesis we get a linear path π′ from P₁ to P₃ which is admissible for ν′₁ with respect to J, and such that ν′₁[J] + v(π′)[J] = ν′₃[J]. By assumption (this is where we need it!), P₄(ν₄) is derivable from V for some vector ν₄ such that ν₄[i] = ν′₄[i] for all i with ν′₄[i] finite. Let the required linear path π be the concatenation of π′ with the edge P₃ → P₂ labeled by the pair P₂(x + y) ⇐ P₃(x), P₄(y); P₄(ν₄). This is a well-defined linear path. We have ν′₁[J] + v(π)[J] = ν′₃[J] + ν₄[J] = ν′₂[J] ≥ 0, and hence π is admissible for ν′₁ with respect to J. ✷

Example 3 Let us look at Example 1 again. Looking at Figure 1, take ∆₂ to be the whole covering derivation (of P₁(∞, 5)), and ∆₁ to be the subderivation on the left ending in P₃(5, 9). Take J = {2}.

Now we are ready to prove the required result:
Proposition 2 Let V be a branching VASS. For every covering derivation ∆ of some generalized fact P(ν′), and for any K ≥ 0, there is a tuple ν ∈ N^p such that P(ν) is derivable from V, ν[i] = ν′[i] for every i with ν′[i] finite, and ν[i] ≥ K for every i with ν′[i] = ∞.
Proof: By induction on ∆. If P(ν′) is a fact in V (rule 1 of Definition 1), then ν = ν′ satisfies the requirements. Otherwise, look at the last rule in ∆:
• If ∆ is constructed using rule 2 of Definition 1: then let I be the set of indices i such that ν′[i] = ∞, J be the set of all other indices, let I₁ be the set of indices i such that ν′₁[i] = ∞ and J₁ the set of all other indices. Clearly I₁ ⊆ I, hence J ⊆ J₁; let Iₐ be the set I \ I₁ of additional indices that are infinite in I compared to I₁. In the sequel, for any integer N, write N̄ for the vector (N, N, …, N); the number of components in N̄ will always be clear from the context.
Building paths πᵢ for all i ∈ Iₐ. By definition of Iₐ, for each i ∈ Iₐ there is a subderivation ∆ᵢ of ∆₁ that derives P(ν′ᵢ) for some generalized fact ν′ᵢ such that: (a) ν′ᵢ ≤ ν′₁ + δ, and (b) ν′ᵢ[i] < ν′₁[i] + δ[i]. By induction hypothesis, the assumptions of Lemma 1 are satisfied of ∆₁ and the set of indices J₁. From this lemma we get a linear path π′ᵢ from P to P₁ which is admissible for ν′ᵢ with respect to J₁ and such that: (c) ν′ᵢ[J₁] + v(π′ᵢ)[J₁] = ν′₁[J₁]. Let πᵢ be the path from P to P obtained by concatenating π′ᵢ with the edge P₁ → P given by the rule-2 clause; let π be the concatenation of the πᵢ, i ∈ Iₐ, and π_K its K-fold iteration. We now observe that: (e) for every j ∈ J₁, ν′ᵢ[j] is finite. Indeed, otherwise, by (a), ν′₁[j] would be infinite, contradicting the fact that j is in J₁.
• Suppose ∆ is constructed using rule 3 of Definition 1: let I be the set of indices i such that ν′[i] = ∞ and J its complement, let I₁ and I₂ be the sets of indices on which ν′₁, resp. ν′₂, is infinite, and J₁ and J₂ their complements. Let Iₐ be I \ (I₁ ∪ I₂).
Building paths πᵢ for all i ∈ Iₐ. By induction hypothesis there are vectors ν₁ and ν₂ such that P₁(ν₁) and P₂(ν₂) are derivable, agreeing with ν′₁, resp. ν′₂, on their finite components. As in the previous case, we define a linear path πᵢ for each i ∈ Iₐ, as follows. Contrarily to the previous case, πᵢ will depend on ν₁ and ν₂.
Since i ∈ Iₐ, there is a subderivation ∆ᵢ of ∆_{bᵢ}, for some bᵢ ∈ {1, 2}, that derives P(ν′ᵢ) for some generalized fact ν′ᵢ such that: (a) ν′ᵢ ≤ ν′₁ + ν′₂ and ν′ᵢ[i] < ν′₁[i] + ν′₂[i]. By Lemma 1 there is a linear path π′ᵢ from P to P_{bᵢ} which is admissible for ν′ᵢ with respect to J_{bᵢ} and such that: (c) ν′ᵢ[J_{bᵢ}] + v(π′ᵢ)[J_{bᵢ}] = ν′_{bᵢ}[J_{bᵢ}]. Let πᵢ be obtained by concatenating π′ᵢ with the edge P_{bᵢ} → P given by the addition clause, paired with the derivable fact at the other premise; let π be the concatenation of the πᵢ, i ∈ Iₐ, and π_K its K-fold iteration. Since π_K is admissible for ν′₁ + ν′₂ with respect to J₁ ∩ J₂, π_K is also admissible for ν₁ + ν₂ with respect to J₁ ∩ J₂. Since the union of I₁ ∪ I₂ and J₁ ∩ J₂ is the set of all indices {1, …, p}, it follows that π_K is admissible for ν₁ + ν₂. This implies that P(ν₁ + ν₂ + v(π_K)) is derivable.
Building the fact ν from π_K. Let therefore ν be ν₁ + ν₂ + v(π_K). The second claim follows from the fact that v(π_K)[J] = 0̄, from which we infer that v(π_K) has only finite components, and by definition the finite components of ν′ are sums of the corresponding components of ν′₁ and ν′₂. The first claim follows from the two sub-claims that ν[I] ≥ K̄ and that P(ν) is derivable from V.

Termination
Now we are left to prove that there are only finitely many covering derivations. This is Theorem 1 below.
Remark 1 Let V be a branching VASS, ∆ a covering derivation. Suppose that ∆₂, with conclusion P₂(ν′₂), is a subderivation of ∆ containing a subderivation ∆₁ with conclusion P₁(ν′₁). Then, by construction, ν′₂[i] = ∞ whenever ν′₁[i] = ∞. We say that ν′₂ has at least as many infinite coordinates as ν′₁. In case additionally ν′₁[i] ≠ ∞ and ν′₂[i] = ∞ for some i, we say that ν′₂ has more infinite coordinates than ν′₁. Otherwise, ν′₁ and ν′₂ have the same infinite coordinates.
Lemma 2 Let V be a branching VASS. Let ∆ be a covering derivation with conclusion P(ν′), containing a proper subderivation ∆₀ with conclusion P(ν′₀), with the same P, and such that ν′ > ν′₀, by which we mean that ν′ ≥ ν′₀ and ν′[i] > ν′₀[i] for some i. Then ν′ has more infinite coordinates than ν′₀.
Proof: Assume the contrary. From Remark 1, ν′ and ν′₀ have the same infinite coordinates. Clearly ∆ must have been constructed by using Rule 2 or Rule 3 of Definition 1. Accordingly we have the following two cases. In the first case, ∆ was built by Rule 2 from a subderivation ∆₁ of P₁(ν′₁), where ∆₀ is a subderivation of ∆₁, and ν′₁ has the same infinite coordinates as ν′. Then from the construction in Rule 2 of Definition 1, we must have ν′[i] = ∞ for any i such that ν′₀[i] < ν′[i], contradicting the fact that ν′ and ν′₀ have the same infinite coordinates.
In the second case, ∆ was built by Rule 3 from subderivations ∆₁ of P₁(ν′₁) and ∆₂ of P₂(ν′₂), where we may assume without loss of generality that ∆₀ is a subderivation of ∆₁. From Remark 1, ν′ and ν′₀ have the same infinite coordinates. Then from the construction in rule 3 of Definition 1, we must again have ν′[i] = ∞ for any i such that ν′₀[i] < ν′[i], a contradiction. ✷

Define the height H(∆) of the covering derivation ∆ as the height of the corresponding tree. Formally:

Definition 4 (Height) Taking the notations of Definition 1, the height H(∆) of the covering derivation ∆ is 1 if ∆ was created by rule 1, 1 + H(∆₁) if by rule 2, and 1 + max(H(∆₁), H(∆₂)) if by rule 3.
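Definition 4 is a standard structural recursion. A Python sketch (ours; derivations are encoded, hypothetically, as pairs of a fact and a list of premise derivations):

```python
def height(delta):
    """Height of a covering derivation per Definition 4: 1 for a rule-1
    leaf, 1 + H(Delta1) for rule 2 (one premise), and
    1 + max(H(Delta1), H(Delta2)) for rule 3 (two premises)."""
    _fact, children = delta
    if not children:
        return 1
    return 1 + max(height(c) for c in children)
```

For instance, a rule-3 node whose premises have heights 1 and 2 has height 3.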
Given a branching VASS V, we define a total ordering on covering derivations, based on height, as follows. For every n ≥ 1, there are only finitely many, say kₙ, covering derivations of height n. Let us enumerate them without repetition, arbitrarily, as ∆_{n,1}, …, ∆_{n,kₙ}. Then define ⊑ by ∆_{m,i} ⊑ ∆_{n,j} if and only if m < n, or m = n and i ≤ j.
Lemma 3 For a branching VASS V, the total ordering ⊑ on covering derivations has the properties that: 1. ⊑ is total: given any two covering derivations ∆₁ and ∆₂, either ∆₁ ⊑ ∆₂ or ∆₂ ⊑ ∆₁; 2. given any covering derivation ∆, there are only finitely many covering derivations ∆₁ such that ∆₁ ⊑ ∆.
Theorem 1 Every branching VASS V has only finitely many covering derivations.Furthermore, the set of covering derivations of V can be effectively computed.
Proof: We will construct a forest of all possible covering derivations; be aware that each node in this forest will be a whole derivation, not just a fact. Recall also that a forest is just a set of trees, which we shall call the component trees of the forest. This forest is constructed iteratively by adding one node at a time, using the following rules: 1. Each covering derivation constructed using rule 1 of Definition 1 is a root node. These are the only root nodes, so that there will be only finitely many component trees in the forest.
2. Suppose ∆ is constructed from the derivation ∆₁ as defined in Rule 2 of Definition 1, ∆₁ has been added to the forest, and ∆ has not been added. Then we add ∆ as a child of ∆₁.
3. Suppose ∆₁ and ∆₂ have been added to the forest, and ∆ is constructed from them as defined in Rule 3 of Definition 1 and has not been added to the forest. Since ⊑ is total (see Lemma 3), either ∆₁ ⊑ ∆₂ or ∆₂ ⊑ ∆₁. Let ∆′ be the greater of ∆₁ and ∆₂ in ⊑. Then we add ∆ to the forest as a child of ∆′. This is the crux of the proof: by choosing exactly one of ∆₁, ∆₂ here, each node will have at most one parent, so we are indeed building a forest (not a jungle). The particular choice of ∆′ ensures that the forest is finitely branching; this was trivial in the case of VASS.
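The three rules above amount to a saturation loop that assigns each new derivation a single parent. A schematic Python sketch (ours; the `extend` callback, which yields pairs of a new derivation and its premises, is a hypothetical interface standing in for rules 2 and 3):

```python
def build_forest(roots, extend):
    """Saturation sketch of the forest construction of Theorem 1.
    roots: the rule-1 derivations (the only root nodes).
    extend(known): pairs (d, premises) of derivations constructible in
    one step from derivations in `known`.
    Each new node's parent is its greatest premise; insertion order into
    `known` plays the role of the total ordering on derivations."""
    known = list(roots)
    parent = {d: None for d in roots}
    changed = True
    while changed:
        changed = False
        for d, premises in extend(known):
            if d not in parent:
                parent[d] = max(premises, key=known.index)
                known.append(d)
                changed = True
    return parent
```

On a toy instance where "c" is built from premises "a" and "b", the node "c" gets the later premise "b" as its unique parent, as in rule 3.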
It is clear that in this way all the covering derivations are added to the forest eventually. We claim that this process ends, i.e., the forest is finite. Assume the contrary. Since the number of component trees is finite, one of them would be infinite. The component trees are finitely branching: for any covering derivation ∆, the number of children created using Rule 2 above is bounded by the number of clauses, and the number of children created using Rule 3 above is bounded by the number of clauses times the finite number of covering derivations ∆₁ such that ∆₁ ⊑ ∆ (Lemma 3). Then by König's lemma, there would be an infinite path in the component tree, consisting of covering derivations ∆⁽¹⁾, ∆⁽²⁾, …, where each ∆⁽ᵏ⁾ is a subderivation of ∆⁽ᵏ⁺¹⁾. Some predicate P occurs as the conclusion of infinitely many of them; since (N ∪ {∞})^p is well-quasi-ordered under ≤ (Dickson's Lemma), two of these conclusions P(ν′ᵢ) and P(ν′ⱼ), i < j, satisfy ν′ᵢ ≤ ν′ⱼ. They cannot be equal, since the conclusion of ∆⁽ʲ⁾ would then also occur strictly inside it, so that ∆⁽ʲ⁾ could not be used as a premise in rules 2 or 3 of Definition 1, i.e., it would be a leaf of the forest. So ν′ᵢ < ν′ⱼ, and by Lemma 2, ν′ⱼ has more infinite coordinates than ν′ᵢ; this can happen at most p times along the path, a contradiction.
✷
Propositions 1, 2, and Theorem 1 entail that the emptiness, coverability and boundedness problems are decidable for branching VASS, just as they are for VASS, as we show next. Given a set S of predicates and a subset J of {1, …, p}, the branching VASS V is S, J-bounded if and only if {ν[J] | ν ∈ N^p and ∃P ∈ S such that P(ν) is derivable from V} is finite. It is now clear that V is S, J-bounded if and only if all its covering derivations with some conclusion P(ν′), P ∈ S, are such that ν′[i] ≠ ∞ for all i ∈ J, and this is decidable by Theorem 1. The coverability problem for V, a tuple ν₀, a set of states S and a subset J of {1, …, p}, asks whether there is a fact P(ν) derivable from V such that P ∈ S and ν[J] ≥ ν₀[J]. This is equivalent to testing whether some covering derivation ends in some generalized fact P(ν′) with P ∈ S and ν′[J] ≥ ν₀[J], which is decidable by Theorem 1. The emptiness problem asks whether there is any tuple ν and any P ∈ S such that P(ν) is derivable: this is decidable, as a special case of coverability.
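Once the finite set of covering-derivation conclusions is computed, the two tests just described are straightforward. A Python sketch (ours), where `conclusions` is the finite set of pairs (P, ν′) concluding the covering derivations:

```python
INF = float("inf")

def coverable(conclusions, S, J, nu0):
    """Coverability: is there a conclusion P(nu') with P in S and
    nu'[j] >= nu0[j] for all j in J?  Infinite components cover
    every finite target."""
    return any(P in S and all(nu[j] >= nu0[j] for j in J)
               for (P, nu) in conclusions)

def bounded(conclusions, S, J):
    """S,J-boundedness: no conclusion at a state in S may have an
    infinite coordinate inside J."""
    return all(not (P in S and any(nu[j] == INF for j in J))
               for (P, nu) in conclusions)
```

With the conclusions of Example 1, {P₁(∞, 5), P₃(5, 9)}, the state P₁ covers (100, 5) but is unbounded on its first coordinate.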

Application to AC Automata
We apply our results to equational tree automata, which were the initial motivation for this work. Given a signature Σ of function symbols with fixed arities and an equational theory E over terms built using symbols from Σ, an E-tree automaton P [Ver03c, Ver03a, Ver03b, GLV02] is a set of definite clauses of the form P(t) ⇐ P₁(t₁), …, Pₙ(tₙ), where P, P₁, …, Pₙ are predicates (a.k.a. states), and t, t₁, …, tₙ are terms built from symbols in Σ and variables x, y, z, …. This is similar to BVASS, where predicates represent states. The difference here is that we are working with terms instead of tuples of natural numbers. See also [CDG+97] (Chapter 7), which uses clausal notation to represent tree automata (in the absence of equational theories, though). The additional advantage of this notation here is that it clarifies the relationship between BVASS and equational tree automata. Derivable ground atoms are formally defined as before, now through ground substitutions σ and modulo =_E, the congruence on terms induced by the theory E. The language accepted by P is the set L_P(P) of terms t such that P(t) is derivable from P, where P is a designated final state. Note that the usual notion of tree automata, which accept regular languages, is conveniently viewed [CDG+97] as E-tree automata, by letting E be the empty theory and by suitably restricting the form of clauses.
Consider a signature Σ containing a binary symbol + and constants a₁, …, aₚ. AC is the theory stating that + is associative and commutative. We are also interested in the theory ACU, which additionally says that 0 is a unit of +, where 0 is a new constant added to the signature. Terms modulo ACU are exactly summations ∑ᵖᵢ₌₁ nᵢaᵢ, equivalently, tuples from N^p. Terms modulo AC are exactly the non-zero such tuples. The constant-only AC and ACU automata are built from addition clauses P(x + y) ⇐ P₁(x), P₂(y) and the following clauses exclusively [Ver03c, Ver03b]:

    P(a)    where a is a constant    (9)
    P(0)                             (10)

where clause (10) is present only in the ACU case. Considering terms as vectors, it is then natural to think of these automata as BVASS: P(aᵢ) corresponds to the clause P(0, …, 0, 1, 0, …, 0), where the only '1' is at position i. We don't require any two-way clause. The languages accepted in the ACU (resp. AC) case are then exactly L (resp. L \ {(0, …, 0)}), where L is a semilinear set. Now let's add standard + push clauses:

    P(x) ⇐ P₁(x + y)    (11)

Similar push clauses were considered in [Ver03c, Ver03b], except that they pushed through a free, i.e., non-equational symbol f (in particular, not +). Standard + push clauses are a timid attempt at allowing equational (AC, ACU) symbols in the body of clauses. In Section 5, we make an attempt at being a bit less shy. We shall remain in the constant-only case throughout (where all free function symbols are constants a), for simplicity. Clause (11) says that P should accept all the subterms (strict subterms in the AC case) of terms accepted at P₁. Such clauses can be added to constant-only ACU automata without increasing expressiveness, and the following table gives a linear time procedure for eliminating clauses (11):
With subtraction clauses, emptiness is no longer easier than reachability! Here is the idea. To test whether two states P₁ and P₂ accept at least one common tuple, add the following clauses, where P₀, P₃, P are fresh predicates:

    P₃(x − y) ⇐ P₁(x), P₂(y)
    P₀((0, …, 0))
    P(x − y) ⇐ P₀(x), P₃(y)

Then there is some common tuple accepted at both P₁ and P₂ iff there is some tuple accepted at P. This result shows the power of subtraction clauses, since now doing the Karp-Miller construction already involves solving a problem at least as difficult as VASS reachability.
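The gadget's semantics can be checked on finite samples. A Python sketch (ours; subtraction x − y is taken to be defined only when the result stays in N^p, which is what makes the trick work):

```python
def gadget_nonempty(L1, L2):
    """Sketch of the subtraction gadget on finite sets of tuples L1, L2:
    P3 accepts x - y (componentwise, defined only when >= 0) for x in L1,
    y in L2; P accepts 0 - z for z accepted at P3. P is nonempty iff
    some z and -z are both in N^p, i.e., z = 0, i.e., some x in L1
    equals some y in L2."""
    P3 = {tuple(a - b for a, b in zip(x, y))
          for x in L1 for y in L2
          if all(a - b >= 0 for a, b in zip(x, y))}
    P = {tuple(-c for c in z) for z in P3 if all(-c >= 0 for c in z)}
    return bool(P)
```

With L1 = {(1, 2), (3, 0)} and L2 = {(3, 0), (0, 5)}, the common tuple (3, 0) makes P nonempty; with disjoint samples such as {(1, 2)} and {(2, 1)}, P stays empty.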

Conclusion
We have studied a natural generalization of both Parikh images of context-free languages and of vector addition systems with states (VASS; special case: Petri nets), where derivations are both two-way (as in the latter) and branching (as in the former). For these so-called branching VASS, we have constructed an analogue of the Karp-Miller coverability tree construction for Petri nets. This allows us to conclude, as in the ordinary coverability tree construction, that emptiness, coverability, and boundedness are decidable for the class of branching VASS. The construction for branching VASS differs from the simpler case of VASS in that we construct covering derivations (analogues of finite prefixes of paths in ordinary Karp-Miller trees) in isolation (Section 2). Doing this, we lose the possibility of appealing directly to König's Lemma; we nonetheless managed to show that there are only finitely many covering derivations from a given branching VASS (Section 3), by a more technical argument that builds a forest whose nodes are covering derivations, with a subtle selection of the (necessarily unique) parent of each non-root node.
We have shown (Section 4) how this produces a simple proof that a natural extension of the constant-only AC automata considered in [Ver03c, Ver03b] with so-called standard +-push clauses actually reduces to the case without standard +-push clauses. This seems to require non-primitive-recursive time, however, contrarily to the ACU case, which also reduces, but in linear time. In [Ver03a], this result on the constant-only case has also been used to deal with the general case (extending the automata of [Ver03c, Ver03b] with standard +-push clauses, not just the constant-only automata).
In turn, the results of Section 4 prod us to explore further extensions of branching VASS.We have explained what challenges this entailed (Section 5).
Besides arising as a natural common generalization of two already well-established tools in computer science (Parikh images of context-free languages, and VASS), and having applications in equational tree automata, branching VASS are also useful in the completely different domain of linear logic, since decidability of reachability in branching VASS is equivalent to decidability of provability in multiplicative exponential linear logic, which is still an open problem. This confirms that branching VASS are interesting objects to study.
The result that there are only finitely many covering derivations can also be seen as a first step towards a positive answer to the question whether the reachability problem for branching VASS, and hence whether provability in MELL, is decidable or not. This is deferred to a later paper; additional difficulties lurk ahead, though, and notably there are no such things as Kirchhoff's laws [Lam92] in the extended context.

Fig. 1: A covering derivation for the BVASS in Example 1.

This work: Branching VASS. The purpose of the present paper is to show that the Karp-Miller construction extends to the case of so-called branching VASS, or BVASS, which extend both Parikh images of context-free languages and VASS.

Definition 2 (Linear Path) Given a branching VASS V, the linear paths π of V are the sequences P₁ → P₂ → … → Pₙ, where n ≥ 1, P₁, …, Pₙ are states (predicates) of V, and each arrow Pᵢ → P_{i+1} is an edge eᵢ as described in Section 2. If π₁ is the linear path P₁ → … → Pₙ and π₂ is the linear path Pₙ → … → P_{n+m}, then their concatenation π₁π₂ is defined as the linear path P₁ → … → P_{n+m}. Note that π₁π₂ is defined only when the last predicate of π₁ is equal to the first predicate of π₂. Clearly we have v(π₁π₂) = v(π₁) + v(π₂).

Definition 3 (Admissible Linear Path) Given ν ∈ (N ∪ {∞})^p, the linear path π is said to be admissible for ν if and only if, for each prefix π′ of π, we have ν + v(π′) ≥ 0. π is admissible for ν with respect to I ⊆ {1, …, p} if and only if ν[I] + v(π′)[I] ≥ 0 for all prefixes π′ of π, where ν[I] denotes the tuple consisting of the components ν[i] with i ∈ I. It is easy to see that if π is admissible for ν and P₁(ν) is derivable in V (in which case ν has no infinite coordinate), then Pₙ(ν + v(π)) is derivable. Also, if π₁ is admissible for ν and π₂ is admissible for ν + v(π₁), then π₁π₂ is admissible for ν.

Example 2 The following is an example of a linear path, ending in P₃. Note that the facts P₂(3, 4) and P₁(50, 5) are derivable in the branching VASS of Example 1.