Quantitative and Algorithmic aspects of Barrier Synchronization in Concurrency

In this paper we address the problem of understanding concurrency theory from a combinatorial point of view. We are interested in quantitative results and algorithmic tools to refine our understanding of the classical state-explosion phenomenon arising in concurrency. This paper focuses on the notion of synchronization from the point of view of combinatorics. As a first step, we address the quantitative problem of counting the number of executions of simple processes interacting with synchronization barriers. We elaborate a systematic decomposition of processes that produces a symbolic integral formula to solve the problem. Based on this procedure, we develop a generic algorithm to generate process executions uniformly at random. For some interesting sub-classes of processes we propose very efficient counting and random sampling algorithms. All these algorithms have one important characteristic in common: they work on the control graph of processes and thus do not require the explicit construction of the state-space.


Introduction
Schematically, the behaviour of a concurrent process can be seen as a set of atomic actions performed according to a certain ordering. In the concurrent paradigm, processes are decomposed into independent logical units, often called threads (or sub-processes), each performing a subset of the atomic actions. Because a thread executes its actions independently of the others, a given process (set of threads) may have several possible executions.
One of the main problems of concurrency theory is checking safety properties of processes, i.e. checking that all the possible executions are safe with respect to some logical proposition. In that context, the number of executions (which may be huge) is an obstruction. This is a symptom of the so-called "state explosion", a defining characteristic of concurrency. To understand, and possibly overcome, such an "explosion", we study the combinatorial problem of counting the number of executions of processes with respect to their number of actions. On a more practical side, finding efficient counting algorithms (for process sub-classes) enables the statistical analysis of process behaviors, based on the sampling of random executions. This is the second problem we address in our research project on the combinatorial study of concurrent systems.
Our methodology is to study these problems by considering models of concurrent processes of increasing expressivity. We model these processes using discrete structures such as trees, partial orders and acyclic digraphs.
For example, in [12] the processes we study can only perform atomic actions and fork child threads. This model is quite simple from a concurrency point of view but makes it possible to study the fundamental "feature" of parallelism. In terms of combinatorics, we studied trees (representing processes) and their increasing labelings (representing their executions), using tools of analytic combinatorics (see [15]).
In [11] we enrich this primitive language with non-determinism: the mechanism allowing a process to choose between executing one thread or another. For example, a process controlling a coffee machine may execute one thread or another depending on the button pressed by its user.
Given a process, all of its threads are executed in a common environment and share resources (e.g. computer memory, network, time). To access these resources, threads have to agree on an order of access (e.g. one thread reads a file, then another one writes to it). This communication may be achieved by another fundamental "feature" of concurrent processes: synchronization. In the present paper, our objective is to isolate that mechanism. For this, we introduce a simple process calculus (an abstract programming language) whose only non-trivial concurrency feature is a principle of barrier synchronization. This is understood here, intuitively, as a single point of control where multiple threads have to "meet" before continuing. It is one of the important building blocks for concurrent and parallel systems [19]. The main property of this process calculus is that it bridges the semantics of concurrent processes and the theory of partial orders. In particular, the processes without deadlocks (those which terminate) correspond to partial orders.
As a first step, we show that counting executions of concurrent processes is a difficult problem, even in the case of our calculus with limited expressivity. Thus, one important goal of our study is to investigate interesting sub-classes for which the problem becomes "less difficult". To that end, we elaborate in this paper a systematic decomposition of arbitrary processes, based on only four rules: (B)ottom, (I)ntermediate, (T)op and (S)plit. Each rule explains how to remove one node from the control graph of a process while taking into account its contribution to the number of possible executions. Indeed, one main feature of this BITS-decomposition is that it produces a symbolic integral formula to solve the counting problem. Based on this procedure, we develop a generic algorithm to sample process executions uniformly at random. Since the algorithm works on the control graph of processes, it provides a way to statistically analyze processes without constructing their state-space explicitly. In the worst case, the algorithm cannot, of course, overcome the hardness of the problem it solves. However, depending on the rules allowed during the decomposition, and also on the strategy adopted (the order of application of the rules), we isolate interesting sub-classes with respect to the counting and random sampling problems. We identify well-known structural sub-classes such as fork-join parallelism [17] and asynchronous processes with promises [21]. For these sub-classes we develop dedicated counting and random sampling algorithms: once the strategy is well understood, we can further simplify the decomposition so as to exhibit algorithms that do not actually remove nodes one by one.
A larger sub-class that we find particularly interesting is what we call the "BIT-decomposable" processes, i.e. only allowing the three rules (B), (I) and (T) in the decomposition. The counting formula we obtain for such processes is of a linear size (in the number of atomic actions in the processes, or equivalently in the number of vertices in their control graph).

Related work
Our study intermixes viewpoints from concurrency theory, order theory and combinatorics (especially enumerative combinatorics and random sampling). The heaps combinatorics (studied for example in [1]) provides a complementary interpretation of concurrent systems. One major difference is that it concerns "truly concurrent" processes based on the trace monoid, while we rely on the alternative interleaving semantics. A related uniform random sampler for networks of automata is presented in [4]. There, synchronization is interpreted on words using a notion of "shared letters". This is very different from the "structural" interpretation as joins in the control graph of processes. For the generation procedure, [1] requires the construction of a "product automaton", whose size grows exponentially with the number of "parallel" automata. By comparison, all the algorithms we develop are based on the control graph, i.e. the space requirement remains polynomial (unlike, of course, the time complexity in some cases). Thus, we can interpret this as a space-time trade-off between the two approaches. A related approach is that of investigating the combinatorics of lassos, which is connected to the observation of state spaces through linear temporal properties. A uniform random sampler for lassos is proposed in [23]. The generation procedure takes place within the constructed state-space, whereas the techniques we develop do not require this explicit construction. However, lassos represent infinite executions, whereas for now we only handle finite executions (or finite prefixes of executions).
A coupling-from-the-past (CFTP) procedure for the uniform random generation of linear extensions is described, with relatively sparse details, in [20]. The approach we propose, based on the continuous embedding of partially ordered sets into the hypercube, is quite complementary. A similar idea is used in [3] for the enumeration of Young tableaux, using what is there called the density method. The paper [18] advocates the uniform random generation of executions as an important building block for statistical model-checking. A similar discussion is proposed in [25] for random testing. The leitmotiv in both cases is that generating execution paths without any bias is difficult. Hence a uniform random sampler is very likely to produce interesting and complementary tests, compared to other test generation strategies.
Our work can also be seen as a continuation of the "algorithms and orders" studies [24] orchestrated by Ivan Rival in the late 1980s, now equipped with the powerful tools available in the modern combinatorics toolbox.

Outline of the paper
In Section 2 we introduce a minimalist calculus of barrier synchronization. We show that the control graphs of processes expressed in this language are isomorphic to arbitrary partially ordered sets (posets) of atomic actions. From this we deduce our rather "negative" starting point: counting executions in this simple language is intractable in the general case. In Section 3 we define the BITS-decomposition, and we use it in Section 4 to design a generic uniform random sampler. In Section 5 we discuss various sub-classes of processes related to the proposed decomposition, and for some of them we explain how the counting and random sampling problem can be solved efficiently. In Section 6 we propose an experimental study of the algorithm toolbox discussed in the paper.
Note that we provide online (i) the full source code developed in the context of this work, as well as the benchmark scripts. This paper is an updated and extended version of the papers [9] and [8]. It contains new material, especially the study of interesting process sub-classes. The proofs in this extended version are also more detailed.

Modelization of processes
As a starting point, we recast our problem in combinatorial terms. The idea is to relate the syntactic domain of process specifications to the semantic domain of process behaviors. Our model of a concurrent process is a set of atomic actions together with a set of precedence rules between some of these actions. As mentioned above, we introduce in this work a synchronization feature, leading to so-called barrier synchronization processes, in order to model synchronization in concurrent systems with properties suitable for a combinatorial study.

Syntactic and semantic domains
Let us start with the description of the process calculus we will deal with throughout the paper. First we describe its syntactic domain, i.e. the way processes are built.
Definition 2.1 (Syntax of barrier synchronization processes). We consider countably infinite sets A of (abstract) atomic actions (denoted by Greek letters α, β, γ, . . . in the following), and B of barrier names (denoted by capital letters B, C, G, . . . ). The set P of processes is defined by the following grammar:

P, Q ::= 0 (termination) | α.P (atomic action and prefixing) | ⟨B⟩P (synchronization) | ν(B)P (barrier and scope) | P ∥ Q (parallel)

where P, Q ∈ P, α ∈ A and B ∈ B.
The language has very few constructors and is purposely of limited expressivity (there is no constructor with infinite control flow such as recursion). Processes in this language can only perform atomic actions, fork child processes and interact using a basic principle of synchronization barrier.
Informally, the operator "." allows two steps to be executed consecutively, while the operator "∥" gives the opportunity to execute two processes in parallel. The two other operators allow sub-processes to be forked and synchronized.
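For concreteness, the grammar can be encoded as a small abstract syntax tree, e.g. in Python. This is a hypothetical encoding for illustration only: the constructor names and the labels `a1`, `b1`, . . . (standing for α1, β1, . . . ) are ours, not part of the calculus.

```python
from dataclasses import dataclass

# One class per production of the grammar of barrier synchronization processes.
@dataclass
class Zero:   # 0 : termination
    pass

@dataclass
class Act:    # alpha.P : atomic action and prefixing
    action: str
    cont: object

@dataclass
class Sync:   # <B>P : synchronization on barrier B
    barrier: str
    cont: object

@dataclass
class New:    # nu(B)P : barrier creation and scope
    barrier: str
    body: object

@dataclass
class Par:    # P || Q : parallel composition
    left: object
    right: object

def actions(p):
    """Collect the atomic actions occurring in a process term."""
    if isinstance(p, Zero):
        return []
    if isinstance(p, Act):
        return [p.action] + actions(p.cont)
    if isinstance(p, Sync):
        return actions(p.cont)
    if isinstance(p, New):
        return actions(p.body)
    return actions(p.left) + actions(p.right)

# Example (3) below: nu(B)[alpha1.<B>alpha2.0 || <B>beta1.0 || gamma1.<B>0]
example3 = New('B', Par(Act('a1', Sync('B', Act('a2', Zero()))),
                        Par(Sync('B', Act('b1', Zero())),
                            Act('g1', Sync('B', Zero())))))
assert sorted(actions(example3)) == ['a1', 'a2', 'b1', 'g1']
```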
Example 2.1. We present here three basic examples allowing us to illustrate valid processes and then, after having described the semantics we are interested in, to give their behaviors.
(1) α1.α2.0    (2) α1.α2.0 ∥ β1.β2.0    (3) ν(B) [α1.⟨B⟩α2.0 ∥ ⟨B⟩β1.0 ∥ γ1.⟨B⟩0]
The first example (1) is built by putting two distinct atomic actions in sequence, followed by the termination of the process. The second example (2) is nothing else than the parallel composition of the first example with a copy of it. Since both sub-processes terminate, there is no further need for a trailing 0 in the whole process. Throughout the paper we consider all the atomic actions to be distinct, thus their names convey no combinatorial meaning.
Finally, we exhibit a process (3) dealing with the notion of barrier. First a new barrier name B is created. The sub-processes lying in its scope each contain a synchronization on this barrier B. The use of this operator will be illustrated by its semantic behavior.
According to our grammar, we can also build the following process: α1.⟨B⟩α2.0. Indeed, there is no syntactic constraint forbidding a synchronization on an unknown barrier. But, as the reader will see after the semantic description, this last example turns out to be invalid under the semantics we will define.
In the semantic domain, we study, for a given process, the set of all its possible executions. The formal definition of each of the constructors is given below by an operational semantics (see Definition 2.2). But before going into details we present how the example processes behave.
Our simplest example (1) has just one execution path (or execution). Indeed, the process must execute sequentially the action α1 followed by α2, and then it reaches 0. This execution is denoted in the following by (α1, α2).
To conclude this series of examples we show the use of the synchronization constructors of the language. The ν constructor binds a barrier name inside the scope of a process. In some sense, ν(B) broadcasts the knowledge of barrier B to every sub-process in its scope. The chevron constructor ⟨B⟩ performs the synchronization of a process on the bound barrier B: a sub-process reaching ⟨B⟩ stops its execution until all the sub-processes containing ⟨B⟩ (i.e. knowing B) reach this step. Let us focus on our example (3).
The process starts with the declaration of a barrier B; then three sub-processes are put in parallel, all of them being in the scope of B, i.e. containing a synchronization step ⟨B⟩. Thus the whole process first performs the actions α1 and γ1 (in the two possible interleaving orders, either (α1, γ1) or (γ1, α1)). We then reach the state in which all the sub-processes agree to synchronize on barrier B (it was not the case before, thus the synchronization could not occur earlier). The remaining process to execute is: ν(B) [⟨B⟩α2.0 ∥ ⟨B⟩β1.0 ∥ ⟨B⟩0].
So the barrier can be "crossed" and the executions can end with any interleaving of α2 and β1. Finally, the semantics of the whole example is the set of the four executions (α1, γ1, α2, β1), (α1, γ1, β1, α2), (γ1, α1, α2, β1) and (γ1, α1, β1, α2). We now state these executions more formally by means of an operational semantics. An operational semantics describes how a process is executed given its inductive syntactic structure. The formalism is similar to that of sequent calculus: above the line there is a conjunction of "hypotheses", and the bottom part corresponds to the derivation of a process (the performing of a step). In that context, an execution of a process is formalized as a sequence of derivation steps.
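The four executions above can be double-checked mechanically: the barrier induces the precedences α1, γ1 before α2, β1, and the executions are exactly the orderings compatible with them. A brute-force sketch in Python (the encoding of the precedence pairs is ours):

```python
from itertools import permutations

# Precedence constraints induced by barrier B in example (3):
# every action before the barrier precedes every action after it.
acts = ['a1', 'g1', 'a2', 'b1']
precedences = [('a1', 'a2'), ('a1', 'b1'), ('g1', 'a2'), ('g1', 'b1')]

def respects(order, precedences):
    """Check that an ordering satisfies all precedence constraints."""
    pos = {a: i for i, a in enumerate(order)}
    return all(pos[a] < pos[b] for a, b in precedences)

executions = [order for order in permutations(acts)
              if respects(order, precedences)]
# Exactly the four interleavings listed in the text, and no others.
assert len(executions) == 4
```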
The operational semantics below characterizes process transitions of the form P −α→ P′, in which P can perform action α to reach its (direct) derivative P′.
Definition 2.2 (Operational semantics). The operational semantics of the process language is given by the rules below. The rule (act) allows the derivation of a process prefixed by an action. The rules (lpar) and (rpar) derive the left or the right process of a parallel composition; if both sides can be derived then both rules can be applied: that is the interleaving semantics.
The rule (sync) above explains the synchronization semantics for a given barrier B. The rule is non-trivial, given the broadcast semantics of barrier synchronization. Its definition is based on two auxiliary functions. First, the function sync_B(P) produces a derivative process Q in which all the possible synchronizations on barrier B in P have been effected. If Q has a sub-process that cannot yet synchronize on B, then the predicate wait_B(Q) is true and the synchronization on B is said to be incomplete. In this case the rule (sync) does not apply; however, the transitions within P can still happen through (lift).
For the sake of comprehension, an example of a derivation of the process (3) is given in the next section.

The control graph of a process
By using the semantic domain we define the notion of execution of a process.

Definition 2.3 (Execution). An execution σ of a process P is a finite sequence (α1, . . . , αn) such that there exist processes P′α1, . . . , P′αn and a path P −α1→ P′α1 −α2→ · · · −αn→ P′αn with P′αn ↛ (no transition is possible from P′αn). We assume that the occurrences of the atomic actions in a process expression all have distinct labels α1, . . . , αn. This is allowed since the actions are uninterpreted in the semantics (cf. Definition 2.2). Thus, each action α in an execution σ can be associated with a unique position, which we denote by σ(α). For example, if σ = (α1, . . . , αk, . . . , αn), then σ(αk) = k.
As announced before, we give an example of a derivation for the process (3).
In the same way, we can use (lift) again to perform the transition P′ −γ1→ P″ = ⟨B⟩α2.0 ∥ ⟨B⟩β1.0 ∥ ⟨B⟩0. Now the rule (sync) (with Q = sync_B(P″) = α2.0 ∥ β1.0 ∥ 0 and wait_B(Q) = false) allows us to "consume" the barrier B and thus to continue the computation with the transition P″ −β1→ α2.0 ∥ 0 ∥ 0. There remains one sub-process α2.0 and two terminated processes 0, all in parallel. The rule (lpar) then allows the transition −α2→, which ends this example.
Until now, we only presented examples of processes which can be derived with the operational semantics. Of course, that is not always the case.
Definition 2.4 (Deadlocks). Let P be a process. We say that an execution of P reaches a deadlock situation (or just a deadlock) if none of the rules of the operational semantics can be applied. In that case we say that P is deadlocked.
Example 2.3. The example α1.⟨B⟩α2.0 presented above is deadlocked. Indeed, no synchronization on B is possible. But there are also more intricate cases.
Let P be the process ν(B)ν(C) [⟨B⟩⟨C⟩α.0 ∥ ⟨C⟩⟨B⟩β.0]. Because the two parallel sub-processes cross the barriers in different orders, P is deadlocked.
The problem of detecting deadlocks is an important question in the context of concurrent systems and often a difficult one (e.g. PSPACE-complete for Petri nets [14]). However, due to the limited expressivity of barrier synchronization processes, it is easier here. To show this, we introduce the causal ordering relation over the atomic actions of a process.

Definition 2.5 (Cause, direct cause). Let P be a process. An action α of P is said to be a cause of another action β, denoted by α ≺ β, if and only if for any execution σ of P we have σ(α) < σ(β). Moreover, α is a direct cause of β, denoted by α ⋖ β, if and only if α ≺ β and there is no γ such that α ≺ γ ≺ β. The relation ≺ obtained from P is denoted by PO(P).
A partially ordered set (or poset) P is a pair (S, ⪯_P) where S is a set of elements and ⪯_P is a binary relation over S which is reflexive, antisymmetric and transitive. When there is no ambiguity we denote the relation simply by ⪯ and identify S with P. Given a poset P, a linear extension of P is a total ordering < (a connected, antisymmetric and transitive relation) of its elements such that for all a, b ∈ P, a ⪯_P b implies a < b. We may denote a linear extension by x1 < x2 < . . . or (x1, x2, . . . ).

Proposition 2.1. PO(P) is a partially ordered set (poset) with covering relation ⋖, capturing the causal ordering of the actions of P. Executions of P are equivalent to linear extensions of PO(P).
A directed acyclic graph (or DAG) is a directed graph D = (V, A), where V is the vertex set and A ⊆ V × V is the arc set, such that there is no directed path from a vertex to itself.
The covering relation → (or covering DAG) of a poset P is the irreflexive, antisymmetric and intransitive relation such that for all a, b ∈ P, a → b if and only if a ≺ b and there is no c ∈ P with a ≺ c ≺ b. The vertex set of the covering DAG is the set of elements of P.
A labeling of a graph with vertex set V is a bijection γ : V → {1, . . . , |V|} associating a unique integer with each vertex. When a graph is labeled, each vertex can be identified by its label.
Given a poset, there is a natural injection from the set of its linear extensions to the set of the labelings of its covering DAG which is obtained by labeling the vertices by their rank in a linear extension.
The covering of a partial order is by construction a DAG, hence the description of PO(P) itself is simply the transitive closure of the covering, yielding O(n²) edges over n elements. The worst case (maximizing the number of edges) is a complete bipartite graph with two sets of n vertices connected by n² edges (cf. Fig. 1), as produced by the following process:
ν(B) [α1.⟨B⟩0 ∥ α2.⟨B⟩0 ∥ . . . ∥ αn.⟨B⟩0 ∥ ⟨B⟩β1.0 ∥ ⟨B⟩β2.0 ∥ . . . ∥ ⟨B⟩βn.0]
For most practical concerns we will only consider the covering, i.e. the intransitive DAG obtained by the transitive reduction of the order. It is possible to directly construct this control graph, according to the following definition.
Definition 2.6 (Construction of control graphs). Let P be a process. Its control graph is ctg(P) = ⟨V, E⟩, constructed inductively as follows. Given a control graph Γ, the notation x ; Γ corresponds to prefixing the graph by a single atomic action. The set sources(E) corresponds to the sources of the edges in E, i.e. the vertices without an incoming edge. And ⊗⟨B⟩ Γ removes an explicit barrier node and connects all the processes ending in B to the processes starting from it. In effect, this realizes the synchronization described by the barrier B.
Theorem 2.2. Let P be a process; then P has a deadlock if and only if ctg(P) has a cycle. Moreover, if P is deadlock-free (hence ctg(P) is a DAG) then (α, β) ∈ ctg(P) if and only if α ⋖ β (hence the DAG is intransitive).
Proof idea: The proof is not difficult but slightly technical. The idea is to extend the notion of execution to go "past" deadlocks, thus detecting cycles in the causal relation. The details are given in Appendix A so as not to overload the core of the paper.
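Theorem 2.2 thus turns deadlock detection into a cycle test on the control graph, which can be sketched with a standard depth-first search. The two example graphs below are our own encodings: the acyclic one corresponds to example (3), the cyclic one mimics the crossed-barrier causality of Example 2.3.

```python
def has_cycle(graph):
    """DFS-based cycle detection on a digraph given as adjacency lists."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}

    def visit(v):
        color[v] = GREY
        for w in graph[v]:
            if color[w] == GREY:          # back edge: a cycle is found
                return True
            if color[w] == WHITE and visit(w):
                return True
        color[v] = BLACK
        return False

    return any(color[v] == WHITE and visit(v) for v in graph)

# Control graph of example (3): deadlock-free, hence acyclic.
ctg3 = {'a1': ['a2', 'b1'], 'g1': ['a2', 'b1'], 'a2': [], 'b1': []}
# Crossed barriers (in the spirit of Example 2.3) yield a causal cycle.
crossed = {'a': ['b'], 'b': ['a']}
assert not has_cycle(ctg3)
assert has_cycle(crossed)
```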
In Fig. 2 (top) we describe a system Sys written in the proposed language, together with the covering of POpSysq, i.e. its control graph (bottom). We also indicate the number of its possible executions, a question we address next.

The counting problem
One may think that in such a simple setting, any behavioral property, such as the counting problem that interests us, could be analyzed efficiently e.g. by a simple induction on the syntax. However, the devil is well hidden inside the box because of the following fact.  Theorem 2.3. Let U be a partially ordered set. Then there exists a barrier synchronization process P such that POpP q is isomorphic to U .
Proof sketch: Consider G the (intransitive) covering DAG of a poset U . We suppose each vertex of G to be uniquely identified by a label ranging over α 1 , α 2 , . . . , α n . The objective is to associate to each such vertex labeled α a process expression P α . The construction is done backwards, starting from the sinks (vertices without outgoing edges) of G and bubbling-up until its sources (vertices without incoming edges).
There is a single rule to apply, considering a vertex labeled α whose children β1, . . . , βk have already been processed and assigned processes P_β1, . . . , P_βk guarded by barriers B_β1, . . . , B_βk. In the special case where α is a sink we simply define Pα = ⟨Bα⟩α.0. In this construction it is quite obvious that α ≺ βi for each of the βi's, provided the barriers Bα, B_β1, . . . , B_βk are defined somewhere in the outer scope.
At the end we have a set of processes Pα1, . . . , Pαn associated with the vertices of G and we finally define P = ν(Bα1) . . . ν(Bαn) [Pα1 ∥ . . . ∥ Pαn].
That POpP q has the same covering as U is a simple consequence of the construction.
Corollary 2.4. Let P be a non-deadlocked process. Then (α1, . . . , αn) is an execution of P if and only if it is a linear extension of PO(P). Consequently, the number of executions of P is equal to the number of linear extensions of PO(P).
We now reach our "negative" result that is the starting point of the rest of the paper: there is no efficient algorithm to count the number of executions, even for such simplistic barrier processes.
Corollary 2.5. Counting the number of executions of a (non-deadlocked) barrier synchronization process with n atomic actions is #P-complete (ii). This is a direct consequence of [13], since counting executions of processes boils down to counting linear extensions of (arbitrary) posets.
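Despite the #P-completeness, counting linear extensions by dynamic programming over the down-sets (order ideals) of the poset is already far better than enumerating all n! permutations: its cost is proportional to the number of reachable down-sets rather than to n!. A sketch of this classical approach (the poset encoding is ours):

```python
from functools import lru_cache

def count_linear_extensions(elements, less_than):
    """Count linear extensions by DP over the down-sets of the poset.

    less_than(a, b) holds when a must occur before b (the covering
    relation suffices, since minimality is re-checked at every step)."""
    elems = tuple(elements)

    @lru_cache(maxsize=None)
    def count(remaining):
        if not remaining:
            return 1
        total = 0
        for x in remaining:
            # x can be scheduled first iff no remaining element precedes it
            if all(not less_than(y, x) for y in remaining if y != x):
                total += count(tuple(e for e in remaining if e != x))
        return total

    return count(elems)

# Poset of example (3): a1 and g1 precede a2 and b1.
prec = {('a1', 'a2'), ('a1', 'b1'), ('g1', 'a2'), ('g1', 'b1')}
assert count_linear_extensions(['a1', 'g1', 'a2', 'b1'],
                               lambda a, b: (a, b) in prec) == 4
```

Memoization is sound here because `remaining` is always kept as a subsequence of the initial tuple, so equal down-sets yield equal keys.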
BITS-decomposition of a process: shrinking a process to obtain a symbolic enumeration of executions

We describe in this section a generic (and symbolic) solution to the counting problem, based on a systematic decomposition of finite posets (thus, by Theorem 2.2, of process expressions) through their covering DAGs (i.e. control graphs).

Decomposition scheme
In Fig. 3 we introduce the four decomposition rules that define the BITS-decomposition. The first three rules are somewhat straightforward. The (B) rule (resp. (T) rule) allows us to consume a node with no outgoing (resp. incoming) edge and one incoming (resp. outgoing) edge. In a way, these two rules consume the "pending" parts of the DAG. The (I) rule allows us to consume a node with exactly one incoming and one outgoing edge. The final (S) rule takes two incomparable nodes x, y and decomposes the DAG into two variants: one for x ≺ y and one for the converse y ≺ x.
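The applicability of the (B), (I) and (T) rules is a pure degree condition on the DAG, which can be sketched as follows (the encoding of the DAG as successor lists is ours):

```python
def applicable_rules(dag):
    """Map each node of the DAG to the BITS rule (B), (I) or (T) it enables.

    dag: adjacency lists, node -> list of successors."""
    indeg = {v: 0 for v in dag}
    for succs in dag.values():
        for w in succs:
            indeg[w] += 1
    rules = {}
    for v in dag:
        i, o = indeg[v], len(dag[v])
        if i == 1 and o == 0:
            rules[v] = 'B'   # bottom: one incoming edge, no outgoing edge
        elif i == 1 and o == 1:
            rules[v] = 'I'   # intermediate: one incoming, one outgoing
        elif i == 0 and o == 1:
            rules[v] = 'T'   # top: no incoming edge, one outgoing edge
        else:
            rules[v] = None  # only the (S)plit rule applies directly
    return rules

# "N"-shaped poset: a -> c, b -> c, b -> d.
rules = applicable_rules({'a': ['c'], 'b': ['c', 'd'], 'c': [], 'd': []})
assert rules == {'a': 'T', 'b': None, 'c': None, 'd': 'B'}
```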
We now discuss the main interest of the decomposition: the incremental construction of an integral formula that solves the counting problem. The calculation is governed by the equations specified below the rules in Fig. 3, in which the current formula Ψ is updated according to the definition of Ψ′ in the equations (iii).
Note that in the (S) rule, Ψ_{x≺y} (resp. Ψ_{y≺x}) denotes the integral formula computed over the DAG with the added arc x → y (resp. y → x).
(ii) A function f is in #P if there is a polynomial-time non-deterministic Turing machine M such that for any instance x, f(x) is the number of accepting computations of M on input x. See for example [2]. (iii) Here Ψ′ does not denote the derivative; it is just a convenient notation when iterated.
Theorem 3.1. The integral formula built by the BITS-decomposition is equal to the number of linear extensions of the corresponding poset. Moreover, the applications of the BITS-rules are confluent, in the sense that all the sequences of (valid) rules reduce the DAG to an empty graph (iv).
The precise justification of the integral computation and the proof for the theorem above are postponed to Section 3.2 below. We first consider an example.
The DAG to decompose is of size 8, with nodes x1, . . . , x8. The decomposition is non-deterministic and several rules apply: e.g. we could "consume" the node x7 with the (I) rule; also, the (S)plit rule is always enabled. In the example, we decide to first remove the node x1 by an application of the (T) rule. We then show an application of the (S)plit rule to the incomparable nodes x3 and x4. The decomposition should then be performed on two distinct DAGs: one for x3 ≺ x4 and the other one for x4 ≺ x3 (the one pictured in the figure). We illustrate the second choice, and we further eliminate the nodes x7 then x5 using the (I) rule, etc. Ultimately all the DAGs are decomposed and the resulting integral computation evaluates to 14: there are exactly 14 distinct linear extensions of the example poset.

Embedding in the hypercube: the order polytope
The justification of our decomposition scheme is based on the continuous embedding of posets into the hypercube, as investigated in [26].
Definition 3.1 (Order polytope). Let P = (E, ≺) be a poset of size n. Let C be the unit hypercube defined by C = {(x1, . . . , xn) ∈ R^n | ∀i, 0 ≤ xi ≤ 1}. For each constraint xi ≺ xj of P we define the convex subset S_{i,j} = {(x1, . . . , xn) ∈ R^n | xi ≤ xj}, i.e. one of the half-spaces obtained by cutting R^n with the hyperplane {(x1, . . . , xn) ∈ R^n | xi − xj = 0}. The order polytope C_P of P is then the intersection of C with all the subsets S_{i,j}.

(iv) At the end of the decomposition, the DAG is in fact reduced to a single node, which is removed by an integration between 0 and 1.
Each linear extension, seen as a total order, can similarly be embedded in the unit hypercube. The order polytopes of the linear extensions of a poset P then form a partition of the embedding C_P, as illustrated in Fig. 4.

Fig. 4: From left to right: the unit hypercube, the embedding of the total order 1 < 2 < 3, and the embedding of the poset P = ({1, 2, 3}, {1 ≺ 2}) divided into its three linear extensions.
The number of linear extensions of a poset P, written |LE(P)|, is then characterized as a volume in the embedding: for a poset P of size n, |LE(P)| = n! · Vol(C_P), where Vol(C_P) is the volume, defined by the Lebesgue measure, of the order polytope C_P.
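The identity |LE(P)| = n! · Vol(C_P) can be illustrated by a crude Monte Carlo estimate of the polytope's volume. The sketch below uses the poset of example (3) and a fixed seed; the estimate is approximate by nature, which is enough to recover the exact count of 4 here.

```python
import random
from math import factorial

def estimate_executions(n, constraints, samples=200_000, seed=1):
    """Estimate n! * Vol(C_P) by sampling uniform points of the unit cube
    and testing the order-polytope constraints x_i <= x_j."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        x = [rng.random() for _ in range(n)]
        if all(x[i] <= x[j] for i, j in constraints):
            hits += 1
    return factorial(n) * hits / samples

# Poset of example (3): actions a1, g1 (indices 0, 1) precede a2, b1 (2, 3).
est = estimate_executions(4, [(0, 2), (0, 3), (1, 2), (1, 3)])
assert abs(est - 4) < 0.2   # the exact count is 4 linear extensions
```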
The integral formula introduced in the BITS-decomposition corresponds to the computation of Vol(C_P), hence we may now give the key ideas of the proof of Theorem 3.1.
Proof of Theorem 3.1: We begin with the (S) rule. Applied to two incomparable elements x and y, the rule partitions the polytope into two regions: one for x ≺ y and the other for y ≺ x. Obviously, the respective volumes of the two disjoint regions must be added. We focus now on the (I) rule. In the context of Lebesgue integration, the classical Fubini theorem allows us to compute the volume V of a polytope P as an iteration of integrals along each dimension, and this in all possible orders, which gives the confluence property. Thus, V = ∫ · · · ∫ 1_P((x, y, z, . . . )) dx dy dz · · ·, where 1_P is the indicator function of P, with 1_P((x, y, z, . . . )) = ∏_α 1_{Pα}(α) over the actions α, where Pα is the projection of P on the dimension associated with α. By convexity of P, the function 1_{Py} is the indicator function of a segment [x, z]. So the following identity holds: ∫ 1_{Py}(y) dy = ∫_x^z dy. Finally, the two other rules (T) and (B) are just special cases (taking x = 0, respectively z = 1).
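These iterated integrals can be evaluated exactly with rational arithmetic. The following sketch (our own minimal encoding of multivariate polynomials as exponent-to-coefficient maps) replays a BIT-decomposition of the "N"-shaped poset a ≺ c, b ≺ c, b ≺ d, whose bounds are always 0, 1 or a single variable, exactly as in the (B), (I), (T) rules:

```python
from fractions import Fraction

def integrate(poly, v, lo, hi):
    """Integrate a multivariate polynomial w.r.t. variable v from lo to hi.

    poly: {frozenset of (variable, exponent) pairs: Fraction};
    lo/hi: 0, 1, or a variable name (as in the B/I/T rules)."""
    out = {}
    for mono, coeff in poly.items():
        m = dict(mono)
        k = m.pop(v, 0)
        c = coeff / (k + 1)          # antiderivative of coeff * v^k
        for bound, sign in ((hi, 1), (lo, -1)):
            if bound == 0:
                continue             # v^(k+1) vanishes at 0
            m2 = dict(m)
            if bound != 1:           # substitute the bound variable for v
                m2[bound] = m2.get(bound, 0) + k + 1
            key = frozenset(m2.items())
            out[key] = out.get(key, Fraction(0)) + sign * c
    return {mo: c for mo, c in out.items() if c}

one = {frozenset(): Fraction(1)}
# (T) on a: integrate from 0 to c; (B) on d: from b to 1;
# (B) on c: from b to 1; finally b, the last node, from 0 to 1.
psi = integrate(one, 'a', 0, 'c')
psi = integrate(psi, 'd', 'b', 1)
psi = integrate(psi, 'c', 'b', 1)
psi = integrate(psi, 'b', 0, 1)
volume = psi[frozenset()]
assert volume == Fraction(5, 24)   # 4! * 5/24 = 5 linear extensions
```

The five linear extensions of the N poset can be checked by hand, confirming the volume computation.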

Uniform random generation of process executions
In this section we describe a generic algorithm for the uniform random generation of executions of barrier synchronization processes. The algorithm is based on the BITS-decomposition and its embedding in the unit hypercube. It has two essential properties. First, it works directly on the control graph (equivalently, on the corresponding poset), and thus does not require the explicit construction of the state-space of processes. Second, it generates possible executions of processes at random according to the uniform distribution. This is a guarantee that the sampling is not biased and reflects the actual behavior of the processes.
Algorithm 1: Uniform sampling of a simplex of the order polytope.

The input of Algorithm 1 is a poset over the set of points {x1, . . . , xn} (or equivalently its covering DAG). The decomposition scheme of Section 3 produces an integral formula I of the form ∫_0^1 F(yn, . . . , y1) dyn · · · dy1, with F a symbolic integral formula over the points x1, . . . , xn. The yi variables represent a permutation of the poset points giving the order followed along the decomposition; thus, the variable yi corresponds to the i-th point removed during the decomposition. We remind the reader that the evaluation of the formula I gives the number of linear extensions of the partial order. Now, starting with the complete formula, the variables y1, y2, . . . are eliminated, in turn, in an "outside-in" way. Algorithm 1 takes place at the i-th step of this process. At this step, the considered formula is of the form I = ∫_a^b f(yi) dyi. Note that in the subformula f(yi) the variable yi can only occur (possibly multiple times) as an integral bound.
In the algorithm, the variable C receives the result of the numerical computation of the integral I at the given step. Next we draw (with UNIFORM) a real number U uniformly at random between the integration bounds a and b. Based on these two intermediate values C and U, we perform a numerical solving for the variable t in the integral formula corresponding to the slice of the polytope along the hyperplane y_i = U. The result, a real number between a and b, is stored in the variable Y_i. The justification of this step is further discussed in the proof sketch of Theorem 4.1 below.
As long as I contains an integral, the algorithm is applied recursively, substituting the numerical value Y_i for the variable y_i in the integral bounds of I. If no integral remains, all the computed values Y_i are returned. As illustrated in Example 4.1 below, this allows one to select a specific linear extension of the initial partial order. The justification of the algorithm is given by the following theorem.
(v) The Python/SageMath implementation of the random sampler is available at the following location: https://gitlab.com/ParComb/combinatorics-barrier-synchro/blob/master/code/RandLinExtSage.py
Proof: The problem reduces to the uniform random sampling of a point p in the order polytope P. This is a classical problem about marginal densities, which can be solved by slicing the polytope and evaluating incrementally the n continuous random variables associated to the coordinates of p. More precisely, during the calculation of the volume of the polytope P, the last integration (of a univariate polynomial p(y)), done from 0 to 1, corresponds to integrating according to the variable y along the slices defined by the polytope P. So the polynomial $p(y)/\int_0^1 p(y)\,dy$ is nothing but the density function of the random variable Y. Thus, we can generate Y according to this density and fix it. Once this is done, we can inductively continue with the previous integrations to draw all the random variables associated to the coordinates of p. The linear complexity of Algorithm 1 follows from the fact that each partial integration deletes exactly one variable (which corresponds to one node); of course, at each step a possibly costly evaluation of the counting formula is required.
We now illustrate the sampling process based on Example 3.1 (page 12).
Example 4.1. First we assume that the whole integral formula has already been computed. To simplify the presentation we only consider (S)plit-free DAGs, i.e. DAGs decomposable without the (S) rule. Note that it would be easy to deal with the (S)plit rule: it suffices to choose one of the DAGs produced by the (S) rule with probability proportional to its number of linear extensions.
For example, taking back the DAG of Example 3.1, the DAG with the constraint "x_4 < x_3" will be chosen with probability 8/14: the number of its linear extensions divided by the number of linear extensions of the "full" DAG. Thus the following formula holds (the sub-formula between parentheses is the one denoted f(x_2) in the explanation of the algorithm). Now, let us apply Algorithm 1 to that formula in order to sample a point of the order polytope. In the first step, the normalizing constant C is equal to 8!/8; we draw U uniformly in [0, 1] and compute a solution of $\frac{8!}{8}\int_0^t \dots\, dx_2 = U$. That solution corresponds to the second coordinate of the point we are sampling. Continuing in this way, we obtain values for each of the coordinates: X_1 = 0.064…, X_2 = 0.081…, X_3 = 0.541…, X_4 = 0.323…, X_5 = 0.770…, X_6 = 0.625…, X_7 = 0.582…, X_8 = 0.892…
These values form a point in a simplex of the order polytope. Note that, almost surely, all coordinates are distinct. To find the corresponding linear extension we compute the rank of that vector: the order induced by the values of the coordinates corresponds to a linear extension of the original DAG. This is ultimately the linear extension returned by the algorithm.
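The slicing procedure of Algorithm 1 can be sketched in code. The following fragment is a simplified illustration of ours (all helper names are hypothetical): it hard-codes the integral formula of the small "V-shaped" poset x_1 < x_3, x_2 < x_3 rather than deriving it from a BITS-decomposition, draws the coordinates one by one by numerically inverting each sliced integral, and finally ranks the resulting point:

```python
import random

def invert_cdf(cdf, u, lo, hi, tol=1e-12):
    """Numerically solve cdf(t) = u for t in [lo, hi] by bisection
    (the 'numerical solving' step of Algorithm 1)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if cdf(mid) < u:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def sample_V_poset(rng):
    """Sample a linear extension of the poset x1 < x3, x2 < x3 by drawing
    a uniform point of its order polytope coordinate by coordinate, then
    ranking the coordinates."""
    # Outermost variable y3: volume is \int_0^1 y3^2 dy3 = 1/3, so the
    # normalized marginal density of y3 is 3*y^2, with CDF t^3.
    y3 = invert_cdf(lambda t: t ** 3, rng.random(), 0.0, 1.0)
    # Once y3 is fixed, y1 and y2 are uniform on [0, y3] (CDF t/y3).
    y2 = invert_cdf(lambda t: t / y3, rng.random(), 0.0, y3)
    y1 = invert_cdf(lambda t: t / y3, rng.random(), 0.0, y3)
    coords = [y1, y2, y3]
    # Rank the point: the order of the coordinate values is the extension.
    return tuple(sorted(range(3), key=lambda i: coords[i]))

rng = random.Random(0)
counts = {}
for _ in range(10_000):
    ext = sample_V_poset(rng)
    counts[ext] = counts.get(ext, 0) + 1
```

The two linear extensions (x_1, x_2, x_3) and (x_2, x_1, x_3) should each appear with frequency close to 1/2, as the uniform distribution demands.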

Characterization of important process sub-classes and link with BIT-decomposition
Thanks to the BITS-decomposition scheme, we can generate a counting formula for any (deadlock-free) process expressed in the barrier synchronization calculus, and derive from it a dedicated uniform random sampler. However, the (S)plit rule generates two summands; thus, if we cannot find common computations between the summands, the resulting formula can grow exponentially in the size of the process. If we avoid splits in the decomposition, the counting formula remains of linear size. This is, we think, a good indicator that the sub-class of so-called "BIT-decomposable" processes is worth investigating for its own sake. In this section, we first give some illustrations of the expressivity of this sub-class, and we then study the question of what it means for a process not to be BIT-decomposable. The first two subsections are extended results based on previously published papers: the first subsection extends the results of [6], [8], [7] and [10] by identifying the fragment of the barrier synchronization calculus corresponding to the studied partial orders and by providing unpublished proofs, while the second subsection presents results on a family of processes which generalizes the one studied in [10].

From tree posets to fork-join parallelism
In the following interesting sub-classes of processes, we aim at deriving quantitative properties such as the number of processes of a given size, or the average number of executions. To deal with such questions, the framework of analytic combinatorics is natural, so let us first recall some general notions. This formalism will also allow us to reprove classical results, like hook-length formulas, using our BIT-decomposition.

Combinatorial classes and specifications
A combinatorial class A is a set of discrete structures (words, graphs, etc.) with a size function |·| : A → ℕ such that for every non-negative integer n the set A_n = {α ∈ A : |α| = n} is finite. We denote by a_n the cardinality of A_n.
A combinatorial class may be labeled if its elements are labeled as defined before. The ordinary (resp. exponential) generating function A (resp. Ã) associated to an unlabeled (resp. labeled) combinatorial class A is defined by: $A(z) = \sum_{n \ge 0} a_n z^n$, resp. $\tilde{A}(z) = \sum_{n \ge 0} a_n \frac{z^n}{n!}$. Labeled combinatorial classes may be defined using symbolic specifications. This equational language allows one to define, inductively, labeled and unlabeled combinatorial classes using the following operators (where A and B are labeled combinatorial classes): E (neutral class), Z (atomic class), A + B (disjoint union), A ⋆ B (labeled product), SEQ(A) (sequence of elements of A), SET(A) (set of elements of A), A^□ ⋆ B (boxed product). Then the so-called symbolic method translates these definitions in terms of generating functions.
A typical example is the symbolic specification of the class C of Cayley trees (rooted labeled trees): C = Z ⋆ SET(C), whose exponential generating function satisfies C(z) = z·exp(C(z)). The boxed product (vi) forces the smallest label to be present in the left-hand structure; thus it allows one to define classes of increasingly labeled structures. For example, we can transform the previous specification into the one of the class G of increasingly labeled Cayley trees: G = Z^□ ⋆ SET(G). For a comprehensive study of symbolic specifications and generating functions, one can read the first chapter of [15].
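As an illustration of the symbolic method (our own sketch, not from the paper), the functional equation C(z) = z·exp(C(z)) can be solved coefficient by coefficient with exact rational arithmetic, recovering Cayley's formula n^(n-1) for the number of rooted labeled trees:

```python
from fractions import Fraction
from math import factorial

def series_exp(s, N):
    """exp of a truncated power series s (list of Fractions, s[0] == 0),
    via the recurrence n*e_n = sum_{k=1}^{n} k * s_k * e_{n-k}."""
    e = [Fraction(0)] * (N + 1)
    e[0] = Fraction(1)
    for n in range(1, N + 1):
        e[n] = sum(Fraction(k) * s[k] * e[n - k] for k in range(1, n + 1)) / n
    return e

def cayley_counts(N):
    """Solve C(z) = z * exp(C(z)) by fixed-point iteration and return
    the counts a_n = n! * [z^n] C(z) for n = 1..N."""
    C = [Fraction(0)] * (N + 1)
    for _ in range(N + 1):   # each pass fixes at least one more coefficient
        E = series_exp(C, N)
        C = [Fraction(0)] + [E[n - 1] for n in range(1, N + 1)]
    return [int(C[n] * factorial(n)) for n in range(1, N + 1)]
```

The counts 1, 2, 9, 64, 625, … match n^(n-1), as predicted by the symbolic method applied to C = Z ⋆ SET(C).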

Tree processes
If the control graph of a process can be decomposed using only the (B)ottom rule (or equivalently the (T)op rule), then it is rather easy to show that its shape is that of a tree. These are processes that cannot do much beyond forking sub-processes. For example, based on our language of barrier synchronization, it is very easy to encode processes whose control graphs are the (rooted) binary trees: T ::= 0 | α.(T ∥ T), or e.g. T ::= 0 | νB (α.⟨B⟩0 ∥ ⟨B⟩T ∥ ⟨B⟩T).
The good news is that the combinatorics of trees is well studied. This study relies on the combinatorial interpretation of processes as discrete structures, followed by the use of tools from the theory of analytic combinatorics (see [15] for a reference).
The equations (4) are very similar to the combinatorial specification B of binary trees, i.e. B = E + Z×B², which is the way we study syntactic processes. Concerning the semantics, as mentioned in Corollary 2.4, executions of a process P correspond to linear extensions of the poset PO(P). Another point of view is to consider increasing labelings of the covering DAG, which are isomorphic to linear extensions. Hence, from the previous unlabeled specification for B, the combinatorial class of binary tree processes, we can derive a labeled specification for R, the combinatorial class of their executions: R = E + Z^□ ⋆ R².
In the paper [11] we provide a thorough study of such processes, and in particular we describe very efficient counting and uniform random generation algorithms. Of course, this is not a very expressive sub-class in terms of concurrency.

Fork-join processes and Multi Bulk Synchronous Parallel computing
Thankfully, many results on trees generalize rather straightforwardly to fork-join parallelism, a sub-class we characterize inductively in Fig. 5. Informally, this proof system imposes that processes use their synchronization barriers according to a stack discipline: when synchronizing, only the last created barrier is available, which exactly corresponds to the traditional notion of a join in concurrency. Fig. 6 gives an example of a fork-join process P where the colored vertices correspond to "forks" and their related "joins" (note that h is both a fork and a join vertex). As for binary tree processes, we can design a combinatorial specification of the combinatorial class F of fork-join processes: F = E + Z×F + Z×F²×F.
Let us explain this specification from the proof system of Fig. 5. The first term E corresponds to the axiom (the leftmost rule) of Fig. 5; the second term Z×F corresponds to processes prefixed by an action; the last term Z×F²×F corresponds to processes composed of two parallel processes (third rule), prefixed by a barrier declaration (B added to the stack β in the fourth rule), and such that the next barrier reached has the same name as the last barrier stacked (fifth rule). This computation model is more realistic than tree processes. Actually, the Multi Bulk Synchronous Parallel (Multi-BSP) model of computation (see the seminal paper [28]) can be seen as a fork-join model of computation. The Multi-BSP model defines a tree of nested computational components: the leaves are the processors and the inner vertices are computers and larger units. For example, a tree of height 4 would be a data center (the root of the tree), composed of server racks (depth 1), each composed of servers (depth 2) with several multi-core processors (depth 3). The Multi-BSP model then requires that each vertex obeys the original BSP model. The BSP model states that the computations of the processing units are divided into supersteps, composed of (asynchronous) computations and communication requests (between processing units), and ending with a barrier synchronization during which the communications are processed. So supersteps at depth i correspond to fork-join processes where i barrier names are visible; put another way, they correspond to sub-DAGs of depth i from the root.
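On the level of ordinary generating functions, the specification F = E + Z×F + Z×F²×F translates to F(z) = 1 + zF(z) + zF(z)³. The coefficients (counting fork-join processes by size, under this reading of the specification) can be computed by a truncated fixed-point iteration; this is a sketch of ours, not the paper's code:

```python
def poly_mul(a, b, N):
    """Truncated product of two power series given as coefficient lists."""
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j > N:
                    break
                c[i + j] += ai * bj
    return c

def fork_join_counts(N):
    """Coefficients of F = 1 + z*F + z*F^3, the OGF translation of the
    specification F = E + Z*F + Z*F^2*F, by fixed-point iteration
    (pass k fixes the coefficient of z^k)."""
    F = [0] * (N + 1)
    for _ in range(N + 1):
        F3 = poly_mul(poly_mul(F, F, N), F, N)
        F = [1] + [F[n - 1] + F3[n - 1] for n in range(1, N + 1)]
    return F
```

For instance, the first coefficients are 1, 2, 8, 44, …: there are 2 fork-join processes with one atomic node (an action followed by the empty process, or a fork of two empty branches) under this size convention.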

The ordered product
Like in the case of binary tree processes, we can derive the class of increasingly labeled fork-join processes corresponding to their executions. But unlike the previous case, the boxed product is not expressive enough to give a specification of such an increasingly labeled class. Here we need a global constraint over the labels, such that the labels of the upper part (corresponding to the Z×F² term) are smaller than those of the bottom part of the poset (the last F term). That is the purpose of the ordered product, introduced in the context of species theory (see [5]), which we studied from an analytic combinatorics point of view in [8].
Definition 5.1. Let A and B be two labeled combinatorial classes and let α and β be two structures, respectively in A and in B. We define the class of labeled structures induced by α and β: $\alpha \star \beta = \{ (\alpha, f_{|\alpha|}(\beta)) \mid f_{|\alpha|}(\cdot) \text{ shifts the labels of } \beta \text{ by } |\alpha| \}$. Note that $f_{|\alpha|}$ is a relabeling function which shifts the labels of β (from 1 to |β|) by |α|. So the pair $(\alpha, f_{|\alpha|}(\beta))$ has labels from 1 to |α| inside the α part, and from 1+|α| to |β|+|α| inside the $f_{|\alpha|}(\beta)$ part. This guarantees that the set α ⋆ β is a set of well-labeled objects.
We extend the ordered product to combinatorial classes: $\mathcal{A} \star \mathcal{B} = \bigcup_{\alpha \in \mathcal{A},\, \beta \in \mathcal{B}} \alpha \star \beta$. In fact, the ordered product A ⋆ B contains the objects of the labeled product in which all the labels of the A-component are smaller than those of the B-component.
As usual, this operator over combinatorial classes translates into an operator over generating functions. Before introducing that translation we first recall the classical integral transforms: the combinatorial Laplace and Borel transforms (vii). From a combinatorial point of view, they define a bridge between exponential and ordinary generating functions. More precisely, we have respectively $\mathcal{L}_c\Big(\sum_{n\ge 0} a_n \frac{z^n}{n!}\Big) = \sum_{n\ge 0} a_n z^n$ and $\mathcal{B}_c\Big(\sum_{n\ge 0} a_n z^n\Big) = \sum_{n\ge 0} a_n \frac{z^n}{n!}$.
From a functional point of view, the combinatorial Laplace and Borel transforms correspond to classical integral transforms $\mathcal{L}_c(f)$ and $\mathcal{B}_c(f)$, where the real constant c is greater than the real part of all singularities of f(1/t)/t.
Analogously to the classical Laplace transform, the product of Laplace transforms can be expressed with a convolution product. We denote by f ∗ g the combinatorial convolution $\int_0^z f(t)\, g'(z-t)\, dt + g(0) f(z)$. Proposition 5.1. Let A and B be two labeled combinatorial classes. The exponential generating function C(z) associated to C = A ⋆ B satisfies three equations relating it to A(z) and B(z) (according to the context: formal series or integrable functions). The proof requires some background on the combinatorial Borel and Laplace transforms; the reader will find some general ideas in Appendix B.
Proof: Using Definition 5.1, we note that an object of C is given by an object of A and one of B, only shifting the labels of the second one. Thus the number of objects of size n in C is given by $\sum_{k=1}^{n-1} A_k \cdot B_{n-k}$. Note that the sum can also be derived more directly from a computation of the general term of $\mathcal{B}_c(\mathcal{L}_c(A(z))\, \mathcal{L}_c(B(z)))$.
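The counting identity in the proof is an ordinary (not binomial) convolution: the ordered product leaves no freedom in distributing labels between the two components. A small sketch of ours (the helper name is hypothetical) makes this concrete:

```python
def ordered_product_counts(a, b, N):
    """Counting sequence of the ordered product A * B: all labels of the
    A-component are smaller than those of the B-component, so the binomial
    relabeling factor of the usual labeled product disappears and
    c_n = sum_k a_k * b_{n-k}."""
    return [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(N + 1)]

# Example: A = B = the class of chains (one chain per size 1..5).
chains = [0, 1, 1, 1, 1, 1]
c = ordered_product_counts(chains, chains, 5)
```

Here c_n = n - 1 for n ≥ 2: an ordered pair of chains of total size n is determined by the split point alone, whereas the plain labeled product would multiply each term by a binomial coefficient.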
Observe that the ordered product gives a combinatorial interpretation of this adapted convolution. Note that the integral interpretation is valid when both generating functions A(z) and B(z) are integrable on their domain of definition. However, if for example A(z) = 1/(1-z), although $\mathcal{L}_c A(z)$ is not analytic, the function A(z) can still be a component of an ordered product.

Combinatorics of fork-join processes
The introduction of the ordered product allows us to define several classes of increasingly labeled fork-join processes with different constraints. Here we focus on the class F_ℓ of fork-join processes with ℓ-nested fork nodes (i.e. at most 2^ℓ processes can run in parallel), which models Multi-BSP architectures with ℓ levels of components. The specification of such processes is built in the same way as a specification for simple varieties of trees of bounded height ℓ. Thanks to the ordered product, we can define a specification N_ℓ for these fork-join processes with increasing labelings corresponding to their executions. Proposition 5.2. The generating function N_ℓ of the class N_ℓ satisfies a system of equations in which A(z) ⊙ B(z) denotes the colored product, defined in [8] by $\mathcal{L}_c(\mathcal{B}_c(A(z)) \cdot \mathcal{B}_c(B(z)))$.
Proof: The derivation is direct using the following standard properties of the combinatorial Laplace and Borel transforms: $\mathcal{L}_c\big(\int A(z)\big) = z\, \mathcal{L}_c(A(z))$ and $\mathcal{L}_c(A(z)^2) = \mathcal{L}_c(A(z)) \odot \mathcal{L}_c(A(z))$.
Proposition 5.3. $\mathcal{L}_c(N_\ell)$ is a rational function, with numerator P_ℓ(z) and denominator Q_ℓ(z) whose degrees d_ℓ are smaller than a bound d̄_ℓ made explicit in the proof. Moreover, P_ℓ and Q_ℓ are coprime and have only simple roots.
Proof: Before proving the claim by induction, we recall a basic property of the combinatorial Laplace transform: $\mathcal{L}_c(e^{az}) = \frac{1}{1-az}$.
For the base case the proof is direct: N_0(z) = exp(z) − 1, and so $\mathcal{L}_c(N_0) = \frac{z}{1-z}$. Now suppose, for some ℓ ≥ 1, that $\mathcal{L}_c(N_{\ell-1})(z) = \frac{P_{\ell-1}(z)}{Q_{\ell-1}(z)}$, where P_{ℓ-1} and Q_{ℓ-1} are polynomials of degree at most d_{ℓ-1}. Then, by Proposition 5.2 and the induction hypothesis, a partial fraction decomposition applies, with complex constants α, β and γ; the combinatorial Borel transform of that function is thus a sum of terms of the form $\alpha_i^{(\ell-1)} \exp(\beta_i^{(\ell-1)} z)$. By Laplace transform, that sum of exponential factors becomes a partial fraction expansion containing $\frac{(d_{\ell-1}+1)(d_{\ell-1}+2)}{2}$ poles (the β_i and their products). Due to possible cancellations, this number is in fact an upper bound for d_ℓ; it is the bound d̄_ℓ of the proposition statement. Every pole is simple by the induction hypothesis (all the β_i are distinct). Thus $\mathcal{L}_c(N_\ell(z))$ is a rational function with the claimed properties.
Using a computer algebra system like [27], we compute $\mathcal{L}_c(N_2(z))$. For ℓ = 3 the numerator and the denominator are of degree 66, so the computation becomes very hard. The proof is a direct application of singularity analysis.

Hook-length formula
To conclude this section, we present the hook-length formula (which we introduced in [8]). Hook-length formulas for trees allow one to compute the number of possible increasing labelings of a tree structure; here we obtain the extension to fork-join processes. The formula has the benefit of emphasizing the correspondence between fork-join processes and the class of series-parallel posets. In the decomposition, both the (B) and the (I) rules are needed, but following a tree-structured strategy. As we will see, using a good strategy allows one to obtain the result very efficiently.
For this, we need to define two kinds of sub-structures found in fork-join covering DAGs. Let P be a fork-join process. A largest series component X of P is a connected sub-process of P whose direct ancestor is a fork node and whose direct descendant is the corresponding join node. The set of largest series components of P is denoted by Se_P. Similarly, a largest parallel component Y of P is a disconnected sub-process composed of the two largest series components associated to the same pair of fork/join nodes. The set of largest parallel components of P is denoted by Pa_P.
Proof: Here we provide a new proof (different from the one given in [8]) based on the BIT rules. The theorem could be demonstrated using Möhring's formula [22]; however, a direct proof based on the integral formula of the BI-decomposition is proposed here.
The proof relies on an induction on the size of the process P. Suppose the result holds for fork-join processes of size smaller than n, and consider a process P of size n. First suppose P is the series composition of its root p and a second fork-join process Q; thus the size of Q is n−1. By the induction hypothesis we have an integral expression for |LE(Q)|/|Q|!; however, the last integration for Q in the context of P runs from α to 1 instead of from 0 to 1. We deduce |LE(P)| = |LE(Q)|; furthermore Se_Q = Se_P and Pa_Q = Pa_P, so the hook-length formula for P is satisfied. Now suppose P has a root p that is a fork node. We use its encoding as a tree to describe P conveniently: the root p has three subtrees P_1, P_2 and Q. The recursive strategy and the induction hypothesis reduce these three substructures to three nodes p_1, p_2 and q. The last integrations for P_1 and P_2 run between p and q, and the last integration for Q runs between p and 1. Then we can, for example, reduce p_1, p_2, q and finally p with, respectively, the rules (I), (I), (B) and (B). Thus $\frac{|LE(P)|}{|P|!} = \int_0^1 \Big( \int_p^1 \Big( \int_p^q \Big( \int_p^q \Psi_{P_1}\, dp_1 \Big) \cdot \Psi_{P_2}\, dp_2 \Big) \cdot \Psi_Q\, dq \Big)\, dp.$
Let us recall the following equation, proved by repeated integration by parts: $\int_a^1 (1-x)^r (x-a)^s\, dx = \frac{r!\, s!}{(r+s+1)!}\, (1-a)^{r+s+1}$.
Using this last result we compute |LE(P)|/|P|!. By the induction hypothesis we have, for all A ∈ {P_1, P_2, Q}, a hook-length expression for |LE(A)|; substituting these expressions into the previous equation yields the hook-length formula for P, which ends the proof by induction.
Corollary 5.6. For a fork-join process of size n, the counting problem has complexity O(n) in the number of arithmetic operations. There exists a uniform sampler (using an optimal number of random bits, up to a constant factor) with complexity $O(n\sqrt{n})$ on average.

Proof:
The counting algorithm is easily derived from the hook-length formula. First we compute and memoize the factorials of the integers from 1 to n. Then a traversal of the graph in a "bottom-up" fashion allows us to collect the sizes of the largest series and parallel components. At each step a constant number of arithmetic operations is performed (because the factorials have been precomputed), hence the O(n) complexity. The uniform sampler proceeds by induction. If the process falls in the Z×F class, then it draws a linear extension of the sub-process in F prefixed by an action. Otherwise, the process falls in the Z×F²×F class: a linear extension is sampled for each sub-process, then the two extensions of the upper processes are shuffled and the result is concatenated with the extension of the bottom process. The number of random bits used by the shuffling procedure is the key to achieving the claimed optimality; details are given in [7] and are out of the scope of this paper. To show the $O(n\sqrt{n})$ time complexity, note that each vertex is manipulated a number of times proportional to its depth in the tree-like structure, so the sum of these numbers is proportional to the path length of the tree: $O(n\sqrt{n})$ on average in this tree model (see for example [15, p. 185]).
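The counting part of this proof can be sketched on a series-parallel representation of fork-join processes. The encoding below (nested tuples, our own hypothetical representation; fork/join control vertices are not counted) multiplies the counts of series components and inserts the binomial shuffle factor at each fork, matching the O(n)-arithmetic-operations bound once binomials are tabulated:

```python
from math import comb

# A fork-join (series-parallel) process as nested tuples:
#   ('act', P)        -- an atomic action followed by P
#   ('fork', A, B, R) -- run A and B in parallel, join, then continue with R
#   None              -- the empty process
def size(p):
    if p is None:
        return 0
    if p[0] == 'act':
        return 1 + size(p[1])
    _, a, b, r = p
    return size(a) + size(b) + size(r)

def count_execs(p):
    """Number of executions (linear extensions): a series prefix does not
    change the count, while a fork shuffles the executions of its two
    branches, contributing a binomial factor."""
    if p is None:
        return 1
    if p[0] == 'act':
        return count_execs(p[1])
    _, a, b, r = p
    return (comb(size(a) + size(b), size(a))
            * count_execs(a) * count_execs(b) * count_execs(r))

chain2 = ('act', ('act', None))        # a chain of two actions
P = ('fork', chain2, chain2, None)     # two parallel chains of length 2
```

For P above, the count is C(4, 2) = 6, the number of ways to interleave the two branches; prefixing P with an action leaves the count unchanged, as in the series case of the proof.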

Asynchronism with promises
We now discuss another interesting sub-class of processes that can also be characterized inductively on the syntax of our process calculus, but this time using the three BIT-decomposition rules (in a controlled manner). The stack discipline of fork-join processes imposes a form of synchronous behavior: all the forked processes must terminate before a join may be performed. To support a form of asynchronism, a basic principle is to introduce promise processes. In concurrent programming, it is often the case that the logic of a program is mainly iterative (step by step) and implemented in a "main" thread, while time-consuming computations are needed along the way. In that case it is convenient to spawn "promise" threads at the beginning of the master thread, which will gather the results only when needed. A very common instance of this method is the rendering of web pages: the rendering of the whole page (the main thread) is not blocked by the loading of a video, because the loading is done in a promise thread. In Fig. 7 we define a simple inductive process structure composed as follows. A main control thread can perform atomic actions (at any time), and can also fork a sub-process of the form νB (P ∥ Q), but with strong restrictions: • a single barrier B is created for the sub-processes to interact; • the left sub-process P must be the continuation of the main control thread; • the right sub-process Q must be a promise, which can only perform a sequence of atomic actions and ultimately synchronize with the control thread.
We are currently investigating this class as a whole, but we have already obtained interesting results for arch-processes in [10]. An arch-process follows the constraint of Fig. 7 but adds further restrictions. The main control thread can still spawn an arbitrary number of promises; however, there must be two separate phases for the synchronization: after the first promise synchronizes, the main control thread cannot spawn any new promise. In [10] a supplementary constraint is added (for the sake of algorithmic efficiency): each promise must perform exactly one atomic action, and the control thread can only perform actions when all the promises are running. In this paper, we remove this rather artificial constraint, considering a larger and more useful process sub-class.
Fig. 8 (left) represents the structure of a generalized arch-process. The a_i actions are the promise forks, and the synchronization points are the c_j. The constraint is thus that all the a_i occur before the c_j.
Theorem 5.7. The number of executions of a promise process can be calculated in O(n²) arithmetic operations, using a dynamic programming algorithm based on memoization.
Proof: Start with a generalized arch-process P and denote by ‖P‖ its number of executions. To simplify the approach, let us first modify the first promise: we replace the promise from a_1 to c_1 containing the sequence b_{1,1}, …, b_{1,s_1} by two promises, both from a_1 to c_1. The first promise contains only b_{1,1}, and the second one contains the rest of the sequence (if any actions remain), b_{1,2}, …, b_{1,s_1}. Let us denote by P̃ this new process. The number ‖P‖ is equal to the number of executions ‖P̃‖ of P̃ divided by s_1, because now b_{1,1} is shuffled with b_{1,2}, …, b_{1,s_1}.
Let us now introduce an inclusion-exclusion argument in order to count the number of executions of P̃. The basic idea is the following (we will refine it below). If we replace the synchronization of b_{1,1} at c_1 by a synchronization later in the control thread, at some c_k, then we allow new executions that are not correct for P̃; in order to remove them, we subtract the number of executions of the process with a new promise starting at c_1, synchronizing at c_k, and containing only b_{1,1}.
In the right-hand side of Fig. 8 we go one step further. There we focus on the control thread and the promise associated to b_{1,1}; to obtain a clear representation we omit the other promises. Thus the representation associated to P̃ is the leftmost one, in black (with the first promise b_{1,1}, …, b_{1,s_1} divided into two promises). Let us denote the partially colored processes A (in red), B (in blue) and C (in green). Then the number of executions is ‖P̃‖ = ‖A‖ − ‖B‖ + ‖C‖.
Let us denote by Ã the process A where b_{1,1} is removed. The executions of A are such that b_{1,1} can appear anywhere between a_1 and c_k in the executions of Ã. Thus ‖A‖ = (n−2)·‖Ã‖. Remark that Ã is a promise process (of size n−1), so we can recurse into it to compute its number of executions.
In the process B, we can insert the action b_{1,1} at the front of the promise starting at a_k, i.e. just before b_{k,1}. Doing so reduces the number of executions (due to the shuffling) by a factor 1/(s_k+1), and the new process is now a promise process.
Finally, for the process C we can insert b_{1,1} just before a_{k,1}; this choice reduces the number of executions by a factor 1/(r_k+1), and the new process is again a promise process. With the same argument as before we can continue recursively.
Finally, the proof of Theorem 5.7 for a promise process P derives from the fact that one must consider all the promise processes induced by our transformations, but the only two values that change through the recursive calls are r_k and s_k. In fact, both sequences (a_{k,r})_{r=1,…,r_k} and (b_{k,s})_{s=1,…,s_k} can be increased by at most n nodes (arriving from promises). We deduce that a dynamic programming approach, with memoization of the computed values, yields ‖P‖ in O(n²) arithmetic operations.
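The dedicated O(n²) counter is specific to promise processes; when validating such algorithms on small instances, a generic baseline is convenient. The following exponential-time dynamic programming over subsets (a standard technique, sketched by us and not part of the paper's toolbox) counts the linear extensions of an arbitrary DAG:

```python
from functools import lru_cache

def count_linear_extensions(n, edges):
    """Count linear extensions of the poset on {0,..,n-1} given by
    'edges', where (i, j) means i < j, by dynamic programming over the
    subsets of already-scheduled elements (O(2^n * n) time, so usable
    only for small validation instances)."""
    preds = [0] * n
    for i, j in edges:
        preds[j] |= 1 << i

    @lru_cache(maxsize=None)
    def go(done):
        if done == (1 << n) - 1:
            return 1
        total = 0
        for v in range(n):
            # v is schedulable if not yet done and all predecessors done
            if not (done >> v) & 1 and preds[v] & ~done == 0:
                total += go(done | (1 << v))
        return total

    return go(0)
```

For instance, the V-shaped poset x_1 < x_3, x_2 < x_3 has 2 linear extensions, a 3-element antichain has 3! = 6, and a chain has exactly 1.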
From this counting procedure we developed a uniform random sampler following the principles of the recursive method, as described in [16].

Algorithm 2 Uniform random sampling
We suppose here that all the promises contain a single action; we must take care of a corresponding factor in the counting part of the algorithm. Theorem 5.8. Let P be a promise process of size n. Algorithm 2 is a uniform sampler of the linear extensions of P with O(n⁴) time complexity in the number of arithmetic operations.
By comparing promise processes to arch-processes (from [10]), we remark a significant combinatorial change. In the latter case, the sub-problem induced by the second process (associated to B) was exactly the same as the one of P, and thus the uniform recursive sampling could be obtained efficiently, in O(n) arithmetic operations (once a quadratic-time pre-computation has been memoized).
Proof: One notable aspect is that, in order to get rid of the forbidden executions associated to the "virtual" promise B, we cannot rely on rejection alone (the induced complexity would be exponential).
Thus, in the promise process, we adapt the recursive method by proceeding by case analysis: for each possible insertion of b_{1,1} in the main control thread we compute the relative probability of the associated process. For each action we have at most n possible insertions, thus n sub-problems analogous to the pre-computation to calculate; and globally we have at most n actions to insert in the control thread. This gives the O(n⁴) complexity.

Experimental study

In this section, we put into use the various algorithms for counting and generating process executions uniformly at random. Tab. 1 summarizes these algorithms and the associated worst-case time complexity bounds (when known). We implemented all the algorithms in Python 3, without optimizing for efficiency; hence the numbers we obtain only give a rough idea of their performance. For the sake of reproducibility, the whole experimental setting is available in the companion repository, with explanations about the required dependencies and usage. The computer we used to perform the benchmark is a standard laptop PC with an i7-8550U CPU and 8 GB of RAM, running Manjaro Linux. As an initial experiment, the example of Fig. 2 is BIT-decomposable, so we can apply the BIT and CFTP algorithms. The counting (of its 1975974 possible executions) takes about 0.3 s; it takes about 9 milliseconds to uniformly generate an execution with the BIT sampler, and about 0.2 s with CFTP. For "small" state spaces, we observe that BIT is always faster than CFTP. (viii) The CFTP algorithm is the only one we did not design but only implemented; its complexity is O(n³·log n) (randomized) expected time. For arch-processes of size 100 with 2 or 32 arches, the CFTP algorithm times out (30 s) on almost all of the input graphs.
For a more thorough comparison of the various algorithms, we generated random processes (uniformly at random among all processes of the same size) in the classes of fork-join (FJ) and arch-processes discussed in Section 5, using our own Arbogen tool (ix) or an ad hoc algorithm for arch-processes (presented in the companion repository). For the fork-join structures, the size is simply the number of atomic actions in the process. It is not a surprise that the dedicated algorithm we developed in [7] outperforms the other algorithms by a large margin: in a few seconds it can handle extremely large state spaces, which is due to the large "branching factor" of the process "forks". The arch-processes represent a more complex structure, so the numbers are less "impressive" than in the FJ case. To generate the arch-processes (uniformly at random), we used the number of atomic actions as well as the number of spawned promises as the main parameters; hence an arch of size 'n:k' has n atomic actions and k spawned promises. Our dedicated algorithm for arch-processes is also rather effective, considering the state-space sizes it can handle: in less than a minute it can generate an execution path uniformly at random for a process of size 200 with 66 spawned promises, whose state-space is in the order of 10^130. Also, we observe that in all our tests the observed "complexity" is well below O(n⁴). The reason is that we perform the pre-computations (corresponding to the worst case) in a just-in-time (JIT) manner, and in practice we only actually need a small fraction of the computed values. The random sampler itself is much faster with a separate pre-computation: for arch-processes of size 100 with 32 arches, the sampler becomes about 500 times faster. However, the memory requirement of the full pre-computation grows very quickly, so that the JIT variant is clearly preferable.
In both the FJ and arch-process cases, the current implementation of the BIT algorithms is not entirely satisfying. One reason is that the strategy we employ for the BIT-decomposition is quite "oblivious" to the actual structure of the DAG: for instance, it handles fork-joins far better than arch-processes. In comparison, the CFTP algorithm is less sensitive to the structure and performs quite uniformly on the whole benchmark. We are still confident that, by handling the integral computation with an ad hoc method, the BIT algorithms could handle much larger state spaces. For now, they are only usable up to a size of about 40 nodes (which already corresponds to a rather large state space).

Conclusion and future work
The process calculus presented in this paper is quite limited in terms of expressivity: in fact, as the paper makes clear, it can only be used to describe (intransitive) directed acyclic graphs! However, we still believe it is an interesting "core synchronization calculus", providing the minimal set of features so that processes are isomorphic to the whole combinatorial class of partially ordered sets. Of course, to become of any practical use, the barrier synchronization calculus should be complemented with e.g. non-deterministic choice (as we investigate in [11]).
Moreover, the extension of our approach to iterative processes remains full of largely open questions. Another interest of the proposed language is that it can be used to define process (hence poset) subclasses in an inductive way; we give two illustrations in the paper with the fork-join processes and the promises. This is complementary to definitions in terms of combinatorial properties, such as the "BIT-decomposable" sub-classes. The class of arch-processes (which we study in [10]) and the promise processes introduced here are also interesting: they are combinatorially-defined sub-classes of the inductively-defined asynchronous processes with promises. We find quite enlightening this meeting of two distinct points of view: concurrency theory and combinatorics. Even for the "simple" barrier synchronizations, our study is far from finished because we are, in a way, also looking for "negative" results. The counting problem is hard, which is of course tightly related to the infamous "combinatorial explosion" phenomenon in concurrency. In fact, we believe that the problem remains intractable for the class of BIT-decomposable processes, but this is still an open question that we intend to investigate further. By delimiting the "hardness" frontier more precisely, we hope to find more interesting sub-classes for which efficient counting and random sampling algorithms can be developed.

(ix) Arbogen is a uniform random generator for context-free grammar structures: cf. https://github.com/fredokun/arbogen.

A Appendix: Extended semantics
In this appendix we give a detailed proof of Theorem 2.2, which establishes the connection between processes and their control graphs. One limitation of the semantics given in the main body of the paper is that deadlocks are not recorded: deadlocked executions simply stop. We thus consider in Fig. 9 a more detailed semantics that preserves all the information of the process executions, in particular by keeping track of the barriers used in the synchronization steps.

Proposition A.1. Any execution σ of the standard semantics can be translated into an extended execution with explicit barriers.

Proof: This is by rule induction on the standard semantics.
Definition A.1 (Extended execution of a process). An extended execution σ of P is a finite sequence ⟨µ_1, . . . , µ_n⟩ such that there are processes P′_{µ_1}, . . . , P′_{µ_n} and a path P =µ_1⇒ P′_{µ_1} · · · =µ_n⇒ P′_{µ_n} with P′_{µ_n} ↛ (no further transition). The extended behavior of a process P is the set of all its extended executions.

Proposition A.2. Even for a deadlocked process there exists (at least) one extended execution eventually reaching a termination.
Proof: This is trivial by induction on the syntax since, except for terminated processes (e.g. 0 or an equivalent form such as ν(B)0, 0 ‖ 0, etc.), at least one rule of Fig. 9 is enabled.

Now the connection between normal and extended executions is straightforward.

Proposition A.3. Let P be a deadlock-free process and σ one of its extended executions. Then there is a normal execution of P that is exactly σ with all its explicit barriers removed.
Proof: This is by definition of the executions and Proposition A.1, assuming of course that deadlock-free processes always have normal transitions until their completion.
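Proposition A.3 can be read operationally as a simple erasure on traces. The sketch below assumes a hypothetical event encoding (not the paper's concrete syntax): a barrier event ⟨B⟩ is the pair `('sync', B)` and an atomic action a is `('act', a)`.

```python
def erase_barriers(extended_execution):
    """Proposition A.3 as an operation on traces: a normal execution is
    an extended one with the explicit barrier events <B> erased.
    Events are hypothetically encoded as ('sync', B) or ('act', a)."""
    return [ev for ev in extended_execution if ev[0] != 'sync']

# a ; <B> ; b  erases to  a ; b
assert erase_barriers([('act', 'a'), ('sync', 'B'), ('act', 'b')]) \
    == [('act', 'a'), ('act', 'b')]
```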
We now promote the causal relations to extended executions.
Definition A.2 (Extended cause, extended direct cause). Let P be a process. An action α of P is said to be an extended cause of another action β, denoted by α ≤ β, iff for any extended execution σ of P we have σ(α) ≤ σ(β). Moreover, α is an extended direct cause of β, denoted by α ≺ β, iff α ≤ β with α ≠ β and there is no γ, distinct from α and β, such that α ≤ γ ≤ β.
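On a finite set of actions, the extended direct-cause relation of Definition A.2 is the covering relation of the cause order and can be computed naively by checking for intermediate actions. The following is a minimal sketch under the assumption that the cause relation is given as a hypothetical predicate `leq(a, b)` encoding α ≤ β on an explicit list of actions.

```python
def direct_causes(actions, leq):
    """Extended direct causes (Definition A.2): alpha is a direct cause
    of beta iff alpha < beta and no gamma lies strictly between them.
    `leq` is a hypothetical predicate for the (extended) cause order."""
    def lt(a, b):               # strict cause: alpha <= beta and alpha != beta
        return leq(a, b) and a != b
    return {(a, b) for a in actions for b in actions
            if lt(a, b) and not any(lt(a, g) and lt(g, b) for g in actions)}

# Chain a <= b <= c: the direct causes are (a,b) and (b,c), not (a,c).
order = {('a', 'a'), ('a', 'b'), ('a', 'c'),
         ('b', 'b'), ('b', 'c'), ('c', 'c')}
leq = lambda x, y: (x, y) in order
assert direct_causes(['a', 'b', 'c'], leq) == {('a', 'b'), ('b', 'c')}
```

This is the transitive reduction of the (strict) cause relation, cubic in the number of actions in this naive form.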
Proposition A.4. For deadlock-free processes the normal and extended causal relations coincide.
Proof: This is a direct consequence of Proposition A.3.
We are now concerned with deadlocked processes.
Proposition A.5. A process P has a deadlock if and only if there is an extended execution σ and a barrier B such that the event ⟨B⟩ occurs at least twice in σ.
Proof: A simple observation is that the only rule that can generate an immediate deadlock is (sync). So a deadlocked process P must have a subprocess of the form ν(B) Q such that only rule (sync) can be triggered, but for sync_B(Q) = Q′ we have wait_B(Q′) = true. In the extended executions, the event ⟨B⟩ will still be recorded for Q. But going back to the standard semantics, since wait_B(Q′) = true there must be a subprocess of Q′ of the form ⟨B⟩R, with Q′ distinct from Q (otherwise the deadlock is caused by another barrier). Eventually, in at least one of the executions of Q′, another event ⟨B⟩ will occur because the extended executions are guaranteed deadlock-free (by Proposition A.2). Finally, since Q′ is a derivative of Q, it must be the case that the event ⟨B⟩ occurs twice in at least one execution σ going through both Q and Q′.
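The witness condition of Proposition A.5 is directly checkable on a single extended execution. The sketch below tests one trace for a repeated barrier event; it assumes the same hypothetical event encoding as before, `('sync', B)` for ⟨B⟩ and `('act', a)` for atomic actions (finding a deadlock in a process still requires exploring its extended executions).

```python
def repeats_barrier(extended_execution):
    """Check the deadlock witness of Proposition A.5 on one trace:
    does some barrier event <B> occur at least twice?
    Events are hypothetically encoded as ('sync', B) or ('act', a)."""
    seen = set()
    for kind, name in extended_execution:
        if kind == 'sync':
            if name in seen:       # second occurrence of <B>: witness found
                return True
            seen.add(name)
    return False

assert repeats_barrier([('act', 'a'), ('sync', 'B'), ('sync', 'B')])
assert not repeats_barrier([('act', 'a'), ('sync', 'B'), ('act', 'b')])
```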
Hitherto we have established all the required properties of the extended executions; we thus turn to the control graph construction, now extended with explicit barriers.

Definition A.3 (Construction of extended control graphs). Let P be a process term. Its extended control graph ectg(P) = ⟨V, E⟩ is constructed inductively as follows:

    ectg(0)       = ⟨∅, ∅⟩
    ectg(α.P)     = α ; ectg(P)
    ectg(ν(B)P)   = ectg(P)
    ectg(⟨B⟩P)    = ⟨B⟩ ; ectg(P)
    ectg(P ‖ Q)   = ectg(P) ∪ ectg(Q)

The main difference with the normal control graph is that the barrier synchronizations are not removed along the construction.
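The inductive construction of Definition A.3 can be sketched as a recursive traversal of process terms. The encoding below is hypothetical (tuples standing for the grammar of the calculus, with node labels assumed unique); the prefixing operation `α ; G` is interpreted as adding a fresh node with edges to the minimal nodes of G.

```python
def ectg(p):
    """Extended control graph <V, E> of Definition A.3 (a sketch).

    Hypothetical AST encoding, not the paper's concrete syntax:
    ('nil',), ('act', a, P), ('bar', B, P), ('new', B, P), ('par', P, Q).
    Labels are assumed unique so that nodes can be plain (kind, label) pairs.
    """
    tag = p[0]
    if tag == 'nil':                      # ectg(0) = <{}, {}>
        return set(), set()
    if tag == 'new':                      # ectg(nu(B)P) = ectg(P)
        return ectg(p[2])
    if tag == 'par':                      # ectg(P || Q): union of both graphs
        v1, e1 = ectg(p[1])
        v2, e2 = ectg(p[2])
        return v1 | v2, e1 | e2
    # 'act' (alpha.P) and 'bar' (<B>P): prefix a node before the
    # minimal nodes (those with no incoming edge) of ectg(P).
    # Unlike the normal control graph, 'bar' nodes are NOT removed.
    label, cont = p[1], p[2]
    v, e = ectg(cont)
    roots = {n for n in v if all(tgt != n for (_, tgt) in e)}
    node = (tag, label)
    return v | {node}, e | {(node, r) for r in roots}

# a.b.0 || c.0 : the only edge goes from a to b
v, e = ectg(('par', ('act', 'a', ('act', 'b', ('nil',))),
                    ('act', 'c', ('nil',))))
assert v == {('act', 'a'), ('act', 'b'), ('act', 'c')}
assert e == {(('act', 'a'), ('act', 'b'))}
```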
If we only consider the atomic actions, then we have the very interesting property that the normal and extended control graphs indeed coincide. We denote by α ;⁺ β a path in ectg(P) such that α and β are atomic actions and only barrier events may occur along the considered path.