Combinatorial Route to Algebra: The Art of Composition & Decomposition

We consider a general concept of composition and decomposition of objects, and discuss a few natural properties one may expect from a reasonable choice thereof. It will be demonstrated how this leads to multiplication and co-multiplication laws, thereby providing a generic scheme furnishing combinatorial classes with an algebraic structure. The paper is meant as a gentle introduction to the concepts of composition and decomposition, with the emphasis on the combinatorial origin of the ensuing algebraic constructions.


Introduction
A great many concrete examples of abstract algebraic structures are based on combinatorial constructions. Their advantage comes from simplicity stemming from the use of the intuitive notions of enumeration, composition and decomposition, which oftentimes provide insightful interpretations and neat pictorial arguments. In the present paper we are interested in clarifying the concept of composition/decomposition and the development of a general scheme which furnishes combinatorial objects with an algebraic structure.
In recent years the subject of combinatorics has grown into maturity. It has gained a solid foundation in terms of combinatorial classes consisting of objects having size, subject to various constructions transforming classes one into another, see e.g. [FS09, BLL98, GJ83]. Here, we will augment this framework by considering objects which may compose and decompose according to some internal law. This idea was pioneered by G.-C. Rota, who considered monoidal composition rules and introduced the concept of section coefficients, showing how they lead to co-algebra and bi-algebra structures [JR79]. It was further given a firm foundation by A. Joyal [Joy81] on the grounds of the theory of species. In the present paper we discuss this approach from a modern perspective and generalize the concept of composition in order to give a proper account of indeterminate (non-monoidal) composition laws, according to which two given objects may combine in more than one way. Moreover, we will provide a few natural conditions one might expect from a reasonable composition/decomposition rule, and show how they lead to algebra, co-algebra, bi-algebra and Hopf algebra structures [Bou89, Swe69, Abe80]. Our treatment has the virtue of a direct scheme translating combinatorial structures into algebraic ones: one only has to check whether the law of composition/decomposition of objects satisfies certain properties. We note that these ideas, however implicit in the construction of some instances of bi-algebras, have never been explicitly exposed in full generality. We will illustrate this framework on three examples of classical combinatorial structures: words, graphs and trees. For words we will provide three different composition/decomposition rules, leading to the free algebra, the symmetric algebra and the shuffle algebra (the latter with a non-monoidal composition law) [Lot83, Reu93].
In the case of graphs, apart from the trivial rules leading to a commutative and co-commutative algebra, we will also describe the Connes-Kreimer algebra of trees [Kre98, CK98]. One can find many other examples of monoidal composition laws in the seminal paper [JR79]; for instances of non-monoidal rules see e.g. [GL89, BDH+10]. A comprehensive survey of recent developments of the subject with an eye on combinatorial methods can be found in [Car07].
The paper is written as a self-contained tutorial on the combinatorial concepts of composition and decomposition, explaining how they give rise to algebraic structures. We start in Section 2 by briefly recalling the notions of multiset and combinatorial class. In Section 3 we make precise the notion of composition/decomposition and discuss a choice of general conditions which lead to the construction of algebraic structures in Section 4. Finally, in Section 5 we illustrate this general scheme on a few concrete examples.

Multiset
A basic object of our study is a multiset. It differs from a set by allowing multiple copies of elements, and formally can be defined as a pair (A, m), where A is a set and m : A −→ N₊ is a function counting multiplicities of elements. (i) For example, the multiset {a, a, b, c, c, c} is described by the underlying set A = {a, b, c} and the multiplicity function m(a) = 2, m(b) = 1 and m(c) = 3. Note that each set is a multiset with multiplicities of all elements equal to one. It is a usual practice to drop the multiplicity function m in the denotation of a multiset (A, m) and simply write A, as its character should be evident from the context (in the following we will mainly deal with multisets!). Extension of the conventional set-theoretical operations to multisets is straightforward by taking into account copies of elements. Accordingly, the sum of two multisets (A, m_A) and (B, m_B) is the multiset (A ∪ B, m_{A+B}) with multiplicities adding up, m_{A+B}(x) = m_A(x) + m_B(x), whilst the product is defined as the multiset of pairs (A × B, m_{A×B}) with multiplicities multiplying, m_{A×B}((x, y)) = m_A(x) · m_B(y). We note that one should be cautious when comparing multisets and not forget that equality involves the coincidence of the underlying sets as well as of the multiplicities of the corresponding elements. Similarly, inclusion of multisets (A, m_A) ⊂ (B, m_B) should be understood as the inclusion of the underlying sets A ⊂ B with the additional condition m_A(x) ≤ m_B(x) for x ∈ A.
(i) For an equivalent definition of a multiset based on the SEQ construction subject to an appropriate equivalence relation see [FS09], p. 26.
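The multiset operations above are easy to experiment with. Here is a minimal Python sketch (an illustration only, not part of the formal development) using `collections.Counter` as a stand-in for a multiset; the helper name `included` is ours:

```python
from collections import Counter

# A multiset (A, m) modelled as a Counter mapping elements to multiplicities.
ms1 = Counter({'a': 2, 'b': 1, 'c': 3})   # the multiset {a, a, b, c, c, c}
ms2 = Counter({'a': 1, 'c': 2})

# Sum of multisets: multiplicities add up.
assert ms1 + ms2 == Counter({'a': 3, 'b': 1, 'c': 5})

# Product of multisets: pairs of elements, multiplicities multiply.
prod = Counter({(x, y): ms1[x] * ms2[y] for x in ms1 for y in ms2})
assert prod[('c', 'a')] == 3 * 1

# Inclusion: every multiplicity in the smaller multiset is dominated.
def included(ma, mb):
    return all(mb[x] >= k for x, k in ma.items())

assert included(ms2, ms1) and not included(ms1, ms2)
```

Counter arithmetic already implements the sum of multisets; the product and inclusion are one-liners.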

Combinatorial class
In the paper we will be concerned with the concept of a combinatorial class C, which is a denumerable collection of objects. Usually, it is equipped with the notion of size | · | : C −→ N, which counts some characteristic carried by objects in the class, e.g. the number of elements they are built of. The size function divides C into disjoint subclasses Cₙ = {Γ ∈ C : |Γ| = n} composed of objects of size n only. Clearly, we have C = ⋃_{n∈N} Cₙ. A typical problem in combinatorial analysis consists in classifying objects according to the size and counting the number of elements in Cₙ. In the sequel we will often use the multiset construction. For a given combinatorial class C it defines a new class MSET(C) whose objects are multisets of elements taken from C. We note that the size of Γ ∈ MSET(C) is canonically defined as the sum of the sizes of all its elements, i.e. |Γ| = Σ_{γ∈Γ} |γ|.
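As a small illustration of the size of a multiset of objects, take C to be words with size given by length; the helper name `mset_size` below is ours:

```python
from collections import Counter

# An element of MSET(C) is a multiset of words; its size is the sum of the
# sizes of its elements, counted with multiplicity.
gamma = Counter({'ab': 2, 'abc': 1})      # the multiset {ab, ab, abc}

def mset_size(g):
    return sum(len(w) * k for w, k in g.items())

assert mset_size(gamma) == 2 + 2 + 3      # |ab| + |ab| + |abc| = 7
```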

Combinatorial Composition & Decomposition
We will consider a combinatorial class C consisting of objects which can compose and decompose within the class. In this section we precisely define both concepts and discuss a few natural conditions one might expect from a reasonable composition/decomposition rule. Composition of objects in a combinatorial class C is a prescription for how to make a new object in the same class out of two given ones. In general, such a rule may be indeterminate, that is, it may allow two given objects to compose in a number of ways. Furthermore, it can happen that some of these various possibilities produce the same outcome. See Fig. 1 for illustration. Therefore, a complete description of composition should keep a record of all the options, which is conveniently attained by means of the multiset construction. Here is the formal definition:

Definition 1 (Composition)
For a given combinatorial class C, a composition rule is a mapping assigning to each pair of objects Γ₂, Γ₁ ∈ C the multiset, denoted by Γ₂ Γ₁, consisting of all possible compositions of Γ₂ with Γ₁, where multiple copies keep an account of the number of ways in which a given outcome occurs. Sometimes, for brevity, we will write (Γ₂, Γ₁) ; Γ whenever Γ ∈ Γ₂ Γ₁.
Note that this definition naturally extends to a mapping MSET(C) × MSET(C) −→ MSET(C) which, for two given multisets Γ₂, Γ₁ ∈ MSET(C), takes their elements one by one, composes them and collects the results all together, i.e. Γ₂ Γ₁ = Σ_{γ₂∈Γ₂} Σ_{γ₁∈Γ₁} γ₂ γ₁, with the sum understood as the sum of multisets.
At this point the concept of composition is quite general, and its further development obviously depends on the choice of the rule. One supplements this construction with additional constraints. Below we discuss some natural conditions one might expect from a reasonable composition rule.
(C1) Finiteness. It is sensible to assume that objects compose only in a finite number of ways, i.e. for each Γ₂, Γ₁ ∈ C the multiset Γ₂ Γ₁ is finite. (C2) Triple composition. Composition applies to more than two objects as well. For given Γ₃, Γ₂, Γ₁ ∈ C one can compose them successively and construct the multiset of possible compositions. There are two possible scenarios, however: one can either start by composing the first two, (Γ₂, Γ₁) ; Γ′, and then compose the outcome with the third, (Γ₃, Γ′) ; Γ, or change the order and begin with (Γ₃, Γ₂) ; Γ″ followed by the composition with the first, (Γ″, Γ₁) ; Γ. It is plausible to require that both scenarios lead to the same multiset. This condition is a sort of associativity property, which in a compact form reads (Γ₃ Γ₂) Γ₁ = Γ₃ (Γ₂ Γ₁), Eq. (5). Note that it justifies dropping the brackets in the denotation of the triple composition Γ₃ Γ₂ Γ₁. Clearly, the procedure generalizes to multiple compositions, and Eq. (5) entails an analogous condition in this case as well.
(C3) Neutral object. Often, in a class there exists a neutral object, denoted by Ø, which composes with elements of the class only in a trivial way, i.e. (Ø, Γ) ; Γ and (Γ, Ø) ; Γ. In other words, for each Γ ∈ C we have Ø Γ = Γ Ø = {Γ}. Note that if Ø exists, it is unique.
(C4) Symmetry. Sometimes the composition rule is such that the order in which elements are composed is irrelevant. Then for each Γ₂, Γ₁ ∈ C the following commutativity condition holds: Γ₂ Γ₁ = Γ₁ Γ₂.
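For a concrete rule, condition (C2) can be verified mechanically. The sketch below (helper names ours) takes concatenation of words as a determinate composition rule, lifts it to multisets as in the extension of Definition 1, and checks that both orders of a triple composition agree:

```python
from collections import Counter

def compose(w2, w1):
    """Composition rule on words: concatenation (a determinate, monoidal
    rule); the result is the singleton multiset {w2 w1}."""
    return Counter({w2 + w1: 1})

def compose_msets(G2, G1):
    """Lift composition to multisets: compose elements pairwise, collecting
    the results with the product of multiplicities."""
    out = Counter()
    for g2, k2 in G2.items():
        for g1, k1 in G1.items():
            for g, k in compose(g2, g1).items():
                out[g] += k2 * k1 * k
    return out

# Condition (C2): both orders of a triple composition give the same multiset.
w3, w2, w1 = 'ab', 'c', 'dd'
lhs = compose_msets(compose(w3, w2), Counter({w1: 1}))
rhs = compose_msets(Counter({w3: 1}), compose(w2, w1))
assert lhs == rhs == Counter({'abcdd': 1})
```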

Decomposition
Suppose that a combinatorial class C allows for decomposition of objects, i.e. splitting into ordered pairs of pieces from the same class. In general, there might be various ways of splitting an object following a given rule and, moreover, some of them may yield the same result. See Fig. 2 for illustration. The whole collection of possibilities is again properly described by the notion of multiset. Hence, we have the definition

Definition 2 (Decomposition)
A decomposition rule in a combinatorial class C is a mapping which for each object Γ ∈ C defines the multiset comprised of all pairs (Γ′, Γ″) which are splittings of Γ, with multiple copies keeping a record of possible decompositions producing the same outcome. Concisely, we will write Γ ; (Γ′, Γ″). Extension of the definition to a mapping MSET(C) −→ MSET(C × C) is straightforwardly given by collecting all together the decompositions of the elements taken from Γ ∈ MSET(C), i.e. as the multiset sum of the decompositions of its elements.
Below, analogously as in Section 3.1, we consider some general conditions which one might require from a reasonable decomposition rule. We observe that most of them are in a sense dual to those discussed for the composition rule, which reflects the opposite character of both procedures. Note, however, that the decomposition rule is so far unrelated to composition (insofar as the latter might not even be defined) and the conditions should be treated as independent.
(D1) Finiteness. One may reasonably expect that objects decompose in a finite number of ways only, i.e. for each Γ ∈ C we require the multiset of splittings of Γ to be finite. (D2) Triple decomposition. Decomposition into pairs naturally extends to splitting an object into three pieces, Γ ; (Γ₃, Γ₂, Γ₁). An obvious way to carry out the multiple splitting is by applying the same procedure repeatedly, i.e. decomposing one of the components obtained in the preceding step. Following this prescription one usually expects that the result does not depend on the choice of the component it is applied to. In other words, we require that we end up with the same collection of triple decompositions when splitting Γ ; (Γ′, Γ₁) and then splitting the left component Γ′ ; (Γ₃, Γ₂), as in the case when starting with Γ ; (Γ₃, Γ″) and then splitting the right component Γ″ ; (Γ₂, Γ₁). This condition can be seen as a sort of co-associativity property for decomposition, and in explicit form it boils down to the equality of the two multisets of triple decompositions so obtained, Eq. (11). The above procedure directly extends to splitting into multiple pieces Γ ; (Γₙ, ..., Γ₁) by iterated decomposition. Clearly, the condition of Eq. (11) asserts the same result no matter in which way the decompositions are carried out. Hence, we can consistently define the multiset consisting of the multiple decompositions of an object, with the convention that the one-fold decomposition of Γ is Γ itself.
(D3) Void object. Oftentimes, a class contains a void (or empty) element Ø such that objects decompose with respect to it in a trivial way. It should have the property that any object Γ ≠ Ø splits into a pair containing either Ø or Γ in exactly two ways, namely Γ ; (Ø, Γ) and Γ ; (Γ, Ø), and Ø ; (Ø, Ø). Clearly, if Ø exists, it is unique.
(D4) Symmetry. For some rules the order between components in decompositions is immaterial, i.e. the rule allows for the exchange (Γ′, Γ″) ←→ (Γ″, Γ′). In this case we have the following symmetry condition: the multiplicities of (Γ′, Γ″) and of (Γ″, Γ′) among the splittings of Γ are the same.
(D5) Finiteness of multiple decompositions. Recall the multiple decompositions Γ ; (Γₙ, ..., Γ₁) considered in condition (D2) and observe that we may take the number of components to be any n ∈ N. However, if one takes into account only nontrivial decompositions, i.e. such that do not contain void Ø components, it is often the case that the process terminates after a finite number of steps. In other words, for each Γ ∈ C there exists N ∈ N such that for all n ≥ N the object admits no nontrivial decomposition into n components.
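Condition (D2) is likewise checkable by enumeration. The sketch below (helper names ours) uses the subword decomposition of words discussed in Section 5, where a splitting is the choice of a subword together with the remaining letters, and compares the two ways of producing triple decompositions:

```python
from collections import Counter
from itertools import combinations

def decompose(w):
    """Each subset of positions selects a subword (first component); the
    remaining letters form the second component."""
    out = Counter()
    for r in range(len(w) + 1):
        for S in combinations(range(len(w)), r):
            left = ''.join(w[i] for i in S)
            right = ''.join(w[i] for i in range(len(w)) if i not in S)
            out[(left, right)] += 1
    return out

def triples_via_left(w):
    """Split w ; (l, r), then split the left component further."""
    out = Counter()
    for (l, r), k in decompose(w).items():
        for (l2, r2), k2 in decompose(l).items():
            out[(l2, r2, r)] += k * k2
    return out

def triples_via_right(w):
    """Split w ; (l, r), then split the right component further."""
    out = Counter()
    for (l, r), k in decompose(w).items():
        for (l2, r2), k2 in decompose(r).items():
            out[(l, l2, r2)] += k * k2
    return out

# Condition (D2): both orders give the same multiset of triple splittings.
assert triples_via_left('abc') == triples_via_right('abc')
```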

Compatibility
Now, let us take a combinatorial class C which admits both composition and decomposition of objects at the same time. We give a simple compatibility condition allowing both procedures to be consistently combined.
(CD1) Composition-decomposition compatibility. Suppose we are given a pair of objects (Γ₂, Γ₁) ∈ C × C which we want to decompose. We may think of two consistent decomposition schemes which involve composition as an intermediate step. We can either start by composing them together into Γ₂ Γ₁ and then split all the resulting objects into pieces, or first decompose each of them separately and then compose elements of both multisets of splittings in a component-wise manner, the pairs (Γ₂′, Γ₂″) and (Γ₁′, Γ₁″) giving rise to (Γ₂′ Γ₁′, Γ₂″ Γ₁″). One may reasonably expect the same outcome no matter which procedure is applied. The formal description of compatibility comes down to the equality of the two multisets of pairs so obtained, Eq. (16). We remark that this property implies that the void and neutral objects of conditions (D3) and (C3) are the same, hence the common denotation Ø.
Oftentimes, composition/decomposition of objects comes alongside the notion of size. It is usually the case when their defining rules make use of the same characteristics as are counted by the size function. Here is a useful condition connecting these concepts: (CD2) Compatibility with size. It may happen that the considered composition rule preserves size. This means that when composing two objects (Γ₂, Γ₁) ; Γ the sizes of both components add up, i.e. |Γ| = |Γ₂| + |Γ₁|. This requirement boils down to the restriction that the mapping of Eq. (2) sends pairs of objects of sizes n and m to multisets of objects of size n + m, i.e. it restricts to Cₙ × Cₘ −→ MSET(C_{n+m}). (ii) A parallel condition for decomposition implies that after splitting Γ ; (Γ′, Γ″) the original size of an object distributes between the parts, i.e. |Γ′| + |Γ″| = |Γ|. This translates into the constraint that the mapping of Eq. (8) restricts to Cₙ −→ MSET(⋃_{k+l=n} C_k × C_l). In the following, we will assume that there is a single object of size zero, i.e. C₀ = {Ø}.
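For the word rules used above one can also test condition (CD1) directly: decomposing a concatenation must agree with composing decompositions component-wise. A sketch (helper names ours):

```python
from collections import Counter
from itertools import combinations

def decompose(w):
    """Subword decomposition: a subset of positions selects the first
    component, the remaining letters form the second one."""
    out = Counter()
    for r in range(len(w) + 1):
        for S in combinations(range(len(w)), r):
            left = ''.join(w[i] for i in S)
            right = ''.join(w[i] for i in range(len(w)) if i not in S)
            out[(left, right)] += 1
    return out

def pairwise_compose(d2, d1):
    """Compose two multisets of splittings component-wise (concatenation)."""
    out = Counter()
    for (a, b), k2 in d2.items():
        for (c, d), k1 in d1.items():
            out[(a + c, b + d)] += k2 * k1
    return out

# Condition (CD1): both decomposition schemes give the same multiset.
w2, w1 = 'ab', 'cd'
assert decompose(w2 + w1) == pairwise_compose(decompose(w2), decompose(w1))
```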

Construction of Algebraic Structures
We will demonstrate the way in which combinatorial objects can be equipped with natural algebraic structures based on the composition/decomposition concept. The key role in the argument is played by the conditions discussed in Section 3, which provide a route to a systematic construction of algebra, co-algebra, bi-algebra and Hopf algebra structures. We note that most combinatorial algebras can be systematically treated along these lines.

Vector space
For a given combinatorial class C we will consider the vector space C over a field K which consists of (finite) linear combinations of elements of C, i.e. formal sums Σᵢ αᵢ Γᵢ with αᵢ ∈ K and Γᵢ ∈ C. Addition of elements and multiplication by scalars in C have the usual, coefficient-wise form. Clearly, the elements of C are independent and span the whole vector space. Hence, C comes endowed with a distinguished basis which, in addition, carries a combinatorial meaning. We will call C the combinatorial basis of C.

Multiplication & co-multiplication
Having defined the vector space C built on a combinatorial class C, we are ready to make use of its combinatorial content. Below we provide a general scheme for constructing algebra and co-algebra structures [Bou89] based on the notions of composition and decomposition discussed in Section 3. Suppose C admits composition as defined in Section 3.1. We will consider a bilinear mapping ∗ : C × C −→ C defined on basis elements Γ₂, Γ₁ ∈ C as the sum of all possible compositions of Γ₂ with Γ₁, i.e. Γ₂ ∗ Γ₁ = Σ_{(Γ₂,Γ₁) ; Γ} Γ, Eq. (25).
Note that, although all coefficients in the defining Eq. (25) are equal to one, some of the terms in the sum may appear several times; this is because Γ₂ Γ₁ is a multiset. One rightly anticipates that multiplicities of elements will play the role of structure constants of the algebra. The mapping so defined is a natural candidate for multiplication, and we have the following statement: Proposition 1 (Algebra) The vector space C with the multiplication defined in Eq. (25) forms an associative algebra with unit (C, +, ∗, Ø) if conditions (C1)-(C3) hold. Under condition (C4) it is commutative.
Proof: Condition (C1) guarantees that the sum in Eq. (25) is finite, hence it is well defined. Conditions (C3) and (C4) directly translate into the existence of the unit element Ø and commutativity, respectively. Associativity is a consequence of the bilinearity of multiplication and condition (C2), which asserts equality of the multisets resulting from the two scenarios of triple composition (Γ₃, Γ₂, Γ₁) ; Γ; it is straightforward to check for basis elements that (Γ₃ ∗ Γ₂) ∗ Γ₁ = Γ₃ ∗ (Γ₂ ∗ Γ₁). □ Now, we will consider C equipped with the notion of decomposition as described in Section 3.2. Let us take the linear mapping ∆ : C −→ C ⊗ C defined on basis elements Γ ∈ C as the sum of all splittings into pairs, which in explicit form reads ∆(Γ) = Σ_{Γ ; (Γ′,Γ″)} Γ′ ⊗ Γ″, Eq. (28). Repetition of terms in Eq. (28) leads, after simplification, to coefficients which are the multiplicities of elements in the multiset of decompositions of Γ. These numbers are sometimes called section coefficients, see [JR79]. We will also need a linear mapping ε : C −→ K which extracts the expansion coefficient standing at the void Ø. It is defined on basis elements Γ ∈ C in the canonical way: ε(Γ) = 0 for Γ ≠ Ø and ε(Ø) = 1, Eq. (30). These mappings play the role of co-multiplication and co-unit in the construction of a co-algebra, as explained in the following proposition: Proposition 2 (Co-algebra) If conditions (D1)-(D3) are satisfied, the mappings ∆ and ε defined in Eqs. (28) and (30) respectively are the co-multiplication and co-unit which make the vector space C into a co-algebra (C, +, ∆, ε). It is co-commutative if condition (D4) holds.
Proof: The sum in Eq. (28) is well defined as long as the number of decompositions is finite, i.e. condition (D1) is satisfied. From the equivalence of triple splittings Γ ; (Γ₃, Γ₂, Γ₁) obtained in the two possible ways considered in condition (D2), one readily verifies for a basis element that (∆ ⊗ Id) ∘ ∆ (Γ) = (Id ⊗ ∆) ∘ ∆ (Γ), which by linearity extends to all of C, proving co-associativity of the co-multiplication defined in Eq. (28). The co-unit ε : C −→ K by definition should satisfy the equalities (ε ⊗ Id) ∘ ∆ = Id = (Id ⊗ ε) ∘ ∆, where the identification K ⊗ C = C ⊗ K = C is implied. We check the first one for a basis element Γ by direct calculation: by Eq. (30) only the terms of ∆(Γ) of the form Ø ⊗ Γ survive, and by condition (D3) there is exactly one such term, whence (ε ⊗ Id) ∘ ∆ (Γ) = Γ. □
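The linear extension in Eq. (25) is mechanical; the sketch below (names ours) represents vectors as dictionaries from basis objects to coefficients, so that multiplicities in the composition multiset become the structure constants:

```python
from collections import Counter

# Elements of the vector space as dicts {basis object: coefficient}; the
# product extends a composition rule bilinearly, as in Eq. (25).
def multiply(v2, v1, compose):
    out = Counter()
    for g2, a2 in v2.items():
        for g1, a1 in v1.items():
            for g, mult in compose(g2, g1).items():
                out[g] += a2 * a1 * mult
    return dict(out)

# With concatenation of words (a singleton multiset) this is the free product.
concat = lambda w2, w1: Counter({w2 + w1: 1})
assert multiply({'ab': 2, 'c': 1}, {'d': 3}, concat) == {'abd': 6, 'cd': 3}
```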

Bi-algebra and Hopf algebra structure
We have seen in Propositions 1 and 2 how the notions of composition and decomposition lead to algebra and co-algebra structures, respectively. Both schemes can be combined so as to furnish C with a bi-algebra structure. Theorem 1 (Bi-algebra) Let C be a combinatorial class satisfying conditions (C1)-(C3), (D1)-(D3) and (CD1). Then (C, +, ∗, Ø, ∆, ε) is a bi-algebra.
Proof: The structure of a bi-algebra requires that the co-multiplication ∆ : C −→ C ⊗ C and the co-unit ε : C −→ K of the co-algebra preserve multiplication in C. Thus, we need to verify for basis elements Γ₁ and Γ₂ that ∆(Γ₂ ∗ Γ₁) = ∆(Γ₂) ∗ ∆(Γ₁), Eq. (34), with component-wise multiplication in the tensor product C ⊗ C on the right-hand side, and that ε(Γ₂ ∗ Γ₁) = ε(Γ₂) ε(Γ₁), with the terms on the right-hand side multiplied in the field K.
We check Eq. (34) directly by expanding both sides using the definitions of Eqs. (25) and (28). Accordingly, the left-hand side is the sum of Γ′ ⊗ Γ″ over the decompositions of the elements of Γ₂ Γ₁, while the right-hand side is the sum over the component-wise compositions of the decompositions of Γ₂ and Γ₁. A closer look at condition (CD1) and Eq. (16) shows a one-to-one correspondence between the terms in the two sums, which proves the equality; the second identity follows since Ø occurs in Γ₂ Γ₁ precisely when Γ₂ = Γ₁ = Ø. □ In order to obtain a Hopf algebra we further need an antipode. Finally, let us take the linear mapping defined on basis elements as the alternating sum over nontrivial multiple decompositions, S(Γ) = Σ_{n≥1} (−1)ⁿ Σ_{Γ ; (Γₙ, ..., Γ₁)} Γₙ ∗ ... ∗ Γ₁ for Γ ≠ Ø and S(Ø) = Ø, Eq. (39), where the inner sum ranges over nontrivial decompositions only (no Ø components); by condition (D5) it is finite. Theorem 3 (Hopf algebra) If, in addition to the assumptions of Theorem 1, condition (D5) holds, then C is a Hopf algebra with the antipode S of Eq. (39), i.e. ∗ ∘ (S ⊗ Id) ∘ ∆ = η ∘ ε = ∗ ∘ (Id ⊗ S) ∘ ∆, Eq. (40), where η : K −→ C, η(1) = Ø, is the unit map.
We will prove that S given in Eq. (39) satisfies the condition of Eq. (40). We start by considering an auxiliary linear mapping Φ : End(C) −→ End(C) defined as Φ(A) = ∗ ∘ (A ⊗ Id) ∘ ∆. Observe that, under the assumption that Φ is invertible, the first equality in Eq. (40) can be rephrased as the condition S = Φ⁻¹(η ∘ ε). Now, our objective is to show that Φ is invertible and to calculate its inverse explicitly. By extracting the identity we get Φ = Id + Φ₊, and observe that Φ₊ so defined can be written in the form Φ₊(A) = ∗ ∘ (A ⊗ π̄) ∘ ∆, where π̄ = Id − π is the complement of the projection π on the subspace spanned by Ø, i.e. π(Γ) = ε(Γ) Ø.
We claim that the mapping Φ is invertible with the inverse given by Φ⁻¹ = Σ_{n≥0} (−1)ⁿ Φ₊ⁿ. (iii) In order to check that the above sum is well defined one analyzes it term by term. It is not difficult to calculate the n-th iteration of Φ₊ explicitly: evaluated on a basis element Γ it is a sum over decompositions of Γ with n nontrivial components, and hence, by condition (D5), it vanishes for n large enough; therefore the sum defining Φ⁻¹ is finite when applied to any element of C. We note that in this formula products of multiple decompositions arise from the repeated use of the co-associativity property of condition (D2). In conclusion, by construction the linear mapping S of Eq. (39) satisfies the first equality in Eq. (40); the second equality can be checked analogously. Therefore we have proved S to be an antipode, thus making C into a Hopf algebra. □ We remark that, by the general theory of Hopf algebras, see [Swe69, Abe80], the property of Eq. (40) implies that S is an anti-morphism and that it is unique. Moreover, if C is commutative or co-commutative, S is an involution, i.e. S ∘ S = Id. We should also observe that the definition of the antipode given in Eq. (39) admits construction by iteration, starting from S(Ø) = Ø. Finally, whenever composition/decomposition is compatible with the notion of size in the class C, we have a grading in the algebra C, as explained in the following proposition: Proposition 4 (Grading) Suppose we have a bi-algebra structure (C, +, ∗, Ø, ∆, ε) constructed as in Theorem 1. If condition (CD2) holds, then C is a graded Hopf algebra with the grading given by size in C, i.e.
C = ⊕_{n∈N} Cₙ, where Cₙ is spanned by Cₙ = {Γ ∈ C : |Γ| = n}, and ∗ : Cₙ × Cₘ −→ C_{n+m}, ∆ : Cₙ −→ ⊕_{k+l=n} C_k ⊗ C_l. Proof: Note that condition (CD2) implies (D5), and hence C is a Hopf algebra by Theorem 3. Furthermore, condition (CD2) asserts the proper action of ∗ and ∆ on the subspaces Cₙ built of objects of the same size. □
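The remark that the antipode admits construction by iteration can be made concrete in the polynomial algebra K[x] discussed in Section 5, where ∆(xⁿ) = Σₖ C(n,k) xᵏ ⊗ x^{n−k}. A sketch (the function name `antipode` is ours):

```python
from functools import lru_cache
from math import comb

# The defining relation * o (S (x) Id) o Delta = eta o eps gives, for n >= 1,
#     sum_{k=0}^{n} C(n,k) S(x^k) x^(n-k) = 0 ,
# which determines S(x^n) recursively starting from S(1) = 1.
@lru_cache(maxsize=None)
def antipode(n):
    """S(x^n) as a tuple of (degree, coefficient) pairs."""
    if n == 0:
        return ((0, 1),)
    out = {}
    for k in range(n):                    # solve for the k = n term
        for d, c in antipode(k):
            out[d + n - k] = out.get(d + n - k, 0) - comb(n, k) * c
    return tuple(sorted(out.items()))

# One finds S(x^n) = (-1)^n x^n, as expected since x is primitive.
assert all(dict(antipode(n)) == {n: (-1) ** n} for n in range(8))
```

Since this algebra is commutative, the computed S is indeed an involution, in line with the remark above.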

A special case: Monoid
Let us consider a simplified situation by taking a determinate composition law, i.e. one of the form Γ₂ Γ₁ = {Γ₂ · Γ₁}, which means that objects compose in a unique way. In other words, for each Γ₂, Γ₁ ∈ C the multiset Γ₂ Γ₁ of Definition 1 is always a singleton. Observe that conditions (C2) and (C3) are then equivalent to the requirement that (C, ·) is a monoid. We note that the case of a commutative monoid was thoroughly investigated by S. A. Joni and G.-C. Rota in [JR79] and further developed by A. Joyal [Joy81]. In this context it is convenient to consider subclasses of C such that each element of C can be constructed as a composition of a finite number of their elements.
We call such a subclass a generating class if it is the smallest (in the sense of inclusion) subclass of C with this property. It has the advantage that, when establishing a decomposition rule satisfying (CD1), one can specify it on the generating class in an arbitrary way and then consistently extend it, using Eq. (16), to the whole class C: the decomposition of an object is obtained by decomposing its generators and composing the results component-wise, Eq. (54). We note that from a practical point of view this way of introducing the decomposition rule is very convenient, as it restricts the number of objects to be scrutinized to a smaller class and automatically guarantees the compatibility of the composition and decomposition rules, i.e. (CD1) is satisfied by construction. Moreover, the inspection of other properties is usually simpler in this context as well. For example, if composition preserves size, Eq. (17), then it is enough to check Eq. (19) on the generating class and condition (CD2) automatically holds on the whole of C.
There is a canonical way in which decomposition can be introduced in this setting. Namely, one can define it on the generating elements Γ in a primitive way, i.e. admitting only the trivial splittings Γ ; (Ø, Γ) and Γ ; (Γ, Ø), Eq. (55).
Observe that the decomposition rule so defined, upon extension via Eq. (54), satisfies all the conditions (D1)-(D5), which clears the way to the construction of a Hopf algebra. We note that objects having the property of Eq. (55) are usually called primitive elements, for which we have ∆(Γ) = Γ ⊗ Ø + Ø ⊗ Γ.

Examples
Here, we will illustrate how the general framework developed in Sections 3 and 4 works in practice, on three classical combinatorial structures: words, graphs and trees.

Words
Let A = {l_1, l_2, ..., l_n} be a finite set of letters, an alphabet. We will consider a combinatorial class A consisting of (finite) words built from the alphabet A, i.e. A = A* = { Ø, l_1, l_1 l_1, l_1 l_2, ..., l_{i_1} ... l_{i_k}, ... }, where Ø is the empty word. The size of a word is defined as its length (number of letters): |l_{i_1} ... l_{i_k}| = k and |Ø| = 0. An algebraic structure on A can be introduced in a few ways, as explained below [Lot83, Reu93].
First, define composition of two words as their concatenation, l_{i_1} ... l_{i_k} · l_{j_1} ... l_{j_l} = l_{i_1} ... l_{i_k} l_{j_1} ... l_{j_l}. Observe that (A, ·) is a monoid and the alphabet A is a generating class. We define decomposition of the generators (letters) in the primitive way, i.e. l ; (Ø, l) and l ; (l, Ø), and extend it to the whole class A using Eq. (54). One checks that each decomposition of a word comes down to the choice of a subword which gives the first component of a splitting (the remainder constitutes the second one), i.e. l_{i_1} ... l_{i_k} ; (l_{i_{j_1}} ... l_{i_{j_r}}, l_{i_{j_{r+1}}} ... l_{i_{j_k}}), where {j_1 < ... < j_r} ranges over the subsets of {1, ..., k} and {j_{r+1} < ... < j_k} is the complementary subset. (iv)
(iv) We adopt the convention that a sequence of letters indexed by the empty set is the empty word Ø.
Note that the composition/decomposition rule so defined is compatible with size, and hence condition (CD2) holds. Application of the scheme discussed in the previous sections provides us with the multiplication given by concatenation and the co-multiplication ∆(l_{i_1} ... l_{i_k}) = Σ_{S+S̄={1,...,k}} l_{i_S} ⊗ l_{i_{S̄}}, where l_{i_S} denotes the subword picked by the positions in S, which make A into a graded co-commutative Hopf algebra. It is called a free algebra. Note that if the alphabet consists of more than one letter then the multiplication is non-commutative.
In conclusion, we observe that if the alphabet consists of one letter only, A = {x}, then the construction starts from the class of words P = {Ø, x, xx, xxx, ...} and leads to the algebra of polynomials in one variable, P = K[x] = { Σᵢ₌₀ⁿ αᵢ xⁱ : αᵢ ∈ K }. In this case, we have xᵏ ∗ xˡ = x^{k+l} and ∆(xⁿ) = Σₖ₌₀ⁿ C(n,k) xᵏ ⊗ x^{n−k}.
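The binomial coefficients in ∆(xⁿ) can be recovered by brute-force enumeration of the subword decompositions of xⁿ; a quick sketch (names ours):

```python
from collections import Counter
from itertools import combinations
from math import comb

# For the one-letter alphabet, a decomposition of x^n chooses k of the n
# positions as the subword; hence Delta(x^n) = sum_k C(n,k) x^k (x) x^(n-k).
def coproduct(n):
    out = Counter()
    for k in range(n + 1):
        for S in combinations(range(n), k):
            out[(k, n - k)] += 1          # subword x^k, remainder x^(n-k)
    return out

n = 5
assert coproduct(n) == Counter({(k, n - k): comb(n, k) for k in range(n + 1)})
```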

Symmetric algebra
Now, let the alphabet A = {l_1, l_2, ..., l_n} be endowed with a linear order l_1 < l_2 < ... < l_n. We will consider words arranged in non-decreasing order and take the pertaining class S to consist of such ordered words. In this case, simple concatenation of words is not a legitimate composition rule and one has to amend it by an additional reordering of the letters, l_{i_1} ... l_{i_m} · l_{i_{m+1}} ... l_{i_{m+n}} = l_{i_{σ(1)}} ... l_{i_{σ(m+n)}}, where σ is the unique order-rearranging permutation of {1, 2, ..., m+n} such that i_{σ(1)} ≤ ... ≤ i_{σ(m+n)}. Clearly, (S, ·) is a monoid generated by A. The simplest choice of the primitive decomposition for the generators, l_i ; {(Ø, l_i), (l_i, Ø)}, extends to the whole class as the choice of a subword together with its complement, as before. Observe the apparent similarity of Eqs. (58) and (60) to Eqs. (67) and (68), with the only difference that the words in the latter two are ordered. Construction of the symmetric algebra S follows the proposed scheme, and the mappings with multiplication l_1^{i_1} ... l_n^{i_n} ∗ l_1^{j_1} ... l_n^{j_n} = l_1^{i_1+j_1} ... l_n^{i_n+j_n}, Eq. (69), define a graded Hopf algebra structure which is both commutative and co-commutative. Note that in Eq. (69) the repeating letters were grouped together and denoted as powers. As a byproduct of this notation one immediately observes that the symmetric algebra S is isomorphic to the algebra of polynomials in many commuting variables, K[x_1, ..., x_n].
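The amended composition rule is just "concatenate and sort"; a two-line sketch (name ours), with letters ordered alphabetically:

```python
# Composition in the symmetric algebra: concatenate and reorder the letters.
def compose_sorted(w2, w1):
    return ''.join(sorted(w2 + w1))

assert compose_sorted('abc', 'ab') == 'aabbc'
assert compose_sorted('ac', 'b') == compose_sorted('b', 'ac') == 'abc'  # commutative
```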

Shuffle algebra
We will consider the class of words A and define composition as any shuffle which mixes the letters of the two words while preserving their relative order within each word. For example, for the two words "shuffle" and "mix", "shmiufxfle" and "mixshuffle" are allowed compositions whilst "shufflemxi" is not. Note that there are always several possible shuffles of two given (nonempty) words, hence the use of the multiset construction in the definition of the composition rule: the multiset l_{i_1} ... l_{i_m}  l_{i_{m+1}} ... l_{i_{m+n}} collects, for each permutation σ of the set {1, 2, ..., m+n} which preserves the relative order of 1, 2, ..., m and of m+1, m+2, ..., m+n respectively, the word whose letter at position σ(r) is l_{i_r}. One checks that a compatible decomposition rule is given by cutting a word into two parts and exchanging the prefix with the suffix, i.e. l_{i_1} ... l_{i_k} ; (l_{i_{j+1}} ... l_{i_k}, l_{i_1} ... l_{i_j}) for j = 0, 1, ..., k.
Note that this is an instance of a non-monoidal composition law. Following the scheme of Section 4 we arrive at the Hopf algebra structure in which the product l_{i_1} ... l_{i_m} ∗ l_{i_{m+1}} ... l_{i_{m+n}} is the sum of all shuffles of the two words, Eq. (75), and the co-product is the sum over all cuttings, ∆(l_{i_1} ... l_{i_k}) = Σ_{j=0}^{k} l_{i_{j+1}} ... l_{i_k} ⊗ l_{i_1} ... l_{i_j}, Eq. (76). We remark that for the shuffle algebra so constructed the multiplication of Eq. (75) is commutative and the co-product of Eq. (76) is not co-commutative.
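A recursive sketch of the shuffle composition (name ours), showing both the count C(m+n, m) for distinguishable letters and the multiplicities that arise for repeated ones:

```python
from collections import Counter

def shuffle(w2, w1):
    """Multiset of all shuffles of w2 with w1: interleavings preserving the
    relative order of the letters within each word (a non-monoidal rule)."""
    if not w2:
        return Counter({w1: 1})
    if not w1:
        return Counter({w2: 1})
    out = Counter()
    for w, k in shuffle(w2[1:], w1).items():    # first letter taken from w2
        out[w2[0] + w] += k
    for w, k in shuffle(w2, w1[1:]).items():    # first letter taken from w1
        out[w1[0] + w] += k
    return out

# Two words with distinct letters shuffle in C(m+n, m) ways ...
result = shuffle('ab', 'cd')
assert sum(result.values()) == 6                # C(4, 2)
assert result['acbd'] == 1 and 'abdc' not in result

# ... while repeated letters produce nontrivial multiplicities.
assert shuffle('ab', 'a') == Counter({'aab': 2, 'aba': 1})
```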

Graphs
Let us consider the class of undirected graphs G, which one graphically represents as collections of vertices connected by edges (we exclude isolated vertices). More formally, a graph is defined as a mapping Γ : E −→ V^(2) prescribing how the edges E are attached to the vertices V, where V^(2) is the set of unordered pairs of vertices (not necessarily distinct); for a rigorous definition see [Ore90, Wil96, Die05]. Let the size of a graph be the number of its edges, |Γ| = |E|. An obvious composition rule in G consists in taking two graphs Γ₂, Γ₁ ∈ G and drawing them one next to the other, i.e. taking the singleton multiset Γ₂ Γ₁ = {Γ₂ ∪ Γ₁} given by their disjoint union.
Observe that for a given graph Γ ∈ G each subset of its edges L ⊂ E induces a subgraph Γ|_L : L −→ V^(2), defined by the restriction of Γ to the subset L. Likewise, the remaining part of the edges, R = E − L, gives rise to a subgraph Γ|_R. Thus, by considering ordered partitions of the set of edges into two subsets, L + R = E, i.e. L ∪ R = E and L ∩ R = ∅, we end up with pairs (Γ|_L, Γ|_R) of disjoint graphs, which we take as the decompositions Γ ; (Γ|_L, Γ|_R). One checks that conditions (D1)-(D5) and (CD2) hold, and we obtain a graded Hopf algebra G with the grading given by the number of edges. Its structure is given by Γ₂ ∗ Γ₁ = Γ₂ ∪ Γ₁ and ∆(Γ) = Σ_{L+R=E} Γ|_L ⊗ Γ|_R. The algebra of graphs so defined is both commutative and co-commutative.
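Decomposition of a graph by ordered partitions of its edge set can be enumerated directly; a sketch with a simple graph encoded as a frozenset of edges (encoding and names ours):

```python
from itertools import combinations

# Every ordered partition of the edge set E = L + R gives a pair (G|_L, G|_R).
def decompose_graph(edges):
    edges = list(edges)
    pairs = []
    for r in range(len(edges) + 1):
        for L in combinations(edges, r):
            R = frozenset(edges) - frozenset(L)
            pairs.append((frozenset(L), R))
    return pairs

triangle = frozenset({('a', 'b'), ('b', 'c'), ('a', 'c')})
pairs = decompose_graph(triangle)
assert len(pairs) == 2 ** 3               # one pair per subset of edges
assert (frozenset(), triangle) in pairs and (triangle, frozenset()) in pairs
```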

Trees and Forests
A rooted tree is a graph without cycles with one distinguished vertex, called the root. Let T denote the class of rooted trees. A forest is a collection of rooted trees, and the pertaining combinatorial class has the specification F = MSET(T). The size of a tree (forest) is defined as the number of its vertices. We will consider the class F and define composition of forests as the multiset union (like for graphs), i.e. the singleton multiset Γ₂ Γ₁ = {Γ₂ ∪ Γ₁}.
Note that (F, ·) is a (commutative) monoid generated by the rooted trees T. For a given tree τ ∈ T one distinguishes subtrees τʳ ⊂ τ which share the same root with τ, called proper subtrees (the empty tree Ø is considered a proper subtree as well). Observe that the latter is obtained by trimming τ to the required shape τʳ, and the branches which are cut off form a forest of trees denoted by τᶜ (with the roots next to the cutting). Decomposition of a tree is defined as any splitting τ ; (τᶜ, τʳ) into a pair consisting of a proper subtree, taken in the second component, and the remaining forest in the first one. In other words, the multiset of decompositions of τ is the disjoint union, over the proper subtrees τʳ of τ, of the pairs {(τᶜ, τʳ)}, where τᶜ is the forest of trees which 'complements' τʳ to τ. This enumerates all possible decompositions of a tree (in pictures one may draw the proper subtrees τʳ in black and the completing forests τᶜ in gray).
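Enumerating the splittings τ ; (τᶜ, τʳ) is a pleasant exercise; the sketch below (encoding and names ours) represents a rooted tree as a nested tuple of its children and recovers, e.g., a section coefficient of 2 for the corolla on two leaves:

```python
from collections import Counter
from itertools import product

# A rooted tree as a nested tuple of its children (a leaf is the empty tuple);
# a forest as a sorted tuple of trees. decompose(t) lists all splittings
# t ; (forest of cut branches, proper subtree); the empty proper subtree is
# represented by None.
LEAF = ()

def decompose(t):
    pairs = [((t,), None)]                # trim the whole tree off the root
    # keep the root: for each child branch choose any of its own splittings
    for choice in product(*(decompose(c) for c in t)):
        forest = tuple(sorted(sum((f for f, _ in choice), ())))
        kept = tuple(sorted(s for _, s in choice if s is not None))
        pairs.append((forest, kept))
    return pairs

# The corolla (a root with two leaf children) has 5 splittings; the pair
# ({leaf}, root-with-one-leaf) occurs twice: a section coefficient of 2.
corolla = (LEAF, LEAF)
d = Counter(decompose(corolla))
assert sum(d.values()) == 5
assert d[((LEAF,), (LEAF,))] == 2
```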
Since the trees T generate the forests F, we extend the decomposition rule to any forest Γ = τ_n ... τ_1 ∈ F using Eq. (54) and obtain τ_n ... τ_1 ; (τᶜ_n ... τᶜ_1, τʳ_n ... τʳ_1), ranging over the proper subtrees τʳ_n ⊂ τ_n, ..., τʳ_1 ⊂ τ_1, which comes down to trimming some of the branches off the whole forest and gathering them in the first component, Γᶜ = τᶜ_n ... τᶜ_1, whilst keeping the rooted parts in the second one, Γʳ = τʳ_n ... τʳ_1. Hence, we will briefly write Γ ; (Γᶜ, Γʳ). Following the construction of Section 4 one obtains a graded Hopf algebra F with the grading given by the number of vertices. The required mappings take the form Γ₂ ∗ Γ₁ = Γ₂ Γ₁, ∆(Γ) = Σ_{Γ ; (Γᶜ, Γʳ)} Γᶜ ⊗ Γʳ, ε(Γ) = 0 for Γ ≠ Ø and ε(Ø) = 1. The algebra of forests F so constructed is commutative but not co-commutative. We remark that this Hopf algebra was first introduced by J. C. Butcher [But72, Bro04] and was recently rediscovered by A. Connes and D. Kreimer [Kre98, CK98] in the context of renormalization in quantum field theory.