A Branch-and-Reduce Algorithm for Finding a Minimum Independent Dominating Set

An independent dominating set D of a graph G = (V, E) is a subset of vertices such that every vertex in V \ D has at least one neighbor in D and D is an independent set, i.e. no two vertices of D are adjacent in G. Finding a minimum independent dominating set in a graph is an NP-hard problem. Whereas it is hard to cope with this problem using parameterized and approximation algorithms, there is a simple exact O(1.4423^n)-time algorithm solving the problem by enumerating all maximal independent sets. In this paper we improve the latter result, providing the first non-trivial algorithm computing a minimum independent dominating set of a graph in time O(1.3569^n). Furthermore, we give a lower bound of Ω(1.3247^n) on the worst-case running time of this algorithm, showing that the running time analysis is almost tight.


Introduction
In recent years, interest in the design of exact exponential-time algorithms has grown significantly. Several nice surveys have been written on this subject. In his first survey [34], Woeginger presents the major techniques used to design exact exponential-time algorithms. We also refer the reader to the survey of Fomin et al. [13] discussing some more recent techniques for the design and the analysis of exponential-time algorithms. In particular, they discuss Measure & Conquer and lower bounds.
In a graph G = (V, E), a subset of vertices S ⊆ V is independent if no two vertices of S share an edge, and S is dominating if every vertex from V \ S has at least one neighbor in S. In the Maximum Independent Set problem (MIS), the input is a graph and the task is to find a largest independent set in this graph. In the Minimum Dominating Set problem (MDS), the input is a graph and the task is to find a smallest dominating set in this graph. A natural and well studied combination of these two problems asks for a subset of vertices of minimum cardinality that is both independent and dominating. This problem is called Minimum Independent Dominating Set (MIDS). It is also known as Minimum Maximal Independent Set, since every independent dominating set is a maximal independent set. Whereas there has been a lot of work on MIS and MDS in the field of exact algorithms, the best known exact algorithm for MIDS -prior to our work -trivially enumerates all maximal independent sets.
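To make these notions concrete, the following Python sketch (ours, not part of the paper) checks the two defining properties of an independent dominating set and realizes the trivial exact algorithm by enumerating subsets by increasing size; it serves only as a reference point for the faster algorithms discussed below.

```python
from itertools import combinations

def is_independent_dominating_set(adj, D):
    """Check the two defining properties for a candidate set D in the
    graph given by adjacency sets adj: vertex -> set of neighbors."""
    D = set(D)
    # Independence: no two vertices of D are adjacent.
    if any(v in adj[u] for u, v in combinations(D, 2)):
        return False
    # Domination: every vertex outside D has a neighbor in D.
    return all(adj[v] & D for v in adj if v not in D)

def minimum_ids_brute_force(adj):
    """Trivial exact algorithm: try vertex subsets by increasing size."""
    vertices = sorted(adj)
    for size in range(len(vertices) + 1):
        for cand in combinations(vertices, size):
            if is_independent_dominating_set(adj, cand):
                return set(cand)
    return None
```

On the path a-b-c-d, for instance, {a, c} is returned: {b} alone would be independent but leaves d undominated.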
Known results. The MIS problem was among the first problems shown to be NP-hard [16]. It is known that a maximum independent set of a graph on n vertices can be computed in O(1.4423^n) time by combining a result due to Moon and Moser, who showed in 1965 that the number of maximal independent sets of a graph is upper bounded by 3^{n/3} [27] (see also [25]), and a result due to Johnson, Yannakakis and Papadimitriou, providing in [22] a polynomial delay algorithm to generate all maximal independent sets. Moreover, many exact algorithms for this problem have been published, starting in 1977 with an O(1.2600^n) algorithm by Tarjan and Trojanowski [31]. To date, the fastest known exponential space algorithms for MIS have been designed by Robson. His algorithm from 1986 [29] has running time O(1.2108^n) and his unpublished computer-generated algorithm from 2001 [30] has running time O(1.1889^n). Among the currently leading polynomial space algorithms, there is a very simple algorithm with running time O(1.2210^n) by Fomin et al. [11,14] from 2006, an O(1.2132^n) time algorithm by Kneis et al. [23] from 2009, and a very recent O(1.2127^n) time algorithm by Bourgeois et al. [3].
The MDS problem is also well known to be NP-hard [16]. Until 2004, the only known exact exponential-time algorithm to solve MDS trivially enumerates the 2^n subsets of vertices.
The year 2004 saw a particular interest in providing some faster algorithms for solving this problem. Indeed, three papers with exact algorithms for MDS were published. In [15] Fomin et al. present an O(1.9379^n) time algorithm, in [28] Randerath and Schiermeyer establish an O(1.8899^n) time algorithm and Grandoni [20] obtains an O(1.8026^n) time algorithm.
In 2005, Fomin et al. [12,14] use the Measure & Conquer approach to obtain an algorithm with running time O(1.5263^n) using polynomial space. By applying a memorization technique they show that this running time can be reduced to O(1.5137^n) when allowing exponential space usage. Van Rooij and Bodlaender [32] further improved the polynomial-space algorithm to O(1.5134^n) and the exponential-space algorithm to O(1.5063^n). By now, the fastest published algorithm is due to Van Rooij et al. In [33], they provide an O(1.5048^n)-time algorithm needing exponential space that solves the more general counting version of MDS, i.e. the problem of computing the number of distinct minimum dominating sets.
It is known that a minimum independent dominating set (a mids, for short) can be found in polynomial time for several graph classes like interval graphs [5], chordal graphs [10], cocomparability graphs [24] and AT-free graphs [4], whereas the problem remains NP-complete for bipartite graphs [6] and comparability graphs [6]. Concerning approximation results, Halldórsson proved in [21] that there is no constant ε > 0 such that MIDS can be approximated within a factor of n^{1−ε} in polynomial time, assuming P ≠ NP. The same inapproximability result even holds for circle graphs and bipartite graphs [8].
The problem has also been considered from the viewpoint of parameterized approximability. Downey et al. [9] have shown that it is W[2]-hard to approximate k-Independent Dominating Set within a factor g(k), for any computable function g(k) ≥ k. In other words, unless W[2] = FPT, there is no algorithm with running time f(k) · n^{O(1)} (where f(k) is any computable function independent of n) which either asserts that there is no independent dominating set of size at most k for a given graph G, or otherwise asserts that there is one of size at most g(k), for any computable function g(k) ≥ k.
The first exponential-time algorithm for MIDS was observed by Randerath and Schiermeyer [28]. They use the result due to Moon and Moser [27], as explained previously, and an algorithm enumerating all the maximal independent sets to obtain an O(1.4423^n) time algorithm for MIDS. In 2006, an earlier conference version of this paper claimed an O(1.3575^n) time algorithm [18]. However, a flaw concerning the main reduction rule was discovered by the authors and is repaired in the present paper. Very recently, Bourgeois et al. [2] proposed a branch-and-reduce O(1.3417^n) time algorithm, reusing several of the ideas introduced in [18].

Our results. In this paper we present an O(1.3569^n) time algorithm for solving MIDS, using the Measure & Conquer approach to analyze its running time. As the bottleneck of the algorithm in [28] is the handling of vertices of degree two, we develop several methods to treat them more efficiently, such as marking some vertices and a reduction, described in Subsection 3.1, to a constraint satisfaction problem. Combined with some elaborate branching rules, this enables us to shrewdly lower-bound the progress made by the algorithm at each branching step, and thus to obtain a polynomial-space algorithm with running time O(1.3569^n). Furthermore, we obtain a very close lower bound of Ω(1.3247^n) on the running time of our algorithm, which is very rare for non-trivial exponential-time algorithms.

This paper is organized as follows. In Section 2, we introduce the necessary concepts and definitions. Section 3 presents the algorithm for MIDS. We prove its correctness and an upper bound on its worst-case running time in Section 4. In Section 5, we establish a lower bound on its worst-case running time, which is very close to the upper bound, and we conclude with Section 6.
Preliminaries

In a branch-and-reduce algorithm, a solution for the current problem instance is computed by recursing on smaller subinstances such that an optimal solution, if one exists, is computed for at least one subinstance. If the algorithm considers only one subinstance in a given case, we speak of a reduction rule, otherwise of a branching rule.
Consider a vertex u ∈ V of degree two with two non-adjacent neighbors v_1 and v_2. In such a case, a branch-and-reduce algorithm will typically branch into three subcases when considering u: either u, or v_1, or v_2 is in the solution set. In the third branch, one can additionally assume that v_1 is not in the solution set, as the second branch considers all solution sets containing v_1. In order to memorize that v_1 is not in the solution set but still needs to be dominated, we mark v_1.

Remark. It is possible that such an independent dominating set does not exist in a marked graph, for example if some marked vertex has no free neighbor.
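The following self-contained Python sketch (our illustration; all function names are ours) implements exactly this three-way branch on marked graphs, falling back to brute force when no suitable degree-2 vertex exists. A marked vertex may not enter the solution but must still be dominated, and a branch in which some marked vertex loses its last free neighbor is pruned as infeasible, as in the Remark above.

```python
from itertools import combinations

def mids_marked(free, marked, adj):
    """Size of a minimum independent dominating set of a marked graph,
    or None if none exists.  free/marked are disjoint vertex sets and
    adj maps each vertex to its set of neighbors."""
    if not free and not marked:
        return 0
    # Branch on a free vertex u with exactly two non-adjacent free
    # neighbors v1, v2: every solution contains u, v1 or v2.
    for u in sorted(free):
        fn = sorted(adj[u] & free)
        if len(fn) == 2 and fn[1] not in adj[fn[0]]:
            v1, v2 = fn
            branches = [take(u, free, marked, adj, set()),
                        take(v1, free, marked, adj, set()),
                        # v2 in the solution, v1 excluded but marked:
                        take(v2, free, marked, adj, {v1})]
            results = [r for r in branches if r is not None]
            return min(results) if results else None
    return brute_force(free, marked, adj)

def take(w, free, marked, adj, to_mark):
    """Put w in the solution: delete N[w], mark to_mark, then recurse."""
    gone = adj[w] | {w}
    nfree, nmarked = free - gone - to_mark, (marked | to_mark) - gone
    # A marked vertex without free neighbors can no longer be dominated.
    if any(not (adj[m] & nfree) for m in nmarked):
        return None
    sub = mids_marked(nfree, nmarked, adj)
    return None if sub is None else 1 + sub

def brute_force(free, marked, adj):
    for r in range(len(free) + 1):
        for cand in map(set, combinations(sorted(free), r)):
            if any(v in adj[x] for x, v in combinations(sorted(cand), 2)):
                continue
            if all(adj[v] & cand for v in (free - cand) | marked):
                return r
    return None
```

Note how the third branch both deletes N[v2] and marks v1, so that the subinstance remembers v1's pending domination requirement.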
Finally, we introduce the notion of an induced marked subgraph.

Computing a mids on Marked Graphs
In this section we present an algorithm solving MIDS on marked graphs, assuming that no marked vertex has F-degree larger than 4.
From the previous definitions it follows that a subset D ⊆ V is a mids of a graph G′ = (V, E) if and only if D is a mids of the marked graph G = (V, ∅, E). Hence the algorithm of this section is able to solve the problem on simple graphs as well. Also due to the definitions, edges incident to two marked vertices are irrelevant; throughout this paper we assume that there are no such edges.
Given a marked graph G = (F, M, E), consider the graph G[F] induced by its free vertices. In the following subsection we consider the special case when G[F] is a disjoint union of cliques with some additional properties.

G[F] is a disjoint union of cliques
Assume in this subsection that the graph G[F] is a disjoint union of cliques such that:
• each clique has size at most 4, and
• each marked vertex has at most 4 free neighbors.
We will transform this instance G = (F, M, E) of MIDS into an instance (X, D, C) of the Constraint Satisfaction Problem (CSP). Let us briefly recall some definitions about CSP. Given a finite set X = {x_1, x_2, ..., x_n} of n variables over domains D(x_i), 1 ≤ i ≤ n, and a set C of q constraints, CSP asks for an assignment of values to the variables such that each variable is assigned a value from its domain and all the constraints are satisfied. In (d, p)-CSP, each domain has size at most d and each constraint involves at most p variables; the question is whether there is a function f assigning to each variable x_i a value f(x_i) ∈ D(x_i) satisfying all constraints of C.

Given a marked graph G = (F, M, E) fulfilling the previous conditions, we describe the construction of a (4, 4)-CSP instance. Since marked vertices never belong to the solution and every free vertex must be dominated, every mids of G contains exactly one vertex of each clique of G[F]. We therefore introduce one variable per clique of G[F]; its domain is the vertex set of this clique, so that an assignment selects in each clique the vertex joining the solution. Finally, each marked vertex u_i leads to a constraint of the set C over the variables of the cliques containing a neighbor of u_i; the constraint is satisfied if and only if at least one selected vertex is a neighbor of u_i. Due to the conditions on the given marked graph, the size of the domain of each variable is at most 4 and the number of variables involved in each constraint is at most 4.
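Under the assumptions of this subsection, the construction can be sketched as follows (our Python illustration; a naive exhaustive solver stands in for the dedicated (2, 4)-CSP machinery discussed next, and all names are ours). Independence of the selected set is automatic: cliques are disjoint and there are no free-free edges between different cliques.

```python
from itertools import product

def cliques_to_csp(free_cliques, marked, adj):
    """One variable per clique of G[F], whose domain is the clique's
    vertex set (every mids picks exactly one vertex per clique); one
    constraint per marked vertex, demanding that the vertex chosen in
    some clique containing one of its neighbors actually be a neighbor.
    Arities stay <= 4 by the assumptions of this subsection."""
    domains = [sorted(c) for c in free_cliques]
    constraints = []
    for m in marked:
        scope = [i for i, c in enumerate(free_cliques) if c & adj[m]]
        constraints.append((m, scope))
    return domains, constraints

def solve_csp(domains, constraints, adj):
    """Naive exhaustive solver: try every assignment (one vertex per
    clique) and check that each marked vertex is dominated."""
    for choice in product(*domains):
        if all(any(choice[i] in adj[m] for i in scope)
               for m, scope in constraints):
            return set(choice)   # the chosen vertices form a mids
    return None
```

For example, with cliques {a, b} and {c} and one marked vertex m adjacent only to a, the only satisfying choice is {a, c}.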
We now use the following theorem of Angelsmark [1], showing that it is possible to restrict our attention to (2, 4)-CSP. The constructive proof of this theorem shows how to transform a (d, p)-CSP instance on n variables into a set of (e, p)-CSP instances on at most n variables each, such that the (d, p)-CSP instance has a solution if and only if at least one of the (e, p)-CSP instances has a solution. The number of (e, p)-CSP instances of this construction is bounded by ∏_{i>e} (i/e + ε)^{n_i} ≤ (d/e + ε)^n, where n_i is the number of variables with domain size i in the (d, p)-CSP instance and ε > 0 can be taken arbitrarily small.
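The flavor of this construction can be conveyed by the following cruder Python sketch (ours): it covers every large domain with blocks of size at most 2, producing ceil(i/2) restricted subinstances per variable of domain size i — slightly more than Angelsmark's (i/2 + ε) factor for odd i. Since the blocks cover each domain, the original instance is satisfiable if and only if at least one generated subinstance is.

```python
from itertools import product

def split_domains(domains, target=2):
    """Enumerate restricted instances whose domains all have size at
    most `target`, by covering each larger domain with blocks of size
    at most `target`.  The number of generated instances is the product
    of ceil(len(dom)/target) over the large domains."""
    blocks = []
    for dom in domains:
        if len(dom) <= target:
            blocks.append([dom])
        else:
            blocks.append([dom[j:j + target]
                           for j in range(0, len(dom), target)])
    for combo in product(*blocks):
        yield list(combo)
```

A (4, p)-CSP variable thus splits into 2 boolean-like halves, and each resulting instance can be handed to a (2, p)-CSP solver.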
We use this construction to transform our (4, 4)-CSP instance into a set of ∏_{i>2} (i/2 + ε)^{N_i} (2, 4)-CSP instances, where N_i is the number of cliques of size i in G[F]. Then, it is not hard to see that there exists a mids for G if and only if at least one of the (2, 4)-CSP instances has an assignment of the variables which satisfies all the constraints of this CSP instance: given a satisfying assignment f of such a CSP instance, the set of vertices selected by f (one vertex per clique) is a mids of G. We obtain the following theorem.
Theorem 5. Let G = (F, M, E) be a marked graph fulfilling the conditions of this subsection. A mids of G can be computed, or its non-existence decided, in time ∏_{i>2} (i/2 + ε)^{N_i} · T(n) · n^{O(1)}, where N_i is the number of cliques of size i in G[F] and T(n) is the running time needed to solve a (2, 4)-CSP instance on n variables, for any ε > 0.
The theorem can be combined with the following result of Moser and Scheder [26], providing an algorithm for solving (2, 4)-CSP.

Theorem 6 ([26]). For any ε > 0, a (2, 4)-CSP instance on n variables can be solved in time (1.5 + ε)^n · n^{O(1)}.

Corollary 7. A mids of a marked graph fulfilling the conditions of this subsection can be computed within the running time obtained by combining the two previous theorems, or it can be decided within the same running time that the marked graph has no mids, for any ε > 0.
We remark that the procedure of Corollary 7 will not be a bottleneck in the final running time analysis of our algorithm, even if we use the 1.6^n · n^{O(1)} time algorithm by Dantsin et al. [7] to solve the (2, 4)-CSP instances instead of Theorem 6.

The Algorithm
In this subsection, we give Algorithm ids computing the size of a mids of a marked graph. Although the number of branching rules is quite large, it is fairly simple to check that the algorithm computes the size of a mids (if one exists). It is also not difficult to transform ids into an algorithm that actually outputs a mids. In the next section we prove the correctness and give a detailed analysis of the running time of Algorithm ids.
Once it has selected a vertex u, the algorithm makes recursive calls (that is, it branches) on subinstances of the marked graph. There are different ways the algorithm branches and we give the most common ones now. Let v_1, ..., v_{d_F(u)} denote the free neighbors of u, ordered by increasing F-degree.

The branching procedure branch_all(G, u) explores all possibilities that u or a free neighbor of u is in the solution set; it returns the minimum solution size found over these d_F(u) + 1 subinstances. The branching procedure branch_mark(G, u) additionally makes sure that the free neighbors of u are considered by increasing F-degree: when considering the possibility that v_i is in the solution set, it marks all vertices v_j, j < i, and it returns the minimum solution size over its subinstances. Finally, the branching procedure branch_one(G, u) considers the two possibilities where u is in the solution set or where u is not in the solution set; in the recursive call corresponding to the second possibility, u is marked. The procedure returns the minimum of the two results.

The branching procedure branch_all is favored over branch_mark if branch_mark would create marked vertices of F-degree at least 5. Thus, starting with a graph where all the marked vertices have F-degree at most 4, Algorithm ids will keep this invariant. This property allows us to use the procedure described in the previous subsection whenever the graph induced by the free vertices is a collection of cliques of size at most 4. The correctness and running time analysis of ids are described in the next section.
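A minimal executable rendering of the three branching procedures (our sketch; a brute-force solver plays the role of the recursive calls to ids, and all helper names are ours):

```python
from itertools import combinations

def take_vertex(free, marked, adj, w, to_mark):
    """Put free vertex w in the solution: delete N[w] and mark to_mark.
    Returns the reduced (free, marked) pair, or None if some marked
    vertex loses its last free neighbor (the branch is infeasible)."""
    gone = adj[w] | {w}
    nfree, nmarked = free - gone - to_mark, (marked | to_mark) - gone
    if any(not (adj[m] & nfree) for m in nmarked):
        return None
    return nfree, nmarked

def mids_size(free, marked, adj):
    """Brute-force solver standing in for the recursive calls to ids."""
    for r in range(len(free) + 1):
        for cand in map(set, combinations(sorted(free), r)):
            if any(v in adj[x] for x, v in combinations(sorted(cand), 2)):
                continue
            if all(adj[v] & cand for v in (free - cand) | marked):
                return r
    return None

def best(subinstances, adj):
    """1 + minimum solution size over the feasible subinstances."""
    vals = []
    for sub in subinstances:
        if sub is not None:
            r = mids_size(*sub, adj)
            if r is not None:
                vals.append(1 + r)
    return min(vals, default=None)

def free_neighbors(free, adj, u):
    # free neighbors ordered by increasing F-degree (ties by name)
    return sorted(adj[u] & free, key=lambda v: (len(adj[v] & free), v))

def branch_all(free, marked, adj, u):
    """u or any free neighbor of u goes into the solution."""
    opts = [u] + free_neighbors(free, adj, u)
    return best([take_vertex(free, marked, adj, w, set()) for w in opts], adj)

def branch_mark(free, marked, adj, u):
    """Like branch_all, but taking the i-th neighbor marks all earlier ones."""
    nbrs = free_neighbors(free, adj, u)
    subs = [take_vertex(free, marked, adj, u, set())]
    subs += [take_vertex(free, marked, adj, v, set(nbrs[:i]))
             for i, v in enumerate(nbrs)]
    return best(subs, adj)

def branch_one(free, marked, adj, u):
    """Either u is in the solution, or u is excluded and marked."""
    vals = []
    r = best([take_vertex(free, marked, adj, u, set())], adj)
    if r is not None:
        vals.append(r)
    out = mids_size(free - {u}, marked | {u}, adj)
    if out is not None:
        vals.append(out)
    return min(vals, default=None)
```

On a cycle C5, for example, all three procedures applied to any vertex return the optimum 2, since branch_mark's extra marking only prunes subinstances already covered by earlier branches.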

Correctness and Analysis of the Algorithm
In our analysis, we assign so-called weights to the free vertices. Free vertices having only marked neighbors can be handled without branching; hence, it is an advantage when the F-degree of a vertex decreases. The weights of the free vertices will therefore depend on their F-degree. Let n_i denote the number of free vertices having F-degree i. For the running time analysis we consider the following measure of the size of G:

k = k(G) = ∑_{i ≥ 0} w_i n_i,

with the weights w_i ∈ [0, 1]. In order to simplify the running time analysis, we make the following assumptions:
• w_0 = 0, and
• w_i = w_3 = 1 for all i ≥ 3.

Theorem 8. Algorithm ids computes a minimum independent dominating set of a graph on n vertices in time O(1.3569^n).

Proof. An instance I is atomic if Algorithm ids does not make a recursive call on input I. Let P[k] denote the maximum number of atomic subinstances recursively processed to compute a solution for an instance of size k. As the time spent in each call of ids, excluding the time spent by the corresponding recursive calls, is polynomial, except for Case (4), it is sufficient to show that for a valid choice of the weights, P[k] = O(1.35684^k), and that the time spent in Case (4) does not exceed P[k]. Each recursive call made by the algorithm is on an instance with at least one edge fewer, which means that the running time of ids can be upper bounded by P[k] times a polynomial factor. Moreover, as no reduction or branching rule increases k, P[k] can be bounded by analyzing recurrences based on the measure of the created subinstances in those cases where the algorithm makes at least 2 recursive calls. We analyze these cases one by one.
Case (1) A marked vertex that has no free neighbor cannot be dominated. Thus, such an instance has no independent dominating set.
Case (2) In this case, G[F] is a disjoint union of cliques and u is a vertex from a clique of size ℓ ≥ 6 in G[F]. The branching branch_all(G, u) creates ℓ subinstances whose measure is bounded by k − ℓw_3. The corresponding recurrence relation is P[k] ≤ ℓ P[k − ℓw_3]. For ℓ ≥ 6, the tightest of these recurrences is the one with ℓ = 6: P[k] ≤ 6 P[k − 6w_3].

Case (3) In this case, G[F] is a disjoint union of cliques and u is a vertex from a clique of size 5 in G[F]. The branching branch_one(G, u) creates 2 subinstances whose measure is bounded by k − 5w_3 and k − w_3, respectively. Note that the marked vertex which is created in the second branch has F-degree 4. The corresponding recurrence is P[k] ≤ P[k − 5w_3] + P[k − w_3].

Case (4) The graph induced by the free vertices is a disjoint union of cliques of size at most 4. Corollary 7 is applied to the remaining marked graph; note that the number n_i of vertices of F-degree i, 1 ≤ i ≤ 3, in this graph satisfies n_i ≤ μ/w_i ≤ n/w_i, with n_1 + n_2 + n_3 ≤ n.
Case (5) A marked vertex u with exactly one free neighbor v must be dominated by v. Thus, v is added to the mids and all its neighbors are deleted.

Case (6) If there is a subset B of free vertices such that G[B] induces a complete bipartite graph and no vertex of B is adjacent to a free vertex outside B, then the algorithm branches into two subcases. Let X and Y be the two maximal independent sets of G[B]. Then a mids contains either X or Y. In both cases we delete B and the marked neighbors of either X or Y. The smallest possible subset B satisfying the conditions of this case is a P_3, that is, a path on three vertices, as |B| > 2. Note that all smaller complete bipartite graphs are cliques and would be handled by Case (4). Since we only count the free vertices, and a P_3 consists of two vertices of F-degree 1 and one of F-degree 2, we obtain the recurrence P[k] ≤ 2 P[k − 2w_1 − w_2]. It is clear that any complete bipartite component with more than three vertices would lead to a better recurrence.

Case (7) If there is a subset C of three free vertices which form a clique and exactly one vertex v ∈ C has free neighbors outside C, the algorithm either includes v in the solution set or excludes it. In the first branch, all the neighbors of v are deleted (including C). In the second branch, note that v is not marked: v's F-degree might be too high to allow marking, and v's neighborhood contains a clique component in G[F] of which one vertex is in every independent dominating set of the resulting marked graph, making the marking of v superfluous. We distinguish two cases based on the number of free neighbors of some free vertex u ∈ N(v) \ C.
1. Vertex u has one free neighbor. In the first branch, all of N[v] is deleted; this includes v, the two other clique vertices of C, and u. In the second branch, v is removed, u's F-degree decreases to 0, and the F-degree of both vertices in C \ {v} decreases to 1. This gives the recurrence P[k] ≤ P[k − w_1 − 2w_2 − w_3] + P[k − (w_3 + 2w_2 − w_1)].

2. Vertex u has F-degree at least 2. Then the first branch deletes a vertex of weight at least w_2 instead of w_1, and in the second branch u's F-degree decreases by one; we obtain a corresponding recurrence.

Case (8) If there is a free vertex u such that d_F(u) = 1, a mids either includes u or its free neighbor v_1. Vertex v_1 cannot have F-degree one, as this would contradict the first choice criterion (a) of u. For the analysis, we consider two cases:

1. d_F(v_1) = 2. Let x_1 denote the other free neighbor of v_1. Note that d_F(x_1) ≠ 1, as this would have been handled by Case (6). We consider again two subcases:

(a) d_F(x_1) = 2. When u is chosen in the independent dominating set, u and v_1 are deleted and the F-degree of x_1 decreases to one. When v_1 is chosen in the independent dominating set, u, v_1 and x_1 are deleted from the marked graph. So, we obtain the following recurrence for this subcase: P[k] ≤ P[k − 2w_2] + P[k − w_1 − 2w_2].

(b) d_F(x_1) ≥ 3. Vertices u and v_1 are deleted in the first branch, and u, v_1 and x_1 are deleted in the second branch. The recurrence for this subcase is P[k] ≤ P[k − w_1 − w_2] + P[k − w_1 − w_2 − w_3].

2. d_F(v_1) ≥ 3. At least one free neighbor of v_1 has F-degree at least 2, otherwise Case (6) would apply. This yields the recurrence for this subcase.

Case (9) If there is a free vertex u such that d_F(u) = 2 and u has a neighbor of F-degree at most 4 (as the neighbors v_1, v_2 of u are ordered by increasing F-degree, v_1 has F-degree at most 4), the algorithm uses branch_mark(G, u) to branch into three subcases: either u belongs to the mids, or v_1 is taken in the mids, or v_1 is marked and v_2 is taken in the mids. We distinguish three cases:

1. d_F(v_1) = d_F(v_2) = 2. In this case, due to the choice of the vertex u by the algorithm, all free vertices of the connected component T of u in G[F] have F-degree 2. T cannot be a C_4 (a cycle on 4 vertices), as this is a complete bipartite graph and would have been handled by Case (6).
In the branches where u or v_1 belongs to the mids, the three free vertices in N[u] or N[v_1] are deleted and two of their neighbors (T is a cycle on at least 5 vertices) have their F-degree reduced from 2 to 1. In the branch where v_1 is marked and v_2 is added to the mids, N[v_2] is deleted and, by Case (5), the other neighbor x_1 of v_1 is added to the mids, resulting in the deletion of N[x_1] as well. In total, at least 5 free vertices of F-degree 2 are deleted in the third branch. Thus, we have the recurrence P[k] ≤ 2 P[k − 3w_2 − 2(w_2 − w_1)] + P[k − 5w_2] for this case.
2. d_F(v_1) = 2 and d_F(v_2) ≥ 3. The vertices v_1 and v_2 are not adjacent, otherwise Case (7) would apply. In the last branch, v_1 is marked and v_2 is added to the solution. If v_1 and v_2 have a common neighbor besides u, then the last branch is atomic, because Case (1) applies as no vertex can dominate v_1. Otherwise, the reduction rule of Case (5) applies in the last branch and the other neighbor x_1 ≠ u of v_1 is added to the solution as well. Thus, we have a corresponding recurrence.

3. 3 ≤ d_F(v_1) ≤ 4. We distinguish between two cases depending on whether there is an edge between v_1 and v_2.

(a) v_1 and v_2 are not adjacent. Branching on u, v_1 and v_2 leads to a corresponding recurrence.

(b) v_1 and v_2 are adjacent. We distinguish two subcases, each yielding its own recurrence: either (i) there is a degree-2 vertex in N_2(u), or (ii) no vertex in N_2(u) has degree 2.

Case (10) If there is a free vertex u such that d_F(u) = 2 and none of the above cases apply, then v_1 and v_2 have degree at least 5 and the algorithm branches into the three subinstances of branch_all(G, u): either u, v_1, or v_2 belongs to the mids.

Case (11) If all neighbors of u have degree 3, then the connected component of u in G[F] is 3-regular due to the selection criteria of u. As (by criterion (a)) this component is not a clique, the graph induced by N_F(v) has at most one edge. This means that there are at least 4 edges with one endpoint in N_F(v) and the other endpoint in N^2_F(v). If |N^2_F(v)| = 2 or |N^2_F(v)| = 4, the branching branch_one(G, v) yields a recurrence for each of the two cases; if |N^2_F(v)| = 3, the resulting recurrence is a mixture of the above two and is majorized by one or the other.

Case (12) If u has a neighbor v of F-degree 4, then the algorithm uses the branching procedure branch_one(G, v). If v is taken in the mids, 5 vertices of degree at least 3 are removed from the instance. If v is marked, the F-degree of u decreases from 3 to 2. The corresponding recurrence is P[k] ≤ P[k − 5w_3] + P[k − w_3 − (w_3 − w_2)].

Case (13) If u has a neighbor v of F-degree 5, then the algorithm either takes u in the mids, or v, or it marks both u and v (note that v will then have F-degree 4), yielding a corresponding recurrence.

Case (14) In this case, N_F[u] is a clique and v_3 is the only vertex from this clique that has free neighbors outside N_F[u]. The algorithm either takes v_3 in the mids or deletes it. Note that N_F(v_3) includes a clique and that any mids of G[F \ {v_3}, M] contains one vertex from this clique, which makes the marking of v_3 superfluous.
Case (15) We distinguish two cases based on the neighborhood of v_3.
1. v_3 is adjacent to v_1 and v_2. Then v_1 is not adjacent to v_2, otherwise Case (14) would apply.
In the second branch, v_2's F-degree drops to 1 and, in the third branch, v_1's neighbor in N^2_F(u) is also selected by Case (5). This gives the recurrence for this subcase.

2. v_3 is not adjacent to v_1 or to v_2. In the last branch, 7 vertices are deleted and one vertex is marked, giving another recurrence.

Case (16) In this case, u has at least two neighbors of degree at least 6, and the branching branch_all(G, u) yields a corresponding recurrence.

Case (17) If u has degree 4, the algorithm branches along branch_one(G, u), again giving a recurrence.

Case (18) If u has degree ℓ ≥ 5, the algorithm branches along branch_all(G, u). The corresponding recurrence is P[k] ≤ (ℓ + 1) P[k − (ℓ + 1)w_3], the tightest of which is obtained for ℓ = 5: P[k] ≤ 6 P[k − 6w_3].

Finally, the values of the weights are computed with a convex optimization program [19] (see also [17]) to minimize the bound on the running time. Using the values w_1 = 0.8482 and w_2 = 0.9685 for the weights, one can verify that P[k] = O(1.35684^k). In particular, by this choice of the weights, the running time required by Corollary 7 to solve the CSP instance whenever Case (4) applies does not exceed this bound.

In order to analyze the progress of the algorithm during the computation of a mids, we used a non-standard measure. In this way we have been able to determine an upper bound on the size of the subinstances recursively processed by the algorithm, and consequently we obtained an upper bound on the worst-case running time of Algorithm ids. However, the use of another measure or a different method of analysis could perhaps provide a better upper bound without changing the algorithm, only by improving the analysis.
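As a sanity check, the branching factors of the two recurrences spelled out explicitly above (Cases (2)/(18) and Case (3)) can be evaluated numerically with the stated weights. This is our own Python sketch, with w_3 = 1 taken from the weight assumptions; the branching factor of P[k] ≤ Σ_i P[k − d_i] is the unique root x > 1 of Σ_i x^{-d_i} = 1.

```python
def branching_factor(deltas, lo=1.0, hi=4.0, iters=200):
    """Unique root x > 1 of 1 = sum(x**(-d) for d in deltas), i.e. the
    base c of the O(c^k) solution of P[k] <= sum_i P[k - d_i]."""
    f = lambda x: sum(x ** (-d) for d in deltas) - 1.0
    for _ in range(iters):                      # bisection; f decreases
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2.0

w1, w2, w3 = 0.8482, 0.9685, 1.0                # w3 = 1 by assumption
cases = [[6 * w3] * 6,                          # Cases (2)/(18)
         [5 * w3, 1 * w3]]                      # Case (3)
print(max(branching_factor(d) for d in cases))  # stays below 1.35684
```

The first recurrence evaluates to 6^{1/6} ≈ 1.348, already close to the overall bound, which is why cliques of size 6 are among the tight cases of the analysis.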
A Lower Bound on the Running Time of Algorithm ids

How far is the given upper bound of Theorem 8 from the best upper bound we can hope to obtain? In this section, we establish a lower bound on the worst-case running time of our algorithm. This lower bound gives a good indication of the precision of the analysis. For comparison, in [12] (see also [14]) Fomin et al. obtain an O(1.5263^n) time algorithm for solving the dominating set problem and exhibit a construction of a family of graphs giving a lower bound of Ω(1.2599^n) on its running time. They note that the upper bound of many exponential-time algorithms is likely to be overestimated only due to the choice of the measure for the analysis of the running time, and they point out the gap between the upper and lower bounds for their algorithm. For our algorithm, however, we have the following result.

Theorem 9. The worst-case running time of Algorithm ids is Ω(1.3247^n).

To prove Theorem 9, consider the graph G_l = (V_l, E_l) on the 2l vertices u_1, ..., u_l, v_1, ..., v_l (see Fig. 1). We denote by G′_l = (V, ∅, E) the marked graph corresponding to the graph G_l = (V, E). For a marked graph G = (F, M, E) we define δ_F = min_{u ∈ F} d_F(u) and MinDeg = {u ∈ F : d_F(u) = δ_F} as the set of free vertices with smallest F-degree.
Let CandidateCase9 ⊆ MinDeg be the set of candidate vertices that ids can choose in Case (9). W.l.o.g. suppose that when |CandidateCase9| ≥ 2 and ids applies Case (9), it chooses the vertex with smallest index (e.g. if CandidateCase9 = {u_1, v_l}, the algorithm chooses u_1).
Lemma 10. Let G′_l be the input of Algorithm ids, and suppose that ids only applies Case (9) in each recursive call (with respect to the previous rule for choosing a vertex). Then, in each call of ids where the remaining input graph has more than four vertices, one of the following two properties is fulfilled: (1) there exists an integer k, 1 ≤ k ≤ l − 1, such that CandidateCase9 = {u_k, v_l}, or (2) the analogous property holds for the vertices v_k.

Proof. We prove this result by induction. It is not hard to see that CandidateCase9 = {u_1, v_l} for G′_l, so Property (1) is verified initially. Suppose now that Property (1) is fulfilled. Then there exists an integer k, 1 ≤ k ≤ l − 1, such that CandidateCase9 = {u_k, v_l}. Since ids applies Case (9) respecting the rule for choosing the vertex in CandidateCase9, the algorithm chooses vertex u_k and branches into three subinstances. If we suppose instead that Property (2) is fulfilled, branching on a vertex v_k gives us the same kind of subinstances.

Now, we prove that, on input G′_l, Algorithm ids applies Case (9) as long as the remaining graph has "enough" vertices.
Lemma 11. Given the graph G ′ l as input, as long as the remaining graph has more than four vertices, Algorithm ids applies Case (9) in each recursive call.
Proof. We prove this result also by induction. First, when the input of the algorithm is the graph G′_l, it is clear that none of Cases (1) to (8) can be applied, so Case (9) is applied since CandidateCase9 ≠ ∅ according to Lemma 10. Consider now a graph obtained from G′_l by repeatedly branching using Case (9). By Lemma 10, the remaining graph has no marked vertices (this excludes that Cases (1) and (5) are applied). It has no clique component induced by the set of free vertices, since the graph is connected and there is no edge between u_{l−1} and v_l (this excludes Cases (2)-(4)). The free vertices do not induce a complete bipartite graph, since {v_{l−1}, u_l, v_l} induces a C_3 (this excludes Case (6)). There is no clique C such that only one vertex of C has neighbors outside C: the largest induced clique in the remaining graph has size 3 and each of these cliques has at least two vertices having some neighbors outside the clique (this excludes Case (7)). Also, according to Lemma 10, the remaining graph has no vertex of degree 1 (this excludes Case (8)) and CandidateCase9 ≠ ∅. Consequently, the algorithm applies Case (9). Figure 2 gives a part of the search tree illustrating the fact that our algorithm recursively branches on three subinstances with respect to Case (9).
Proof of Theorem 9. Consider the graph G′_l and the search tree which results from branching using Case (9) until k vertices, 1 ≤ k ≤ 2l, have been removed from the given input graph G′_l (recall that G′_l has 2l vertices). Denote by L[k] the number of leaves in this search tree. It is not hard to see that the three subinstances created in each branching lead to the recurrence L[k] ≥ L[k − 3] + L[k − 4] + L[k − 5], and therefore L[k] ≥ 1.3247^k. Consequently, the maximum number of leaves that a search tree for ids can contain, given an input graph on n vertices, is Ω(1.3247^n).
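The constant 1.3247 is the largest real root of x^5 = x^2 + x + 1, the characteristic polynomial of a three-branch recurrence removing 3, 4 and 5 vertices respectively; the choice of these deltas is our reading of the Case (9) branching on G′_l, consistent with the stated bound. A quick numerical check in Python (ours):

```python
def growth_rate(deltas, lo=1.0, hi=2.0, iters=200):
    """Largest real root of 1 = sum(x**(-d)), the growth rate of a
    search tree obeying L[k] = sum_i L[k - d_i]."""
    f = lambda x: sum(x ** (-d) for d in deltas) - 1.0
    for _ in range(iters):                      # bisection; f decreases
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2.0

# Three branches removing 3, 4 and 5 vertices respectively.
print(round(growth_rate([3, 4, 5]), 4))  # -> 1.3247
```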

Conclusions and Open Questions
In this paper we presented a non-trivial algorithm solving the Minimum Independent Dominating Set problem. Using a non-standard measure of the size of the considered graph, we proved that our algorithm achieves a running time of O(1.3569^n). Moreover, we showed that Ω(1.3247^n) is a lower bound on the running time of this algorithm by exhibiting a family of graphs on which our algorithm needs this much time.
A natural question is whether it is possible to obtain a better upper bound on the running time of the presented algorithm by considering another measure or using other techniques, or whether this upper bound is in fact tight.