Bears with Hats and Independence Polynomials

Consider the following hat guessing game. A bear sits on each vertex of a graph G, and a demon puts on each bear a hat colored by one of h colors. Each bear sees only the hat colors of his neighbors. Based on this information only, each bear has to guess g colors, and he guesses correctly if his hat color is included in his guesses. The bears win if at least one bear guesses correctly for any hat arrangement. We introduce a new parameter arising from the hat guessing game, the fractional hat chromatic number μ̂, which is related to the previously studied hat chromatic number. We present a surprising connection between the hat guessing game and the independence polynomial of graphs. This connection allows us to compute the fractional hat chromatic number of chordal graphs in polynomial time and to bound the fractional hat chromatic number by a function of the maximum degree of G.


Introduction
In this paper, we study a variant of a hat guessing game. In these types of games, there are some entities: players, pirates, sages, or, as in our case, bears. A bear sits on each vertex of a graph G. There is some adversary (a demon in our case) that puts a colored hat on the head of each bear. A bear on a vertex v sees only the hats of the bears on the neighboring vertices of v, but he does not know the color of his own hat. Now, to defeat the demon, the bears should guess correctly the color of their hats. However, the bears can only discuss their strategy before they are given the hats. After they get them, no communication is allowed; each bear can only guess his hat color. The variants of the game differ in the bears' winning condition.
The first variant was introduced by Ebert [8]. In this version, each bear gets a red or blue hat (chosen uniformly and independently) and they can either guess a color or pass. The bears see each other, i.e., they stay on vertices of a clique. They win if at least one bear guesses his color correctly and no bear guesses a wrong color. The question is what is the highest winning probability the bears can achieve by some strategy. Soon, the game became quite popular and it was even mentioned in the NY Times [27].
Winkler [31] studied a variant where the bears cannot pass and the objective is to maximize the number of bears that correctly guess their hat color. A generalization of this variant to more than two colors was studied by Feige [11] and Aggarwal [1]. Butler et al. [6] studied a variant where the bears sit on vertices of a general graph, not only a clique. For a survey of various hat guessing games, we refer to the theses of Farnik [10] and Krzywkowski [22].
In this paper, we study a variant of the game introduced by Farnik [10], where each bear has to guess and the bears win if at least one bear guesses correctly. He introduced the hat guessing number HG of a graph G (also named the hat chromatic number and denoted µ in later works), which is defined as the maximum h such that the bears win the game with h hat colors. We study a variant where each bear can guess multiple times, and a bear guesses correctly if the color of his hat is included in his guesses. We introduce the parameter fractional hat chromatic number μ of a graph G, which we define as the supremum of h/g such that each bear has g guesses and the bears win the game with h hat colors.
Although the hat guessing game looks like a recreational puzzle, connections to more "serious" areas of mathematics and computer science have been shown, such as coding theory [9,19], network coding [14,26], auctions [1], finite dynamical systems [12], and circuits [32]. In this paper, we exhibit a connection between the hat guessing game and the independence polynomial of graphs, which is our main result. This connection allows us to compute the optimal strategy of the bears (and thus the value of μ) for an arbitrary chordal graph in polynomial time. We also prove that the fractional hat chromatic number μ is equal, up to a logarithmic factor, to the maximum degree of a graph, i.e., μ(G) = Ω(∆/log ∆) and μ(G) = O(∆). Finally, we compute the exact value of μ for graphs from some classes, such as paths, cycles, and cliques.
We would like to point out that the existence of the algorithm computing μ of a chordal graph is far from obvious. Butler et al. [6] asked how hard it is to compute µ(G) and the optimal strategy for the bears. Note that a trivial non-deterministic algorithm for computing the optimal strategy (or just the value of µ(G) or μ(G)) needs exponential time, because a strategy of a bear on v is a function of the hat colors of the bears on the neighbors of v (we formally define the strategy in Section 2). It is not clear whether the existence of a strategy for the bears would imply a strategy where each bear computes his guesses by some efficiently computable function (linear, computable by a polynomial circuit, etc.). This would allow us to put the problem of computing µ into some level of the polynomial hierarchy, as noted by Butler et al. [6]. On the other hand, we are not aware of any hardness results for hat guessing games. The maximum degree bound for μ does not imply an exact efficient algorithm computing μ(G) either. This phenomenon can be illustrated by the edge chromatic number χ′ of graphs. By Vizing's theorem [7, Chapter 5], it holds for any graph G that ∆(G) ≤ χ′(G) ≤ ∆(G) + 1. However, it is NP-hard to distinguish between these two cases [18].

Organization of the Paper. We finish this section with a summary of results about the variant of the hat guessing game we are studying. In the next section, we present the notions used in this paper and formally define the hat guessing game. In Section 3, we formally define the fractional hat chromatic number μ and compare it to µ. In Section 4, we generalize some previous results to the multi-guess setting. We use these tools to prove our main result in Section 5, including the polynomial-time algorithm that computes μ for chordal graphs. The maximum degree bound for μ and the computation of exact values for paths and cycles are provided in Section 6.

Related and Follow-up Works
As mentioned above, Farnik [10] introduced the hat chromatic number µ(G) of a graph G as the maximum number of colors h such that the bears win the hat guessing game with h colors played on G. He proved that µ(G) ≤ O(∆(G)), where ∆(G) is the maximum degree of G.
Since then, the parameter µ(G) has been extensively studied. The parameter µ for multipartite graphs was studied by Gadouleau and Georgiou [13] and by Alon et al. [2]. Szczechla [30] proved that µ of a cycle is equal to 3 if and only if the length of the cycle is 4 or divisible by 3 (otherwise it is 2). Bosek et al. [5] gave bounds on µ for some graphs, like trees and cliques. They also provided some connections between µ(G) and other parameters like the chromatic number and degeneracy. They conjectured that µ(G) is bounded by some function of the degeneracy d(G) of the graph G, and showed that such a function has to be at least exponential in the degeneracy; this lower bound was later improved by He and Li [16]. Since μ(G) is lower-bounded by Ω(∆(G)/log ∆(G)) (as we show in Section 6), μ cannot be bounded by any function of degeneracy, as there are graph classes of unbounded maximum degree and bounded degeneracy (e.g., trees or planar graphs). Recently, Kokhas et al. [20,21] studied a non-uniform version of the game, i.e., every bear may have a different number of possible hat colors. They considered cliques and almost cliques. They also provided a technique to build a strategy for a graph G whenever G is made up by combining G1 and G2 with known strategies. We generalize some of their results and use them as "basic blocks" for our main result.
After the presentation of the preliminary version of this paper [4], Latyshev and Kokhas [24] extended the ideas presented in this paper to reason about the standard hat chromatic number. In particular, they found a family of graphs of unbounded maximum degree such that for each graph G in the family it holds that µ(G) = (4/3)∆(G); thus, they disproved the conjecture that µ(G) ≤ ∆(G) + 1, stated by Bosek et al. [5] and Farnik [10] and previously noted by Alon et al. [2].

Preliminaries
We use standard notions of graph theory. For an introduction to this topic, we refer to the book by Diestel [7]. We denote a clique by K_n, a cycle by C_n, and a path by P_n, each on n vertices. The maximum degree of a graph G is denoted by ∆(G); we shorten it to ∆ if the graph G is clear from the context. The neighbors of a vertex v are denoted by N(v). We use N[v] to denote the closed neighborhood of v, i.e., N[v] = N(v) ∪ {v}. For a set U of vertices of a graph G, we denote by G \ U the graph induced by the vertices V(G) \ U, i.e., the graph arising from G by removing the vertices in U.
A hat guessing game is a triple H = (G, h, g) where
• G = (V, E) is an undirected graph, called the visibility graph,
• h ∈ N is a hatness that determines the number of different possible hat colors for each bear, and
• g ∈ N is a guessing number that determines the number of guesses each bear is allowed to make.
The rules of the game are defined as follows. On each vertex of G sits a bear. The demon puts a hat on the head of each bear; each hat has one of h colors. We would like to point out that bears on adjacent vertices are allowed to get hats of the same color. The only information the bear on a vertex v knows is the colors of the hats put on the bears sitting on the neighbors of v. Based on this information only, the bear has to guess a set of g distinct colors according to a deterministic strategy agreed upon in advance. We say a bear guesses correctly if he included the color of his hat in his guesses. The bears win if at least one bear guesses correctly.
Formally, we associate the colors with natural numbers and say that each bear can receive a hat colored by a color from the set S = [h] = {0, . . . , h − 1}. A hat arrangement is a function φ : V → S. A strategy of the bear on v is a function Γ_v : S^{|N(v)|} → S^g, and a strategy for H is a collection of strategies for all vertices, i.e., (Γ_v)_{v∈V}. We say that a strategy is winning if for any possible hat arrangement φ : V → S there exists at least one vertex v such that φ(v) is contained in the image of Γ_v on φ, i.e., φ(v) ∈ Γ_v((φ(u))_{u∈N(v)}). Finally, the game H is winning if there exists a winning strategy of the bears.
As a classical example, we describe a winning strategy for the hat guessing game (K_3, 3, 1). Let us denote the vertices of K_3 by v_0, v_1 and v_2 and fix a hat arrangement φ. For every i ∈ [3], the bear on the vertex v_i assumes that the sum Σ_{j∈[3]} φ(v_j) is equal to i modulo 3 and computes his guess accordingly. It follows that for any hat arrangement φ there is always exactly one bear that guesses correctly, namely the bear on the vertex v_i for i = Σ_j φ(v_j) (mod 3).

Some of our results are stated for a non-uniform variant of the hat guessing game. A non-uniform game is a triple ((V, E), h, g) where h = (h_v)_{v∈V} and g = (g_v)_{v∈V} are vectors of natural numbers indexed by the vertices of G; a bear on v gets a hat of one of h_v colors and is allowed to guess exactly g_v colors. The other rules are the same as in the standard hat guessing game. To distinguish between uniform and non-uniform games, we always use plain letters h and g for the hatness and the guessing number, respectively, and bold letters (e.g., h, g) for vectors indexed by the vertices of G.
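The K_3 strategy above is easy to check exhaustively. The following sketch (the helper name k3_guess is ours) verifies that in every one of the 27 hat arrangements exactly one bear guesses correctly:

```python
from itertools import product

def k3_guess(i, others_sum):
    # Bear on v_i assumes the total sum of all three hats is i (mod 3),
    # so his own hat color must be i - others_sum (mod 3).
    return (i - others_sum) % 3

# Check every hat arrangement: exactly one bear must guess correctly.
for phi in product(range(3), repeat=3):
    correct = [i for i in range(3)
               if k3_guess(i, sum(phi) - phi[i]) == phi[i]]
    assert len(correct) == 1
print("strategy wins every arrangement")
```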
For our proofs, we use two classical results. The first one is the inclusion–exclusion principle for computing the size of a union of sets.

Proposition 1 (folklore) For a union A of sets A_1, . . . , A_n, it holds that
|A| = Σ_{∅≠S⊆[n]} (−1)^{|S|+1} |⋂_{i∈S} A_i|.

The other one is the rational root theorem, which we use to derive an algorithm for computing the exact value of μ if that value is rational.
Theorem 1 (Rational root theorem [23]) If a polynomial a_n x^n + · · · + a_1 x + a_0 has integer coefficients, then every rational root is of the form p/q, where p and q are coprime, p is a divisor of a_0, and q is a divisor of a_n.
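As an illustration of how Theorem 1 is used algorithmically (the function below is a generic sketch, not the paper's algorithm), one can enumerate all candidate roots p/q and test them with exact rational arithmetic:

```python
from fractions import Fraction

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_roots(coeffs):
    """Rational roots of a_n x^n + ... + a_1 x + a_0 with integer
    coefficients (assuming a_0 != 0), found by testing all candidates
    p/q with p | a_0 and q | a_n as in Theorem 1."""
    a0, an = coeffs[-1], coeffs[0]
    cands = {s * Fraction(p, q) for p in divisors(a0)
             for q in divisors(an) for s in (1, -1)}
    return sorted(r for r in cands
                  if sum(c * r ** (len(coeffs) - 1 - i)
                         for i, c in enumerate(coeffs)) == 0)

# x^2 - 3x + 2 = (x - 1)(x - 2)
print(rational_roots([1, -3, 2]))  # [Fraction(1, 1), Fraction(2, 1)]
```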

Fractional Hat Chromatic Number
From hat guessing games, we can derive parameters of the underlying visibility graph G. Namely, the hat chromatic number µ(G) is the maximum integer h for which the hat guessing game (G, h, 1) is winning, i.e., each bear gets a hat colored by one of h colors and each bear has only one guess; we call such a game a single-guessing game. In this paper, we study the parameter fractional hat chromatic number μ(G), which arises from the hat multi-guessing game and is defined as

μ(G) = sup { h/g | h, g ∈ N and the game (G, h, g) is winning }.

Observe that µ(G) ≤ μ(G). Farnik [10] and Bosek et al. [5] also studied multi-guessing games. They considered the parameter µ_g(G), which is the maximum number of colors h such that the bears win the game (G, h, g). The difference between µ_g and μ is the following. If µ_g(G) ≥ k, then the bears win the game (G, k, g) and μ ≥ k/g. If μ(G) ≥ p/q, then there are h, g ∈ N such that p/q = h/g and the bears win the game (G, h, g). However, it does not imply that the bears would win the game (G, p, q). In this section, we prove that if the bears win the game (G, h, g), then they win the game (G, kh, kg) for any constant k ∈ N. The opposite implication does not hold; we discuss a counterexample at the end of this section. Unfortunately, this prevents us from using our algorithm, which computes μ, to compute also µ of chordal graphs.
Moreover, by definition, the parameter μ does not even have to be a rational number. In such a case, for each p, q ∈ N, it holds that
• If p/q < μ(G), then there are h, g ∈ N such that p/q = h/g and the bears win the game (G, h, g).
• If p/q > μ(G), then the demon wins the game (G, p, q).
For example, the fractional hat chromatic number μ(P 3 ) of the path P 3 is irrational.In the case of an irrational μ(G), our algorithm computing the value of μ of chordal graphs outputs an estimate of μ(G) with arbitrary precision.We finish this section with a proof that the multi-guessing game is in some sense monotone.
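The monotonicity argument below (lifting a strategy for (G, h, g) to one for (G, kh, kg)) rests on interpreting each of the kh colors as a pair of a "base" color in [h] and an index in [k]. A minimal sketch of this lifting for a single bear (the helper name lift_guesses is ours):

```python
def lift_guesses(A_v, h, k):
    """Lift the guess set A_v, a subset of [h], of one bear in the game
    (G, h, g) to a guess set in (G, kh, kg): a color c in [kh] is
    identified with the pair (c mod h, c // h), and the bear now
    guesses A_v x [k], i.e., every color whose base color lies in A_v."""
    return {a + b * h for a in A_v for b in range(k)}

# One bear with 2 guesses out of 6 colors lifts to 6 guesses out of 18.
lifted = lift_guesses({1, 4}, h=6, k=3)
assert lifted == {1, 4, 7, 10, 13, 16}
assert all((c % 6) in {1, 4} for c in lifted)
```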
Observation 1 Let H = (G, h, g) be a winning hat guessing game. Then, for any k ∈ N, the game H_k = (G, kh, kg) is winning.

Proof: We derive a winning strategy for the game H_k from a winning strategy for H. Each bear interprets a color c ∈ [kh] as a pair (c mod h, ⌊c/h⌋) ∈ [h] × [k]. Let A_v be the guesses of the bear on v in the game H, computed from the first components of the colors of his neighbors. For the game H_k, the strategy of the bear on v is to make the guesses A_v × [k], i.e., to guess every color whose first component lies in A_v. It is straightforward to verify that this is a winning strategy for H_k. 2

Lemma 1 Let (G, h, g) with G = (V, E) be a winning hat guessing game. Let r′ be a rational number such that r′ ≤ h/g. Then, there exist numbers h′, g′ ∈ N such that h′/g′ = r′ and the hat guessing game (G, h′, g′) is winning.
Proof: Let p, q ∈ N be such that r′ = p/q and GCD(p, q) = 1, and let ℓ = LCM(h, p).(i) By Observation 1 for k = ℓ/h, the game (G, ℓ, kg) is winning. Let h′ = ℓ and g′ = ℓ · q/p; note that g′ ∈ N since p divides ℓ. Since p/q ≤ h/g by the assumption, it holds that g′ = ℓ · q/p ≥ ℓ · g/h = kg. Thus, the bears have a strategy for (G, h′, g′), as we only increased the number of guesses and the hatness did not change (h′ = ℓ). 2

It is straightforward to prove a generalization of Lemma 1 for non-uniform games. However, for simplicity, we state it only for uniform games. By the proof of the previous lemma, we know that we can use a strategy for (G, h, g) to create a strategy for a game with any smaller rational ratio in which the new hatness is a multiple of h. However, it is unclear whether this also holds in general, i.e., given a winning strategy for a fractional ratio h/g, is it always possible to have a winning strategy for a decreased fraction h′/g′ < h/g where the hatness h′ and the guessing number g′ can be changed arbitrarily? It is true for cliques: we show in Section 4 that the bears win the game (K_n, h, g) if and only if h/g ≤ n. However, it is not true in general. For example, for n large enough it holds that μ(P_n) ≥ 3, as we show in Section 6 that μ(P_n) converges to 4 when n goes to infinity. However, Butler et al. [6] proved that µ(T) = 2 for any tree T. Thus, the bears lose the game (P_n, 3, 1).

(i) GCD stands for the greatest common divisor and LCM stands for the least common multiple.
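The arithmetic in the proof of Lemma 1 can be traced numerically; the following sketch (the function name is ours) computes h′ and g′ from h, g and a target ratio p/q:

```python
from math import gcd, lcm

def decrease_ratio(h, g, p, q):
    """Given a winning game (G, h, g) and a target ratio p/q <= h/g
    (with gcd(p, q) == 1), return (h', g') as in the proof of Lemma 1:
    h' = LCM(h, p) and g' = h' * q / p, so that h'/g' = p/q and
    g' >= k*g for k = h'/h, hence (G, h', g') is winning by Observation 1."""
    assert gcd(p, q) == 1 and p * g <= q * h
    ell = lcm(h, p)
    return ell, ell * q // p

# Bears win (K_3, 3, 1); the target ratio 5/2 <= 3 yields a winning game.
print(decrease_ratio(3, 1, 5, 2))  # (15, 6)
```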

Basic Blocks
In this section, we generalize some results of Kokhas et al. [20,21] about cliques and strategies for graph products, which we use for proving our main result.The single-guessing version of the next theorem (without the algorithmic consequences) was proved by Kokhas et al. [20,21].
Theorem 2 Let H = ((V, E), h, g) be a non-uniform hat guessing game where (V, E) is a clique. Then H is winning if and only if Σ_{v∈V} g_v/h_v ≥ 1. Moreover, if there is a winning strategy, then there is a winning strategy (Γ_v)_{v∈V} such that each Γ_v can be described by two linear inequalities whose coefficients can be computed in linear time.
Proof: The proof follows the proof of Kokhas et al. [21] for the single-guessing game. First, suppose that Σ_{v∈V} g_v/h_v < 1 and fix some strategy of the bears. A bear on v guesses the color of his hat correctly in exactly a (g_v/h_v)-fraction of all possible hat arrangements. Thus, if the sum is smaller than one, there is a hat arrangement where no bear guesses the color of his hat correctly.

Now suppose the opposite inequality holds, i.e., Σ_{v∈V} g_v/h_v ≥ 1. Denote the vertices v_1, . . . , v_n, write h_i and g_i for h_{v_i} and g_{v_i}, and let c_i ∈ [h_i] be the color of the hat of the bear on v_i. Let ℓ = LCM(h_1, . . . , h_n) and d_i = ℓ/h_i; the assumption then reads Σ_i d_i · g_i ≥ ℓ. Consider the value s = Σ_i d_i · c_i (mod ℓ), and let s_i = Σ_{j≠i} d_j · c_j (mod ℓ) be the part of this sum that the bear on v_i can compute from the hats he sees. The bears cover the set [ℓ] by disjoint intervals Q_i of length d_i · g_i. A bear on v_i makes his guesses according to the hypothesis that s lies in the interval Q_i, and we will show that he guesses correctly if s ∈ Q_i. More formally, for b_i = Σ_{j<i} d_j · g_j we define the interval Q_i = {b_i, b_i + 1, . . . , b_i + d_i · g_i − 1} (taken modulo ℓ), and the bear on v_i guesses every color c ∈ [h_i] with s_i + c · d_i ∈ Q_i (mod ℓ). Since the values s_i + c · d_i (mod ℓ) for c ∈ [h_i] form an arithmetic progression with step d_i, the interval Q_i has length d_i · g_i, and ℓ is divisible by d_i, he makes at most g_i guesses. If s is in Q_i, then the bear on v_i guesses the color of his hat correctly, because s = s_i + c_i · d_i (mod ℓ) and thus the bear on v_i includes the color c_i in his guesses.

Note that the union Q of all intervals Q_i is exactly the set {0, 1, . . . , Σ_i d_i · g_i − 1} taken modulo ℓ. By the assumption Σ_i d_i · g_i ≥ ℓ, we have that {0, . . . , ℓ − 1} ⊆ Q. Since 0 ≤ s < ℓ by definition, it follows that s has to be in some interval Q_i.

For the "moreover" part, the bear on a vertex v_i guesses all colors c ∈ [h_i] satisfying b_i ≤ s_i + c · d_i (mod ℓ) < b_i + d_i · g_i. Observe that s_i is a linear function of the hat colors of the bears sitting on the vertices different from v_i, and the coefficients b_i and d_j can be computed in linear time. 2

By Theorem 2, we can conclude the following corollary.

Corollary 1 The bears win the game (K_n, h, g) if and only if h/g ≤ n.
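The interval strategy from the proof of Theorem 2 can be implemented directly and checked by brute force on small non-uniform cliques; the code below is a sketch following the proof's notation (ℓ, d_i, b_i, Q_i):

```python
from itertools import product
from math import lcm

def clique_strategy_wins(h, g):
    """Brute-force check of the interval strategy of Theorem 2 on a
    clique with hatnesses h = (h_1, ..., h_n) and guessing numbers g."""
    n, ell = len(h), lcm(*h)
    d = [ell // hv for hv in h]
    b = [sum(d[j] * g[j] for j in range(i)) for i in range(n)]
    for phi in product(*(range(hv) for hv in h)):
        ok = False
        for i in range(n):
            s_i = sum(d[j] * phi[j] for j in range(n) if j != i) % ell
            # Bear i guesses every color c with s_i + c*d_i in Q_i (mod ell).
            guesses = [c for c in range(h[i])
                       if (s_i + c * d[i] - b[i]) % ell < d[i] * g[i]]
            assert len(guesses) <= g[i]
            ok = ok or phi[i] in guesses
        if not ok:
            return False
    return True

# Sum of g_i/h_i = 1/2 + 1/4 + 1/4 = 1: the bears win.
print(clique_strategy_wins((2, 4, 4), (1, 1, 1)))  # True
```

With (h, g) = ((2, 3), (1, 1)) the sum is 1/2 + 1/3 < 1 and the check returns False, matching the first half of the proof.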
Kokhas et al. [20] provided another proof of an analogue of Theorem 2 for the single-guessing game, which can be generalized with similar ideas. However, the second proof does not imply a polynomial-time algorithm for computing the strategy on cliques. For the interested reader, we provide the second proof of Theorem 2 in Appendix A.
Further, we generalize a result of Kokhas and Latyshev [20]. In particular, we provide a new way to combine two hat guessing games on graphs G_1 and G_2 into a hat guessing game on a graph obtained by gluing G_1 and G_2 together in a specific way. Let G_1 = (V_1, E_1) and G_2 = (V_2, E_2) be graphs, let S ⊆ V_1 be a set of vertices inducing a clique in G_1, and let v ∈ V_2 be an arbitrary vertex of G_2. The clique join of the graphs G_1 and G_2 with respect to S and v is the graph G = (V, E) where V = V_1 ∪ (V_2 \ {v}) and E contains all the edges of E_1, all the edges of E_2 that do not contain v, and an edge between every w ∈ S and every neighbor of v in G_2. See Figure 1 for a sketch of a clique join.
Fig. 1: The clique join of graphs G1 and G2 with respect to S and v.
Lemma 2 Let H′ = ((V′, E′), h′, g′) and H′′ = ((V′′, E′′), h′′, g′′) be two hat guessing games, let S ⊆ V′ be a set inducing a clique in G′ = (V′, E′), and let v ∈ V′′. Set G to be the clique join of the graphs G′ and G′′ = (V′′, E′′) with respect to S and v. If the bears win the games H′ and H′′, then they also win the game H = (G, h, g) where h_w = h′_w · h′′_v and g_w = g′_w · g′′_v for w ∈ S, h_w = h′_w and g_w = g′_w for w ∈ V′ \ S, and h_w = h′′_w and g_w = g′′_w for w ∈ V′′ \ {v}.

Proof: Using winning strategies (Γ′_w)_{w∈V′} and (Γ′′_w)_{w∈V′′} for H′ and H′′, respectively, let us construct a winning strategy for H. For every bear on u ∈ S, we interpret his color as a tuple (c′_u, c′′_u) ∈ [h′_u] × [h′′_v]. Also, we define an imaginary hat color of the bear on the vertex v as s = (Σ_{u∈S} c′′_u) mod h′′_v. Every bear on w ∈ V′ \ S plays according to the strategy Γ′_w, using only the color c′_u for his every neighbor u ∈ S. Every bear on w ∈ V′′ \ {v} plays according to the strategy Γ′′_w, using the imaginary hat color s of v. And finally, every bear on a vertex w ∈ S computes a set of guesses A_w by playing the strategy Γ′_w and a set of guesses B by playing the strategy Γ′′_v. Since the bear on w can see every other vertex of S, he computes the set B_w = {b ∈ [h′′_v] | (b + Σ_{u∈S\{w}} c′′_u) mod h′′_v ∈ B}. Finally, the bear on w guesses the set A_w × B_w.

Fix an arbitrary hat arrangement. In the simulated hat guessing game H′, there is a vertex u_1 such that the bear on u_1 guessed correctly. If u_1 ∉ S, then he also guessed correctly in H. Likewise, there is a bear on a vertex u_2 in the simulated hat guessing game H′′ that guessed correctly, and we are done if u_2 ≠ v. The remaining case is when u_1 ∈ S and u_2 = v. Thus, the bear on v includes the color s in his guesses in the game H′′, i.e., s ∈ B. It follows that for each w ∈ S it holds that if (c′_w, c′′_w) is the hat color of the bear on w, then c′′_w ∈ B_w. Since u_1 ∈ S, the bear on u_1 includes his hat color (c′_{u_1}, c′′_{u_1}) ∈ A_{u_1} × B_{u_1} in his guesses. 2

We remark that Lemma 2 generalizes Theorem 3.1 and Theorem 3.5 of [20] not only by introducing multiple guesses but also by allowing more general ways to glue two graphs together. Thus, it provides new constructions of winning games even for single-guessing games.
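The clique join itself is straightforward to construct; the following sketch (the function name is ours, edges stored as frozensets) builds V and E exactly as in the definition and reproduces the gluing of Figure 2:

```python
def clique_join(V1, E1, S, V2, E2, v):
    """Clique join of G1 = (V1, E1) and G2 = (V2, E2) with respect to a
    clique S (a subset of V1) and a vertex v of V2; the vertex sets are
    assumed disjoint."""
    V = V1 | (V2 - {v})
    E = set(E1)
    E |= {e for e in E2 if v not in e}
    # Connect every w in S to every neighbor of v in G2.
    E |= {frozenset({w, u}) for w in S
          for u in V2 if frozenset({v, u}) in E2}
    return V, E

# Gluing K3 onto the edge S = {0, 1} of the 4-cycle 0-1-2-3-0 yields
# C4 and K4 sharing an edge, as in Fig. 2.
C4 = ({0, 1, 2, 3}, {frozenset(p) for p in [(0, 1), (1, 2), (2, 3), (3, 0)]})
K3 = ({'a', 'b', 'c'}, {frozenset(p) for p in [('a', 'b'), ('b', 'c'), ('a', 'c')]})
V, E = clique_join(*C4, {0, 1}, *K3, 'c')
assert V == {0, 1, 2, 3, 'a', 'b'}
assert frozenset({0, 'a'}) in E and frozenset({1, 'b'}) in E
```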
Fig. 2: Applying Lemma 2 to the winning hat guessing games (C4, 3, 1) (see [30]) and (K3, 3, 1), we obtain a winning hat guessing game (G, h, 1), where G is the result of identifying an edge in C4 and K4, and h is given in the picture.

Independence Polynomial
The multivariate independence polynomial of a graph G = (V, E) on variables x = (x_v)_{v∈V} is
P_G(x) = Σ_{independent I ⊆ V} Π_{v∈I} x_v.

First, we describe the connection between the multi-guessing game and the independence polynomial informally; later, we prove the mentioned statements formally. Consider the game (G, h, g) and fix a strategy of the bears. Suppose that the demon puts on the head of each bear a hat of a random color (chosen uniformly and independently). Let A_v be the event that the bear on the vertex v guesses correctly. Then, the probability of A_v is exactly g/h. Moreover, for any independent set I and v ∈ I, the event A_v is independent of all events A_w for w ∈ I, w ≠ v. Thus, we can use the inclusion–exclusion principle (Proposition 1) to compute the probability that A_v occurs for at least one v ∈ I, i.e., that at least one bear sitting on a vertex of I guesses correctly.
Assume that no two bears on adjacent vertices guess their hat colors correctly at once; it turns out that if we plug −g/h into all variables of the non-constant terms of −P_G, then we get exactly the fraction of all hat arrangements on which the bears win. The non-constant terms of P_G correspond (up to sign) to the terms of the formula from the inclusion–exclusion principle. Because of that, we have to plug −g/h into the polynomial P_G.
To avoid confusion with the negative fraction −g/h, we define the signed independence polynomial as
Z_G(x) = Σ_{independent I ⊆ V} (−1)^{|I|} Π_{v∈I} x_v,
i.e., Z_G(x) = P_G(−x). We also introduce the monovariate signed independence polynomial U_G(x), obtained by plugging x in for each variable x_v of Z_G.
Note that the constant term of any independence polynomial P_G(x) equals 1, arising from taking I = ∅ in the sum from the definition of P_G. When U_G(g/h) = 0 and no two adjacent bears guess correctly at the same time, the bears win the game (G, h, g), because the fraction of all hat arrangements on which at least one bear guesses correctly is exactly 1; however, the proof is far from trivial.
Slightly abusing the notation, we use Z_{G′}(x) to denote the independence polynomial of an induced subgraph G′ with the variables x restricted to the vertices of G′. The independence polynomial P_G can be expanded according to a vertex v ∈ V in the following way:
P_G(x) = P_{G\{v}}(x) + x_v · P_{G\N[v]}(x).

The analogous expansions hold for the polynomials Z_G and U_G as well (for Z_G and U_G, the second term comes with a minus sign). This expansion follows from the fact that for any independent set I of G, either v is not in I (the first term of the expansion), or v is in I, but in that case no neighbor of v is in I (the second term). A formal proof of this expansion of P_G was provided by Hoede and Li [17].
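Both U_G and the vertex expansion can be checked by brute force on small graphs; the sketch below (the function name is ours) evaluates U_G exactly with rational arithmetic and verifies the expansion, with its minus sign, around the middle vertex of P_3:

```python
from fractions import Fraction
from itertools import combinations

def U(vertices, edges, x):
    """Signed independence polynomial U_G(x): the sum of (-x)^|I| over
    all independent sets I of G, computed by brute force."""
    total = 0
    for k in range(len(vertices) + 1):
        for I in combinations(vertices, k):
            if all(frozenset({u, w}) not in edges
                   for u, w in combinations(I, 2)):
                total += (-x) ** k
    return total

P3 = ([0, 1, 2], {frozenset({0, 1}), frozenset({1, 2})})
x = Fraction(1, 3)
# Expansion around v = 1: U_G(x) = U_{G \ {v}}(x) - x * U_{G \ N[v]}(x).
lhs = U(*P3, x)
rhs = U([0, 2], set(), x) - x * U([], set(), x)
assert lhs == rhs
print(lhs)  # U_{P3}(1/3) = 1 - 3*(1/3) + (1/3)^2 = 1/9
```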
For a graph G, we let R(G) denote the set of all vectors r ∈ [0, ∞)^V such that Z_G(w) > 0 for all 0 ≤ w ≤ r, where the comparison is done entry-wise. For the monovariate independence polynomial U_G, the set analogous to R(G) is exactly the real interval [0, r), where r is the smallest positive root of U_G. (Note that Z_G(0) = 1 and U_G(0) = 1.) Our first connection of the independence polynomial to the hat guessing game comes in the shape of a sufficient condition for the bears to lose. Consider the following beautiful connection between the Lovász Local Lemma and the independence polynomial due to Scott and Sokal [28].
Theorem 3 ([28] Theorem 4.1) Let G = (V, E) be a graph and let (A_v)_{v∈V} be a family of events on some probability space such that for every v, the event A_v is independent of {A_w | w ∉ N[v]}. Suppose that p ∈ [0, 1]^V is a vector of real numbers such that for each v we have P(A_v) ≤ p_v and p ∈ R(G). Then P(⋂_{v∈V} Ā_v) ≥ Z_G(p) > 0.
Proposition 2 Let H = (G, h, g) be a (possibly non-uniform) hat guessing game such that the vector r = (g_v/h_v)_{v∈V} belongs to R(G). Then the game H is losing.

Proof: Suppose for a contradiction that H is winning, and fix a strategy of the bears. We let the demon assign a hat to each bear uniformly at random, independently of the other bears. Let A_v be the event that the bear on the vertex v guesses correctly. Observe that P(A_v) = g_v/h_v and the probability that the bears lose is precisely P(⋂_{v∈V} Ā_v).
Let us show that the event A_v is independent of all events A_w such that w ∉ N[v]. Observe that fixing an arbitrary hat arrangement φ on V \ {v} uniquely determines the guesses of the bears on all vertices except for N(v). In particular, for every vertex w ∉ N[v], we know whether the bear on w guessed correctly, and thus the probability of A_w conditioned on φ is either 0 or 1. On the other hand, the probability of A_v conditioned on φ is still g_v/h_v. Therefore, A_v is independent of any subset of {A_w | w ∉ N[v]}. The claim follows since the graph G and the vector r satisfy the conditions of Theorem 3, and we obtain that P(⋂_{v∈V} Ā_v) ≥ Z_G(r) > 0. Therefore, there exists some hat arrangement in which all bears guess incorrectly. 2

A strategy for a hat guessing game H is perfect if it is winning and in every hat arrangement, no two bears that guess correctly are on adjacent vertices. We remark that perfect strategies exist, for example the strategy for the single-guessing game on a clique K_n with exactly n colors [20], or for a multi-guessing game on a clique K_n with h/g = n (Corollary 1). The following proposition shows that a perfect strategy can occur only when r = (g_v/h_v)_{v∈V} (note that g_v ≤ h_v by definition) lies, in some sense, just outside of R(G).
Proposition 3 If there is a perfect strategy for the hat guessing game (G, h, g) then for r = (g v /h v ) v∈V we have that Z G (r) = 0 and Z G (w) ≥ 0 for every 0 ≤ w ≤ r.
Proof: Fix a perfect strategy and set m = Π_{v∈V} h_v to be the total number of possible hat arrangements. For any subset S ⊆ V, let n_S be the number of hat arrangements such that every bear on a vertex v ∈ S guesses correctly (the other bears are not forbidden from guessing correctly). We claim that for any independent set I ⊆ V, we have n_I = m · Π_{v∈I} g_v/h_v. Observe that by assigning the hats to the bears on V \ I, we fix the guesses of all bears on I. Every bear on a vertex v ∈ I guesses correctly for exactly g_v out of h_v of his hat assignments. Thus, the total number of hat arrangements where every bear on the independent set I guesses correctly is exactly (Π_{v∈V\I} h_v) · Π_{v∈I} g_v = m · Π_{v∈I} g_v/h_v. On the other hand, the perfect strategy guarantees that n_S = 0 for any non-empty S that is not an independent set. This allows us to use the inclusion–exclusion principle and count the exact total number of hat arrangements such that at least one bear guesses correctly:

Σ_{∅≠S⊆V} (−1)^{|S|+1} n_S = m · Σ_{∅≠I independent} (−1)^{|I|+1} Π_{v∈I} g_v/h_v = m · (1 − Z_G(r)).

Finally, the total number of hat arrangements where at least one bear guesses correctly must be exactly m, since the bears win. Therefore, we get Z_G(r) = 0.
We prove the remaining claim in two steps. First, we show that Z_{G′}(r) ≥ 0 for every induced subgraph G′ of G. To that end, consider a modified hat guessing game where only the bears on the vertices of G′ are allowed to guess, and they play according to the original perfect strategy. By the same argument as before, we can count the total number of hat arrangements in which at least one of these bears guesses correctly as m · (1 − Z_{G′}(r)). This implies Z_{G′}(r) ≥ 0, as the total number of hat arrangements is m.
Now consider any 0 ≤ w ≤ r. Let v_1, . . . , v_n be an arbitrary ordering of the vertices of G, and let us define vectors w^i for 0 ≤ i ≤ n by setting w^i_{v_j} = w_{v_j} for j ≤ i and w^i_{v_j} = r_{v_j} for j > i. Notice that w^0 = r, w^n = w, and the vectors w^i correspond to switching the coordinates of r to the coordinates of w one by one. We prove by induction on i that Z_{G′}(w^i) ≥ 0 for every induced subgraph G′ of G. We have already proved the claim for i = 0. Let i ≥ 1 and let G′ be an arbitrary induced subgraph of G.
If v_i is not a vertex of G′, then Z_{G′}(w^i) = Z_{G′}(w^{i−1}) ≥ 0 and we are done. Otherwise, we have

Z_{G′}(w^i) = Z_{G′\{v_i}}(w^{i−1}) − w_{v_i} · Z_{G′\N[v_i]}(w^{i−1}) ≥ Z_{G′\{v_i}}(w^{i−1}) − r_{v_i} · Z_{G′\N[v_i]}(w^{i−1}) = Z_{G′}(w^{i−1}) ≥ 0,

where we first partition the independent sets of G′ according to their incidence with v_i and then replace w^i with w^{i−1} (neither Z_{G′\{v_i}} nor Z_{G′\N[v_i]} depends on the coordinate v_i); the inequality holds since w_{v_i} ≤ r_{v_i} and Z_{G′\N[v_i]}(w^{i−1}) ≥ 0 by induction. Finally, we notice that we obtained the independence polynomial Z_{G′} evaluated at w^{i−1} and apply induction. Thus, Z_G(w) ≥ 0, as w = w^n and G is an induced subgraph of itself. 2
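The counting identity at the heart of Proposition 3 can be verified for the perfect sum-mod-3 strategy on K_3 with h = 3 and g = 1: here Z_{K_3}(1/3, 1/3, 1/3) = 1 − 3 · (1/3) = 0, so the predicted fraction of winning arrangements is 1.

```python
from itertools import product

# The sum-mod-3 strategy on K_3 with h = 3, g = 1 is perfect: in every
# arrangement exactly one bear is correct, so n_S = 0 whenever |S| >= 2.
m, n_correct = 0, 0
for phi in product(range(3), repeat=3):
    m += 1
    correct = {i for i in range(3)
               if (i - (sum(phi) - phi[i])) % 3 == phi[i]}
    assert len(correct) == 1   # no two (adjacent) correct bears
    n_correct += 1
# All m arrangements are won, matching m * (1 - Z_G(r)) with Z_G(r) = 0.
assert n_correct == m == 27
```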
The natural question is what happens outside of the closure of R(G).We proceed to answer this question for chordal graphs.
A graph G is chordal if every cycle of length at least 4 has a chord. For our purposes, it is more convenient to work with a different, equivalent definition of chordal graphs. For a graph G = (V, E), a clique tree of G is a tree T whose vertex set is precisely the set of subsets of V that induce maximal cliques in G, such that for each v ∈ V the vertices of T containing v induce a connected subtree. Gavril [15] showed that G is chordal if and only if there exists a clique tree of G.
Theorem 4 Let G = (V, E) be a chordal graph and let r = (r v ) v∈V be a vector of rational numbers from the interval [0, 1].If r ̸ ∈ R(G) then there are vectors g, h ∈ N V such that g v /h v ≤ r v for every v ∈ V and the hat guessing game (G, h, g) is winning.
Proof: We prove the theorem by induction on the size of the clique tree of G. Let 0 ≤ w ≤ r be a witness that r ̸ ∈ R(G), i.e., Z G (w) ≤ 0.
If G is itself a complete graph, then Z_G(w) ≤ 0 implies that Σ_{v∈V} w_v ≥ 1 and thus Σ_{v∈V} r_v ≥ Σ_{v∈V} w_v ≥ 1. Thus, if we take the minimal vectors g, h ∈ N^V such that g_v/h_v = r_v for each v, the assumptions of Theorem 2 are satisfied and the hat guessing game (G, h, g) is winning.
Otherwise, the clique tree of G contains at least 2 vertices, and we pick an arbitrary leaf C of it. Let R be the set of vertices that belong only to the clique C, and let S = C \ R. We aim to split the graph into G′ = G \ R and the clique G[C], apply induction to obtain winning strategies on these graphs, and then combine them into a winning strategy on G; see Figure 3.
If Σ_{v∈C} r_v ≥ 1, then the game is winning already on the clique G[C] due to Theorem 2, by letting g_v/h_v = r_v for each v ∈ C. Therefore, we can assume Σ_{v∈C} r_v < 1, which implies Σ_{v∈C} w_v < 1. We define vectors w′ = (w′_v)_{v∈V\R} and r′ = (r′_v)_{v∈V\R} by setting w′_v = w_v/α_w and r′_v = r_v/α_r for v ∈ S, and w′_v = w_v and r′_v = r_v otherwise, where α_r = 1 − Σ_{v∈R} r_v and α_w = 1 − Σ_{v∈R} w_v. Observe that 0 < α_r ≤ α_w and that for every v ∈ S it holds that w′_v ≤ r′_v ≤ 1. In other words, r′ and w′ are both vectors of numbers from [0, 1] such that w′ ≤ r′.
To simplify the rest of the proof, we introduce the following notation. For any u ∈ V, let Z_G(x; u) denote the independence polynomial restricted only to the independent sets containing u, i.e., Z_G(x; u) = Σ_{independent I ∋ u} (−1)^{|I|} Π_{v∈I} x_v. With this in hand, we proceed to show that Z_G(w) = α_w · Z_{G\R}(w′):

Z_G(w) = Z_{G\C}(w) + Σ_{v∈R} Z_G(w; v) + Σ_{v∈S} Z_G(w; v)   (1)
= (1 − Σ_{v∈R} w_v) · Z_{G\C}(w) + Σ_{v∈S} Z_{G\R}(w; v)   (2)
= α_w · Z_{G\C}(w′) + α_w · Σ_{v∈S} Z_{G\R}(w′; v)   (3)
= α_w · Z_{G\R}(w′).   (4)

In (1), we partition the independent sets of G depending on their incidence with C (each independent set contains at most one vertex of the clique C). The line (2) follows since every independent set intersecting R in G can be written as the union of a single vertex v ∈ R and an independent set in G \ C, which allows us to collect the first and third terms; at the same time, all independent sets intersecting S in G can be regarded as independent sets intersecting S in G \ R. In (3), we replace w with w′, which scales each term in the second sum by the factor w_v/w′_v = α_w and leaves the first term unchanged, since the vertices of G \ C lie outside S. Finally, notice that the terms in (3) describe (up to the scaling by α_w) the independent sets in G \ R partitioned according to their incidence with S. We collect them in (4).
Since α_w > 0 and Z_G(w) ≤ 0, we have Z_{G′}(w′) ≤ 0, which witnesses that r′ ∉ R(G′). Therefore, we can apply induction on G′ and r′ to obtain functions h′, g′ such that the hat guessing game (G′, h′, g′) is winning. Let G″ be the graph obtained from the clique G[C] by contracting S to a single vertex u, and define the vector r″ = (r″_v)_{v∈R∪{u}} by r″_v = r_v for v ∈ R and r″_u = α_r. Observe that G is precisely the clique join of G′ and G″ with respect to S and u. Since r″_u + ∑_{v∈R} r″_v = 1, we can take the minimal vectors h″, g″ ∈ ℕ^{V(G″)} such that g″_v/h″_v = r″_v for every v and apply Theorem 2 on G″ to show that the hat guessing game (G″, h″, g″) is winning. Finally, we construct the desired winning strategy by combining the two games and their respective strategies using Lemma 2. 2

Fig. 3: Application of Theorem 4 to a chordal graph G with a vector r ∉ R(G). In each step, we highlight the clique S and vertex w that are used by Lemma 2 to inductively build a strategy for G from strategies on cliques given by Theorem 2. Note that the numbers of colors and guesses may differ from the depicted ratios by a multiplicative factor.

Theorem 4 applied to the uniform polynomial U_G immediately gives us the following corollary.
Corollary 2 For any chordal graph G, the fractional hat chromatic number μ̂(G) is equal to 1/r, where r is the smallest positive root of U_G(x).
Proof: Theorem 4 implies that μ̂(G) ≥ 1/r. For the other direction, let (w_i)_{i∈ℕ} be a sequence of rational numbers such that w_i < r for every i and lim_{i→∞} w_i = r. Let w_i also denote the constant vector (w_i)_{v∈V}. Scott and Sokal [28, Theorem 2.10] proved that r ∈ R(G) if and only if there is a path in [0, ∞)^V connecting 0 and r such that Z_G(p) > 0 for every p on the path. Taking the path {λ·w_i | λ ∈ [0, 1]}, we see that Z_G(λ·w_i) = U_G(λ·w_i) > 0, and thus w_i ∈ R(G) for every i. Therefore, by Proposition 2, the hat guessing game (G, h, g) is losing for any h, g such that g/h = w_i, and μ̂(G) ≤ 1/w_i for every i. It follows that μ̂(G) ≤ 1/r. 2

We would like to remark that the proof of Theorem 4 (and also of Theorem 2) is constructive in the sense that, given a graph G and a vector r, it either greedily finds vectors g, h ∈ ℕ^V such that g_v/h_v ≤ r_v for every v, together with a succinct representation of a winning strategy on (G, h, g), or it reaches a contradiction if r ∈ R(G). Moreover, it is easy to see that it can be implemented to run in polynomial time if the clique tree of G is provided. Combining it with the well-known fact that a clique tree of a chordal graph can be computed in polynomial time (see Blair and Peyton [3]), we get the following corollary.
Corollary 3 There is a polynomial-time algorithm that, for a chordal graph G = (V, E) and a vector r, decides whether r ∈ R(G). Moreover, if r ∉ R(G), it outputs vectors h, g ∈ ℕ^V such that g_v/h_v ≤ r_v for every v ∈ V, together with a polynomial-size representation of a winning strategy for the hat guessing game (G, h, g).
This result is consistent with the fact that chordal graphs are, in general, well-behaved with respect to the Lovász Local Lemma: Pegden [25] showed that for a chordal graph G, we can decide in polynomial time whether a given vector r belongs to R(G). We finish this section by presenting an algorithm that computes the fractional hat chromatic number of chordal graphs.
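By Corollary 2, such an algorithm only needs to compute the smallest positive root of U_G and invert it. The following is a brute-force sketch for small graphs (our own illustration, not the polynomial-time clique-tree implementation); it assumes the smallest root is a sign change below 1, which holds for the examples used here:

```python
from itertools import combinations

def U(n, edges, x):
    """Uniform independence polynomial U_G(x) = sum_I (-1)^|I| * x^|I|,
    by brute force over all vertex subsets (fine for small graphs)."""
    total = 0.0
    for k in range(n + 1):
        for S in combinations(range(n), k):
            if not any(u in S and v in S for u, v in edges):
                total += (-1) ** k * x ** k
    return total

def mu_hat(n, edges, step=1e-4, iters=60):
    """mu_hat(G) = 1 / (smallest positive root of U_G): scan for the
    first sign change of U_G, then bisect."""
    lo = 0.0
    while U(n, edges, lo + step) > 0:
        lo += step
    hi = lo + step
    for _ in range(iters):
        mid = (lo + hi) / 2
        if U(n, edges, mid) > 0:
            lo = mid
        else:
            hi = mid
    return 2 / (lo + hi)

# P_3: U(x) = x^2 - 3x + 1, smallest root (3 - sqrt 5)/2, mu_hat ≈ 2.618034
print(round(mu_hat(3, [(0, 1), (1, 2)]), 6))
# K_3: U(x) = 1 - 3x, root 1/3, mu_hat = 3
print(round(mu_hat(3, [(0, 1), (1, 2), (0, 2)]), 6))
```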

Paths and Cycles
In this section, we discuss the precise value of μ̂ for paths and cycles. It follows from Corollary 4 that μ̂(P_n) and μ̂(C_n) are upper bounded by constants. We prove that the fractional hat chromatic number of paths and cycles tends to 4 as their length increases.
For the proof, we need a version of the Lovász Local Lemma proved by Shearer.
Lemma 3 (Shearer [29]) Let A_1, …, A_n be events such that each event is independent of all but at most d ≥ 2 of the other events, and let the probability of each event A_i be at most p. If p < (d − 1)^{d−1}/d^d (which equals 1/4 for d = 2), then with non-zero probability none of the events A_1, …, A_n occurs.
Proof: First, we prove the lower bound for paths. Let ε > 0. We construct a sufficiently long path P = (V, E) and vectors h, g ∈ ℕ^V such that the hat guessing game (P, h, g) is winning and g_v/h_v ≤ 1/4 + ε for every v ∈ V. Thus, we can conclude that for every δ > 0 there is an n such that μ̂(P_n) ≥ 4 − δ, i.e., lim_{n→∞} μ̂(P_n) ≥ 4. The same lower bound holds for cycles, as they contain paths as subgraphs.
We construct the path P iteratively. Let P_0 be a path consisting of one edge e_0 = {v_0, u_0}. We set g^0_{v_0} = g^0_{u_0} = 1 and h^0_{v_0} = h^0_{u_0} = 2. By Theorem 2, the game (P_0, h^0, g^0) is winning. Now, we want to construct a game H_{i+1} = (P_{i+1}, h^{i+1}, g^{i+1}) from (P_i, h^i, g^i). Let v_i and u_i be the endpoints of P_i. We will maintain the invariant that g^i_{v_i} = g^i_{u_i} and h^i_{v_i} = h^i_{u_i}, and we denote the ratio g^i_{v_i}/h^i_{v_i} by r_i. We construct the paths P_i in such a way that r_i = 1/2 − i·ε. Note that this equality holds for the game (P_0, h^0, g^0).
Let P′ be a path consisting of one edge e′ = {w, w′}, and set g′ and h′ in such a way that g′_w/h′_w = 1/2 + (i+1)·ε and g′_{w′}/h′_{w′} = 1/2 − (i+1)·ε. Again by Theorem 2, the game (P′, h′, g′) is winning. To create the path P_{i+1}, we join two copies of P′ to P_i using Lemma 2. More formally, we join one copy of P′ by identifying w and u_i, and the second copy by identifying w and v_i. Thus, the endpoints u_{i+1} and v_{i+1} of P_{i+1} are copies of w′. By Lemma 2, we get a winning game H_{i+1} = (P_{i+1}, h^{i+1}, g^{i+1}). For a sketch of the construction of the game H_{i+1}, see Figure 4. Note that indeed r_{i+1} = 1/2 − (i+1)·ε. We end this process after k = ⌈1/(4ε)⌉ steps, so that r_k = 1/2 − k·ε ≤ 1/4. On the other hand, for each 0 ≤ i < k, Lemma 2 guarantees that the vertex obtained by identifying u_i (or v_i) with w has ratio r_i · (1/2 + (i+1)·ε) = 1/4 + ε/2 − i(i+1)·ε² ≤ 1/4 + ε. Thus, for each vertex v of P_k it holds that g_v/h_v ≤ 1/4 + ε, as claimed.

Now, we prove the upper bound. Let H = (G, h, g) be a game such that G is a path or a cycle and h/g > 4. We will prove that the bears lose H, which implies that lim_{n→∞} μ̂(P_n) ≤ 4 and lim_{n→∞} μ̂(C_n) ≤ 4. Let us fix some strategy of the bears and let the demon give each bear a hat of random color (chosen uniformly and independently). We denote by A_v the event that the bear on v guesses correctly. Then, Pr[A_v] = g/h < 1/4. Since the maximum degree of G is 2, each event A_v depends on at most 2 other events. By Lemma 3, applied to the events (A_v)_{v∈V(G)} with d = 2, with non-zero probability no event A_v occurs. Thus, there is a hat arrangement such that no bear guesses correctly. 2

We remark that Proposition 5 follows also from the results of Scott and Sokal [28], as they proved that the smallest positive roots of U_{P_n} and U_{C_n} go to 1/4 when n goes to infinity. However, their proof is purely algebraic, whereas we provide a combinatorial proof.
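The convergence to 4 can also be observed numerically. The independence polynomials of paths satisfy the standard deletion recurrence U_{P_n}(x) = U_{P_{n−1}}(x) − x·U_{P_{n−2}}(x), with U_{P_0} = 1 and U_{P_1} = 1 − x (easily checked from the definition). The sketch below (our own check) computes μ̂(P_n) as the reciprocal of the smallest positive root of U_{P_n} for growing n:

```python
def U_path(n, x):
    """U_{P_n}(x) via the recurrence U_{P_n} = U_{P_{n-1}} - x * U_{P_{n-2}},
    with U_{P_0} = 1 and U_{P_1} = 1 - x."""
    u_prev, u = 1.0, 1.0 - x
    for _ in range(n - 1):
        u_prev, u = u, u - x * u_prev
    return u

def mu_hat_path(n, step=1e-4, iters=60):
    """1 / (smallest positive root of U_{P_n}): scan for the first
    sign change, then bisect."""
    lo = 0.0
    while U_path(n, lo + step) > 0:
        lo += step
    hi = lo + step
    for _ in range(iters):
        mid = (lo + hi) / 2
        if U_path(n, mid) > 0:
            lo = mid
        else:
            hi = mid
    return 2 / (lo + hi)

for n in (3, 10, 25, 100):
    print(n, round(mu_hat_path(n), 4))
# mu_hat(P_3) = (3 + sqrt 5)/2 ≈ 2.618; the values increase towards 4
```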
Further, we discuss the value of μ̂ = μ̂(P_3). By Corollary 2, we have that 1/μ̂ is the smallest positive root of U_{P_3}(x) = x² − 3x + 1. Thus, 1/μ̂ = (3 − √5)/2. By Theorem 4, it holds that for any p, q ∈ ℕ such that p/q ≤ μ̂, there are g, h ∈ ℕ such that p/q = h/g and the game (P_3, h, g) is winning. However, the strategy from the proof gives us h = p·(p − q) and g = q·(p − q). We present a sequence (h_i/g_i)_{i∈ℕ} such that the sequence goes to μ̂, for each i the numbers h_i and g_i are coprime, and the game (P_3, h_i, g_i) is winning for each i. Thus, we present a strategy that is in some sense more efficient than the one given by the proof of Theorem 4, as the general strategy for P_3 does not produce numbers g and h that are coprime.
First, we present the strategy for P_3. Note that if 1 ≥ g/h ≥ 1/μ̂ (for g, h ∈ ℕ), then U_{P_3}(g/h) = (g/h)² − 3·(g/h) + 1 < 0. We rewrite this inequality as g² − 3gh + h² < 0 and prove that for each g and h satisfying it, there is a winning strategy for (P_3, h, g).

Lemma 4 Let g, h ∈ ℕ be such that g² − 3gh + h² < 0. Then, the bears win the game (P_3, h, g).
Proof: Let V(P_3) = {u, v, w}, where u and w are the endpoints of the path P_3 and v is its middle vertex. We identify the colors with the set C = {0, …, h − 1}. Let the bear on v get a hat of color c_v. The bear on u makes the guesses A_u = {c_v, c_v − 1, …, c_v − (g − 1)}. The bear on w makes the guesses A_w = {c_v, c_v − ⌊h/g⌉, …, c_v − ⌊(g − 1)·h/g⌉}, where ⌊x⌉ is the nearest integer to x (i.e., standard rounding). We compute the guessed colors modulo h.
The bear on v computes two sets of colors I_u and I_w, based on the hat colors of the bears on u and w, such that he does not need to guess any color from I_u ∪ I_w: if c_v ∈ I_u ∪ I_w, then the bear on u or the bear on w (or possibly both) guesses correctly. The guesses of the bear on u form an interval in the set C, whereas the guesses of the bear on w are spread through C as evenly as possible. Thus, the intersection I_u ∩ I_w is small and the union I_u ∪ I_w is large.
More formally, let c_u and c_w be the hat colors of the bears on u and w, respectively. Then I_u = {c_u, c_u + 1, …, c_u + (g − 1)} and I_w = {c_w, c_w + ⌊h/g⌉, …, c_w + (g − 1)·⌊h/g⌉}. Again, we compute the elements of these sets modulo h. Note that if c_v ∈ I_u, then the bear on u guesses correctly because in that case c_v = c_u + t (mod h) for some t < g, and thus c_u ∈ A_u. An analogous property holds for c_w. Thus, the bear on v does not have to guess the colors from I_u ∪ I_w.
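The strategy can be verified exhaustively for concrete values. The sketch below is our own check; it uses the rounding convention above and picks g = 5, h = 13, for which g² − 3gh + h² = −1 < 0, and confirms that some bear always guesses correctly:

```python
from itertools import product

g, h = 5, 13  # g*g - 3*g*h + h*h = -1 < 0, so Lemma 4 applies
C = range(h)

def nearest(x):
    # nearest integer ("standard rounding"); no ties arise for g = 5, h = 13
    return int(x + 0.5)

offsets = [nearest(t * h / g) for t in range(g)]  # [0, 3, 5, 8, 10]

def guesses_u(c_v):  # endpoint u sees only the middle bear v
    return {(c_v - t) % h for t in range(g)}

def guesses_w(c_v):  # endpoint w sees only the middle bear v
    return {(c_v - o) % h for o in offsets}

def guesses_v(c_u, c_w):  # the middle bear v sees both endpoints
    I_u = {(c_u + t) % h for t in range(g)}
    I_w = {(c_w + o) % h for o in offsets}
    return set(C) - (I_u | I_w)

# v never needs more than g guesses, and some bear is always correct
for c_u, c_v, c_w in product(C, C, C):
    A_v = guesses_v(c_u, c_w)
    assert len(A_v) <= g
    assert c_u in guesses_u(c_v) or c_w in guesses_w(c_v) or c_v in A_v
print("bears win (P_3, h = 13, g = 5)")
```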
We will prove that |C \ (I_u ∪ I_w)| ≤ g. Thus, the bear on v can guess all colors outside I_u ∪ I_w and makes at most g guesses. First, we prove that |I_u ∩ I_w| ≤ 3g − h. Suppose for contradiction that |I_u ∩ I_w| > 3g − h. In such a case, there must be a k such that both colors c_w + ⌊k·h/g⌉ and c_w + ⌊(k + 1)·h/g⌉ belong to the interval I_u. Applying bounds on the rounded terms, we obtain an inequality implying g² − 3gh + h² ≥ 0, which contradicts the assumption of the lemma. Therefore, the size of the intersection I_u ∩ I_w is at most 3g − h. It follows that the size of the union I_u ∪ I_w is at least 2g − (3g − h) = h − g, and |C \ (I_u ∪ I_w)| ≤ g. 2

Let F_i be the i-th Fibonacci number, where we set F_0 = F_1 = 1. We define h_i = F_{2i} and g_i = F_{2i−2}. Now, we prove that the sequence (h_i/g_i)_{i∈ℕ} has the sought properties. Note that 1/μ̂ = (3 − √5)/2 = 1 − 1/φ, where φ is the golden ratio, i.e., φ = (1 + √5)/2.

It is well-known that the fractions F_i/F_{i−1} go to φ. Moreover, F_{2i}/F_{2i−1} ≥ φ. Thus, for each i ∈ ℕ it holds that g_i/h_i = F_{2i−2}/F_{2i} = 1 − F_{2i−1}/F_{2i} ≥ 1 − 1/φ = 1/μ̂, and the fractions h_i/g_i indeed go to μ̂. 2

Observation 2 Due to Cassini's identity, for each i ∈ ℕ it holds that g_i² − 3·g_i·h_i + h_i² = −1 < 0.

Observation 3 For each i ∈ ℕ, the numbers h_i and g_i are coprime.
Proof: By definition, g_i = F_{2i−2} and h_i = F_{2i} = F_{2i−1} + F_{2i−2}, so GCD(h_i, g_i) = GCD(F_{2i−1}, F_{2i−2}). It is easy to prove by induction that for each i ∈ ℕ it holds that GCD(F_{i−1}, F_i) = 1. 2

A The Second Proof of the Non-algorithmic Part of Theorem 2

Proof (The second proof of the non-algorithmic part of Theorem 2): The proof again follows the proof of Kokhas et al. [20] for the single-guessing game. We prove only the "if" part. Thus, suppose that ∑_{v∈V(K_n)} g_v/h_v ≥ 1. Let V(K_n) = {v_1, …, v_n}. We create an auxiliary bipartite graph G = (V_ℓ ∪ V_r, E). In the left part V_ℓ, there is a vertex for each possible coloring of hats. Thus, we can identify each vertex in V_ℓ with an n-tuple (c_1, …, c_n), where c_i ∈ [h_{v_i}] is some color of the i-th bear's hat. The set V_r is split into n sets, V_r = V_r¹ ∪ … ∪ V_rⁿ. For each v_i ∈ V(K_n) and each tuple (c_1, …, c_{i−1}, *, c_{i+1}, …, c_n), we have g_{v_i} vertices in the set V_r^i. Thus, the vertices in V_r^i represent what the i-th bear can see. Each vertex in V_r^i labeled with (c_1, …, c_{i−1}, *, c_{i+1}, …, c_n) is connected to the vertices in V_ℓ labeled with (c_1, …, c_{i−1}, c_i, c_{i+1}, …, c_n) for all c_i ∈ [h_{v_i}]. Thus, each vertex in V_r^i has degree h_{v_i} and each vertex in V_ℓ has degree ∑_{v∈V(K_n)} g_v.
Note that the bears win the game if and only if there is a matching in G which covers V_ℓ. Suppose there is such a matching M. Suppose the bear sitting on a vertex v_i sees the colors c_1, …, c_{i−1}, c_{i+1}, …, c_n, and let U ⊆ V_r be the set of vertices of V_r^i labeled by (c_1, …, c_{i−1}, *, c_{i+1}, …, c_n). By the construction of G, it holds that |U| = g_{v_i}. Let N(U) be the set of neighbors of U given by the matching M; thus, |N(U)| ≤ g_{v_i}. Each vertex u ∈ N(U) has a label (c_1, …, c_{i−1}, c_i^u, c_{i+1}, …, c_n). Thus, the bear sitting on v_i guesses the colors c_i^u for all u ∈ N(U). It is clear that for each v ∈ V(K_n), the bear sitting on v guesses at most g_v colors.
Moreover, since the matching M covers V_ℓ, at least one bear guesses the color of his hat correctly. On the other hand, each winning strategy gives us a matching covering V_ℓ. We use Hall's theorem [7, Chapter 2] to prove that there is a matching M covering V_ℓ if and only if ∑_{v∈V(K_n)} g_v/h_v ≥ 1. Let S ⊆ V_ℓ be a set of m left vertices. Each vertex in V_r^i has at most h_{v_i} neighbors in S. Since each vertex in V_ℓ has g_{v_i} neighbors in V_r^i, the set S has at least g_{v_i}·m/h_{v_i} neighbors in V_r^i. Therefore, in total, the set S has at least m·∑_{v∈V(K_n)} g_v/h_v ≥ m neighbors in V_r. We conclude by Hall's theorem that there is a matching in G which covers V_ℓ. 2

Although Hall's theorem is constructive, the size of the auxiliary graph G constructed in the proof can be exponential in n. Thus, this proof cannot be used to design a polynomial-time algorithm.
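For tiny games, the matching reformulation can nevertheless be implemented directly (our illustration; note the exponential size of the auxiliary graph): build the bipartite graph exactly as in the proof and search for a matching covering V_ℓ with Kuhn's augmenting-path algorithm.

```python
from itertools import product

def bears_win_clique(h, g):
    """Decide the clique game (K_n, h, g) via the auxiliary bipartite graph:
    the bears win iff there is a matching covering all hat colorings."""
    n = len(h)
    left = list(product(*(range(hv) for hv in h)))  # all hat colorings
    # right vertices: (bear i, what bear i sees, copy j), with g[i] copies
    right = [(i, seen, j)
             for i in range(n)
             for seen in product(*(range(h[k]) for k in range(n) if k != i))
             for j in range(g[i])]
    index = {v: t for t, v in enumerate(right)}
    match = [None] * len(right)  # right vertex -> matched coloring

    def neighbours(coloring):
        for i in range(n):
            seen = tuple(c for k, c in enumerate(coloring) if k != i)
            for j in range(g[i]):
                yield index[(i, seen, j)]

    def augment(coloring, used):  # Kuhn's augmenting-path step
        for t in neighbours(coloring):
            if t not in used:
                used.add(t)
                if match[t] is None or augment(match[t], used):
                    match[t] = coloring
                    return True
        return False

    return all(augment(c, set()) for c in left)

print(bears_win_clique((2, 2), (1, 1)))  # 1/2 + 1/2 = 1  -> True
print(bears_win_clique((3, 3), (1, 1)))  # 1/3 + 1/3 < 1 -> False
```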

Fig. 4: A sketch of the construction of the game H_{i+1}. The formulas below the vertices are the fractions g_v/h_v.

Lemma 5 For each i ∈ ℕ it holds that h_i/g_i ≤ μ̂. Moreover, lim_{i→∞} h_i/g_i = μ̂.

Theorem 6 (Non-algorithmic part of Theorem 2) The bears win a game (K_n, h, g) if and only if ∑_{v∈V(K_n)} g_v/h_v ≥ 1.