The Magnus-Derek game in groups

The Magnus-Derek game (also called the maximal variant of the vector game), introduced by Nedev and Muthukrishnan, is the following: a token is moved around a table with n positions. In each round of the game Magnus chooses an integer and then Derek chooses a direction (clockwise or counterclockwise), and the token moves that many positions in that direction. The goal of Magnus is to maximize the number of positions visited; the goal of Derek is the opposite. In the minimal variant of the game the goals of the two players are exchanged: Magnus wants to minimize the number of positions visited and Derek wants the opposite. Here we introduce a generalization of these games: the token is moved in a group, Magnus chooses an element of the group, and Derek decides whether the current position is multiplied or divided by that element.


Introduction
The Magnus-Derek game was introduced by Nedev and Muthukrishnan in [8]. The game is played by two players called Magnus (from magnitude) and Derek (from direction). The equipment used is a circular board with n positions labeled consecutively from 0 to n − 1, and a token which moves on these positions. In each round Magnus chooses and announces an integer l (0 < l ≤ n/2 can be assumed) and then Derek chooses + or −. Suppose the token starts that round in the position i. Then the token is moved to the position i + l mod n or i − l mod n according to the choice of Derek. This ends the round. Then the game continues infinitely. Magnus aims to maximize the cardinality of the set S of the positions visited during the game; Derek aims to minimize the cardinality of S.
Nedev and Muthukrishnan ([8]) determined what Magnus and Derek can achieve in this game. They showed that Magnus can make sure at least f*(n) and Derek can make sure at most f*(n) positions are visited, where f*(n) = n if n is a power of 2, and f*(n) = (p − 1)n/p otherwise, where p is the smallest odd prime divisor of n.
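The value f*(n) is easy to compute; a small sketch implementing the formula above (the function names are ours):

```python
def smallest_odd_prime_divisor(n):
    """Return the smallest odd prime divisor of n, or None if n is a power of 2."""
    while n % 2 == 0:
        n //= 2
    if n == 1:
        return None  # n was a power of 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d
        d += 2
    return n  # n itself is an odd prime

def f_star(n):
    """The value f*(n) of the Magnus-Derek game on n positions."""
    p = smallest_odd_prime_divisor(n)
    return n if p is None else (p - 1) * n // p
```

For example, f*(8) = 8 (a power of 2), while f*(15) = 10, since the smallest odd prime divisor of 15 is 3.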
Let us rewrite the description of the game. The token is moved on the elements of the cyclic group Z_n. Magnus chooses an element of Z_n and Derek decides if it should be added to the current position or subtracted from it. This definition can be obviously extended to any other group. Actually, most parts of the proof in [8] can be easily translated into this language. For example, Derek always chooses a coset of a subgroup of size n/p, and his strategy is that he does not let the token visit any element of that coset. As we will see, he can follow the same strategy in any abelian group.
In general we will use multiplication as the operation in the group and denote it by simply writing the elements next to each other. The identity element will be denoted by 1 and the inverse of an element x will be denoted by x^{-1}. In some cases the group will be identified with Z or Z_n; in those cases addition is the operation in the group, 0 is the identity element and −x is the inverse of x. Only the following non-trivial group theoretic results will be used (see for example [1]).
Theorem 1.1 (Fundamental theorem of finite abelian groups) Any finite abelian group G can be written as a direct sum of cyclic groups in the following canonical way: G ≅ Z_{k_1} ⊕ Z_{k_2} ⊕ · · · ⊕ Z_{k_r}, where each k_i is a prime power.

Corollary 1.2 If G is an abelian group of order n and n is divisible by m, then there exists a subgroup of G with order m.

Theorem 1.3 If G is a group of order p^k where p is a prime and k is an integer, then for any integer l < k there exists a subgroup of G with order p^l.

The Magnus-Derek game in groups
Here we give a more precise definition of the Magnus-Derek game in groups. Let G be a group. A token moves on the elements of the group. In each turn Magnus chooses an element x of the group. Suppose the token starts the round at the position y. Then Derek chooses its next position out of the following two: yx or yx^{-1}. Note that the current position is always multiplied from the right by Derek's choice. We suppose that the token is at the identity element at the beginning of the game. Let f(G) be the number of positions visited if both players play optimally.

Proposition 2.1 If G is an infinite group, then f(G) is countably infinite; moreover, Magnus has a non-adaptive strategy (i.e. he can make all his choices before Derek's first answer) that forces the token to visit a new position in every step.
Proof: Now we describe Magnus's strategy for choosing the nth element. No matter what elements he has chosen in the first n − 1 rounds, Derek has at most 2^{n−1} possible sequences of choices in those rounds, and for each of them at most n positions 1 = x_1, x_2, . . ., x_n have been visited. Only a bad nth choice can bring the token from x_n back to one of them: Magnus must avoid the elements x_n^{-1}x_j and their inverses, so at most 2n choices are wrong for Magnus in the nth round. Of course, he does not know what they are in advance, when choosing an element for the nth round. But he knows the first n − 1 elements he has chosen, hence he knows the 2^{n−1} possibilities for the token's movement in the first n − 1 rounds. Altogether at most 2n · 2^{n−1} elements have to be avoided, but there are infinitely many elements, so Magnus can choose a suitable one. ✷

Theorem 2.2 If G is a group of order n = 2^k, then there is a non-adaptive strategy for Magnus which forces the token to visit all positions, hence f(G) = n. Moreover, the strategy needs only n − 1 steps.
Proof: We use induction on k. The theorem is trivial for k = 1. Suppose the statement is true for every l < k; we prove it for k. First we choose a subgroup H of order 2^{k−1}. By induction, Magnus can force the token to visit every position in H in 2^{k−1} − 1 steps. Let h_i denote the element of H which multiplies the position in the ith round (by Derek's decision it is either the element Magnus has chosen, or its inverse). Then the visited positions are 1, h_1, h_1h_2, . . ., h_1h_2 · · · h_{n/2−1}. These are all different. Magnus uses this strategy in the first 2^{k−1} − 1 steps. Then he chooses an element of G which is not in H, and after that he applies again the strategy which works for H. The token thus visits every element of H during the first 2^{k−1} − 1 steps, then goes to an element x of the other coset of H and moves inside that coset in the next 2^{k−1} − 1 steps (since only elements of H are used afterwards, the token does not go back to H).
On the other hand, no matter what Derek chooses, x is multiplied (from the right) by n/2 − 1 different elements, hence the resulting positions are all different. This means every position in the coset is visited, which completes the proof. ✷

The following lemma is a reformulation of Lemma 6 from [8]. Here we give only a sketch of the proof. Note that in [8] the numbers Magnus should choose are given exactly.
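Before turning to the lemma, the non-adaptive strategy of Theorem 2.2 can be made explicit for the cyclic case G = Z_{2^k} (written additively) and verified exhaustively for small k; a sketch (our own illustration, not code from the paper):

```python
from itertools import product

def magnus_sequence(k):
    """Magnus's non-adaptive sequence for Z_{2^k}, following the proof of
    Theorem 2.2: recurse on the index-2 subgroup of even residues, step
    out with an odd element (here 1), then recurse again."""
    if k == 0:
        return []
    inner = [2 * x for x in magnus_sequence(k - 1)]
    return inner + [1] + inner

def visits_everything(k):
    """Check that every Derek strategy (sign pattern) visits all 2^k positions."""
    n = 1 << k
    seq = magnus_sequence(k)
    for signs in product((1, -1), repeat=len(seq)):
        pos, visited = 0, {0}
        for x, s in zip(seq, signs):
            pos = (pos + s * x) % n
            visited.add(pos)
        if len(visited) != n:
            return False
    return True
```

For k = 3 the sequence is 4, 2, 4, 1, 4, 2, 4: Derek's choice is irrelevant for the elements of order 2 in each recursive layer, which is exactly why the strategy can be non-adaptive.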
Lemma 2.3 Let G be a cyclic group of odd prime order p. Then f(G) = p − 1; moreover, if a and b are two positions not yet visited, Magnus can force the token to visit one of them.

Proof: The upper bound follows from a very simple strategy of Derek. He chooses any non-starting position and makes sure the token never visits that particular position. As no element has order 2, he always has a real choice: if x would bring the token to that position, then x^{-1} does not.

For the lower bound it is enough to show that if there are two elements a and b not yet visited, then Magnus can make sure the token visits one of them. There is a unique element m = m(a, b) from which the token can be forced to a or b, namely the element with m(a, b)^2 = ab (it exists and is unique, since squaring is a bijection in a group of odd order): if the token visits m(a, b), Magnus can choose m(a, b)^{-1}a, and then the token moves to a or b. Hence Derek cannot let the token visit m(a, b), and then similarly he cannot let it visit m(a, m(a, b)) and m(b, m(a, b)) either. In general, the set of positions which Derek can never let the token visit is closed under the operation m. But it is easy to see that such sets have p or at most 1 elements, and both cases lead to a contradiction: the set contains a and b, hence it would have to be the whole group, which contains the visited starting position. ✷

Theorem 2.4 If G is an abelian group of order n, then f(G) = f*(n).

Proof: We will show how Magnus (resp. Derek) can make sure that the token visits at least (resp. at most) f*(n) positions. Both in Derek's and in Magnus's optimal strategy a subgroup H of order n/p will be chosen (it exists by Corollary 1.2), where p is the smallest odd prime divisor of n (if there is no such p, the statement follows from Theorem 2.2). Clearly the factor group G/H is isomorphic to Z_p. Thus we can denote the cosets of H by x_0H, x_1H, . . ., x_{p−1}H, where x_0 = 1 and x_ix_j ∈ x_{i+j}H, with the addition i + j considered modulo p (i.e. x_i = x_1^i). We use the notation m(a, b) defined in the proof of Lemma 2.3.

Derek's Strategy
This strategy is a very simple extension of the strategy used in [8]. At the beginning of the game Derek chooses a coset x_iH with x_i ∉ H. We claim that Derek can prevent every position in x_iH from being visited. In each round he can choose a position from the set {yz, yz^{-1}}, where y denotes the current position, which is not in x_iH, and z denotes the element chosen by Magnus.
Suppose y ∈ x_jH and z ∈ x_lH; then yz ∈ x_{j+l}H and yz^{-1} ∈ x_{j−l}H. At least one of them is not in x_iH: otherwise both j + l ≡ i (mod p) and j − l ≡ i (mod p), and by adding the two congruences we obtain 2j ≡ 2i (mod p), hence j ≡ i (mod p) since p is odd, which would mean y ∈ x_iH, a contradiction.
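In Z_n this avoidance rule is a one-liner: the cosets of the subgroup of order n/p are exactly the residue classes mod p. A sketch of Derek's strategy (illustrative code of our own, not from the paper):

```python
import random

def derek_move(pos, step, n, p, avoid):
    """Derek's choice in Z_n: of the two options pos +/- step, return one
    whose residue mod p differs from `avoid`.  Since p is odd, at least
    one of the two options always does."""
    plus, minus = (pos + step) % n, (pos - step) % n
    return plus if plus % p != avoid else minus

# The token, started at 0, never enters the avoided residue class.
n, p, avoid = 45, 3, 1
pos, rng = 0, random.Random(0)
for _ in range(10_000):
    pos = derek_move(pos, rng.randrange(1, n), n, p, avoid)
    assert pos % p != avoid
```

No matter what Magnus announces, the token stays in the 2n/3 positions outside the avoided class, matching the bound f*(45) = 30.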

Magnus's Strategy
This strategy is a simple extension of the second strategy of Magnus from [8]. We use induction on n, and Magnus applies the following strategy recursively on the smaller groups. The strategy proceeds in two phases. In the first phase the token visits at least f*(n/p) positions in all but one of the cosets. In the second phase either f*(n/p) positions of the remaining coset, or all the positions of the other cosets are visited. One can easily see that the number of visited positions is then either at least pf*(n/p), or at least (p − 1)n/p, which proves the theorem.
Phase 1. This phase consists of the repetition of two main steps: in Step A the token visits at least f*(n/p) positions in a coset, and in Step B it moves to another coset.
Step A. The token is at a position x ∈ x_iH. Magnus chooses elements of H, hence the token does not leave x_iH. If n/p is a power of 2, then Magnus applies his strategy from Theorem 2.2. If not, then he applies this two-phase algorithm recursively. In both cases the token visits at least f*(n/p) positions in x_iH.
Step B. Magnus chooses two cosets of H never visited before, and moves the token into one of them. He uses the strategy from Lemma 2.3, applied in the factor group G/H ≅ Z_p. It is easy to see that in this way he can force the token into p − 1 different cosets.
Phase 2. Suppose the coset x_jH has not been visited yet. In this phase either the token goes to a position in x_jH, and then Step A is applied, or the token visits all the positions of the other cosets. One can easily see that in both cases at least f*(n) positions are visited altogether. Suppose y ∈ x_lH has not been visited yet. Then Magnus can, as in Step B of the previous phase, move the token either into x_jH (which would finish the proof), or onto an element z of x_{m(j,l)}H. Then he chooses yz^{-1}. If Derek chooses to multiply z by this element, the token goes to y; but if he chooses to divide, the token goes to an element of the coset x_{m(j,l)−l+m(j,l)}H, which is equal to x_jH, since 2m(j, l) ≡ j + l (mod p). ✷

Remarks. There is a quadratic upper bound given in [8] on the number of steps Magnus needs to visit f(n) positions. This was improved to O(n log n) in [3] and to 3n in [6] and, independently, in [2]. The same upper bounds hold here. Additionally, it is shown in [8] that Magnus and Derek both need only O(1) time to calculate their moves in each round. Here we cannot give such bounds in general; the difference is that simple calculations can take a long time in some groups. However, if the players are allowed to find the subgroups and cosets they want before starting the game, and the computations in the group (multiplying elements, finding inverses) do not require too much time, then similar bounds can be given.
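Returning to the counting at the end of the proof: the guarantee min(p·f*(n/p), (p − 1)n/p) is never below f*(n); in fact the two quantities coincide, because any odd prime divisor of n/p is at least p. A quick numerical check, with f* as defined earlier (helper names are ours):

```python
def smallest_odd_prime_divisor(n):
    """Smallest odd prime divisor of n, or None if n is a power of 2."""
    while n % 2 == 0:
        n //= 2
    if n == 1:
        return None
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d
        d += 2
    return n

def f_star(n):
    p = smallest_odd_prime_divisor(n)
    return n if p is None else (p - 1) * n // p

# Magnus's two-phase strategy yields at least min(p*f*(n/p), (p-1)n/p)
# positions; this always equals f*(n).
for n in range(2, 5000):
    p = smallest_odd_prime_divisor(n)
    if p is not None:
        assert min(p * f_star(n // p), (p - 1) * n // p) == f_star(n)
```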
Here is an example from [4] showing how different the non-abelian case is: if G is the symmetric group S_n, then Magnus can force the token to visit every position, i.e. f(S_n) = n!.

Proof: The following well-known property of S_n is used: every element of it can be written as a product of transpositions. If there is a position x not yet visited and the token is at the position y, Magnus writes y^{-1}x = x_1x_2 · · · x_k, where x_1, x_2, . . ., x_k are transpositions, and says x_1, x_2, . . ., x_k in this order. Then Derek does not have any choice, since a transposition is its own inverse, so the two options in each round coincide; hence the token arrives at x. ✷

One can easily see that the same argument holds for all groups generated by their elements of order 2, for example for all non-abelian finite simple groups: by the Feit-Thompson theorem they have even order, hence they contain at least one element of order 2, and the elements of order 2 generate a normal subgroup, which has to be the whole group.
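The decomposition into transpositions is constructive, so Magnus's strategy in S_n is easy to simulate; a sketch (our own illustration; a permutation is a tuple mapping i to perm[i], and the product a·b means "apply a, then b"):

```python
from itertools import permutations

def compose(a, b):
    """Apply a, then b."""
    return tuple(b[a[i]] for i in range(len(a)))

def inverse(a):
    inv = [0] * len(a)
    for i, x in enumerate(a):
        inv[x] = i
    return tuple(inv)

def transpositions(perm):
    """Write perm as a product of transpositions, applied left to right."""
    p, result = list(perm), []
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            result.append((i, j))
    return result

def swap(i, j, n):
    t = list(range(n))
    t[i], t[j] = j, i
    return tuple(t)

# Magnus forces the token from any position y to any target x: Derek has
# no real choice, because each transposition equals its own inverse.
n = 4
for y in permutations(range(n)):
    for x in permutations(range(n)):
        pos = y
        for (i, j) in transpositions(compose(inverse(y), x)):
            pos = compose(pos, swap(i, j, n))
        assert pos == x
```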

The Derek-Magnus game in groups
Nedev and Quas defined the minimal variant of the vector game in [7].In that variant all the equipment and rules are the same as in the Magnus-Derek game, with only one difference: the goals are exchanged.
Here we call it the Derek-Magnus game. For the sake of brevity we define and prove everything directly in the more general setting of groups.
To get a meaningful definition, we cannot let Magnus choose the identity element. The token starts at the identity element of G. In every round the token starts at a position x, Magnus chooses an element y of G which is not the identity element, and Derek chooses the next position of the token from the set {xy, xy^{-1}}. The goal of Magnus is to visit as few positions as possible; the goal of Derek is the opposite. Let g(G) be the number of positions visited if both Magnus and Derek play optimally.
One can immediately see that if there is an element of G of order 2, then Magnus can repeat that element forever, and the token will visit only 2 positions. More generally, Magnus can choose any non-trivial subgroup and always repeat an element of it; the token will then never leave that subgroup. This strategy is non-adaptive, i.e. Magnus does not wait for Derek's decisions: he chooses the sequence of elements in advance and Derek knows it before the game starts. Let h(G) denote the number of positions visited if both play optimally but Magnus has to play non-adaptively. Obviously h(G) ≥ g(G).

Theorem 3.1 Let G be a finite abelian group. Then h(G) = p, where p is the smallest prime divisor of the order of the group.
Proof: Magnus can choose a non-identity element of a smallest non-trivial subgroup (which has order p) and repeat that. This proves h(G) ≤ p. If p = 2, the lower bound is also trivial. Otherwise, suppose Magnus chooses an infinite sequence of elements of G. At least one of the elements, say x, appears infinitely many times.
We cut the sequence of Magnus into consecutive parts A_1, A_2, . . ., which together cover the whole sequence in this order, such that A_i contains exactly i copies of x. That is, A_1 is the part of the sequence up to and including the first appearance of x, and more generally A_i is the part from the end of A_{i−1} up to and including the ith copy of x after that.
Now we describe a strategy of Derek which guarantees, for every i < p, that at the endpoints of A_1, A_2, . . ., A_i the token is at i different positions, all of them also different from the identity element, which is the starting position. Obviously this is enough to prove the theorem. Suppose this holds for the endpoints of A_1, A_2, . . ., A_{i−1} (which is vacuously the case for i = 1), and let z be the position of the token at the end of A_{i−1}.
Derek chooses a "direction" for the elements in A i not equal to x at first, arbitrarily.Those would move the token into z ′ altogether (it is possible that the token never actually moves to z ′ , since there are also xs in the sequence).Then Derek decides how many of the i copies of x should be used for multiplying and how many for dividing.This way he chooses one element among x i , x i−2 , . . ., x −i .These are all different since the rank of x is odd and at least p > i.The position of the token at the endpoint of A i will be one of z ′ x i , z ′ x i−2 , . . ., z ′ x −i , these are all different, hence at least one of them is different from all the earlier endpoints and the identity element.✷ One can easily see that the size of the smallest subgroup is an upper bound even if G infinite and/or non-abelian.
Let us turn our attention to the adaptive case. Surprisingly, Magnus can do much better. Nedev and Quas examined this version for Z_n in [7]. They showed that g(Z_n) is equal to the size of the smallest balanced set in Z_n, where a set S is balanced if for every x ∈ S there is an element y ≠ 0 of Z_n such that x − y and x + y are both in S.
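This characterization makes g(Z_n) computable by brute force for small n; a sketch (function names are ours, and the search is feasible only for small n):

```python
from itertools import combinations

def is_balanced(S, n):
    """S is balanced in Z_n: every x in S admits some y != 0 with
    x - y and x + y both in S."""
    return all(
        any((x - y) % n in S and (x + y) % n in S for y in range(1, n))
        for x in S
    )

def g_cyclic(n):
    """g(Z_n): the size of the smallest balanced set in Z_n (brute force)."""
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            if is_balanced(set(S), n):
                return size
```

For instance, the subgroup {0, 3} is balanced in Z_6 (take y = 3 for both elements), so g(Z_6) = 2, while the smallest balanced set in Z_5 turns out to have 4 elements.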
One can easily see that the same is true in any finite group. Call a set S ⊆ G balanced if for every x ∈ S there exists a y ≠ 1 in G such that xy and xy^{-1} are both in S. Magnus obviously can choose the smallest balanced set and force the token to stay there: whenever the token is at x ∈ S, he chooses the corresponding y. On the other hand, Derek can easily make sure that the set of positions visited infinitely many times is balanced. He can use the following simple strategy: if the token is at position x and Magnus chooses y, Derek checks whether this situation has occurred before; if it has, he recalls his previous answer and now answers differently. If x is visited infinitely many times, at least one y chosen by Magnus at x also appears infinitely many times, and then both xy and xy^{-1} are visited infinitely many times.
Using this, Nedev and Quas proved the following theorem.
Theorem 3.2 ([7]) Let n be an integer. Then g(Z_n) = min_{p|n} g(Z_p), where the minimum is taken over all primes dividing n. For a prime p, [7] also gives lower and upper bounds on g(Z_p) of order log_2 p. The upper bound was improved to (1 + o(1)) log_2 p in [5]. Here we prove that it is enough to examine the subgroups of prime order in an abelian group, and hence, since they are isomorphic to Z_p, g(G) only depends on the order of G.
Let G = G_1 ⊕ G_2. For a set S let S_1 = {g_1 ∈ G_1 : there exists g_2 ∈ G_2 such that (g_1, g_2) ∈ S}; S_2 is defined in a similar way. For an element x its first coordinate is simply denoted by x_1.

Lemma 3.3 If S is balanced and minimal under inclusion in G, then S_1 is either balanced in G_1 or contains only one element.
Proof: We define a directed graph on S: a vertex x is connected to xy and xy^{-1} whenever they are both in S (for some y ≠ 1). If xy = xy^{-1} for some such y, then {x, xy} is balanced, hence by minimality S does not contain any other elements, and it is easy to see that the statement holds. Otherwise every out-degree is at least 2, and the graph is strongly connected, i.e. for every pair of vertices x, x′ there is a directed path going from x to x′. Indeed, if not, then there is a proper subset U of the vertices such that all edges leaving the vertices of U go to vertices of U, but then U is balanced, contradicting the minimality of S.
For every vertex x ∈ S there is a vertex z ∈ S and an edge from x to z. This means there is a y such that z = xy and xy^{-1} ∈ S. But then x_1y_1 and x_1y_1^{-1} are both in S_1. This makes S_1 balanced, unless y_1 is the identity element of G_1; in that case z and x differ only in the second coordinate. Now suppose S_1 is not a singleton. One can easily see that the strong connectivity of the graph defined in the first paragraph implies that there is an edge from a vertex u with u_1 = x_1 to a vertex v with v_1 ≠ x_1. Then v = uw for some w with uw^{-1} also in S. It is easy to see that v_1 = x_1w_1 and (uw^{-1})_1 = x_1w_1^{-1}, where w_1 is not the identity element of G_1. ✷

Theorem 3.4 Let G be an abelian group of order n. Then g(G) = g(Z_n) = min_{p|n} g(Z_p), where the minimum is taken over all primes dividing n.
Proof: Obviously g(G) ≤ g(Z_p) if p|n, since there is a subgroup of G isomorphic to Z_p. Now let S be a minimal balanced set in G, and consider the canonical form of G given in Theorem 1.1. S is not a singleton, hence there are two elements of it which differ in some coordinate i; in particular S_i is not a singleton. We consider G as the direct sum of Z_{k_i} and the sum of all the other members of the canonical decomposition. Then by Lemma 3.3 S_i is a balanced set in Z_{k_i}, hence g(G) ≥ g(Z_{k_i}). Clearly k_i is a power of a prime p with p|n, hence by Theorem 3.2 g(Z_{k_i}) ≥ g(Z_p). This finishes the proof. ✷

Magnus-Derek game in Z

One group is of special interest: the group of integers. Proposition 2.1 implies that Magnus has a non-adaptive strategy ensuring that infinitely many positions are visited. One could think this solves the problem: if both players play optimally, the number of positions visited is infinite. But we can change Derek's goal a bit: now he wants to maximize the number of unvisited positions. In the case of finite groups the two goals were equivalent. Derek can always choose that the token goes to the right, hence there are infinitely many positions that are never visited. (In the case of other infinite groups this strategy is not well-defined. Moreover, one can easily see that if every element has order 2 and the group is countably infinite, Magnus can force the token to visit every position.) Again, one could think this finishes the case of Z. But the strategy given in Proposition 2.1 results in a very sparse set of visited positions; in some sense Magnus could do much better. Hence we change the definition again.
For a set S ⊆ Z let d(S) denote its density. The goal of Magnus is to maximize the density of the set of positions visited; the goal of Derek is to minimize it. The previously mentioned strategy of Magnus gives density 0, while Derek's strategy shows that at most 1/2 can be achieved.

Theorem 4.1 If both players play optimally, the density of the positions visited is 1/2.

Proof: We have already shown Derek's strategy (he always makes the token move to the right); now we give the strategy of Magnus. We show a little bit more: he can make sure that before the token first leaves [−n, n], for some i ∈ {1, 2, 3}, at least n + i positions have been visited and the token moves to one of −n − i and n + i. We prove this by induction on n; the cases n ≤ 6 can be checked easily.
Suppose the claim is true for every positive integer less than n, but Derek has a strategy that prevents it for n. Applying the claim to n − 1: when the token first left [−n + 1, n − 1], there were n − 1 + i positions visited and the token went to n − 1 + i or −n + 1 − i for some 1 ≤ i ≤ 3. For i = 2 and i = 3 this already proves the statement for n, hence we may assume that i = 1, i.e. at least n positions were visited and the token went to n or −n. If Magnus can move the token to n + 1 or −n − 1, the proof is done, hence we may assume that Derek does not let the token move there.
In the next part of the game Magnus chooses numbers in such a way that, no matter what Derek chooses, the token does not leave [−n − 1, n + 1]. We show that for a set S with −n − 1, n + 1 ∈ S ⊆ [−n − 1, n + 1], if the token starts in [−n − 1, n + 1], Magnus can make sure that a position of S is visited, unless S is a subset of an arithmetic progression with difference at least 3. Obviously, if a, b ∈ S and a + b is even, it is enough for Magnus to reach (a + b)/2; and for that it is enough to reach (a + (a + b)/2)/2 when this is an integer, and so on, as in Lemma 2.3. Let S̄ be the smallest set containing S which is closed under this midpoint operation (among integers); it is enough for Magnus to reach any position of S̄. But S̄ is an arithmetic progression with odd difference: for three consecutive elements x < y < z of S̄, the differences z − y and y − x must be odd (otherwise the midpoint of the corresponding pair would be an element of S̄ strictly between two consecutive elements), hence z − x is even, so (x + z)/2 ∈ S̄, which is only possible if z − y = y − x and this common difference is odd.
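The midpoint closure S̄ and the parity argument above can be checked directly; a small illustration of our own:

```python
def midpoint_closure(S):
    """Close a finite set of integers under taking the midpoint of any
    pair with even sum, as in the argument above."""
    S = set(S)
    changed = True
    while changed:
        changed = False
        for a in list(S):
            for b in list(S):
                m = (a + b) // 2
                if (a + b) % 2 == 0 and m not in S:
                    S.add(m)
                    changed = True
    return S

def is_odd_difference_ap(S):
    """Check that S is an arithmetic progression with odd difference."""
    xs = sorted(S)
    if len(xs) < 2:
        return True
    d = xs[1] - xs[0]
    return d % 2 == 1 and all(b - a == d for a, b in zip(xs, xs[1:]))

# Every closure is an arithmetic progression with odd difference.
for S in ({0, 6}, {0, 4}, {0, 7}, {3, 8, 17}):
    assert is_odd_difference_ap(midpoint_closure(S))
```

For example, the closure of {0, 6} is {0, 3, 6} (difference 3), while the closure of {0, 4} fills in the whole interval {0, 1, 2, 3, 4} (difference 1).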

