A Property of Random Walks on a Cycle Graph

We analyze the Hunter vs. Rabbit game on a graph, a model of communication in an ad hoc mobile network. Let $G$ be a cycle graph with $N$ nodes. The hunter moves from vertex to vertex along the edges of the graph, while the rabbit may move to any vertex in a single step. We formalize the game in the random walk framework: the strategy of the rabbit is modeled by a one-dimensional random walk over $\mathbb{Z}$, and strategies are classified by the order $O(k^{-\beta-1})$ of their Fourier transform. We investigate lower and upper bounds on the probability that the hunter catches the rabbit. For $\beta \in (0,1)$ we find a constant lower bound, that is, one that does not depend on the size $N$ of the graph. For $\beta = 1$ we show the order is equivalent to $O(1/\log N)$, and for $\beta \in (1,2]$ a lower bound is $1/N^{(\beta-1)/\beta}$. These results help to choose the parameter $\beta$ of a rabbit strategy according to the size $N$ of the given graph. We present a formalization of strategies via random walks, theoretical bounds on the probability that the hunter catches the rabbit, and computer simulation results.


Introduction
We consider a game played by two players: the hunter and the rabbit. The game is described using a graph G(V, E), where V is a set of vertices and E is a set of edges. Both players may use randomized strategies. The hunter can move from vertex to vertex along edges, whereas the rabbit can move to any vertex in a single step. The hunter's goal is to catch the rabbit in as few steps as possible; the rabbit, on the other hand, seeks a strategy that maximizes the time until it is caught. If the hunter moves to the vertex the rabbit occupies, the game finishes and we say that the hunter catches the rabbit.
The Hunter vs. Rabbit game is used as a model for analyzing transmission procedures in mobile ad hoc networks [5,6]. The model helps to send electronic messages efficiently using mobile phones: the expected time until the hunter catches the rabbit equals the expected time until the recipient receives the message. One of our goals is to improve these procedures.
We introduce some games resembling the Hunter vs. Rabbit game. The first is the Princess vs. Monster game, in which the Monster tries to catch the Princess in a domain D. The difference from the Hunter vs. Rabbit game is that the Monster catches the Princess if the distance between the two players is smaller than a chosen value; moreover, the Monster moves at a constant speed, whereas the Princess can move at any speed. This game, played on a cycle graph, was introduced by Isaacs [10]. The Princess vs. Monster game has been investigated by Alpern [3], Zelikin [20], and others; Gal analyzed it on a convex multidimensional domain [8]. The next is the deterministic pursuit-evasion game, in which a fugitive hides in a dark spot, for example a tunnel. Parsons introduced the search number of a graph [16,17]: the least number of searchers required to catch a fugitive who hides in dark spots and may move at any speed. LaPaugh [12] showed that if the fugitive is known not to be in edge e at some point of time, then the fugitive cannot enter edge e without being caught in the remainder of the game. Megiddo showed that computing the search number of a graph is NP-hard [14]. If an edge need not be traversed to be cleared, but it suffices to 'look into' the edge from a vertex, then the minimum number of guards needed to catch the fugitive is called the node search number of the graph [11]. The pursuit-evasion problem in the plane was introduced by Suzuki and Yamashita [19], who gave necessary and sufficient conditions for a simple polygon to be searchable by a single pursuer. Later, Guibas et al. [9] presented a complete algorithm and showed that determining the minimal number of pursuers needed to clear a polygonal region with holes is NP-hard. Park et al. [15] gave three necessary and sufficient conditions for a polygon to be searchable and showed that there is an O(n^2)-time algorithm for constructing a search path for an n-sided polygon. Efrat et al. [7] gave a polynomial-time algorithm for the problem of clearing a simple polygon with a chain of k pursuers when the first and last pursuer can only move on the boundary of the polygon.

* Correspondence: y-ikeda(at)math.kyushu-u.ac.jp. 1 Graduate School of Mathematics, Kyushu University. Full list of author information is available at the end of the article. † Equal contributor.
A first study of the Hunter vs. Rabbit game can be found in [2]. The hunter strategy presented there is based on a random walk on the graph, and it is shown that the hunter catches an unrestricted rabbit within O(nm^2) rounds, where n and m denote the number of nodes and edges, respectively. Adler et al. showed that if the hunter chooses a good strategy, the expected time until the hunter catches the rabbit is O(n log(diam(G))), where diam(G) is the diameter of the graph G, and that if the rabbit chooses a good strategy, the expected time until the hunter catches the rabbit is Ω(n log(diam(G))) [1]. Babichenko et al. showed that Adler's strategies yield a Kakeya set consisting of 4n triangles with minimal area [4].
In this paper, we propose three assumptions on the strategy of the rabbit and derive a general lower bound formula for the probability that the hunter catches the rabbit. The strategy of the rabbit is formalized using a one-dimensional random walk over Z. We classify strategies by the order O(k^{-β-1}) of their Fourier transform. If β = 1, the lower bound of the probability that the hunter catches the rabbit is ((c_* π)^{-1} log N + c_2)^{-1}, where c_2 and c_* are constants defined by the given strategy. If β ∈ (1, 2], the lower bound of the probability that the hunter catches the rabbit is c_4 N^{-(β-1)/β}, where c_4 > 0 is a constant defined by the given strategy.
We show experimental results for three examples of the rabbit strategy. The simulations confirm our bound formulas and the asymptotic behavior of those bounds.

Statements of Results
We consider the Hunter vs. Rabbit game on a cycle graph. To explain the game, we introduce some notation. Let X_1, X_2, . . . be independent, identically distributed random variables defined on a probability space (Ω, F, P) taking values in the integer lattice Z. A one-dimensional random walk {S_n}_{n=1}^∞ is defined by S_n = X_1 + X_2 + · · · + X_n. Let Y_1, Y_2, . . . be independent, identically distributed random variables defined on a probability space (Ω_H, F_H, P_H) taking values in the integer lattice Z with |Y_i| ≤ 1, so that the hunter moves only along edges. Let N ∈ N be fixed, and let V_N = {0, 1, . . . , N − 1} be the vertex set of the cycle graph. We denote by X_0^{(N)} a random variable defined on a probability space (Ω_N, F_N, P_N) giving the initial position of the rabbit; the rabbit's strategy {R_n^{(N)}}_{n=0}^∞, defined by R_n^{(N)} = (X_0^{(N)} + S_n) mod N, indicates the position of the rabbit at time n on V_N. The hunter's strategy {H_n^{(N)}}_{n=0}^∞, defined by H_n^{(N)} = (H_0^{(N)} + Y_1 + · · · + Y_n) mod N, indicates the position of the hunter at time n on V_N. Put τ^{(N)} = inf{n ≥ 0 : H_n^{(N)} = R_n^{(N)}}; the hunter catches the rabbit when the hunter and the rabbit are both located at the same vertex.
We will discuss the probability that the hunter catches the rabbit by time N on V_N, and we investigate the asymptotic behavior of this probability as N → ∞.
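The game just described can be simulated directly. The following Python sketch (the paper's own application is written in C++; all names here are ours, and the hunter is assumed to stand still at vertex 0, the setting used in the simulations of Example 1) estimates the probability that the hunter catches the rabbit by time N by Monte Carlo:

```python
import random

def estimate_catch_probability(N, step, hunter=0, trials=5000, seed=7):
    """Monte Carlo estimate of the probability that a stationary hunter at
    vertex `hunter` catches the rabbit by time N on the cycle {0, ..., N-1}."""
    rng = random.Random(seed)
    caught = 0
    for _ in range(trials):
        pos = rng.randrange(N)            # rabbit starts uniformly on V_N
        for _ in range(N + 1):            # times n = 0, 1, ..., N
            if pos == hunter:
                caught += 1
                break
            pos = (pos + step(rng)) % N   # rabbit walks mod N
    return caught / trials

# Rabbit taking the values -1, 0, 1 with equal probability:
p = estimate_catch_probability(100, lambda rng: rng.choice((-1, 0, 1)))
```

Because the seed is fixed, repeated runs are reproducible; the estimate can then be compared with the theoretical lower bounds below.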
where Γ is the gamma function (see Appendix (B)); this X_1 satisfies (A1), (A2) and (5). If X_1 takes the three values −1, 0, 1 with equal probability, then X_1 satisfies (A1) and (A2). The inequality (3) appears to be sharp, because the powers in the upper and lower bounds appearing in (3) cannot be improved. Indeed, we have the following estimates.
Proposition 2 Let H_i^{(N)} = i for every i. If X_1 takes the three values −1, 0, 1 with equal probability, then there exists a constant c_7 > 0 such that for N ∈ N the stated estimate holds. The proofs of Proposition 1 and Proposition 2 are given in Appendix (A).

Computer simulation
In this section, we show some experimental results for the Hunter vs. Rabbit game on a cycle graph. We compute P{S_n mod N = k} using the gamma function and the discrete distribution class in C++. With this application we can compute the probability that the rabbit is caught and the expected time until the rabbit is caught.
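The quantity P{S_n mod N = k} can also be computed exactly without the gamma-function evaluation used by our application: fold the step law onto Z/NZ and convolve it cyclically n times. A Python sketch of this alternative (function names are ours) is:

```python
def walk_mod_N(step_probs, N, n):
    """Exact distribution of S_n mod N (walk started at 0), computed by
    folding the step law onto Z/NZ and performing n cyclic convolutions."""
    q = [0.0] * N
    for k, p in step_probs.items():   # fold the step distribution onto Z/NZ
        q[k % N] += p
    dist = [1.0] + [0.0] * (N - 1)    # S_0 = 0 with probability one
    for _ in range(n):
        nxt = [0.0] * N
        for i, pi in enumerate(dist):
            if pi:
                for j, qj in enumerate(q):
                    nxt[(i + j) % N] += pi * qj
        dist = nxt
    return dist

# Rabbit taking -1, 0, 1 with equal probability, on the 5-cycle, after 2 steps:
dist = walk_mod_N({-1: 1/3, 0: 1/3, 1: 1/3}, N=5, n=2)
```

This O(nN^2) dynamic program suffices for the graph sizes N = 100, 500, 1000 used in the tables below.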
We now consider a lower bound L(N, a) of the probability that the hunter catches the rabbit. According to Propositions 3 and 6, we define L(N, a) as follows. We note that β and c_* are determined by the given P{X_t = k} in each example; we choose appropriate constants ε, ρ_* and C_* for each example.

Example 1
We consider a generalization of the case of [1]. Let the distribution of X_1 be given with parameter a ≥ 1/2. We note that β = 1, c_* = π and ε = 1/2 in Remark 1. If a = 1, this is exactly the case treated in [1]. We can define C_* and ρ_* for this case, so we have (9); the proof of (9) is given in Appendix (D). Figure 1 shows an experimental result of the probabilities for all initial positions of the rabbit with N = 100 and a = 1. The horizontal axis is the initial position of the rabbit, and the vertical axis shows the probability that the rabbit is caught. The red line in the figure is the probability that the hunter catches the rabbit. The blue line is the average of the probabilities that the hunter catches the rabbit. The green line is L(N, a). In this case, the hunter does not move from the initial position 0. As the figure shows, the average probability that the hunter catches the rabbit is bounded below by L(N, a).
In this case, the average A of the probability that the hunter catches the rabbit over the initial positions of the rabbit is approximately 0.4258, while 1/L(100, 1) ≈ 7.43823, that is, L(100, 1) ≈ 0.1344, so A is indeed bounded below by L(100, 1). Table 1 shows the experimental results of Example 1 with a = 1 and N = 100, 500 and 1000, where A is the average of the probability that the hunter catches the rabbit. The table illustrates the asymptotic behavior of (8).

Example 2
We consider the case of β ∈ (0, 2). We put P{X_1 = k} = 1/(2a|k|^{β+1}) for k ≠ 0, where a > Σ_{k=1}^∞ 1/k^{β+1}, and the remaining mass is placed at k = 0. By Remark 2, c_* = π / (2aΓ(β+1) sin(βπ/2)) and ε = (2−β)/2. Then the lower bound L(N, a) of the probability that the hunter catches the rabbit is as displayed, where ρ_* and C_* are appropriate constants for each example. When a = 2.5 and β = 1, we set C_* ≈ 0.177245 and ρ_* ≈ 0.694811. Figure 2 shows an experimental result with β = 1, N = 100 and a = 2.5. In this case, the average A of the probability that the hunter catches the rabbit is approximately 0.318, while 1/L(100, 2.5) ≈ 6.99237, so A is bounded below by L(100, 2.5) ≈ 0.143. Table 2 shows the experimental results of Example 2 with β = 1, a = 2.5 and N = 100, 500 and 1000; it shows that the ratio A/L(N, a) (> 1) is decreasing.
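The step law of this example can be tabulated numerically. The sketch below assumes the heavy-tailed law P(X = k) = 1/(2a|k|^{β+1}) for k ≠ 0 (the normalization condition a > Σ_{k≥1} k^{−(β+1)} makes the leftover mass at 0 nonnegative); the truncation at |k| ≤ K and the function name are our own simplifications, since the true law has infinite support:

```python
def example2_pmf(a, beta, K=10000):
    """Truncated probability mass function P(X = k) = 1/(2a|k|^{beta+1})
    for 0 < |k| <= K, with the remaining probability mass placed at k = 0.
    Requires a > sum_{k>=1} k^{-(beta+1)} so that the mass at 0 is >= 0."""
    tail = sum(1.0 / (a * k ** (beta + 1)) for k in range(1, K + 1))
    assert tail < 1.0, "need a > sum_{k>=1} 1/k^{beta+1}"
    pmf = {0: 1.0 - tail}
    for k in range(1, K + 1):
        pmf[k] = pmf[-k] = 1.0 / (2 * a * k ** (beta + 1))
    return pmf

# The parameters of Figure 2: a = 2.5, beta = 1.
pmf = example2_pmf(a=2.5, beta=1.0)
```

Such a table can be fed directly into the simulation or the cyclic-convolution computation above.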
Example 3

We consider the case where X_1 takes the three values −1, 0, 1 with equal probability. By Remark 2, β = 2, c_* = 1/3 and ε = 2. In this case, the lower bound L′(N) of the probability that the hunter catches the rabbit is as displayed. (This can be proved in the same way as in Appendix (D).) Figure 3 shows an experimental result of Example 3; the green line in Figure 3 is L′(N).
We thus obtain concrete lower bounds for the average probability that the hunter catches the rabbit in these examples.

Upper bounds and Lower bounds
In this section, we give a relation between the probability that the hunter catches the rabbit and the distribution of the rabbit's random walk.

Proposition 3
For N ∈ N \ {1} and y_1, y_2, . . . , y_N ∈ Z with |y_n − y_{n+1}| ≤ 1 (n = 1, 2, . . . , N − 1), where and

Proof. We note that . Since P_R^{(N)} = μ_N × P, the above relation implies . The probability in the double summation on the right-hand side above is equal to , by the Markov property. It is easy to verify that for any m ∈ Z, . Here we used (11).

Fourier transform
In this section, we introduce some results concerning one-dimensional random walks.
By using inequality (21), we perform the change of variables θ = x/n^{1/β}, so that .
We decompose I(n, l) as follows: Therefore, The proof of Proposition 4 will be complete if we show that each term on the right-hand side of the above inequality is bounded by a constant (independent of l) multiple of n^{−δ}.
We perform the change of variables t = c * x β , so that With the help of the above calculation, Proposition 4 gives the following corollary.
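A change of variables of the kind used here turns the exponential integral into a gamma integral; a typical computation of this form (a sketch with generic limits, not the paper's exact display) is:

```latex
\int_0^\infty e^{-c_* x^\beta}\,dx
  \;=\; \int_0^\infty e^{-t}\,\frac{1}{\beta}\,c_*^{-1/\beta}\,t^{1/\beta-1}\,dt
  \;=\; c_*^{-1/\beta}\,\Gamma\!\Bigl(\frac{1}{\beta}+1\Bigr),
\qquad t = c_* x^\beta .
```

This is how the gamma function of Appendix (B) enters the constants in the bounds.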

Proposition 5
If a one-dimensional random walk satisfies (A2), then for l ∈ Z and n ∈ {0} ∪ N, where

Proof. By the definition of φ(θ), Therefore, We note that φ(θ)^n ∈ R and Let N be an even number. Then, by (27), Therefore, we have (26) for every even N. The proof of (26) for odd N is similar and is omitted.
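The arguments of this section rest on the characteristic function φ(θ) = E[e^{iθX_1}] of the walk. For the three-valued example, φ(θ) = (1 + 2 cos θ)/3, and the Fourier inversion formula P{S_n = l} = (1/2π) ∫_{−π}^{π} φ(θ)^n e^{−ilθ} dθ can be checked numerically; the Python sketch below (our names, not the paper's code) uses the midpoint rule, and since φ is real and even only the cosine part survives:

```python
import math

def phi(theta):
    """Characteristic function of the step taking -1, 0, 1 with equal
    probability: phi(theta) = (1 + 2 cos(theta)) / 3."""
    return (1.0 + 2.0 * math.cos(theta)) / 3.0

def p_Sn_eq(n, l, M=20000):
    """P(S_n = l) via Fourier inversion,
       P(S_n = l) = (1/2pi) * int_{-pi}^{pi} phi(theta)^n cos(l*theta) dtheta,
    approximated with the midpoint rule on M subintervals."""
    h = 2.0 * math.pi / M
    s = 0.0
    for i in range(M):
        theta = -math.pi + (i + 0.5) * h
        s += (phi(theta) ** n) * math.cos(l * theta)
    return s * h / (2.0 * math.pi)
```

Because the integrand is a trigonometric polynomial, the equispaced rule is exact up to rounding, e.g. P{S_2 = 0} = 3/9 = 1/3.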

Proof of Theorem 1
In this section we prove Theorem 1. To prove it, we use the following proposition.
It remains to show the last inequality in (2). To achieve this, we will use Proposition 3 and Corollary 4.
The proof of Theorem 1 is complete.

Conclusion and Future works
We formalized the Hunter vs. Rabbit game using the random walk framework and generalized the probability distribution of the rabbit's strategy using four assumptions. We obtained a general lower bound formula for the probability that the rabbit is caught. Let P{X_1 = k} = O(k^{−β−1}). If β ∈ (0, 1), the lower bound of the probability that the hunter catches the rabbit is a constant c_1 > 0. If β = 1, the lower bound of the probability that the rabbit is caught is ((c_* π)^{−1} log N + c_2)^{−1}, where c_2 and c_* are constants defined by the given strategy. If β ∈ (1, 2], the lower bound of the probability that the rabbit is caught is c_4 N^{−(β−1)/β}, where c_4 > 0 is a constant defined by the given strategy.
We showed experimental results for three examples of rabbit strategies, confirming our bound formulas and the asymptotic behavior of those bounds. In this paper, we considered the lower bound of the probability that the rabbit is caught in order to bound the worst-case expected time until the rabbit is caught. Our motivation is to find the best strategy for the rabbit, and our results help to find it. On the other hand, what is the best strategy of the hunter, and what is the worst? Future work includes showing that the best strategy of the hunter is Y_{j+1} = Y_j + 1, and that the worst strategy of the hunter is Y_j = H_0^{(N)} for any j.

Acknowledgment
I would like to express my deepest gratitude to Professor Hiroyuki Ochiai for his valuable advice and guidance. I would like to thank Mr. Norikazu Ishii for his help.
A simple calculation shows that the absolute value of the difference between the right-hand side of the above and .
We perform integration by parts on the left-hand side of (48) and use (47). Then we have (48) and (5).