A property of random walks on a cycle graph
Pacific Journal of Mathematics for Industry volume 7, Article number: 3 (2015)
Abstract
We analyze the Hunter vs. Rabbit game on a graph, which is a model of communication in ad hoc mobile networks. Let G be a cycle graph with N nodes. The hunter can move from a vertex to a vertex along an edge, while the rabbit can jump from any vertex to any vertex of the graph. We formalize the game using the random walk framework. The strategy of the rabbit is formalized using a one-dimensional random walk over $\mathbb{Z}$. We classify strategies using the order $O(k^{-\beta-1})$ of their Fourier transform. We investigate lower bounds and upper bounds of the probability that the hunter catches the rabbit. We find a constant lower bound, independent of the size N of the graph, if β∈(0,1). We show that the probability is of order $O(1/\log N)$ if β=1, and that a lower bound is $c_{4} N^{-(\beta-1)/\beta}$ for a constant $c_{4}>0$ if β∈(1,2]. These results help us to choose the parameter β of a rabbit strategy according to the size N of the given graph. We present a formalization of strategies using random walks, theoretical bounds on the probability that the hunter catches the rabbit, and computer simulation results.
Introduction
We consider a game played by two players, the hunter and the rabbit, described on a graph G(V,E) where V is a set of vertices and E is a set of edges. Both players may use randomized strategies. The hunter can move from vertex to vertex along edges, whereas the rabbit can jump to any vertex in one step. The hunter’s purpose is to catch the rabbit in as few steps as possible; the rabbit, on the other hand, seeks a strategy that maximizes the time until it is caught. If the hunter moves to the vertex that the rabbit occupies, the game finishes and we say that the hunter catches the rabbit.
The Hunter vs. Rabbit game model is used for analyzing transmission procedures in mobile ad hoc networks [5,6]. This model helps to send electronic messages efficiently using mobile phones: the expected time until the hunter catches the rabbit equals the expected time until the recipient receives the message. One of our goals is to improve these procedures.
We introduce some games resembling the Hunter vs. Rabbit game. The first is the Princess vs. Monster game, in which the Monster tries to catch the Princess in an area D. The difference from the Hunter vs. Rabbit game is that the Monster catches the Princess once the distance between the two players is smaller than a chosen value; also, the Monster moves at a constant speed whereas the Princess can move at any speed. This game, played on a cycle graph, was introduced by Isaacs [10]. The Princess vs. Monster game has been investigated by Alpern [3], Zelikin [20], and others. Gal analyzed the Princess-Monster game on a convex multidimensional domain [8].
The next is the deterministic pursuit-evasion game, in which a fugitive hides in a dark place, for example a tunnel. Parsons introduced the search number of a graph [16,17]: the least number of searchers required to catch a fugitive that hides in the graph and moves at any speed. LaPaugh [12] showed that recontamination does not help: if the fugitive is known not to be in edge e at some point of time, the search can be carried out so that the fugitive never enters edge e without being caught in the remainder of the game. Megiddo et al. showed that computing the search number of a graph is NP-hard [14]. If an edge can be cleared without moving along it, because it suffices to ‘look into’ an edge from a vertex, then the minimum number of guards needed to catch the fugitive is called the node search number of the graph [11]. The pursuit-evasion problem in the plane was introduced by Suzuki and Yamashita [19], who gave necessary and sufficient conditions for a simple polygon to be searchable by a single pursuer. Later Guibas et al. [9] presented a complete algorithm and showed that determining the minimal number of pursuers needed to clear a polygonal region with holes is NP-hard. Park et al. [15] gave three necessary and sufficient conditions for a polygon to be searchable and showed that there is an O(n²) time algorithm for constructing a search path for an n-sided polygon. Efrat et al. [7] gave a polynomial time algorithm for clearing a simple polygon with a chain of k pursuers when the first and last pursuer can only move on the boundary of the polygon.
A first study of the Hunter vs. Rabbit game can be found in [2]. The hunter strategy presented there is based on a random walk on the graph, and it is shown that the hunter catches an unrestricted rabbit within O(nm²) rounds, where n and m denote the number of nodes and edges, respectively. Adler et al. showed that the hunter has a strategy under which the expected time to catch the rabbit is O(n log(diam(G))), where diam(G) is the diameter of the graph G, and that the rabbit has a strategy under which the expected catch time is Ω(n log(diam(G))) [1]. Babichenko et al. showed that Adler’s strategies yield a Kakeya set consisting of 4n triangles with minimal area [4].
In this paper, we propose three assumptions for the strategy of the rabbit and obtain a general lower bound formula for the probability that the hunter catches the rabbit. The strategy of the rabbit is formalized using a one-dimensional random walk over $\mathbb{Z}$, and strategies are classified by the order $O(k^{-\beta-1})$ of their Fourier transform. If β=1, the lower bound of the probability that the hunter catches the rabbit is $((c_{*}\pi)^{-1}\log N + c_{2})^{-1}$, where $c_{2}$ and $c_{*}$ are constants determined by the given strategy. If β∈(1,2], the lower bound of the probability that the hunter catches the rabbit is $c_{4} N^{-(\beta-1)/\beta}$, where $c_{4}>0$ is a constant determined by the given strategy.
We show experimental results for three examples of the rabbit strategy:

1. $$P\left\{X_{1}=k\right\} =\left\{ \begin{array}{ll} \frac{1}{2a(\vert k\vert+1)(\vert k\vert+2)} &\quad (k\in \mathbb{Z} \setminus\left\{0\right\})\\ 1-\frac{1}{2a} &\quad (k=0) \end{array} \right.$$

2. $$P\left\{X_{1}=k\right\}=\left\{ \begin{array}{ll} \frac{1}{2a\vert k\vert^{\beta +1}} &\quad (k\in\mathbb{Z}\setminus\left\{0\right\})\\ 1-\frac{1}{a}\sum\limits_{k=1}^{\infty}\frac{1}{k^{\beta +1}} &\quad (k=0) \end{array} \right. $$

3. $$P\left\{X_{1}=k\right\}=\left\{ \begin{array}{ll} \frac{1}{3} &\quad (k\in\left\{-1,0,1\right\})\\ 0 &\quad (k\not\in\left\{-1,0,1\right\}). \end{array} \right. $$
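Distribution 1 above is properly normalized because its positive-k probabilities telescope: $\sum_{k\ge 1} \frac{1}{(k+1)(k+2)} = \frac{1}{2}$, so the total mass at $k \neq 0$ is $\frac{1}{2a}$. A small C++ check of this (a sketch of ours; the truncation point M is an implementation choice):

```cpp
#include <cmath>

// Truncated total mass of strategy 1: the k = 0 term 1 - 1/(2a) plus the
// terms 1/(2a(|k|+1)(|k|+2)) for 0 < |k| <= M.  By the telescoping sum
// above, the exact value is 1 - 1/(a(M+2)), which tends to 1 as M grows.
double strategy1_total_mass(double a, int M) {
    double total = 1.0 - 1.0 / (2.0 * a);          // k = 0
    for (int k = 1; k <= M; ++k)                   // pair k with -k
        total += 2.0 / (2.0 * a * (k + 1.0) * (k + 2.0));
    return total;
}
```

With a = 1 and M = 100 this returns 1 − 1/102; the same telescoping argument shows why $a \ge \frac{1}{2}$ is needed for the k = 0 term to be a probability.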
We can confirm our bound formulas, and the asymptotic behavior of those bounds, by the results of the simulations.
Statements of results
We consider the Hunter vs. Rabbit game on a cycle graph. To explain the game, we introduce some notation. Let X₁, X₂, … be independent, identically distributed random variables defined on a probability space $(\Omega, \mathcal{F}, P)$ taking values in the integer lattice $\mathbb{Z}$. A one-dimensional random walk \( \{ S_{n} \}_{n=1}^{\infty }\) is defined by
$$ S_{n} = X_{1} + X_{2} + \cdots + X_{n}. $$
Let Y₁, Y₂, … be independent, identically distributed random variables defined on a probability space \((\Omega _{\mathcal {H}}, {\cal F}_{\mathcal {H}}, P_{\mathcal {H}})\) taking values in the integer lattice $\mathbb{Z}$ with
Let \( N \in {\mathbb {N}}\) be fixed. We denote by \(X_{0}^{(N)} \) a random variable defined on a probability space \((\Omega _{N}, {\mathcal F}_{N}, \mu _{N})\) taking values in $V_N := \{0,1,2,\ldots,N-1\}$ with
For \( b \in {\mathbb {Z}}\), we denote by (b mod N) the remainder of b divided by N.
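In C++ (the language of the simulations in this paper) the built-in % operator truncates toward zero, so it does not directly give (b mod N) for negative b; a small helper of ours makes the paper's convention explicit:

```cpp
// (b mod N) in the paper's sense: the remainder of b divided by N,
// always lying in {0, 1, ..., N-1}, even when b is negative.
long mod_n(long b, long N) {
    long r = b % N;        // C++ remainder: may be negative when b < 0
    return (r < 0) ? r + N : r;
}
```

For example, C++'s `-1 % 5` evaluates to −1, while `mod_n(-1, 5)` gives 4 as the paper intends.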
A rabbit’s strategy \( \left \{\mathcal {R}_{n}^{(N)} \right \}_{n=0}^{\infty } \) is defined by
$$ \mathcal{R}_{n}^{(N)} = \left( \left( X_{0}^{(N)} + S_{n} \right) \mod N \right) \qquad (\text{with } S_{0} := 0). $$
\( \mathcal {R}_{n}^{(N)} \) indicates the position of the rabbit at time n on $V_N$. The hunter’s strategy \( \left \{ \mathcal {H}_{n}^{(N)} \right \}_{n=0}^{\infty } \) is defined by
\( \mathcal {H}_{n}^{(N)} \) indicates the position of the hunter at time n on V N . Put
The hunter catches the rabbit when the hunter and the rabbit are both located at the same vertex.
We will discuss the probability that the hunter catches the rabbit by time N on V N , that is,
We investigate the asymptotic estimate of this probability as N→∞.
Definition 1.
We define conditions (A1), (A2) and (A3) as follows.
- (A1) The random walk \(\{S_{n} \}_{n=1}^{\infty }\) is strongly aperiodic, i.e., for each \(y \in \mathbb {Z}\), the smallest subgroup containing the set
$$\begin{array}{@{}rcl@{}} \left\{y+k \in {\mathbb{Z}} \ \vert \ P\left\{X_{1} = k \right\} > 0\right\} \end{array} $$
is $\mathbb{Z}$.
- (A2) \(P\left \{X_{1}= k \right \} = P\left \{X_{1}=- k \right \} \quad (k \in {\mathbb {Z}})\).
- (A3) There exist β∈(0,2], $c_{*}>0$ and ε>0 such that
$$\begin{array}{@{}rcl@{}} {}\phi(\theta) := \sum_{k \in \mathbb{Z}}e^{i\theta k}P\left\{X_{1} = k \right\} = 1 - c_{*} \vert \theta \vert^{\beta} + O\left(\vert \theta \vert^{\beta + \varepsilon}\right). \end{array} $$
We denote the β in (A3) as \(\beta _{\mathcal {R}}\).
Theorem 1.
Assume that X 1 satisfies (A1)−(A3).
- (I) If \(\beta _{\mathcal {R}} \in (0,1)\), then there exists a constant $c_{1}>0$ such that for \( N \in \mathbb {N} \setminus \{ 1 \} \) and \( y_{1},y_{2}, \ldots, y_{N} \in {\mathbb {Z}}\) with |y_n − y_{n+1}|≤1 (n=1,2,…,N−1),
$$ c_{1} \leq \mathbb{P}_{\mathcal{R}}^{(N)} \left(\bigcup_{n=1}^{N} \left\{ \mathcal{R}_{n}^{(N)} = (y_{n} \mod N) \right\} \right). $$ (1)
- (II) If \(\beta _{\mathcal {R}}=1\), then there exist constants $c_{2}>0$ and $c_{3}>0$ such that for \( N \in \mathbb {N} \setminus \{ 1 \} \) and \( y_{1}, y_{2}, \ldots, y_{N} \in {\mathbb {Z}}\) with |y_n − y_{n+1}|≤1 (n=1,2,…,N−1),
$$\begin{array}{@{}rcl@{}} {}\frac{1}{ \frac{1}{ c_{*} \pi} \log N +c_{2}} &\leq & \mathbb{P}_{\mathcal{R}}^{(N)} \left(\bigcup_{n=1}^{N}\left\{ \mathcal{R}_{n}^{(N)} = (y_{n} \mod N) \right\} \right)\\ &\leq & \frac{c_{3}}{ \log N}. \end{array} $$ (2)
- (III) If \(\beta _{\mathcal {R}} \in (1,2]\), then there exists a constant $c_{4}>0$ such that for \( N \in \mathbb {N} \setminus \{ 1 \} \) and \( y_{1}, y_{2}, \ldots, y_{N} \in {\mathbb {Z}}\) with |y_n − y_{n+1}|≤1 (n=1,2,…,N−1),
$$\begin{array}{@{}rcl@{}} \frac{c_{4}}{ N^{(\beta-1)/ \beta}} \leq \mathbb{P}_{\mathcal{R}}^{(N)} \left(\bigcup_{n=1}^{N}\left\{ \mathcal{R}_{n}^{(N)} = (y_{n} \mod N) \right\} \right). \end{array} $$ (3)
The following bounds are obtained as a corollary of Theorem 1.
Corollary 1.
Assume (A1)−(A3).
If \(\beta _{\mathcal {R}} \in (0,1)\), then there exists a constant c 1>0 such that for \( N \in \mathbb {N} \setminus \{ 1 \} \),
If \(\beta _{\mathcal {R}}=1\), then there exist constants c 2>0 and c 3>0 such that for \( N \in \mathbb {N} \setminus \{ 1 \} \),
If \(\beta _{\mathcal {R}} \in (1,2]\), then there exists a constant c 4>0 such that for \( N \in \mathbb {N} \setminus \{ 1 \} \),
Remark 1.
Adler, Räcke, Sivadasan, Sohler and Vöcking considered \( \tilde {\mathbb {P} }^{(N)} \left (\cup _{n=1}^{N} \left \{\mathcal {H}_{n}^{(N)} = \mathcal {R}_{n}^{(N)} \right \}\right)\) in the case of
In this case, X₁ satisfies (A1), (A2), and (A3) with β=1, and we obtain (4) in Corollary 1, which coincides with the result of Lemma 3 in [1].
Remark 2.
For β∈(0,2), let
$$P\left\{X_{1}=k\right\}=\left\{ \begin{array}{ll} \frac{1}{2a\vert k\vert^{\beta +1}} &\quad (k\in\mathbb{Z}\setminus\left\{0\right\})\\ 1-\frac{1}{a}\sum\limits_{k=1}^{\infty}\frac{1}{k^{\beta +1}} &\quad (k=0) \end{array} \right.$$
with a constant a satisfying \(a > \sum _{k=1}^{\infty } (1/ k^{\beta +1}). \) Then ϕ(θ) in (A3) is
$$\phi(\theta) = 1 - \frac{\pi}{2a\,\Gamma(\beta+1)\sin(\beta\pi/2)}\,\vert\theta\vert^{\beta} + O\left(\vert\theta\vert^{\beta+(2-\beta)/2}\right),$$
where Γ is the gamma function (see the proof of Proposition 2 in the Appendix). X₁ satisfies (A1), (A2) and (5).
If X₁ takes the three values −1, 0, 1 with equal probability, then $\phi(\theta) = (1+2\cos\theta)/3 = 1 - \frac{1}{3}\vert\theta\vert^{2} + O(\vert\theta\vert^{4})$, so X₁ satisfies (A1), (A2), and (A3) with β=2.
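For this three-valued walk the characteristic function is $\phi(\theta) = (1+2\cos\theta)/3$, so the expansion in (A3) can be checked directly; the following sketch (names and tolerances ours) verifies both the quadratic expansion and the fact that $|\phi(\theta)| < 1$ away from multiples of $2\pi$, which is how strong aperiodicity is used later:

```cpp
#include <cmath>

// Characteristic function of the uniform step on {-1, 0, 1}.
double phi3(double theta) { return (1.0 + 2.0 * std::cos(theta)) / 3.0; }

// Error of the expansion phi(theta) = 1 - theta^2/3 + O(theta^4);
// expanding cos gives the remainder theta^4/36 - theta^6/1080 + ...
double expansion_error(double theta) {
    return std::fabs(phi3(theta) - (1.0 - theta * theta / 3.0));
}
```

For example, expansion_error(0.1) is about 2.8·10⁻⁶, matching θ⁴/36, and |φ(π)| = 1/3 < 1.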
The inequality (3) seems to be sharp: the power appearing in (3) cannot be improved, as the following upper estimates show.
Proposition 1.
Let \(\mathcal {H}_{i}^{(N)} = 0\) for any i and assume (A1)−(A3). If \(\beta _{\mathcal {R}} \in (1,2]\), then there exist constants c 5,c 6>0 such that for \( N \in \mathbb {N}\),
Proposition 2.
Let \(\mathcal {H}_{i}^{(N)} = i\) for any i. If X 1 takes three values −1,0,1 with equal probability, then there exists a constant c 7>0 such that for \( N \in \mathbb {N}\),
The proofs of Proposition 1 and Proposition 2 are given in the Appendix.
Remark 3.
Assume (A1) and (A2), and suppose there exist $c_{*}>0$ and ε>0 such that (A3) holds with β=1. Then
Computer simulation
In this section, we show some experimental results for the Hunter vs. Rabbit game on a cycle graph. We compute P{(S_n mod N)=k} using the gamma function and the class std::discrete_distribution in C++. With this program we can compute the probability that the rabbit is caught and the expected time until the rabbit is caught.
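Our simulator itself is not reproduced here; the following simplified Monte Carlo sketch (function names, parameters and seed are ours) estimates the probability that a hunter who stays at vertex 0 catches, within N steps, a rabbit that starts uniformly on V_N and takes steps uniform on {−1,0,1} (the simple strategy of Example 3), using std::discrete_distribution as in our implementation:

```cpp
#include <random>

// Monte Carlo estimate of the probability that a hunter fixed at vertex 0
// catches, within N steps, a rabbit that starts uniformly on the cycle
// V_N = {0, ..., N-1} and takes i.i.d. steps uniform on {-1, 0, +1}.
double catch_probability(int N, int trials, unsigned seed) {
    std::mt19937 gen(seed);
    std::uniform_int_distribution<int> start(0, N - 1);
    std::discrete_distribution<int> step({1.0, 1.0, 1.0});  // indices 0,1,2
    int caught = 0;
    for (int t = 0; t < trials; ++t) {
        int pos = start(gen);
        for (int n = 1; n <= N; ++n) {
            pos = (pos + step(gen) - 1 + N) % N;  // shift index to {-1,0,1}
            if (pos == 0) { ++caught; break; }
        }
    }
    return static_cast<double>(caught) / trials;
}
```

The estimate decreases as N grows, consistent with the $N^{-(\beta-1)/\beta} = N^{-1/2}$ lower bound for β=2.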
In this section, we consider a lower bound L(N,a) of the probability that the hunter catches the rabbit. According to Proposition 3 and Proposition 6, we define L(N,a) as follows:
where
and
We note that β and $c_{*}$ are determined by the given P{X_t = k} in each example. We choose appropriate constants ε, $\rho_{*}$ and $C_{*}$ for each example.
Example 1.
We consider the generalization of the case of [1]. Let
where \(a \geq \frac {1}{2}\). We note that β=1, $c_{*}=\pi/(2a)$ and ε=1/2, as in Remark 1 (cf. the computation in Appendix D). If a=1, then this is the case in [1].
We can define C ∗ and ρ ∗ for this case. So we have
The proof of (9) is given in Appendix D.
Figure 1 shows an experimental result: the probabilities for all initial positions of the rabbit with N=100 and a=1. The horizontal axis is the initial position of the rabbit, and the vertical axis is the probability that the rabbit is caught. The red line is the probability that the hunter catches the rabbit for each initial position, the blue line is the average of these probabilities, and the green line is L(N,a). In this case, the hunter does not move from the initial position 0. As the figure shows, the average probability that the hunter catches the rabbit is bounded below by L(N,a).
In this case, the average over the rabbit's initial positions of the probability that the hunter catches the rabbit is approximately 0.4258, so we have
and
Table 1 shows the experimental results of Example 1 with a=1 and N=100, 500 and 1000, illustrating the asymptotic behavior of (8).
Example 2.
We consider the case of β∈(0,2). We put
where \(a\! >\! \sum _{k=1}^{\infty }\frac {1}{k^{\beta +1}}\). By Remark 2, \(c_{*} = \frac {\pi }{2a \Gamma (\beta +1) \sin (\beta \pi /2)}\) and \(\varepsilon = \frac {2- \beta }{2}\). Then, the lower bound of the probability that the hunter catches the rabbit L(N,a) is
where $\rho_{*}$ and $C_{*}$ are appropriate constants for each example. When a=2.5 and β=1, we set \(C_{*} \fallingdotseq 0.177245\) and \(\rho _{*} \fallingdotseq 0.694811\). So we have
Figure 2 shows an experimental result with β=1, N=100 and a=2.5. In this case, the average probability that the hunter catches the rabbit is approximately 0.318, so we have
and
Table 2 shows the experimental results of Example 2 with β=1, a=2.5 and N=100, 500 and 1000. It shows that the value of A/L(N,a) (>1) is decreasing.
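The constant $c_{*}$ of Remark 2 is easy to evaluate numerically; a small helper (ours; Γ computed with std::tgamma) reproduces the value used in this example:

```cpp
#include <cmath>

// c_* from Remark 2: c_* = pi / (2 a Gamma(beta+1) sin(beta*pi/2)),
// valid for beta in (0, 2).
double c_star(double a, double beta) {
    const double pi = 3.141592653589793;
    return pi / (2.0 * a * std::tgamma(beta + 1.0) * std::sin(beta * pi / 2.0));
}
```

For a=2.5 and β=1 this gives $c_{*} = \pi/5 \fallingdotseq 0.6283$.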
Example 3.
We put
By Remark 2, β=2, \(c_{*} = \frac {1}{3}\) and ε=2. In this case, the lower bound L′(N) of the probability that the hunter catches the rabbit is
(This can be proved in the same way as in Appendix D.) Figure 3 shows an experimental result of Example 3. The green line in Fig. 3 is L′(N).
We thus obtained concrete lower bounds for the average probability that the hunter catches the rabbit in these examples.
Upper bounds and lower bounds
In this section, we give a relation between
and one-dimensional random walk \( \{ S_{n} \}_{n=1}^{\infty }\).
Proposition 3.
For \( N \in {\mathbb {N}} \setminus \{ 1 \}\) and \(y_{1},y_{2}, \ldots, y_{N} \in {\mathbb {Z}} \) with |y n −y n+1|≤1 (n=1,2,…,N−1),
where
and
Proof.
We note that
by the definition of \( \left \{ \mathcal {R}_{n}^{(N)} \right \}_{n=0}^{\infty } \). By \(\mathbb {P}_{\mathcal {R}}^{(N)} = \mu _{N} \times P \), the above relation implies
For l∈{0,1,…,N−1} and n∈{2,3,…,N}, we decompose the event {l+S n ∈[y n ] N } according to the value of the first hitting time for [y 1] N ,[y 2] N ,…,[y n ] N and the hitting place to obtain
The probability in the double summation on the right-hand side above is equal to
by the Markov property. It is easy to verify that for any \( m \in {\mathbb {Z}}\),
by |y n −y j |≤n−j. Therefore
for l∈{0,1,…,N−1} and n∈{1,2,…,N}. By multiplying (12) by 1/N and summing (l,n) over {0,1,…,N−1}×{1,2,…,N}, we have
Here we used (11).
By \( \sum _{l=0}^{N-1} P \{ l\,+\,S_{n}\! \in [\!y]_{N} \}= P \{ S_{n} \in {\mathbb {Z}} \}\! = 1 (n \in {\mathbb {N}}, y \in {\mathbb {Z}})\),
that is the first inequality in (10).
For the last inequality in (10), let y N+j =y N (j=1,2,…,N). The same argument as showing (15) (we use \( q_{i}^{(N)} \) instead of \( p_{i}^{(N)} \)) gives
Corollary 2.
For \( N \in {\mathbb {N}} \setminus \{ 1 \} \),
Proof.
Put y 1=y 2=⋯=y 2N =0 in the proof of Proposition 3. Then the same argument as showing (10) gives (16).
Corollary 3.
For \( N \in {\mathbb {N}} \setminus \{ 1 \} \),
Proof.
Put y j =j (j=1,2,…,2N) in the proof of Proposition 3. Then the same argument as showing (10) gives (17).
Remark 4.
By the same argument as showing (16), we obtain that for \( \tilde {\epsilon } >0 \) and \( N \geq 1/ \tilde {\epsilon } \),
Fourier transform
In this section, we introduce some results concerning one-dimensional random walk.
Proposition 4.
If a one-dimensional random walk satisfies (A1) and (A3), then there exist C 1>0 and \(N_{1} \in {\mathbb {N}}\) such that for n≥N 1,
where δ= min{ε/(2β),1/2}.
Proof.
Proposition 4 can be proved by the same procedure as in Theorem 1.2.1 of [13].
The Fourier inversion formula for ϕ n(θ) is
By (A3), there exist C ∗>0 and r∈(0,π) such that for |θ|<r,
and
With r, we decompose the right-hand side of (18) to obtain
where
A strongly aperiodic random walk (A1) has the property that |ϕ(θ)|=1 only when θ is a multiple of 2π (see § 7 Proposition 8 of [18]). By the definition of ϕ(θ), |ϕ(θ)| is a continuous function on the bounded closed set [−π,−r]∪[r,π], and |ϕ(θ)|≤1 (θ∈[−π,π]). Hence, there exists a ρ<1, depending on r∈(0,π], such that
By using the above inequality,
We perform the change of variables θ=x/n 1/β, so that
Put
We decompose I(n,l) as follows:
where
and
Therefore,
The proof of Proposition 4 will be complete if we show that each term in the right-hand side of the above inequality is bounded by a constant (independent of l) multiple of n −δ.
If n is large enough, then the bound $|J(n,l)| \leq n^{1/\beta} \rho^{n}$, which has already been shown above, yields
With the help of
and |ϕ(θ)|≤1 (θ∈[−π,π]), (19) implies that for |x|<r n 1/β,
Thus
It is easy to verify that for |x|<r n 1/β,
by (20), and we obtain that
Moreover, if n is large enough, then
where s=(1/β)(1+1/(2γ)). By replacing the integrand in the right-hand side of the last inequality of (23) with the right-hand side of the above inequality, we obtain
The same argument as showing (24) gives
Let
appearing in Proposition 4.
Remark 5.
When a one-dimensional random walk is strongly aperiodic ((A1)) with E[X₁]=0 and E[|X₁|^{2+ε}]<∞ for some ε∈(0,1), it is verified that
In this case, \(I_{0}(n,l:2,E\left [{X_{1}^{2}}\right ]/2)\) can be computed and Proposition 4 gives the following.
(Local Central Limit Theorem) There exist \(\tilde {C}_{1}>0\) and \( \tilde {N}_{1} \in {\mathbb {N}}\) such that for \(n \geq \tilde {N}_{1}\),
where δ= min{ε/4,1/2}. (See Remark after Proposition 7.9 in [18].)
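The local central limit theorem of Remark 5 can be checked by Monte Carlo for the walk of Example 3 (steps uniform on {−1,0,1}, so E[X₁²] = 2/3): P{S_n = 0} should be close to $1/\sqrt{2\pi n \cdot 2/3}$. A sketch of ours (sample size and seed arbitrary):

```cpp
#include <cmath>
#include <random>

// Monte Carlo estimate of P{S_n = 0} for the walk with steps uniform on
// {-1, 0, 1}; the local CLT predicts about 1/sqrt(2*pi*n*(2/3)).
double prob_return(int n, int trials, unsigned seed) {
    std::mt19937 gen(seed);
    std::uniform_int_distribution<int> step(-1, 1);
    int hits = 0;
    for (int t = 0; t < trials; ++t) {
        long s = 0;
        for (int i = 0; i < n; ++i) s += step(gen);
        if (s == 0) ++hits;
    }
    return static_cast<double>(hits) / trials;
}
```

For n = 100 the prediction is about 0.0489, and the empirical frequency agrees to within Monte Carlo error.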
It is easy to see
and we have the following corollary of Proposition 4.
Corollary 4.
If a one-dimensional random walk satisfies (A1) and (A3) with β=1, then there exist C 2>0 and \(N_{2} \in {\mathbb {N}}\) such that for n≥N 2,
where δ= min{ε/2,1/2}.
We perform the change of variables t=c ∗ x β, so that
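The displayed result of this substitution was lost in extraction; assuming the integral being evaluated is $\int_0^\infty e^{-c_* x^\beta}\,dx$ (consistent with the constant in Corollary 5), the computation would read:

```latex
\int_0^\infty e^{-c_* x^\beta}\,dx
  = \int_0^\infty e^{-t}\,\frac{c_*^{-1/\beta}}{\beta}\,t^{\frac{1}{\beta}-1}\,dt
  \qquad \left(t = c_* x^\beta,\ dx = \tfrac{1}{\beta}\,c_*^{-1/\beta}\,t^{\frac{1}{\beta}-1}\,dt\right)
  = c_*^{-1/\beta}\,\Gamma\!\left(1+\frac{1}{\beta}\right).
```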
With the help of the above calculation, Proposition 4 gives the following corollary.
Corollary 5.
If a one-dimensional random walk satisfies (A1) and (A3), then there exist C 3>0 and \(N_{3} \in {\mathbb {N}}\) such that for n≥N 3,
where δ= min{ε/2β,1/2}.
Proposition 5.
If a one-dimensional random walk satisfies (A2), then for \( l \in {\mathbb {Z}} \) and \( n \in \{ 0 \} \cup {\mathbb {N}} \),
where
Proof.
By the definition of ϕ(θ),
Thus
Then,
since
Therefore,
We note that \(\phi ^{n}(\theta) \in \mathbb {R}\) and
by (A2). So we have
Let N be an even number. Then, by (27),
Therefore, we have (26) for every even number N. The proof of (26) for odd numbers is similar and is omitted.
Proof of Theorem 1
In this section we prove Theorem 1. To do so, we first establish the following proposition.
Proposition 6.
Assume (A1), (A2) and (A3).
If β∈(0,1), then there exists a constant c 8>0 such that
If β=1, then there exists a constant c 9>0 such that
If β∈(1,2], then there exists a constant c 10>0 such that
Proof.
There exist C ∗ and r∈(0,π) such that for |θ|<r,
by (A3). We can choose r ∗∈(0,r] small enough so that
Then for |θ|<r ∗,
and
There exists a ρ ∗∈[0,1), depending on r ∗, such that
by the same reason as (21). (Here we used the condition (A1).)
Using Proposition 5 and (35), we obtain that for i∈{1,2,…,N−1},
Therefore
where
Because of (A2), ϕ(θ) is real-valued. Then (33), (34) and (A1) mean that
and
We will calculate Φ N in the case β∈(0,1]. By (38), we decompose the right-hand side of the above to obtain
where
To estimate E N , we use (31) and (33) which imply that for \( j \in [1, (r_{*}/(2 \pi))N) \cap \mathbb {Z}\),
where \(c_{11}= 2^{2+ \varepsilon - \beta } \pi ^{\varepsilon - \beta } C_{*}/c_{*}^{2}.\) By noticing that 1+ε−β>0,
Thus
It is easy to see that
Putting the pieces ((36), (38)–(41)) together, we have (28) and (29).
In the case β∈(1,2], we use (37) to obtain
where N(β)= min{N (β−1)/β,(r ∗/(2π))N} and
We use (22) (set n=N and \(a=1,b= \phi \left (\frac {2j\pi }{N}\right)\)), then
Since β−1>0, (33) gives
Putting the pieces ((36), (42)–(44)) together, we have (30).
It remains to show the last inequality in (2). To achieve this, we will use Proposition 3 and Corollary 4.
There exist C 2>0 and \(N_{2} \in \mathbb {N}\) such that for i≥N 2 and \( l \in \mathbb {Z}\),
by Corollary 4. Let
We can choose \( N_{*} \in \mathbb {N} \) large enough so that
Then for N≥N ∗+1,
It follows from Proposition 3 and (45) that for \( N \in [N_{*}+1, + \infty) \cap \mathbb {N} \) and \(y_{1},y_{2}, \ldots, y_{N} \in {\mathbb {Z}} \) with |y n −y n+1|≤1 (n=1,2,…,N−1),
It is clear that \( \mathbb {P}_{\mathcal {R}}^{(N)} \left (\bigcup _{n=1}^{N} \left \{ \mathcal {R}_{n}^{(N)} = (y_{n} \mod N)\right \} \right) \) is bounded by 1. Put \( c_{3}= \max \left \{ 4 \pi (c_{*}^{2}+1)/c_{*}, \log N_{*} \right \} \). The last inequality in (2) holds.
The proof of Theorem 1 is complete.
Conclusion and future works
We formalized the Hunter vs. Rabbit game using the random walk framework and generalized the probability distribution of the rabbit's strategy under the assumptions (A1)–(A3). We obtained a general lower bound formula for the probability that the rabbit is caught. Let P{X₁=k}=O(k^{−β−1}). If β∈(0,1), the lower bound of the probability that the hunter catches the rabbit is a constant c₁>0. If β=1, the lower bound of the probability that the rabbit is caught is \(\frac {1}{\frac {1}{c_{*}\pi }\log N + c_{2}}\), where c₂ and $c_{*}$ are constants determined by the given strategy. If β∈(1,2], the lower bound of the probability that the rabbit is caught is \(\frac {c_{4}}{N^{(\beta -1)/\beta }}\), where c₄>0 is a constant determined by the given strategy.
We showed experimental results for three examples of rabbit strategies, which confirm our bound formulas and the asymptotic behavior of those bounds.
In this paper, we considered the lower bound of the probability that the rabbit is caught, which gives the worst-case expected time until the rabbit is caught. Our motivation is to find the best strategy of the rabbit, and our results help in this search. On the other hand, what is the best strategy of the hunter? And what is the worst? Future work includes showing that the best strategy of the hunter is $Y_{j+1}=Y_{j}+1$, and that the worst strategy of the hunter is \(Y_{j} = \mathcal {H}_{0}^{(N)}\) for any j.
Appendix
B Proof of Proposition 2
We consider the case when X 1 takes three values −1,0,1 with equal probability. In this case, X 1 satisfies (A1), (A2) and
We can show that there exist \(\tilde {C}_{1}>0\) and \( \tilde {N}_{1} \in {\mathbb {N}}\) such that for \(i \geq \tilde {N}_{1}\) and \(l \in \mathbb {Z} \),
by (25). Since P{|X₁|≤1}=1, we obtain that for \( N \in \mathbb {N} \setminus \{ 1 \} \),
and
With the help of e −x≤1/x (x>0), (46) implies that for \( N \geq 2 \tilde {N}_{1} \),
where \(c_{13}= \sqrt {3/ (2 \pi)}+ 4 \sqrt {2}/ \sqrt {3 \pi }+2 \tilde {C}_{1}. \) Thus for \( N \in \mathbb {N} \setminus \{ 1 \}\),
Combining the above inequality with Corollary 3, we have (7). □
(B) To obtain (5), we use the formula
$$ \int_{0}^{\infty} \frac{1-\cos(bx)}{x^{\alpha+1}}\,dx = \frac{\pi b^{\alpha}}{2\,\Gamma(\alpha+1)\sin(\alpha\pi/2)} $$ (47)
for α∈(0,2) and b>0. By the definition of X₁,
A simple calculation shows that the absolute value of the difference between the right-hand side of the above and
is bounded by a constant multiple of |θ|β+(2−β)/2. It remains to show that
We perform integration by parts on the left-hand side of (48) and use (47). Then we have (48) and (5).
C Proof of (8)
Let ε>0 be fixed. By Corollary 4, there exist C 2>0 and \(N_{2} \in \mathbb {N}\) such that for i≥N 2,
(49) implies that for N≥(4/ε)(N 2+1),
where \(c_{14}= (1/ (c_{*} \pi)) \log 4 + (1/ (c_{*} \pi)) \log N_{2} + C_{2} \left \{ 1/ N_{2}^{1+ \delta }+ 1/ (\delta N_{2}^{\delta }) \right \}.\)
We can choose \(N_{4} \in \mathbb {N}\) which satisfies
and
where c 2 is the same constant in (2).
Combining Remark 5 with (50) and using the left-hand side of (2), we obtain that for N≥ max{N 4,(4/ε)(N 2+1)},
Hence for N≥ max{N 4,(4/ε)(N 2+1)},
where
and
The proof is complete if we show that for N≥ max{N 4,(4/ε)(N 2+1)},
We use (52), then
for N≥ max{N 4,(4/ε)(N 2+1)}. We can show that
for N≥ max{N 4,(4/ε)(N 2+1)} by (51). The above two inequalities yield (53). □
D Proof of (9)
We show the lower bound of Example 1. In this case, a=1, β=1, \(c_{*} = \frac {\pi }{2a}\) and \(\varepsilon = \frac {1}{2}\). We have |E N |=2c 11 by (40).
We note
We can choose C ∗=1.225 by (31). So we have
We have
by (41). So we can show that
by (36), (38) and (39). So we have
by Proposition 3. It is easy to check that \(r_{*} \fallingdotseq 0.212207\) (by (32)) and \(\max _{r_{*}\le |\theta |\le \pi }|\phi (\theta)| \le 0.785802\), so we set $\rho_{*}=0.785802$. Then,
So we have (9). □
References
Adler, M., Räcke, H., Sivadasan, N., Sohler, C., Vöcking, B.: Randomized Pursuit-Evasion in Graphs. Combinatorics Probability Comput. 12, 225–244 (2003).
Aleliunas, R., Karp, R.M., Lipton, R.J., Lovász, L., Rackoff, C.: Random walks, universal traversal sequences, and the complexity of maze problems. In: Proceedings of the 20th IEEE Symposium on Foundations of Computer Science (FOCS), pp. 218–223 (1979).
Alpern, S.: The search game with mobile hider on the circle. In: Emilio O. Roxin, Pan-Tai Liu, Robert L. Sternberg (eds.)Differential Games and Control Theory, pp. 181–200. Marcel Dekker, New York (1974).
Babichenko, Y., Peres, Y., Peretz, R., Sousi, P., Winkler, P.: Hunter, Cauchy Rabbit, and Optimal Kakeya Sets. preprint, arXiv:1207.6389v1 (2012).
Chatzigannakis, I., Nikoletseas, S., Spirakis, P.: An efficient communication strategy for ad-hoc mobile networks. In: Proc. the 20th ACM Symposium on Principles of Distributed Computing (PODC), pp. 320–322 (2001).
Chatzigiannakis, I., Nikoletseas, S., Paspallis, N., Spirakis, P., Zaroliagis, C.: An experimental study of basic communication protocols in ad-hoc mobile networks. Lecture Notes in Computer Science 2141, pp. 159–171 (2001).
Efrat, A., Guibas, L.J., Har-Peled, S., Lin, D.C., Mitchell, J.S.B., Murali, T.M.: Sweeping Simple Polygons with a Chain of Guards (2000).
Gal, S.: Search games with mobile and immobile hider. SIAM J. Control Optim. 17(1), 99–122 (1979).
Guibas, L.J., Latombe, J.-C., LaValle, S.M., Lin, D., Motwani, R.: A visibility-based pursuit-evasion problem. Int. J. Comput. Geometry Appl. (IJCGA). 9(4), 471–493 (1999).
Isaacs, R.: Differential games, A mathematical theory with applications to warfare and pursuit, control and optimization. John Wiley & Sons, Inc, New York-London-Sydney (1965).
Kirousis, L.M., Papadimitriou, C.H.: Searching and pebbling. Theor Comput. Sci. 47, 205–218 (1986).
LaPaugh, A.S.: Recontamination does not help to search a graph. J. ACM. 40(2), 224–245 (1993).
Lawler, G.F.: Intersections of Random Walks. Birkhäuser, Boston (1991).
Megiddo, N., Hakimi, S.L., Garey, M.R., Johnson, D.S., Papadimitriou, C.H.: The complexity of searching a graph. J. ACM. 35(1), 18–44 (1988).
Park, S.-M., Lee, J.-H., Chwa, K.-Y.: Visibility-based pursuit-evasion in a polygonal region by a searcher. Lecture Notes in Computer Science 2076, pp. 456–468 (2001).
Parsons, T.D: Pursuit-evasion in a graph. In: Alavi, Y., Lick, D. (eds.)Theory and Applications of Graphs, Lecture Notes in Mathematics 642, pp. 426–441 (1976).
Parsons, T.D.: The search number of a connected graph. In: Proc. the 9th South-eastern Conference on Combinatorics, Graph Theory and Computing, Utilitas Mathematica, Winnipeg, pp. 549–554 (1978).
Spitzer, F.: Principles of Random Walk. 2nd ed. Springer-Verlag, New York (1976).
Suzuki, I., Yamashita, M.: Searching for a mobile intruder in a polygonal region. SIAM J. Comput. 21(5), 863–888 (1992).
Zelikin, M.I.: A certain differential game with incomplete information. Doklady Akademii Nauk SSSR. 202, 998–1000 (1972).
Acknowledgements
I would like to express my deepest gratitude to Professor Hiroyuki Ochiai for his valuable advice and guidance. I would like to thank Mr. Norikazu Ishii for his help.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Ikeda, Y., Fukai, Y. & Mizoguchi, Y. A property of random walks on a cycle graph. Pac. J. Math. Ind. 7, 3 (2015). https://doi.org/10.1186/s40736-015-0015-3