
Exact computation for meeting times and infection times of random walks on graphs

Abstract

We consider multiple independent random walks on graphs and obtain exact density functions or expectations of meeting times and infection times, from which we derive comparison results across various conditions on the random walks.

1 Introduction

Let G=(V,E) be a finite connected graph and \(X^{(0)}_{t}, \ldots, X^{(k)}_{t}\) be k+1 independent continuous time or discrete time random walks on G. We suppose |V|≥k+1, regard \(X^{(0)}_{t}\) as an infected particle, and consider the infection time \(t_{infe}\): the first time by which \(X^{(0)}_{t}\) has met all of the other particles. Formally, it is given by

$$\begin{array}{@{}rcl@{}} t_{infe} = \max_{i \in \{1,\ldots,k\}} t_{meet}(i), \end{array} $$

where

$$\begin{array}{@{}rcl@{}} t_{meet}(i) = \inf\{t \ge 0;X^{(0)}_{t} = X^{(i)}_{t} \} \end{array} $$

is the first time that \(X^{(0)}_{t}\) meets \(X^{(i)}_{t}\), called the meeting time. In this paper, we investigate the exact distributions or expectations of meeting times and infection times. By the principle of inclusion-exclusion, we see that

$$\begin{array}{@{}rcl@{}} t_{infe} = \sum\limits_{\emptyset \neq \Lambda \subset \{1,\ldots,k\}} (-1)^{|\Lambda|+1} t_{meet}(\Lambda) \end{array} $$

for every path of the random walks, where

$$\begin{array}{@{}rcl@{}} t_{meet}(\Lambda) &=& \min_{j \in \Lambda} t_{meet}(j) \\ &=& \inf\{t \ge 0; X^{(0)}_{t} = X^{(j)}_{t} \text{ for some } j \in \Lambda\}. \end{array} $$

From this fact, our central aim is to obtain the density functions or the expectations of \(t_{meet}(\Lambda)\) for each non-empty Λ⊂{1,…,k}. We achieve this with the theorems derived in Section 2. It is well known that the problem of computing first hitting times can be reduced, via the Laplace transform, to solving a system of linear equations. In Section 2, we derive a basic relation for the Laplace transform of meeting times in Theorems 1 and 2. This result is a kind of harmonic relation extending the one for first hitting times, and it plays an important role in obtaining exact values of meeting and infection times. Indeed, by solving the resulting system of linear equations for meeting times, we obtain the exact densities and expectations for several graphs in Section 3. Moreover, we discuss comparison results across various conditions on the random walks: the starting points, the number of walks, the parameters of the exponential holding times, the underlying graphs, and continuous versus discrete time. In Section 4, we present another tool for analyzing the meeting time of two random walks. It is mentioned in [1] that meeting times of two random walks on some graphs can be regarded as a first hitting time of a single random walk; we give a generalization of this fact.
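Since the identity above holds pathwise, it can be checked numerically. The following minimal Python sketch (our own illustration; the variable names are not from the paper) verifies that the maximum of k sample times equals the inclusion-exclusion sum of the minima over non-empty subsets.

```python
# Pathwise check: max_i t_i equals the inclusion-exclusion sum of minima,
# which is exactly the relation between t_infe and the t_meet(Lambda).
import itertools
import random

k = 4
times = [random.expovariate(1.0) for _ in range(k)]  # stand-ins for t_meet(1),...,t_meet(k)

ie_sum = 0.0
for r in range(1, k + 1):
    for subset in itertools.combinations(range(k), r):
        ie_sum += (-1) ** (r + 1) * min(times[j] for j in subset)

assert abs(ie_sum - max(times)) < 1e-9
print(max(times), ie_sum)
```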

1.1 Related work

Several mathematical models of infection have been proposed and investigated. They can be classified into models in which the number of infected particles increases through meetings between particles, and models in which the number of infected particles varies through factors other than meetings.

The latter class of models has long been studied in the context of infectious diseases (cf. [8]).

In the former models, the particles move on finite or infinite graphs. Aldous [1], Aldous and Fill [2], Bshouty et al. [3], Cooper et al. [4] and Coppersmith et al. [5] investigated the expected meeting time of two independent Markov chains on a finite graph. Using their results, Draief and Ganesh [7] derived upper bounds for the expected time until all particles are infected on complete graphs and regular graphs. In their model, the infection probability depends on the coincidence time with infected particles and the infection rate. Such a model is also studied by Datta and Dorlas [6], and it is related to our models: ours correspond to taking the infection rate parameter to infinity. Models similar to ours are also studied in [14], where upper and lower bounds are derived for the time until all particles are informed (the broadcasting time) on a finite square grid. Kurtz et al. [10] and Machado et al. [12] studied frog models on complete graphs; the infection rule in these models is the same as ours, although non-infected particles are immobile. They derived limit theorems for the number of sites visited by the infected particles. Kurkova et al. [9] investigated a model with the same infection rule as ours on an infinite square grid.

1.2 Models and notations

Recall that G=(V,E) is a finite connected graph and \(X^{(0)}_{t}, \ldots, X^{(k)}_{t}\) are independent continuous time or discrete time Markov chains on V with a common transition probability \(P=(p(x,y))_{x,y \in V}\). In this paper we call them random walks on G if P satisfies p(x,y),p(y,x)>0 if and only if xy∈E. The p-lazy version of P is the transition matrix given by pI+(1−p)P, where I is the identity matrix. We often consider the lazy version for convenience, especially for discrete time random walks, because lazy chains are aperiodic on any graph. For continuous time random walks, let \(\frac {1}{\theta _{i}} > 0\) be the mean of the exponential holding time of \(X^{(i)}\) for each i∈{0,…,k}. Note that these means are allowed to differ. We write \(\mathbb {P}_{x_{0},\ldots,x_{k}}\) for the probability measure under which the random walks \((X^{(i)}_{t})_{i=0,\ldots,k}\) start from \((x_{i})_{i=0,\ldots,k}\), respectively.
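As a concrete illustration (our own sketch, not part of the paper; the helper names are hypothetical), the transition matrix of the simple random walk and its p-lazy version can be built as follows.

```python
# Minimal sketch: transition matrix of the simple random walk on G and
# its p-lazy version pI + (1-p)P.
import numpy as np

def simple_rw_matrix(adj):
    """adj: symmetric 0/1 adjacency matrix of a finite connected graph."""
    adj = np.asarray(adj, dtype=float)
    return adj / adj.sum(axis=1, keepdims=True)  # p(x,y) = 1/deg(x) for xy in E

def lazy(P, p):
    return p * np.eye(P.shape[0]) + (1 - p) * P

# Star graph with center 0 and leaves 1..N, as in Section 3.1.
N = 4
A = np.zeros((N + 1, N + 1))
A[0, 1:] = A[1:, 0] = 1
print(lazy(simple_rw_matrix(A), p=0.2))
```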

2 Laplace transform of meeting times

We give the key tools for calculating the distribution or the expectation of the meeting time \(t_{meet}(\Lambda)\) for any non-empty Λ⊂{1,…,k}.

Theorem 1.

(Discrete time). Let Λ⊂{1,…,k}. For discrete time random walks, the Laplace transform of \(t_{meet}=t_{meet}(\Lambda)\) satisfies

$$\begin{aligned} &{}\mathbb{E}_{x_{0},\ldots,x_{k}} \left[e^{-\lambda t_{meet}} \right] \\ &= e^{-\lambda} \sum\limits_{z_{0},\ldots,z_{k} \in V} p(x_{0},z_{0}) \cdots p(x_{k},z_{k}) \mathbb{E}_{z_{0},\ldots,z_{k}} \left[e^{-\lambda t_{meet}} \right] \end{aligned} $$

if \(x_{0} \neq x_{j}\) for all j∈Λ, and \(\mathbb {E}_{x_{0},\ldots,x_{k}} \left [e^{-\lambda t_{meet}(\Lambda)} \right ] = 1\) if \(x_{0} = x_{j}\) for some j∈Λ. As a corollary, we have

$$\begin{array}{@{}rcl@{}} && \mathbb{E}_{x_{0},\ldots,x_{k}} \left[t_{meet}\right] \\ &~~=& 1 + \sum\limits_{z_{0},\ldots,z_{k} \in V} p(x_{0},z_{0}) \cdots p(x_{k},z_{k}) \mathbb{E}_{z_{0},\ldots,z_{k}}[t_{meet}] \end{array} $$

if \(x_{0} \neq x_{j}\) for all j∈Λ.
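The corollary of Theorem 1 reduces the computation of \(\mathbb{E}[t_{meet}]\) to a finite linear system over the product state space. The following Python sketch (our own, with hypothetical helper names) assembles and solves this system for k=1; on the 0.5-lazy simple walk on the complete graph \(K_{4}\) the output can be compared with Proposition 3 below.

```python
# First-step analysis from Theorem 1 (k = 1): on non-meeting states (x0, x1),
# E(x0,x1) = 1 + sum_{z0 != z1} P(x0,z0) P(x1,z1) E(z0,z1), and E = 0 on the diagonal.
import numpy as np

def expected_meeting_time_discrete(P):
    n = P.shape[0]
    states = [(x0, x1) for x0 in range(n) for x1 in range(n) if x0 != x1]
    idx = {s: i for i, s in enumerate(states)}
    A = np.eye(len(states))
    for (x0, x1), i in idx.items():
        for z0 in range(n):
            for z1 in range(n):
                if z0 != z1:
                    A[i, idx[(z0, z1)]] -= P[x0, z0] * P[x1, z1]
    return idx, np.linalg.solve(A, np.ones(len(states)))

# 0.5-lazy simple walk on the complete graph K_4 (lazy, hence aperiodic).
n, p = 4, 0.5
P = p * np.eye(n) + (1 - p) * (np.ones((n, n)) - np.eye(n)) / (n - 1)
idx, E = expected_meeting_time_discrete(P)
print(E[idx[(0, 1)]])  # 4.5 = Ntilde^2/((1+p)Ntilde - 1) with Ntilde = 3/(1-p) = 6
```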

Theorem 2.

(Continuous time). Let Λ⊂{1,…,k}. For continuous time random walks, the Laplace transform of \(t_{meet}=t_{meet}(\Lambda)\) satisfies

$$\begin{array}{@{}rcl@{}} \mathbb{E}_{x_{0},\ldots,x_{k}} \left[e^{-\lambda t_{meet}} \right] &= \frac{1}{\lambda+\sum_{l=0}^{k} \theta_{l}} \sum\limits_{i=0}^{k} \theta_{i} \sum\limits_{y \in V} p(x_{i},y) \\ &~~ \times \mathbb{E}_{x_{0},\ldots,x_{i-1},y,x_{i+1},\ldots,x_{k}} \left[e^{-\lambda t_{meet}} \right] \end{array} $$

if \(x_{0} \neq x_{j}\) for all j∈Λ, and \(\mathbb {E}_{x_{0},\ldots,x_{k}} \left [e^{-\lambda t_{meet}} \right ] = 1\) if \(x_{0} = x_{j}\) for some j∈Λ. As a corollary, we have

$$\begin{array}{@{}rcl@{}} \mathbb{E}_{x_{0},\ldots,x_{k}} [t_{meet}] &= \frac{1}{\sum_{l=0}^{k} \theta_{l}} \left(1+\sum\limits_{i=0}^{k} \theta_{i} \sum\limits_{y \in V} p(x_{i},y)\right. \\ &~~\left. \times \mathbb{E}_{x_{0},\ldots,x_{i-1},y,x_{i+1},\ldots,x_{k}} [t_{meet}(\Lambda)]{\vphantom{\sum\limits_{i=0}^{k}}} \right). \end{array} $$
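Analogously to the discrete case, the continuous time relation yields a linear system for the expectations. A minimal Python sketch (ours; helper names hypothetical) for k=1 walks with holding rates \(\theta_{0},\theta_{1}\) follows; on the cycle it can be checked against Corollary 2 below.

```python
# From the corollary of Theorem 2 (k = 1): on non-meeting states (x0, x1),
# (th0+th1) E(x0,x1) = 1 + th0 sum_y p(x0,y) E(y,x1) + th1 sum_y p(x1,y) E(x0,y),
# with E = 0 as soon as the walkers coincide.
import numpy as np

def expected_meeting_time_continuous(P, th0, th1):
    n = P.shape[0]
    states = [(x0, x1) for x0 in range(n) for x1 in range(n) if x0 != x1]
    idx = {s: i for i, s in enumerate(states)}
    A = (th0 + th1) * np.eye(len(states))
    for (x0, x1), i in idx.items():
        for y in range(n):
            if y != x1:
                A[i, idx[(y, x1)]] -= th0 * P[x0, y]
            if y != x0:
                A[i, idx[(x0, y)]] -= th1 * P[x1, y]
    return idx, np.linalg.solve(A, np.ones(len(states)))

# Simple walk on the cycle Z/5Z; Corollary 2 gives f(N-f)/(th0+th1) = 2*3/3 = 2.
n = 5
P = np.zeros((n, n))
for x in range(n):
    P[x, (x + 1) % n] = P[x, (x - 1) % n] = 0.5
idx, E = expected_meeting_time_continuous(P, th0=1.0, th1=2.0)
print(E[idx[(2, 0)]])
```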

Remark 1.

We mention the two underlying product graphs \(\mathbf{G}=(\mathbf{V},\mathbf{E})\) with \(\mathbf{V}=V^{k+1}\); in the proofs below, an extended Markov chain (cf. [11]) moves on these graphs. Let \(\mathbf{x}=(x_{0},\ldots,x_{k}), \mathbf{y}=(y_{0},\ldots,y_{k}) \in \mathbf{V}\). For discrete time random walks, \(\mathbf{x}\mathbf{y} \in \mathbf{E}\) if and only if

$$\begin{array}{@{}rcl@{}} x_{i}y_{i} \in E~ \text{for all}\, i \in \{0,\ldots,k\}. \end{array} $$

For continuous time random walks, \(\mathbf{x}\mathbf{y} \in \mathbf{E}\) if and only if there exists a unique i∈{0,…,k} such that

$$x_{i}y_{i} \in E \quad \text{and} \quad x_{j}=y_{j} \text{~for all~} j \neq i. $$

The former graph is called the tensor product of graphs and the latter the Cartesian product of graphs.

Proof of Theorem 1.

We consider the Markov chain on V^{k+1} given by \(\mathbf {X_{t}} = \left (X^{(0)}_{t}, \ldots, X^{(k)}_{t}\right)\). Note that the i-th marginal process of \(\mathbf{X_{t}}\) is equal in law to \(X^{(i)}_{t}\) for each i, and the transition matrix of this chain is given by

$$\begin{array}{@{}rcl@{}} P((x_{0},\ldots,x_{k}),(y_{0},\ldots,y_{k})) = p(x_{0},y_{0}) \cdots p(x_{k},y_{k}). \end{array} $$

Moreover, we notice that \(t_{meet}(\Lambda)\) is exactly the first hitting time of \(\mathbf{X_{t}}\) to the set

$$\begin{array}{@{}rcl@{}} \left\{(x_{0},\ldots,x_{k}) \in V^{k+1} ; x_{0} = x_{i} \text{ for some } i \in \Lambda\right\}. \end{array} $$

Thus, the assertion is obtained by an ordinary first-step analysis of first hitting times. □

In order to prove Theorem 2, we let \(t_{jump}^{X^{(i)}} = \inf \left \{t \ge 0 ; X^{(i)}_{t} \neq X^{(i)}_{0} \right \}\) be the first jump time of \(X^{(i)}_{t}\) and \(t_{jump} = \min _{i \in \{0,\ldots,k\}} t_{jump}^{X^{(i)}}\).

Lemma 1.

$$\begin{array}{@{}rcl@{}} \mathbb{E}\left[\mathbf{1}_{\{t_{jump} = t_{jump}^{X^{(i)}}\}} e^{-\lambda t_{jump}}\right] = \frac{\theta_{i}}{\lambda+\sum_{l=0}^{k} \theta_{l}}, \end{array} $$

where \(\mathbf{1}_{A}\) is the indicator function of a set A.

Proof.

We remark that

$$\begin{array}{@{}rcl@{}} && \mathbb{P}\left(t_{jump} = t_{jump}^{X^{(i)}} \right) \\ &=& \mathbb{P}\left(t_{jump}^{X^{(i)}} \le \min_{l \neq i} t_{jump}^{X^{(l)}} \right) \\ &=& \int_{0}^{\infty} \theta_{i} e^{-\theta_{i} t} \mathbb{P}\left(t \le \min_{l \neq i} t_{jump}^{X^{(l)}} \right) dt \\ &=& \int_{0}^{\infty} \theta_{i} e^{-\theta_{i} t} \prod\limits_{l\neq i} \mathbb{P}\left(t \le t_{jump}^{X^{(l)}} \right) dt \\ &=& \int_{0}^{\infty} \theta_{i} e^{-\left(\sum_{l=0}^{k} \theta_{l}\right) t} dt. \end{array} $$

Thus,

$$\begin{array}{@{}rcl@{}} \mathbb{E} \left[\mathbf{1}_{\{t_{jump} = t_{jump}^{X^{(i)}}\}} e^{-\lambda t_{jump}}\right] &=& \int_{0}^{\infty} e^{-\lambda t} \theta_{i} e^{-\left(\sum_{l=0}^{k} \theta_{l}\right) t} dt \\ &=& \frac{\theta_{i}}{\lambda+\sum_{l=0}^{k} \theta_{l}}. \end{array} $$

□
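Lemma 1 is the standard competing-exponentials computation, and it is easy to confirm by simulation. A rough Monte Carlo sketch (ours, with arbitrary parameters) follows.

```python
# Monte Carlo check of Lemma 1:
# E[ 1{t_jump = t_jump^{X^(i)}} e^{-lam * t_jump} ] = theta_i / (lam + sum_l theta_l).
import math
import random

thetas = [0.5, 1.0, 2.0]
lam, trials = 0.7, 200_000
acc = [0.0] * len(thetas)
for _ in range(trials):
    jumps = [random.expovariate(th) for th in thetas]   # independent holding times
    i = min(range(len(thetas)), key=jumps.__getitem__)  # index of the first jump
    acc[i] += math.exp(-lam * jumps[i])

for i, th in enumerate(thetas):
    print(acc[i] / trials, th / (lam + sum(thetas)))    # the two columns should agree
```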

Proof of Theorem 2.

By Lemma 1, we have

$$\begin{array}{@{}rcl@{}} &&\mathbb{E}_{x_{0},\ldots,x_{k}} \left[e^{-\lambda t_{meet}}\right] \\ &=& \sum\limits_{i=0}^{k} \sum\limits_{y \in V} \mathbb{E}_{x_{0},\ldots,x_{k}}\left[ \mathbf{1}_{\{t_{jump} = t_{jump}^{X^{(i)}}\}} \mathbf{1}_{\{X^{(i)}_{t_{jump}} = y \}} e^{-\lambda t_{meet}} \right] \\ &=& \sum\limits_{i=0}^{k} \sum\limits_{y \in V} p(x_{i},y) \mathbb{E}_{x_{0},\ldots,x_{k}}\left[ \mathbf{1}_{\{t_{jump} = t_{jump}^{X^{(i)}}\}} e^{-\lambda t_{jump}}\right] \\ &&\times \mathbb{E}_{x_{0},\ldots,x_{i-1},y,x_{i+1},\ldots,x_{k}}\left[e^{-\lambda t_{meet}}\right] \\ &=& \frac{1}{\lambda+\sum_{l=0}^{k} \theta_{l}} \sum\limits_{i=0}^{k} \theta_{i} \sum\limits_{y \in V} p(x_{i},y) \\ &&\times \mathbb{E}_{x_{0},\ldots,x_{i-1},y,x_{i+1},\ldots,x_{k}} \left[e^{-\lambda t_{meet}} \right]. \end{array} $$

□

3 Exact computations for graphs

In this section we study exact computations of \(t_{infe}=t_{infe}(k)=\max_{i \in \{1,\ldots,k\}} t_{meet}(i)\) for certain graphs. Note that \(t_{infe}\) is nothing but \(t_{meet}(1)\) when k=1.

3.1 Star graphs

We consider continuous time p-lazy simple random walks on a star graph for the case k=1. Let G=(V,E) be the star graph with V={0,…,N} and E={0x;x=1,…,N}.

Proposition 1.

Set

$$\begin{array}{@{}rcl@{}} \Theta &=& \frac{1}{2}\left(\frac{\theta_{0}}{\theta_{1}} + \frac{\theta_{1}}{\theta_{0}}\right), \\ A &=& (1-p)(\theta_{0}+\theta_{1}), \\ B &=& (1-p)\sqrt{({\theta_{0}^{2}} + {\theta_{1}^{2}})\left(1-\frac{1}{N}\right)}, \\ C &=& \frac{1}{\Theta} + \frac{1}{N}, \\ D &=& \frac{(\theta_{0} + \theta_{1})(\theta_{0} - \theta_{1})}{{\theta_{0}^{2}}+{\theta_{1}^{2}}}. \end{array} $$

Then,

$$\begin{array}{@{}rcl@{}} \frac{\mathbb{P}_{x_{0},x_{1}}(t_{infe} \in dt)}{dt} &=& \frac{BC}{\left(1-\frac{1}{N}\right)} e^{-At} \sinh{(Bt)}, \\ \mathbb{E}_{x_{0},x_{1}}\left[t_{infe}\right] &=& \frac{1}{1-p} \left(\frac{1}{\theta_{0}} + \frac{1}{\theta_{1}} \right) \left(1+\frac{\Theta}{N} \right)^{-1} \end{array} $$

if \(x_{0},x_{1} \neq 0\) and \(x_{0} \neq x_{1}\),

$${\small{\begin{aligned} &\frac{\mathbb{P}_{x_{0},x_{1}}(t_{infe} \in dt)}{dt} \\ &\quad= (1-p)e^{-At} \left(\theta_{0}C\cosh{(Bt)} - \theta_{1}D \right), \\ &\mathbb{E}_{x_{0},x_{1}}[t_{infe}] \\ &\quad= \frac{1}{1-p}\left(\frac{1}{\theta_{1}}\left(1-\frac{1}{N} \right)\left(1 + \frac{\Theta}{N} \right)^{-1} + \frac{1}{\theta_{0} + \theta_{1}} \right) \end{aligned}}} $$

if \(x_{0}=0\) and \(x_{1} \neq 0\), and

$$\begin{aligned} &\frac{\mathbb{P}_{x_{0},x_{1}}(t_{infe} \in dt)}{dt} \\ &\quad= (1-p)e^{-At} \left(\theta_{1}C\cosh{(Bt)} + \theta_{0}D \right), \\ &\mathbb{E}_{x_{0},x_{1}}[t_{infe}] \\ &\quad= \frac{1}{1-p}\left(\frac{1}{\theta_{0}}\left(1-\frac{1}{N} \right)\left(1 + \frac{\Theta}{N} \right)^{-1} + \frac{1}{\theta_{0} + \theta_{1}} \right) \end{aligned} $$

if \(x_{0} \neq 0\) and \(x_{1}=0\).
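Proposition 1 can be checked by direct simulation of the two continuous time lazy walks on the star. The following Monte Carlo sketch (ours; parameters arbitrary) compares the empirical mean of \(t_{infe}\) for two walks started at distinct leaves with the closed form above; memorylessness of the exponential holding times justifies re-drawing both clocks after every jump.

```python
# Monte Carlo check of E[t_infe] in Proposition 1 (both walkers start at leaves).
import random

def simulate(N, p, th0, th1, x0, x1):
    t, pos = 0.0, [x0, x1]
    while pos[0] != pos[1]:
        r0, r1 = random.expovariate(th0), random.expovariate(th1)
        i = 0 if r0 < r1 else 1          # which walk jumps first
        t += min(r0, r1)
        if random.random() >= p:         # with probability 1-p, a genuine move
            pos[i] = 0 if pos[i] != 0 else random.randint(1, N)
    return t

N, p, th0, th1, trials = 5, 0.3, 1.0, 2.0, 100_000
est = sum(simulate(N, p, th0, th1, 1, 2) for _ in range(trials)) / trials
Theta = 0.5 * (th0 / th1 + th1 / th0)
print(est, (1 / th0 + 1 / th1) / ((1 - p) * (1 + Theta / N)))
```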

Corollary 1.

We see that

$$\begin{array}{@{}rcl@{}} \mathbb{E}_{0,1}\left[t_{infe}\right] \le \mathbb{E}_{1,0}[t_{infe}] \end{array} $$
(1)

if and only if \(\theta_{0} \le \theta_{1}\), and

$$\begin{array}{@{}rcl@{}} \max_{x_{0},x_{1} \in V} \mathbb{E}_{x_{0},x_{1}}[t_{infe}] = \mathbb{E}_{1,2}[t_{infe}] \end{array} $$
(2)

for any \(\theta_{0},\theta_{1}>0\).

Remark 2.

It is shown in [1, 2] that \(\mathbb {E}_{x_{0},x_{1}}[t_{infe}]\) can be bounded by the maximum expected hitting time for a reversible Markov chain, that is,

$$\begin{array}{@{}rcl@{}} \max_{x_{0},x_{1}} \mathbb{E}_{x_{0},x_{1}}\left[t_{infe}\right] \le \max_{x_{0},x_{1}} \mathbb{E}_{x_{0}}\left[t_{hit}(x_{1})\right], \end{array} $$

where \(t_{hit}(y) = \inf \left \{t\ge 0;X^{(0)}_{t}=y\right \}\). In addition, it is remarked there that the above bound is not tight for a star graph. This remark can also be verified from Proposition 1 as follows. Note that the maximum expected hitting time is

$$\begin{array}{@{}rcl@{}} \max_{x,y \in V} \mathbb{E}_{x}[t_{hit}(y)] \sim \frac{2N}{(1-p)\theta_{0}}, \end{array} $$

where f(N)∼g(N) means \({\lim }_{N \to \infty } f(N)/g(N) = 1\). On the other hand, from Proposition 1, we see that

$$\begin{array}{@{}rcl@{}} \mathbb{E}_{x_{0},x_{1}}\![t_{infe}]\!\left\{ \begin{array}{ll} \sim \frac{1}{1-p} \left(\frac{1}{\theta_{0}} + \frac{1}{\theta_{1}} \right), & x_{0}, x_{1} \!\neq 0 \text{~and~}x_{0} \neq x_{1}, \\ \sim \frac{1}{1-p} \left(\frac{1}{\theta_{1}}+\frac{1}{\theta_{0}+\theta_{1}} \right), & x_{0}=0 \text{~and~} x_{1} \neq 0, \\ \sim \frac{1}{1-p} \left(\frac{1}{\theta_{0}}+\frac{1}{\theta_{0}+\theta_{1}} \right), & x_{0} \neq 0 \text{~and~} x_{1} = 0. \end{array} \right. \end{array} $$

Therefore, \(\max _{x_{0},x_{1}} \mathbb {E}_{x_{0},x_{1}}[t_{infe}] = O(1)\) and \(\max _{x,y \in V} \mathbb {E}_{x}[t_{hit}(y)] = O(N)\).

Lemma 2.

The Laplace transform of t i n f e is given by

$$\begin{array}{@{}rcl@{}} \mathbb{E}_{x_{0},x_{1}}\left[e^{-\lambda t_{infe}}\right] & = \frac{BC}{\left(1-\frac{1}{N}\right)} \left(\frac{B}{(\lambda+A)^{2} - B^{2}}\right) \end{array} $$

if \(x_{0},x_{1} \neq 0\) and \(x_{0} \neq x_{1}\),

$$\begin{array}{@{}rcl@{}} \mathbb{E}_{x_{0},x_{1}}[e^{-\lambda t_{infe}}] = (1-p)\left(\frac{\theta_{0}C(\lambda+A)}{(\lambda+A)^{2} - B^{2}} - \frac{\theta_{1}D}{\lambda+A} \right) \end{array} $$

if \(x_{0}=0\) and \(x_{1} \neq 0\), and

$$\begin{array}{@{}rcl@{}} \mathbb{E}_{x_{0},x_{1}}[e^{-\lambda t_{infe}}] = (1-p)\left(\frac{\theta_{1}C(\lambda+A)}{(\lambda+A)^{2} - B^{2}} + \frac{\theta_{0}D}{\lambda+A} \right) \end{array} $$

if \(x_{0} \neq 0\) and \(x_{1}=0\), where A,B,C and D are given in Proposition 1.

Proof.

Put \(\alpha = \mathbb {E}_{1,2}[e^{-\lambda t_{infe}}], \beta = \mathbb {E}_{0,1}\left [e^{-\lambda t_{infe}}\right ]\) and \(\gamma = \mathbb {E}_{1,0}[e^{-\lambda t_{infe}}]\). From Theorem 2, they satisfy the following:

$$\begin{array}{@{}rcl@{}} &&(\lambda+\theta_{0}+\theta_{1}) \alpha \\ && \quad=\theta_{0}(p \alpha + (1-p)\beta) + \theta_{1}(p \alpha + (1-p)\gamma), \\ &&(\lambda+\theta_{0}+\theta_{1}) \beta \\ && \quad=\theta_{0} \left(p\beta \,+\, \frac{(1\,-\,p)}{N}(1\,+\,(N\,-\,1)\alpha) \right) \!+ \!\theta_{1} \left(p\beta \,+\, (1-p) \right), \\ &&(\lambda+\theta_{0}+\theta_{1}) \gamma \\ && \quad=\theta_{0} \left(p\gamma\! +\! (1\,-\,p) \right) \,+\, \theta_{1} \left(\!p\gamma \,+\, \frac{(1\,-\,p)}{N}(1\,+\,(N\,-\,1)\alpha) \right), \end{array} $$

or, equivalently,

$$\begin{array}{@{}rcl@{}} (\lambda+A) \alpha &=& (1-p)(\theta_{0}\beta + \theta_{1}\gamma), \\ (\lambda+A) \beta &=& (1-p) \left(\frac{\theta_{0}}{N} + \theta_{1} + \theta_{0}\left(1-\frac{1}{N}\right) \alpha \right), \\ (\lambda+A) \gamma &=& (1-p) \left(\theta_{0} + \frac{\theta_{1}}{N} + \theta_{1}\left(1-\frac{1}{N}\right) \alpha \right). \end{array} $$

By solving the above equations, we obtain the assertion. □
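The 3×3 linear system in the proof is easy to solve numerically, which gives an independent check of the closed form in Lemma 2. A sketch (ours) for one choice of parameters:

```python
# Solve (lam+A) alpha = (1-p)(th0*beta + th1*gamma), etc., from the proof of Lemma 2,
# and compare alpha = E_{1,2}[e^{-lam t_infe}] with the closed form.
import numpy as np

N, p, th0, th1, lam = 5, 0.3, 1.0, 2.0, 0.4
A_ = (1 - p) * (th0 + th1)
B_ = (1 - p) * np.sqrt((th0**2 + th1**2) * (1 - 1 / N))
Theta = 0.5 * (th0 / th1 + th1 / th0)
C_ = 1 / Theta + 1 / N

M = np.array([
    [lam + A_, -(1 - p) * th0, -(1 - p) * th1],
    [-(1 - p) * th0 * (1 - 1 / N), lam + A_, 0.0],
    [-(1 - p) * th1 * (1 - 1 / N), 0.0, lam + A_],
])
rhs = np.array([0.0,
                (1 - p) * (th0 / N + th1),
                (1 - p) * (th0 + th1 / N)])
alpha = np.linalg.solve(M, rhs)[0]
closed = B_ * C_ / (1 - 1 / N) * B_ / ((lam + A_)**2 - B_**2)
print(alpha, closed)   # the two values should agree
```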

Proof of Proposition 1.

The density functions immediately follow from Lemma 2 and

$$\begin{array}{@{}rcl@{}} \frac{\lambda+A}{(\lambda+A)^{2} - B^{2}} &=& \int_{0}^{\infty} e^{-\lambda t} e^{-At} \cosh{(Bt)} dt, \end{array} $$
(3)
$$\begin{array}{@{}rcl@{}} \frac{B}{(\lambda+A)^{2} - B^{2}} &=& \int_{0}^{\infty} e^{-\lambda t} e^{-At} \sinh{(Bt)} dt. \end{array} $$
(4)

The expectations will be obtained by taking \(- \frac {\partial }{\partial \lambda } \mathbb {E}_{x_{0},x_{1}}[e^{-\lambda t_{infe}}] \big |_{\lambda =0}\) in Lemma 2. Noting that

$$\begin{array}{@{}rcl@{}} A^{2} - B^{2} &=& (1-p)^{2}({\theta_{0}^{2}}+{\theta_{1}^{2}})C, \\ A^{2} + B^{2} \ &=& (1-p)^{2} ({\theta_{0}^{2}} + {\theta_{1}^{2}})^{2}C \\ &&\quad \times\! \left(\frac{1}{\theta_{0}\theta_{1}}\!\left(1\,-\,\frac{1}{N} \right)\left(1 + \frac{\Theta}{N} \right)^{-1} \,+\, \frac{1}{{\theta_{0}^{2}}+{\theta_{1}^{2}}} \!\right), \end{array} $$

if \(x_{0},x_{1} \neq 0\) and \(x_{0} \neq x_{1}\), we have

$$\begin{array}{@{}rcl@{}} \mathbb{E}_{x_{0},x_{1}}\left[t_{infe}\right] &=& \frac{B^{2}C}{1-\frac{1}{N}} \cdot \frac{2A}{(A^{2}-B^{2})^{2}} \\ &=& \frac{2(\theta_{0}+\theta_{1})} {(1-p)({\theta_{0}^{2}} + {\theta_{1}^{2}})C} \\ &=& \frac{1}{1-p}\left(\frac{1}{\theta_{0}} + \frac{1}{\theta_{1}} \right) \left(1 + \frac{\Theta}{N}\right)^{-1}. \end{array} $$

If \(x_{0}=0\) and \(x_{1} \neq 0\),

$${\small{\begin{aligned} {}\mathbb{E}_{x_{0},x_{1}}\left[t_{infe}\right] \\ &= (1-p)\left(\theta_{0} C \cdot \frac{A^{2}+B^{2}}{(A^{2} - B^{2})^{2}} - \theta_{1} D \cdot \frac{1}{A^{2}} \right) \\ &= \frac{1}{1-p}\left(\frac{1}{\theta_{1}}\left(1-\frac{1}{N} \right)\left(1 + \frac{\Theta}{N} \right)^{-1} \,+\, \frac{1}{\theta_{0} + \theta_{1}} \right). \end{aligned}}} $$

If \(x_{0} \neq 0\) and \(x_{1}=0\),

$${\small{\begin{aligned} {}\mathbb{E}_{x_{0},x_{1}}[t_{infe}] \\ &= (1-p)\left(\theta_{1} C \cdot \frac{A^{2}+B^{2}}{(A^{2} - B^{2})^{2}} + \theta_{0} D \cdot \frac{1}{A^{2}} \right) \\ &= \frac{1}{1-p}\left(\frac{1}{\theta_{0}}\left(1-\frac{1}{N} \right)\left(1 + \frac{\Theta}{N} \right)^{-1} + \frac{1}{\theta_{0} + \theta_{1}} \right). \end{aligned}}} $$

□

Proof of Corollary 1.

The inequality (1) follows immediately from Proposition 1. The equality (2) is shown as follows. Put \(\alpha=\max\{\theta_{0},\theta_{1}\}\) and \(\beta=\min\{\theta_{0},\theta_{1}\}\). Note that \(\alpha\beta=\theta_{0}\theta_{1}\) and \(\alpha+\beta=\theta_{0}+\theta_{1}\). Then,

$$\begin{array}{@{}rcl@{}} &&\max\{\mathbb{E}_{0,1}[t_{infe}], \mathbb{E}_{1,0}[t_{infe}] \} \\ &=& \frac{1}{1-p}\left(1 \,+\, \frac{\Theta}{N} \right)^{-1}\left(\frac{1}{\alpha}\left(1-\frac{1}{N} \right) + \frac{1}{\alpha + \beta} \left(1 \,+\, \frac{\Theta}{N} \right) \right), \\ &&\mathbb{E}_{1,2}[t_{infe}] \\ &=&\frac{1}{1-p} \left(1 + \frac{\Theta}{N} \right)^{-1}\left(\frac{1}{\alpha} + \frac{1}{\beta}\right). \end{array} $$

From the above two equalities and \(\Theta = \frac {1}{2}\left (\frac {\beta }{\alpha } + \frac {\alpha }{\beta }\right)\), we obtain

$$\begin{array}{@{}rcl@{}} &&\mathbb{E}_{1,2}[t_{infe}] - \max\{\mathbb{E}_{0,1}[t_{infe}], \mathbb{E}_{1,0}[t_{infe}] \} \\ &=& \frac{1}{(1-p)(\alpha+\beta)} \left(1 + \frac{\Theta}{N} \right)^{-1} \\ &&\times \left(\frac{\alpha}{\beta} \left(1 - \frac{1}{2N}\right) + \frac{1}{N}\left(1+\frac{\beta}{2\alpha}\right) \right) \\ && \ge 0. \end{array} $$

□

3.2 Cycle graphs

We consider random walks on a cycle graph for the case k=1. Let V={0,1,…,N−1} and define the transition probability by

$$p(x,y) =\left\{ \begin{array}{ll} p, & y = x+1 \mod N, \\ q, & y = x-1 \mod N, \\ 0, & \text{otherwise}, \end{array} \right. $$

where 0≤p,q≤1 and p+q=1. Note that we can apply Theorem 3 stated in Section 4 when the random walk is simple, that is, p=q=1/2.

Proposition 2.

Set

$$\begin{array}{@{}rcl@{}} \mu_{\pm} &=& \frac{1}{2(\theta_{0}p + \theta_{1}q)} \left({\vphantom{\sqrt{(\lambda+\theta_{0}+\theta_{1})^{2}-4(\theta_{0}p + \theta_{1}q)(\theta_{0}q + \theta_{1}p)}}} (\lambda+\theta_{0}+\theta_{1})\right. \\ &&\left.\pm \sqrt{(\lambda+\theta_{0}+\theta_{1})^{2}-4(\theta_{0}p + \theta_{1}q)(\theta_{0}q + \theta_{1}p)} \right). \end{array} $$

Then, the Laplace transform of t i n f e is given by

$$\begin{array}{@{}rcl@{}} E_{x,y}[e^{-\lambda t_{infe}}] = \frac{\mu_{+}^{f(x-y)}\left(1 - \mu_{-}^{N}\right) + \mu_{-}^{f(x-y)}\left(\mu_{+}^{N} - 1\right)}{\mu_{+}^{N}-\mu_{-}^{N}}, \end{array} $$

where f(z)=z mod N.

Corollary 2.

Put \(\nu = \frac {\theta _{0}q + \theta _{1}p}{\theta _{0}p + \theta _{1}q}\) and f(z)=z mod N. Then,

$$\begin{array}{@{}rcl@{}} &&E_{x,y}[t_{infe}] \\ &=& \frac{1}{(p-q)(\theta_{0}-\theta_{1})} \left(\frac{N(1 - \nu^{f(x-y)})}{1 - \nu^{N}} - f(x-y) \right) \end{array} $$

if p≠q and \(\theta_{0} \neq \theta_{1}\), and

$$\begin{array}{@{}rcl@{}} E_{x,y}\left[t_{infe}\right] = \frac{f(x-y)(N-f(x-y))}{\theta_{0} + \theta_{1}} \end{array} $$

if p=q or \(\theta_{0}=\theta_{1}\).
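Corollary 2 can be cross-checked by solving the recursion from the proof below directly. A short numerical sketch (ours) for the case p≠q, \(\theta_{0} \neq \theta_{1}\):

```python
# Solve (th0+th1) b_z = 1 + (th0*p+th1*q) b_{z+1} + (th0*q+th1*p) b_{z-1},
# b_0 = b_N = 0, and compare with the closed form of Corollary 2.
import numpy as np

N, p, th0, th1 = 7, 0.7, 1.0, 3.0
q = 1 - p
up, down = th0 * p + th1 * q, th0 * q + th1 * p  # coefficients of b_{z+1}, b_{z-1}

M = np.zeros((N - 1, N - 1))
for i, z in enumerate(range(1, N)):
    M[i, i] = th0 + th1
    if z + 1 <= N - 1:
        M[i, i + 1] = -up
    if z - 1 >= 1:
        M[i, i - 1] = -down
b = np.linalg.solve(M, np.ones(N - 1))

nu, z = down / up, 3
closed = (N * (1 - nu**z) / (1 - nu**N) - z) / ((p - q) * (th0 - th1))
print(b[z - 1], closed)   # the two values should agree
```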

Remark 3.

  1.

    It is worth noting that the distribution of \(t_{infe}\) is independent of p if \(\theta_{0}=\theta_{1}\), because of the definition of \(\mu_{\pm}\).

  2.

    Let \(t_{hit}(y) = \inf \left \{t\ge 0;X^{(0)}_{t}=y\right \}\) and

    $$\begin{array}{@{}rcl@{}} \tilde{\mu}_{\pm} = \frac{\lambda+\theta_{0} \pm \sqrt{(\lambda+\theta_{0})^{2}-4{\theta_{0}^{2}}pq}}{2\theta_{0}p}. \end{array} $$

    Then, we can verify that

    $$\begin{array}{@{}rcl@{}} E_{x}\!\left[e^{-\lambda t_{hit}(y)}\right] \,=\, \frac{\tilde{\mu}_{+}^{f(x-y)}\left(1 - \tilde{\mu}_{-}^{N}\right) \!+ \tilde{\mu}_{-}^{f(x-y)}\left(\tilde{\mu}_{+}^{N} - 1\right)} {\tilde{\mu}_{+}^{N}-\tilde{\mu}_{-}^{N}}. \end{array} $$

    Therefore the Laplace transform of the hitting time has exactly the same expression as in Proposition 2 with \(\mu_{\pm}\) replaced by \(\tilde {\mu }_{\pm }\). We notice that \({\lim }_{\theta _{1} \to 0} \mu _{\pm } = \tilde {\mu }_{\pm }\).

  3.

    We see from Corollary 2 that

    $$\begin{array}{@{}rcl@{}} E_{x,y}\left[t_{infe}\right] =\left\{ \begin{array}{ll} O(N), & p \neq q \text{~and~} \theta_{0} \neq \theta_{1}, \\ O(N^{2}), & p = q \text{~or~} \theta_{0} = \theta_{1}. \end{array} \right. \end{array} $$

    Note that \((p-q)(\theta_{0}-\theta_{1}) \neq 0\) is equivalent to ν≠1.

Proof of Proposition 2.

Put \(a_{x,y} = E_{x,y}[e^{-\lambda t_{meet}(\Lambda)}]\) for Λ={1}, so that \(t_{meet}(\Lambda)=t_{infe}\). We first remark that

$$\begin{array}{@{}rcl@{}} a_{x,y} = a_{f(x-y),0} \end{array} $$
(5)

for any x,y∈V because of the form of the transition probability. So it suffices to determine \(\tilde {a}_{z} = E_{z,0}[e^{-\lambda t_{meet}(\Lambda)}]\) for z∈V. From Theorem 2, they satisfy

$${\small{\begin{aligned} {}(\lambda+\theta_{0}+\theta_{1})\tilde{a}_{z} \\ &= \theta_{0}(pa_{f(z+1),0} + qa_{f(z-1),0}) \\ &\quad+ \theta_{1}(pa_{z,f(0+1)} + qa_{z,f(0-1)}) \\ &= (\theta_{0}p + \theta_{1}q)\tilde{a}_{f(z+1)} + (\theta_{0}q + \theta_{1}p)\tilde{a}_{f(z-1)}. \end{aligned}}} $$

Here we used \(a_{f(z+1),0}=a_{z,f(0-1)}\) and \(a_{f(z-1),0}=a_{z,f(0+1)}\) for the last equality, which are obtained from (5). This system of equations together with the boundary conditions \(\tilde {a}_{0}=\tilde {a}_{N}=1\) has the solution

$$\begin{array}{@{}rcl@{}} \tilde{a}_{z} = \frac{\mu_{+}^{z}(1 - \mu_{-}^{N}) + \mu_{-}^{z}(\mu_{+}^{N} - 1)}{\mu_{+}^{N}-\mu_{-}^{N}}. \end{array} $$

□

Proof of Corollary 2.

We prove the corollary in a similar fashion to that in the proof of Proposition 2. Put \(\phantom {\dot {i}\!}b_{x,y} = E_{x,y}[t_{meet}(\Lambda)]\) and \(\phantom {\dot {i}\!}\tilde {b}_{z} = E_{z,0}[t_{meet}(\Lambda)]\). Then, we see that

$$\begin{array}{@{}rcl@{}} b_{x,y} = b_{f(x-y),0} \end{array} $$

for x,y∈V and

$$\begin{array}{@{}rcl@{}} && (\theta_{0}+\theta_{1})\tilde{b}_{z} \\ &=& 1 + \theta_{0}(pb_{f(z+1),0} + qb_{f(z-1),0}) \\ &&+ \theta_{1}(pb_{z,f(0+1)} + qb_{z,f(0-1)}) \\ &=& 1+ (\theta_{0}p + \theta_{1}q)\tilde{b}_{f(z+1)} + (\theta_{0}q + \theta_{1}p)\tilde{b}_{f(z-1)}. \end{array} $$

for z∈V. This system of equations together with the boundary conditions \(\tilde {b}_{0}=\tilde {b}_{N}=0\) has the solution

$$\begin{array}{@{}rcl@{}} \tilde{b}_{z} = \frac{1}{(p-q)(\theta_{0}-\theta_{1})} \left(\frac{N(1 - \nu^{z})}{1 - \nu^{N}} - z \right) \end{array} $$

if p≠q and \(\theta_{0} \neq \theta_{1}\), and

$$\begin{array}{@{}rcl@{}} \tilde{b}_{z} = \frac{z(N-z)}{\theta_{0} + \theta_{1}} \end{array} $$

if p=q or \(\theta_{0}=\theta_{1}\). □

3.3 Complete graphs

We consider the p-lazy simple random walk on a complete graph with N+1 vertices. For simplicity, we use the following notation:

$$\begin{array}{@{}rcl@{}} \tilde{N} &=& \frac{N}{1-p}, \\ \theta_{ij} &=& \theta_{i} + \theta_{j}. \end{array} $$

3.3.1 The case k=1

Proposition 3.

Let \(x_{0} \neq x_{1}\). For continuous time random walks, it holds that

$$\begin{array}{@{}rcl@{}} \frac{\mathbb{P}_{x_{0},x_{1}}(t_{infe} \in dt)}{dt} &=& \frac{\theta_{01}}{\tilde{N}} e^{-\frac{\theta_{01}}{\tilde{N}}t}, \\ \mathbb{E}_{x_{0},x_{1}}[t_{infe}] &=& \frac{\tilde{N}}{\theta_{01}}. \end{array} $$

For discrete time random walks, it holds that

$$\begin{aligned} \mathbb{P}_{x_{0},x_{1}} & (t_{infe} = t) \\ &= \left(1 - \frac{(1+p)\tilde{N} - 1}{\tilde{N}^{2}} \right)^{t-1}\! \frac{(1+p)\tilde{N} - 1}{\tilde{N}^{2}}, \\ \mathbb{E}_{x_{0},x_{1}}[t_{infe}] &= \frac{\tilde{N}^{2}}{(1+p)\tilde{N} - 1}. \end{aligned} $$

Remark 4.

We notice from Proposition 3 that \(t_{infe}\) has an exponential distribution with parameter \(\frac {\theta _{01}}{\tilde {N}}\) for continuous time random walks, and a geometric distribution with parameter \(\frac {(1+p)\tilde {N} - 1}{\tilde {N}^{2}}\) for discrete time random walks.
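The geometric law in the discrete case is straightforward to confirm by simulation: per step, two lazy walkers on \(K_{N+1}\) meet with probability \(((1+p)\tilde N - 1)/\tilde N^{2}\). A Monte Carlo sketch (ours):

```python
# Empirical mean of t_infe for two discrete-time p-lazy walks on K_{N+1},
# compared with Ntilde^2 / ((1+p) Ntilde - 1) from Proposition 3.
import random

def step(x, n, p):
    if random.random() < p:
        return x                                      # lazy step
    return random.choice([v for v in range(n) if v != x])

N, p, trials = 6, 0.4, 100_000
n = N + 1
total = 0
for _ in range(trials):
    x0, x1, t = 0, 1, 0
    while True:
        t += 1
        x0, x1 = step(x0, n, p), step(x1, n, p)
        if x0 == x1:
            break
    total += t
Nt = N / (1 - p)
print(total / trials, Nt**2 / ((1 + p) * Nt - 1))     # approx. 7.69 for these values
```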

Corollary 3.

Let \(t^{\text {(dis.)}}_{infe}\) be an infection time of a discrete time random walk and \(t^{\text {(con.)}}_{infe}\) an infection time of a continuous time random walk. Then,

$$\begin{array}{@{}rcl@{}} \mathbb{E}_{x_{0},x_{1}}\left[t^{\text{(con.)}}_{infe}\right] \le \mathbb{E}_{x_{0},x_{1}}\left[t^{\text{(dis.)}}_{infe}\right] \end{array} $$
(6)

if and only if \(p \le 1 + \frac {(\theta _{01}-2)N}{N+1}\).

Proof of Proposition 3.

Put \(\alpha = \mathbb {E}_{x_{0},x_{1}}[e^{-\lambda t_{infe}}]\) for any fixed \(x_{0},x_{1} \in V\) with \(x_{0} \neq x_{1}\). From Theorem 2, α satisfies the following:

$$\begin{array}{@{}rcl@{}} (\lambda+\theta_{01}) \alpha &=& \theta_{0} \left(\sum\limits_{z \in V} p(x_{0},z)E_{z,x_{1}}[e^{-\lambda t_{infe}}] \right) \\ &&+ \theta_{1} \left(\sum\limits_{z \in V} p(x_{1},z)E_{x_{0},z}[e^{-\lambda t_{infe}}] \right) \\ &=& \theta_{01} \left(p\alpha + \frac{1}{\tilde{N}}(1+(N-1)\alpha) \right). \end{array} $$

Thus, the density function is obtained from

$$\begin{array}{@{}rcl@{}} \alpha &=& \frac{\theta_{01}}{\tilde{N}\lambda + \theta_{01}} \\ &=& \int_{0}^{\infty} e^{-\lambda t} \frac{\theta_{01}}{\tilde{N}} e^{-\frac{\theta_{01}}{\tilde{N}}t} dt. \end{array} $$

The expectation is also obtained by \(- \frac {\partial \alpha }{\partial \lambda } \big |_{\lambda =0}\). On the other hand, from Theorem 1, α satisfies the following:

$$\begin{array}{@{}rcl@{}} e^{\lambda} \alpha &=& \sum\limits_{z_{0},z_{1} \in V} p(x_{0},z_{0})p(x_{1},z_{1}) \mathbb{E}_{x_{0},x_{1}}[e^{-\lambda t_{infe}}] \\ &=& p\left(p \alpha + \frac{1}{\tilde{N}}(1+(N-1)\alpha) \right) + \frac{1}{\tilde{N}} \left(p + \frac{1}{\tilde{N}} N\alpha \right) \\ &&+ \frac{1}{\tilde{N}} (N-1) \left(p \alpha + \frac{1}{\tilde{N}}(1+(N-1)\alpha) \right) \end{array} $$

Thus, the density function is obtained from

$$\begin{array}{@{}rcl@{}} \alpha &=& \frac{(1+p)\tilde{N} - 1}{\tilde{N}^{2}(e^{\lambda} -1) + ((1+p)\tilde{N}-1)} \\ &=& \sum\limits_{t=1}^{\infty} e^{-\lambda t}\left(1 - \frac{(1+p)\tilde{N} - 1}{\tilde{N}^{2}} \right)^{t-1} \frac{(1+p)\tilde{N} - 1}{\tilde{N}^{2}}. \end{array} $$

The expectation is also obtained by \(- \frac {\partial \alpha }{\partial \lambda } \big |_{\lambda =0}\). â–¡

Proof of Corollary 3.

From Proposition 3, the inequality

$$\begin{array}{@{}rcl@{}} \frac{\mathbb{E}_{x_{0},x_{1}}[t^{\text{(con.)}}_{infe}]}{\mathbb{E}_{x_{0},x_{1}}[t^{\text{(dis.)}}_{infe}]} &=& \frac{(1+p)\tilde{N} - 1}{\theta_{01}\tilde{N}} \\ &=& \frac{N-1 + p(N+1)}{\theta_{01}N} \le 1 \end{array} $$

holds if and only if

$$\begin{array}{@{}rcl@{}} p\le \frac{(\theta_{01}-1)N+1}{N+1} = 1 + \frac{(\theta_{01}-2)N}{N+1}. \end{array} $$

□

3.3.2 The case k=2

Proposition 4.

Let \(x_{0} \neq x_{1}, x_{2}\) and

$$\begin{array}{@{}rcl@{}} I &=& \frac{(1-p)\theta_{12}}{2} \left(1 + \frac{3\theta_{0} + 2\theta_{12}}{\theta_{12}N} \right), \\ J &=& \frac{(1-p)\theta_{12}}{2} \sqrt{\left(1 - \frac{\theta_{0}}{\theta_{12}N}\right)^{2} + \frac{4\theta_{0}}{\theta_{12}N^{2}}}, \\ K &=& \frac{\left(1- \frac{(2\theta_{0} + 3\theta_{12})\theta_{0}}{(2\theta_{0} + \theta_{12})\theta_{12}N}\right)} {\sqrt{\left(1 - \frac{\theta_{0}}{\theta_{12}N}\right)^{2} + \frac{4\theta_{0}}{\theta_{12}N^{2}}}}, \\ L &=& \left(\frac{3\theta_{0} + \theta_{12}}{\theta_{0} + \theta_{12}} \right) \frac{\left(1 + \frac{(\theta_{0} - \theta_{12})\theta_{0}} {(3\theta_{0} + \theta_{12})\theta_{12}N}\right)} {\sqrt{\left(1 - \frac{\theta_{0}}{\theta_{12}N}\right)^{2} + \frac{4\theta_{0}}{\theta_{12}N^{2}}}} \end{array} $$

Then, the density function of t i n f e is given by

$$\begin{aligned} & \frac{\mathbb{P}_{x_{0},x_{1},x_{2}}(t_{infe} \in dt)}{dt} \\ &~~= \frac{\theta_{01}}{\tilde{N}} e^{-\frac{\theta_{01}}{\tilde{N}}t} + \frac{\theta_{02}}{\tilde{N}} e^{-\frac{\theta_{02}}{\tilde{N}}t} \\ &\quad- \frac{2\theta_{0} + \theta_{12}}{\tilde{N}} e^{-It} (\cosh{(Jt)} + K \sinh{(Jt)}) \end{aligned} $$

if \(x_{1} \neq x_{2}\), and

$$\begin{aligned} &\frac{\mathbb{P}_{x_{0},x_{1},x_{2}}(t_{infe} \in dt)}{dt} \\ &\quad=\frac{\theta_{01}}{\tilde{N}} e^{-\frac{\theta_{01}}{\tilde{N}}t} + \frac{\theta_{02}}{\tilde{N}} e^{-\frac{\theta_{02}}{\tilde{N}}t} \\ &\quad- \frac{\theta_{0} + \theta_{12}}{\tilde{N}} e^{-It} (\cosh{(Jt)} + L \sinh{(Jt)}) \end{aligned} $$

if \(x_{1}=x_{2}\).

Lemma 3.

Consider the Laplace transform of \(t_{meet}=t_{meet}(\Lambda)\) for Λ={1,2}. If \(x_{0} \neq x_{1}\), \(x_{0} \neq x_{2}\) and \(x_{1} \neq x_{2}\), then

$$\begin{array}{@{}rcl@{}} \mathbb{E}_{x_{0},x_{1},x_{2}}\left[e^{-\lambda t_{meet}}\right] = \frac{(2\theta_{0} + \theta_{12})\tilde{N}\lambda + b}{\tilde{N}^{2}\lambda^{2} + a\tilde{N}\lambda + b}, \end{array} $$

and if \(x_{0} \neq x_{1}\), \(x_{0} \neq x_{2}\) and \(x_{1}=x_{2}\), then

$$\begin{array}{@{}rcl@{}} \mathbb{E}_{x_{0},x_{1},x_{1}}\left[e^{-\lambda t_{meet}}\right] = \frac{(\theta_{0} + \theta_{12})\tilde{N}\lambda + b}{\tilde{N}^{2}\lambda^{2} + a\tilde{N}\lambda + b}, \end{array} $$

where

$$\begin{array}{@{}rcl@{}} a &=& \theta_{12}N + 3\theta_{0} + 2\theta_{12}, \\ b &=& (2\theta_{0} + \theta_{12})\theta_{12}(N+1) + 2{\theta_{0}^{2}}. \end{array} $$

Proof.

Put \(\alpha = \mathbb {E}_{0,1,2}\left [e^{-\lambda t_{meet}}\right ], \beta = \mathbb {E}_{0,1,1}\left [e^{-\lambda t_{meet}}\right ]\). From Theorem 2, they satisfy the following:

$$\begin{array}{@{}rcl@{}} && (\lambda+\theta_{0}+\theta_{12}) \alpha \\ &=& \theta_{0} \left(p \alpha + \frac{1}{\tilde{N}}(2 + (N-2)\alpha) \right) \\ && + \theta_{12} \left(p \alpha + \frac{1}{\tilde{N}}(1 + \beta + (N-2)\alpha)\right), \\ && (\lambda+\theta_{0}+\theta_{12}) \beta \\ &=& \theta_{0}\left(p \beta + \frac{1}{\tilde{N}}(1 + (N-1)\beta) \right) \\ &&+ \theta_{12} \left(p \beta + \frac{1}{\tilde{N}}(1 + (N-1)\alpha)\right). \end{array} $$

By solving the above equations, we obtain the assertion. □
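Again the two-equation system can be solved numerically as a check of the closed form. A sketch (ours; here \(\theta_{12}=\theta_{1}+\theta_{2}\)):

```python
# Solve the 2x2 system from the proof of Lemma 3 and compare alpha with
# ((2 th0 + th12) Ntilde lam + b) / (Ntilde^2 lam^2 + a Ntilde lam + b).
import numpy as np

N, p, th0, th12, lam = 5, 0.3, 1.0, 2.5, 0.6
Nt = N / (1 - p)
a = th12 * N + 3 * th0 + 2 * th12
b = (2 * th0 + th12) * th12 * (N + 1) + 2 * th0**2

M = np.array([
    [lam + th0 + th12 - (th0 + th12) * (p + (N - 2) / Nt), -th12 / Nt],
    [-th12 * (N - 1) / Nt,
     lam + th0 + th12 - (th0 + th12) * p - th0 * (N - 1) / Nt],
])
rhs = np.array([(2 * th0 + th12) / Nt, (th0 + th12) / Nt])
alpha = np.linalg.solve(M, rhs)[0]
closed = ((2 * th0 + th12) * Nt * lam + b) / (Nt**2 * lam**2 + a * Nt * lam + b)
print(alpha, closed)   # the two values should agree
```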

Proof of Proposition 4.

Let Λ={1,2}. From Lemma 3, we see that

$$\begin{array}{@{}rcl@{}} && \mathbb{E}_{x_{0},x_{1},x_{2}}\left[e^{-\lambda t_{meet}(\Lambda)}\right] \\ &=& \frac{(2\theta_{0} + \theta_{12})\tilde{N}\lambda + b}{\tilde{N}^{2}\lambda^{2} + a\tilde{N}\lambda + b} \\ &=& \frac{1}{\tilde{N}^{2}} \frac{(2\theta_{0} + \theta_{12})\left(\tilde{N}\lambda + \frac{a}{2}\right) + \frac{2b - (2\theta_{0} + \theta_{12})a}{2}} {\left(\lambda + \frac{a}{2\tilde{N}}\right)^{2} - \left(\frac{\sqrt{a^{2}- 4b}}{2\tilde{N}}\right)^{2}} \\ &=& \frac{2\theta_{0} + \theta_{12}}{\tilde{N}} \left(\frac{\lambda+I}{(\lambda+I)^{2} - J^{2}} + K\frac{J}{(\lambda+I)^{2} - J^{2}} \right) \end{array} $$

if \(x_{0} \neq x_{1}\), \(x_{0} \neq x_{2}\) and \(x_{1} \neq x_{2}\), and

$$\begin{array}{@{}rcl@{}} && \mathbb{E}_{x_{0},x_{1},x_{2}}\left[e^{-\lambda t_{meet}(\Lambda)}\right] \\ &=& \frac{(\theta_{0} + \theta_{12})\tilde{N}\lambda + b}{\tilde{N}^{2}\lambda^{2} + a\tilde{N}\lambda + b} \\ &=& \frac{1}{\tilde{N}^{2}} \frac{(\theta_{0} + \theta_{12})\left(\tilde{N}\lambda + \frac{a}{2}\right) + \frac{2b - (\theta_{0} + \theta_{12})a}{2}} {\left(\lambda + \frac{a}{2\tilde{N}}\right)^{2} - \left(\frac{\sqrt{a^{2}- 4b}}{2\tilde{N}}\right)^{2}} \\ &=& \frac{\theta_{0} + \theta_{12}}{\tilde{N}} \left(\frac{\lambda+I}{(\lambda+I)^{2} - J^{2}} + L \frac{J}{(\lambda+I)^{2} - J^{2}} \right). \end{array} $$

if \(x_{0} \neq x_{1}\), \(x_{0} \neq x_{2}\) and \(x_{1}=x_{2}\). Note that

$$\begin{array}{@{}rcl@{}} a^{2}- 4b = \theta_{12}^{2}N^{2} \left(\left(1 - \frac{\theta_{0}}{\theta_{12}N}\right)^{2} + \frac{4\theta_{0}}{\theta_{12}N^{2}} \right) > 0. \end{array} $$

Taking inverse Laplace transforms as in (3) and (4), we have

$$\begin{array}{@{}rcl@{}} && \frac{\mathbb{P}_{x_{0},x_{1},x_{2}}(t_{meet}(\Lambda) \in dt)}{dt} \\ &=& \frac{2\theta_{0} + \theta_{12}}{\tilde{N}} e^{-It} (\cosh{(Jt)} + K \sinh{(Jt)}), \end{array} $$

if \(x_{0} \neq x_{1}\), \(x_{0} \neq x_{2}\) and \(x_{1} \neq x_{2}\), and

$$\begin{array}{@{}rcl@{}} && \frac{\mathbb{P}_{x_{0},x_{1},x_{2}}(t_{meet}(\Lambda) \in dt)}{dt} \\ &=& \frac{\theta_{0} + \theta_{12}}{\tilde{N}} e^{-It} (\cosh{(Jt)} + L \sinh{(Jt)}) \end{array} $$

if \(x_{0} \neq x_{1}\), \(x_{0} \neq x_{2}\) and \(x_{1}=x_{2}\). Therefore, combining the results above with Proposition 3, we can compute

$$\begin{array}{@{}rcl@{}} &&\frac{\mathbb{P}_{x_{0},x_{1},x_{2}}(t_{infe} \in dt)}{dt} \\ &=& \frac{\mathbb{P}_{x_{0},x_{1},x_{2}}(t_{meet}(\{1\}) \in dt)}{dt} + \frac{\mathbb{P}_{x_{0},x_{1},x_{2}}(t_{meet}(\{2\}) \in dt)}{dt} \\ &&- \frac{\mathbb{P}_{x_{0},x_{1},x_{2}}(t_{meet}(\{1,2\}) \in dt)}{dt}. \end{array} $$

□

Corollary 4.

We denote by \(t_{infe}(k)\) the infection time of the k+1 random walks. Then,

$$ {\small{\begin{aligned} \mathbb{E}_{x_{0},x_{1}} \left[t_{infe}(1)\right] \le \mathbb{E}_{x_{0},x_{1},x_{1}} \left[t_{infe}(2)\right] \le \mathbb{E}_{x_{0},x_{1},x_{2}} \left[t_{infe}(2)\right] \end{aligned}}} $$
(7)

for any \(\theta_{0},\theta_{1},\theta_{2}>0\) and any mutually distinct \(x_{0},x_{1},x_{2} \in V\).

Lemma 4.

Suppose \(x_{0} \neq x_{1}\) and \(x_{0} \neq x_{2}\). Then,

$$\begin{aligned} \mathbb{E}_{x_{0},x_{1},x_{2}}&[t_{meet}(\{1,2\})]\\ =&\left\{ \begin{array}{lll} \frac{(\theta_{12}(N+1) + \theta_{0})\tilde{N}} {(2\theta_{0} + \theta_{12})\theta_{12}(N+1) + 2{\theta_{0}^{2}}}, &x_{1} \neq x_{2}, \\ \frac{(\theta_{12}(N+1) + 2\theta_{0})\tilde{N}} {(2\theta_{0} + \theta_{12})\theta_{12}(N+1) + 2{\theta_{0}^{2}}}, &x_{1} = x_{2}, \end{array} \right. \end{aligned} $$

and

$${\small{\begin{aligned} \mathbb{E}_{x_{0},x_{1},x_{2}}[t_{infe}] =\left\{ \begin{array}{ll} \left(1 - \frac{\theta_{0}\theta_{12}} {\tilde{\theta}(2\theta_{0} + \theta_{12})((2\theta_{0} + \theta_{12})\theta_{12}(N+1) + 2{\theta_{0}^{2}})} \right)\tilde{\theta}\tilde{N}, & x_{1} \neq x_{2}, \\ \left(1 - \frac{2\theta_{0}(\theta_{0} + \theta_{12})} {\tilde{\theta}(2\theta_{0} + \theta_{12})((2\theta_{0} + \theta_{12})\theta_{12}(N+1) + 2{\theta_{0}^{2}})} \right)\tilde{\theta}\tilde{N}, & x_{1} = x_{2}, \end{array} \right. \end{aligned}}} $$

where

$$\begin{array}{@{}rcl@{}} \tilde{\theta} = \frac{1}{\theta_{01}} + \frac{1}{\theta_{02}} - \frac{1}{\theta_{01} + \theta_{02}}. \end{array} $$

Proof.

The expectations of t m e e t ({1,2}) are obtained from Lemma 3 by taking \(- \frac {\partial }{\partial \lambda } \mathbb {E}_{x_{0},x_{1},x_{2}}[e^{-\lambda t_{meet}(\Lambda)}] \big |_{\lambda =0}\). The expectations of t i n f e are obtained by computing

$$\begin{array}{@{}rcl@{}} &&\mathbb{E}_{x_{0},x_{1},x_{2}}[t_{infe}] \\ &=& \mathbb{E}_{x_{0},x_{1},x_{2}}[t_{meet}(\{1\})] + \mathbb{E}_{x_{0},x_{1},x_{2}}[t_{meet}(\{2\})] \\ &&- \mathbb{E}_{x_{0},x_{1},x_{2}}[t_{meet}(\{1,2\})]. \end{array} $$

□

Proof of Corollary 4.

We observe from Lemma 4 that

$$\begin{array}{@{}rcl@{}} \mathbb{E}_{x_{0},x_{1},x_{1}}\left[t_{meet}(\{1,2\})\right] &\le& \frac{(\theta_{12} + \theta_{0})\tilde{N}} {(2\theta_{0} + \theta_{12})\theta_{12} + {\theta_{0}^{2}}} \\ &=& \frac{\tilde{N}}{\theta_{0} + \theta_{12}} \end{array} $$

since N≥1. Thus the first inequality of (7) holds from

$$\begin{aligned} &\mathbb{E}_{x_{0},x_{1},x_{1}} \left[t_{infe}(2)\right] - \mathbb{E}_{x_{0},x_{1}} \left[t_{infe}(1)\right] \\ &\quad= \mathbb{E}_{x_{0},x_{1},x_{1}} \left[t_{infe}(2)\right] - \mathbb{E}_{x_{0},x_{1},x_{1}} \left[t_{meet}(\{1\})\right] \\ &\quad= \mathbb{E}_{x_{0},x_{1},x_{1}}\left[t_{meet}(\{2\})\right] - \mathbb{E}_{x_{0},x_{1},x_{1}}\left[t_{meet}(\{1,2\})\right] \\ & \quad\ge \frac{\tilde{N}}{\theta_{0} + \theta_{2}} - \frac{\tilde{N}}{\theta_{0} + \theta_{12}} \\ &\quad \ge 0. \end{aligned} $$

Here the second equality follows from the principle of inclusion-exclusion. The second inequality of the corollary also follows from

$$\begin{array}{@{}rcl@{}} &&\mathbb{E}_{x_{0},x_{1},x_{2}} \left[t_{infe}(2)\right] - \mathbb{E}_{x_{0},x_{1},x_{1}} \left[t_{infe}(2)\right] \\ &= &- \mathbb{E}_{x_{0},x_{1},x_{2}}\left[t_{meet}(\{1,2\})\right] + \mathbb{E}_{x_{0},x_{1},x_{1}}\left[t_{meet}(\{1,2\})\right] \\ &\ge& 0 \end{array} $$

since \(\mathbb {E}_{x_{0},x_{1},x_{2}}\left [t_{meet}(\{1,2\})\right ] \le \mathbb {E}_{x_{0},x_{1},x_{1}}\left [t_{meet}(\{1,2\})\right ]\) by Lemma 4.

□

4 Meeting times \(t_{meet}(1)\)

Finally, we give another tool for the meeting time \(t_{meet}(\Lambda)\) with Λ={1}.

A couple of examples of graphs are given in [1] for which the “difference” \(X_{t}^{(0)} - X_{t}^{(1)}\) of two random walks on the graph behaves precisely as \(X_{2t}^{(0)}\). If this property holds, then the meeting time of the two random walks is equal in law to half of the first hitting time of a single random walk. Of course, it must be assumed that an additive operation on V is defined, so that \(X_{t}^{(0)} - X_{t}^{(1)} \in V\). The remaining assumptions are stated below.

Theorem 3.

We consider \(t_{meet}=t_{meet}(\{1\})\) of continuous time random walks on G=(V,E) such that an additive operation on V is defined. Let \(t_{hit}(x)=\inf\{t \ge 0; Y_{t}=x\}\) for x∈V, where \(Y_{2t} = X^{(0)}_{t} - X^{(1)}_{t}\) is the continuous time random walk whose exponential holding times have mean \(\frac {2}{\theta _{0}+\theta _{1}}\). Then,

$$\begin{array}{@{}rcl@{}} t_{meet} \stackrel{\text{law}}{=} \frac{1}{2} t_{hit}(x_{0}-x_{1}) \end{array} $$

with respect to \(\mathbb {P}_{x_{0},x_{1}}\), if the transition matrix P=(p(x,y)) satisfies the following:

  1.

    p(x,y)=p(y,x) for all x,y∈V,

  2.

    p(x,y)=p(x−y,0) for all x,y∈V.

Proof of Theorem 3.

We remark that the hypotheses 1 and 2 in the theorem are equivalent to

$$\begin{array}{@{}rcl@{}} p(x,y-z) = p(y,x+z) \quad \text{for all} \quad x,y,z \in V. \end{array} $$

Then, for any function \(f: \mathbb {R} \to \mathbb {R}\), we have

$$\begin{array}{@{}rcl@{}} &&\mathbb{E}_{x_{0},x_{1}} \left[f(X^{(0)}_{t} - X^{(1)}_{t})\right] \\ &=&\sum\limits_{v,w \in V} f(v-w) p^{t}(x_{0},v)p^{t}(x_{1},w) \\ &=& \sum\limits_{v \in V} \sum\limits_{z \in V} f(z) p^{t}(x_{0},v)p^{t}(x_{1},v-z) \\ &=& \sum\limits_{z \in V} f(z) \sum\limits_{v \in V} p^{t}(x_{0},v)p^{t}(v,x_{1}+z) \\ &=& \sum\limits_{z \in V} f(z) p^{2t}(x_{0},x_{1}+z) \\ &=& \sum\limits_{z \in V} f(z) p^{2t}(x_{0}-x_{1},z) = \mathbb{E}_{x_{0},x_{1}}[f(Y_{2t})]. \end{array} $$

This completes the proof. □

Remark 5.

Theorem 3 also holds for discrete time random walks under the obvious modifications.
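For a concrete check of Theorem 3, the following Monte Carlo sketch (ours) compares, on the cycle Z/nZ with simple walks, the meeting time of two walks with half the hitting time of 0 by a single walk started at \(x_{0}-x_{1}\) (equivalent in law, under the symmetry hypotheses, to hitting \(x_{0}-x_{1}\) from 0) run with holding-time mean \(\frac{2}{\theta_{0}+\theta_{1}}\).

```python
# Theorem 3 on the cycle Z/nZ: t_meet of two simple walks (rates th0, th1) versus
# (1/2) * (hitting time of 0 by one walk started at x0-x1 with rate (th0+th1)/2).
import random

n, th0, th1, x0, x1, trials = 9, 1.0, 2.0, 0, 3, 50_000

def meet():
    t, a, b = 0.0, x0, x1
    while a != b:
        r0, r1 = random.expovariate(th0), random.expovariate(th1)
        t += min(r0, r1)
        if r0 < r1:
            a = (a + random.choice((-1, 1))) % n
        else:
            b = (b + random.choice((-1, 1))) % n
    return t

def half_hit():
    t, y = 0.0, (x0 - x1) % n
    while y != 0:
        t += random.expovariate((th0 + th1) / 2)   # holding-time mean 2/(th0+th1)
        y = (y + random.choice((-1, 1))) % n
    return t / 2

print(sum(meet() for _ in range(trials)) / trials,
      sum(half_hit() for _ in range(trials)) / trials)  # both approx. 6.0 here
```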

Example 1.

We demonstrate how to define an additive operation on V for continuous time simple random walks on G=(V,E).

  1.

    Let G be a cycle graph with vertex set V={0,…,n−1} and edge set E={xy;x−y=±1 mod n}. Then, Theorem 3 holds if x−y is given by x−y mod n. In this context, we recover the result in Section 3.2 for the case p=q=1/2.

  2.

    Let G be a Hamming graph with vertex set V={0,…,n−1}^d and edge set E={xy;|{i;x_i ≠ y_i}|=1}. Then, Theorem 3 holds if \((x-y)_{i}\) is given by \(x_{i}-y_{i} \bmod n\) for all i=1,…,d. The hitting times for Hamming graphs are investigated in [13].

References

  1. Aldous, D.: Meeting times for independent Markov chains. Stoch. Process. Appl. 38, 185–193 (1991).


  2. Aldous, D., Fill, J.: Reversible Markov chains and random walks on graphs. Monograph available at http://www.stat.berkeley.edu/~aldous/RWG/book.html (2002).

  3. Bshouty, N., Higham, L., Warpechowska-Gruca, J.: Meeting times of random walks on graphs. Inf Process Lett. 69(5), 259–265 (1999).


  4. Cooper, C., Frieze, A., Radzik, T.: Multiple random walks in random regular graphs. SIAM J Discrete Math. 22(4), 1738–1761 (2009).


  5. Coppersmith, D., Tetali, P., Winkler, P.: Collisions among random walks on a graph. SIAM J Discrete Math. 6(3), 363–374 (1993).


  6. Datta, N., Dorlas, T.C: Random walks on a complete graph: a model for infection. J Appl Probab. 41, 1008–1021 (2004).


  7. Draief, M., Ganesh, A.: A random walk model for infection on graphs: spread of epidemics & rumours with mobile agents. Discrete Event Dyn Syst. 21, 41–61 (2011).


  8. Kuperman, M.: Invited review: Epidemics on social networks. Papers in Physics. 5, art. 050003 (2013).


  9. Kurkova, I., Popov, S., Vachkovskaia, M.: On infection spreading and competition between independent random walks. Electron. J. Probab. 9(11), 1–22 (2004).


  10. Kurtz, T., Lebensztayn, E., Leichsenring, A., Machado, F.: Limit theorems for an epidemic model on the complete graph. Alea. 4, 45–55 (2008).


  11. Levin, D., Peres, Y., Wilmer, E.: Markov Chains and Mixing Times. American Mathematical Society, Providence, RI (2009).


  12. Machado, F., Mashurian, H., Matzinger, H.: CLT for the proportion of infected individuals for an epidemic model on a complete graph. Markov Process. Related Fields. 17, 209–224 (2011).


  13. Markowsky, G.: Simple random walk on distance-regular graphs. Preprint, arXiv:1301.6394 (2013).

  14. Pettarin, A., Pietracaprina, A., Pucci, G.: Infectious random walks. Preprint, arXiv:1007.1604v2 (2011).


Acknowledgements

The author is grateful to Mikio Shibuya for stimulus discussions, in particular for Theorem 3. The author was supported by JST, ERATO, Kawarabayashi Large Graph Project.

Author information


Correspondence to Takuya Ohwa.

Rights and permissions

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.


About this article


Cite this article

Ohwa, T. Exact computation for meeting times and infection times of random walks on graphs. Pac. J. Math. Ind. 7, 5 (2015). https://doi.org/10.1186/s40736-015-0016-2
