ORIGINAL ARTICLE · Open access

A tridiagonal matrix construction by the quotient difference recursion formula in the case of multiple eigenvalues

Abstract

In this paper, we consider an inverse eigenvalue problem, namely, the construction of a tridiagonal matrix with specified multiple eigenvalues, from the viewpoint of the quotient difference (qd) recursion formula. We also prove that the characteristic and the minimal polynomials of a constructed tridiagonal matrix are equal to each other. As an application of the qd formula, we present a procedure for obtaining a tridiagonal matrix with specified multiple eigenvalues. Examples are given by presenting four tridiagonal matrices with specified multiple eigenvalues.

1 Introduction

One of the important problems in linear algebra is to construct matrices with specified eigenvalues. This is an inverse eigenvalue problem, classified as a structured inverse eigenvalue problem (SIEP) in [1]. The main purpose of this paper is to design a procedure for solving an SIEP in the case where the constructed matrix has tridiagonal form with multiple eigenvalues, through reconsidering the quotient difference (qd) formula. It is known that the qd formula has applications to computing a continued fraction expansion of a power series [5], zeros of polynomials [3], eigenvalues of a so-called Jacobi matrix [9] and so on. Though the book [9] refers to an aspect similar to that in the following sections, it gives only an anticipatory comment, without proof, for the case of multiple eigenvalues, and there is no numerical observation verifying it. The key point for our purpose is to investigate the Hankel determinants appearing in the determinant solution to the qd formula with the help of the Jordan canonical form. In this paper, we focus on this unsettled case in order to design a procedure for constructing a tridiagonal matrix with specified multiple eigenvalues, based on the qd formula. The discussion presumably stopped at that point because multiple-precision arithmetic and symbolic computing were not sufficiently developed around the years when Rutishauser's works on the qd formula were published. The qd formula, strictly speaking its differential form, computes tridiagonal eigenvalues with high relative accuracy in single-precision or double-precision arithmetic [7]; in contrast, the formula serving for constructing a tridiagonal matrix gives rise to errors that are not small. Thus, the qd formula serving for constructing a tridiagonal matrix is of little worth in single-precision or double-precision arithmetic.
On recent computers, it is not difficult to employ not only single- or double-precision arithmetic but also arbitrary-precision arithmetic or symbolic computing. In fact, an expression involving only symbolic quantities is evaluated exactly by scientific computing software such as Wolfram Mathematica, Maple and so on. Numerical errors frequently occur in finite-precision arithmetic, so a constructed tridiagonal matrix probably fails to have exactly multiple eigenvalues without symbolic computing. The procedure resulting from this paper is therefore assumed to be carried out with symbolic computing.

This paper is organized as follows. In Section 2, we first give a short explanation of some known properties of the qd formula. In Section 3, through reconsidering the qd formula, we observe a tridiagonal matrix whose characteristic polynomial is associated with the minimal polynomial of a general matrix. This tridiagonal matrix essentially differs from a Jacobi matrix in that it cannot always be symmetrized. We also discuss the characteristic and the minimal polynomials of a tridiagonal matrix in Section 4. In Section 5, we design a procedure for constructing a tridiagonal matrix with specified multiple eigenvalues, and then demonstrate four tridiagonal matrices as examples of the resulting procedure. Finally, in Section 6, we give concluding remarks.

2 Some properties for the qd recursion formula

In this section, we briefly review two theorems in [4] concerning the qd formula from the viewpoint of a generating function, the Hankel determinant and a tridiagonal matrix.

Let us introduce the Hankel determinants \(H_{1}^{(n)},H_{2}^{(n)},\dots \) given in terms of a complex sequence \(\{\,f_{n}\}_{0}^{\infty }\) as

$$\begin{array}{*{20}l} H_{s}^{(n)}&=\left| \begin{array}{cccc} f_{n} & f_{n+1} & \cdots & f_{n+s-1} \\ f_{n+1} & f_{n+2} & \cdots & f_{n+s} \\ \vdots & \vdots & \ddots & \vdots \\ f_{n+s-1} & f_{n+s} & \cdots & f_{n+2s-2} \\ \end{array} \right|,\\ s&=1,2,\dots,\quad n=0,1,\dots, \end{array} $$
(1)

where \(H_{-1}^{(n)}=0\) and \(H_{0}^{(n)}=1\) for n=0,1,…. Moreover, let F(z) be a generating function associated with \(\{\,f_{n}\}_{0}^{\infty }\) as

$$\begin{array}{*{20}l} F(z)=\sum\limits_{n=0}^{\infty}f_{n} z^{n}=f_{0} +f_{1} z+f_{2} z^{2}+\cdots. \end{array} $$
(2)
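For concreteness, the Hankel determinants (1) can be evaluated with exact rational arithmetic. The following Python sketch is ours, not from the paper; the helper name `hankel` and the sample sequence \(f_n=2^n+1\) (whose generating function has two simple finite poles) are illustrative assumptions.

```python
from fractions import Fraction

def hankel(f, s, n):
    """The s-by-s Hankel determinant H_s^(n) of (1), computed exactly."""
    if s == 0:
        return Fraction(1)
    M = [[Fraction(f[n + i + j]) for j in range(s)] for i in range(s)]
    det = Fraction(1)
    for c in range(s):                 # Gaussian elimination over Fraction
        pivot = next((r for r in range(c, s) if M[r][c] != 0), None)
        if pivot is None:
            return Fraction(0)         # a zero column means a zero determinant
        if pivot != c:
            M[c], M[pivot] = M[pivot], M[c]
            det = -det                 # row swap flips the sign
        det *= M[c][c]
        for r in range(c + 1, s):
            ratio = M[r][c] / M[c][c]
            for j in range(c, s):
                M[r][j] -= ratio * M[c][j]
    return det

# Illustrative sequence f_n = 2^n + 1 (not from the paper): F(z) has
# two simple finite poles, so H_2^(n) != 0 while H_3^(n) vanishes.
f = [2**n + 1 for n in range(10)]
print(hankel(f, 2, 0), hankel(f, 3, 1))   # 1 0
```

The vanishing of \(H_{3}^{(1)}\) here is an instance of Theorem 1 below with \(l=2\).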

Let us consider the case where F(z) is a rational function of z with a pole of order \(l_{0}\ge 0\) at infinity and finite poles \(z_{k}\ne 0\) of order \(l_{k}\) for k=1,2,…,L. Then the sum of the orders of the finite poles is \(l=l_{1}+l_{2}+\cdots+l_{L}\), and F(z) is factorized as

$$\begin{array}{*{20}l} F(z)=G_{0}(z)+\frac{G(z)}{(z-z_{1})^{l_{1}}(z-z_{2})^{l_{2}}\cdots(z-z_{L})^{l_{L}}}, \end{array} $$
(3)

where G(z) is a polynomial of degree at most l, and G0(z) is a polynomial of degree l0 if l0>0, or G0(z)=0 if l0=0. The following theorem gives the determinant solution to the qd recursion formula

$$ {}{\fontsize{9.9pt}{9.6pt}\selectfont{\begin{aligned}\left\{ \begin{array}{lll} q_{s}^{(n+1)}+e_{s-1}^{(n+1)}=q_{s}^{(n)}+e_{s}^{(n)},\\ \qquad s=1,2,\dots,\quad n=0,1,\dots, \\ q_{s}^{(n+1)}e_{s}^{(n+1)}=q_{s+1}^{(n)}e_{s}^{(n)},\quad s=1,2,\dots,\quad n=0,1,\dots. \end{array} \right. \end{aligned}}} $$
(4)

Theorem 1.

([4], pp. 596, 603, 610) Let F(z) be factorized as in (3). Then it holds that

$$\begin{array}{*{20}l} H_{s}^{(n)}=0,\quad s=l+1,l+2,\dots,\quad n=l_{0}+1,l_{0}+2,\dots. \end{array} $$
(5)

Let us assume that

$$\begin{array}{*{20}l} H_{s}^{(n)} \ne 0,\quad s=1,2,\dots,l,\quad n=0,1,\dots. \end{array} $$
(6)

Then the qd formula (4) with the initial settings

$$\begin{array}{*{20}l} e_{0}^{(n)}=0,\quad q_{1}^{(n)}=\frac{f_{n+1}}{f_{n}},\quad n=0,1,\dots \end{array} $$
(7)

admits the determinant solution

$$\begin{array}{*{20}l} & q_{s}^{(n)}=\frac{H_{s}^{(n+1)}H_{s-1}^{(n)}}{H_{s}^{(n)}H_{s-1}^{(n+1)}},\quad s=1,2,\dots,l,\quad n=0,1,\dots, \end{array} $$
(8)
$$\begin{array}{*{20}l} & e_{s}^{(n)}=\frac{H_{s+1}^{(n)}H_{s-1}^{(n+1)}}{H_{s}^{(n)}H_{s}^{(n+1)}},\quad s=0,1,\dots,l,\quad n=0,1,\dots. \end{array} $$
(9)

From (9) with (5), it follows that \(e_{l}^{(n)}=0\) for n=0,1,…. Moreover, it turns out that \(q_{s}^{(n)}\) and \(e_{s}^{(n)}\) for s=l+1,l+2,… and n=0,1,… are not given in the same form as (8) and (9).
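The recursion (4) with the initial settings (7) can be run directly, and the vanishing of \(e_{l}^{(n)}\) observed. The Python sketch below is ours; the sequence \(f_n=2^n+3^n\), for which \(l=2\), is an illustrative choice, not taken from the paper.

```python
from fractions import Fraction

# The qd recursion (4) with initial settings (7), run on the illustrative
# sequence f_n = 2^n + 3^n (not from the paper), for which l = 2.
f = [Fraction(2**n + 3**n) for n in range(8)]
q1 = [f[n + 1] / f[n] for n in range(7)]      # q_1^(n), eq. (7)
e0 = [Fraction(0)] * 7                        # e_0^(n) = 0, eq. (7)

# Rhombus rules rearranged from (4):
#   e_s^(n)     = q_s^(n+1) + e_{s-1}^(n+1) - q_s^(n)
#   q_{s+1}^(n) = q_s^(n+1) e_s^(n+1) / e_s^(n)
e1 = [q1[n + 1] + e0[n + 1] - q1[n] for n in range(6)]
q2 = [q1[n + 1] * e1[n + 1] / e1[n] for n in range(5)]
e2 = [q2[n + 1] + e1[n + 1] - q2[n] for n in range(4)]

assert all(x == 0 for x in e2)   # e_l^(n) = 0 with l = 2, cf. (5) and (9)
print("e_2^(n) = 0 for n = 0, ..., 3")
```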

Let us introduce s-by-s tridiagonal matrices,

$$\begin{array}{*{20}l} & T_{s}^{(n)} =\left(\begin{array}{cccc} q_{1}^{(n)} & q_{1}^{(n)}e_{1}^{(n)} & & \\ 1 & q_{2}^{(n)}+e_{1}^{(n)} & \ddots & \\ & \ddots & \ddots & q_{s-1}^{(n)}e_{s-1}^{(n)}\\ & & 1 & q_{s}^{(n)}+e_{s-1}^{(n)} \end{array} \right),\\ & \quad s=1,2,\dots,l,\quad n=0,1,\dots \end{array} $$
(10)

with the qd variables \(q_{s}^{(n)}\) and \(e_{s}^{(n)}\). Let I s be the s-by-s identity matrix. Then we obtain a theorem for the characteristic polynomial of \(T_{l}^{(n)}\).

Theorem 2.

([4], pp. 626, 635) Let F(z) be factorized as in (3). Let us assume that \(H_{s}^{(n)}\) satisfies (6). For n=0,1,…, it holds that

$$\begin{array}{*{20}l} {} \det\left({zI}_{l}-T_{l}^{(n)}\right) \,=\,\left(z-z_{1}^{-1}\right)^{l_{1}}\left(z-z_{2}^{-1}\right)^{l_{2}}\!\cdots\left(z-z_{L}^{-1}\right)^{l_{L}}\!. \end{array} $$
(11)

3 Tridiagonal matrix associated with general matrix

In this section, from the viewpoint of the characteristic and the minimal polynomials, we associate a general M-by-M complex matrix A with a tridiagonal matrix \(T_{l}^{(n)}\).

Let \(\lambda_{1},\lambda_{2},\dots,\lambda_{N}\) be the distinct eigenvalues of A, numbered so that \(|\lambda_{1}|\ge|\lambda_{2}|\ge\cdots\ge|\lambda_{N}|\). It is noted that some of \(|\lambda_{1}|,|\lambda_{2}|,\dots,|\lambda_{N}|\) may be equal to each other in the case where some of \(\lambda_{1},\lambda_{2},\dots,\lambda_{N}\) are negative or complex eigenvalues. Let \(M_{k}\) be the algebraic multiplicity of \(\lambda_{k}\), where \(M=M_{1}+M_{2}+\cdots+M_{N}\). For the identity matrix \(I_{M}\in \mathbb {R}^{M\times M}\), let \(\phi_{A}(z)=\det(zI_{M}-A)\) be the characteristic polynomial of A, namely,

$$\begin{array}{*{20}l} \phi_{A}(z)=(z-\lambda_{1})^{M_{1}}(z-\lambda_{2})^{M_{2}}\cdots(z-\lambda_{N})^{M_{N}}. \end{array} $$
(12)

Let us prepare the sequence \(\{\,f_{n}\}_{0}^{\infty }\) given by

$$\begin{array}{*{20}l} f_{n}=\boldsymbol{w}^{H}A^{n}\boldsymbol{u},\quad n=0,1,\dots \end{array} $$
(13)

for some nonzero M-dimensional complex vectors u and w, where the superscript H denotes the Hermitian transpose. Originally, \(f_{0},f_{1},\dots\) were called the Schwarz constants, but today they are usually called the moments or the Markov parameters [2]. Since the matrix power series \(\sum _{n=0}^{\infty }(zA)^{n}\) is a Neumann series (cf. [6]), \(F(z)=\sum _{n=0}^{\infty }\boldsymbol {w}^{H}(zA)^{n}\boldsymbol {u}\) converges absolutely in the disk \(D:|z|<|\lambda_{1}|^{-1}\). Moreover, we derive \(F(z)=\boldsymbol{w}^{H}(I_{M}-zA)^{-1}\boldsymbol{u}\), which implies that F(z) is a rational function with the denominator \(\det(I_{M}-zA)=z^{M}\phi_{A}(z^{-1})\), as follows.

$$\begin{array}{*{20}l} F(z)=\frac{\tilde{G}(z)}{(1-\lambda_{1}z)^{M_{1}}(1-\lambda_{2}z)^{M_{2}}\cdots(1-\lambda_{N}z)^{M_{N}}}, \end{array} $$
(14)

where \(\tilde {G}(z)\) is some polynomial with respect to z. It is remarkable that the numerator \(\tilde {G}(z)\) may have the same factors as the denominator \(\phantom {\dot {i}\!}(1-\lambda _{1}z)^{M_{1}}(1-\lambda _{2}z)^{M_{2}}\cdots (1-\lambda _{N}z)^{M_{N}}\). In other words, F(z) has the poles \(\lambda _{1}^{-1},\lambda _{2}^{-1},\dots,\lambda _{N}^{-1}\) whose orders are equal to or less than M1,M2,…,M N , respectively.
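The moments (13) can be generated by repeated matrix-vector products rather than by forming matrix powers. The sketch below is ours, restricted for simplicity to real A, u, w (so \(\boldsymbol{w}^{H}\) reduces to \(\boldsymbol{w}^{\top}\)); the 2-by-2 Jordan block used to exercise it is an illustrative choice, not from the paper.

```python
def moments(A, u, w, count):
    """Moments f_n = w^H A^n u of (13); for real A, u, w the Hermitian
    transpose w^H reduces to the plain transpose w^T."""
    f, x = [], list(u)                    # x holds A^n u
    for _ in range(count):
        f.append(sum(wi * xi for wi, xi in zip(w, x)))
        x = [sum(a * xi for a, xi in zip(row, x)) for row in A]   # x <- A x
    return f

# Illustrative 2-by-2 Jordan block (not from the paper).
A = [[2, 1], [0, 2]]
print(moments(A, [1, 1], [1, 1], 4))   # [2, 5, 12, 28]
```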

Let us introduce the Jordan canonical form of A in order to investigate the poles of F(z) with (13) even in the case where A has multiple eigenvalues. Let \({\mathcal {M}}_{k}\) be the geometric multiplicity of \(\lambda_{k}\), namely, the dimension of the eigenspace \(\mathrm{Ker}(A-\lambda_{k}I_{M})\). It is noted that \({\mathcal {M}}_{k}\) is less than or equal to the algebraic multiplicity \(M_{k}\). The matrix A has \({\mathcal {M}}_{k}\) eigenvectors corresponding to \(\lambda_{k}\), and these eigenvectors, denoted by \(\boldsymbol {v}_{k,1},\boldsymbol {v}_{k,2},\dots,\boldsymbol {v}_{k,{\mathcal {M}}_{k}}\), satisfy

$$\begin{array}{*{20}l} A\boldsymbol{v}_{k,j}=\lambda_{k}\boldsymbol{v}_{k,j},\quad j=1,2,\dots,{\mathcal{M}}_{k}. \end{array} $$
(15)

Hereinafter, for \(j=1,2,\dots,{\mathcal {M}}_{k}\), let vk,j(1)=vk,j. Moreover, for \(j=1,2,\dots,{\mathcal {M}}_{k}\), let vk,j(2), vk,j(3), …, vk,j(mk,j) denote the generalized eigenvectors associated with the eigenvectors vk,j(1), where mk,j is the maximal integer such that vk,j(1), vk,j(2), …, vk,j(mk,j) are linearly independent. Of course, \(m_{k,1}+m_{k,2}+\cdots +m_{k,{\mathcal {M}}_{k}}=M_{k}\). Then, the generalized eigenvectors vk,j(2),vk,j(3),…,vk,j(mk,j) satisfy

$$\begin{array}{*{20}l} &A\boldsymbol{v}_{k,j}(i)=\lambda_{k}\boldsymbol{v}_{k,j}(i)+\boldsymbol{v}_{k,j}(i-1),\\ &\qquad i=2,3,\dots,m_{k,j},\quad j=1,2,\dots,{\mathcal{M}}_{k}. \end{array} $$
(16)

From (15) and (16), we derive the Jordan canonical form of A as

$$\begin{array}{*{20}l} & V^{-1}AV = J \end{array} $$
(17)

with the nonsingular matrix

$$\begin{array}{*{20}l} & V =(V_{1}\,V_{2}\,\cdots\,V_{N})\in\mathbb{C}^{M\times M}, \end{array} $$
(18)

and the block diagonal matrix

$$\begin{array}{*{20}l} & J=\text{diag}(J_{1},J_{2},\dots,J_{N})\in\mathbb{C}^{M\times M}, \end{array} $$
(19)

where

$$\begin{array}{*{20}l} & V_{k} = (V_{k,1}\,V_{k,2}\,\cdots\,V_{k,{\mathcal{M}}_{k}})\in\mathbb{C}^{M\times M_{k}}, \end{array} $$
(20)
$$\begin{array}{*{20}l} & V_{k,j}=(\boldsymbol{v}_{k,j}(1)\,\boldsymbol{v}_{k,j}(2)\,\cdots\,\boldsymbol{v}_{k,j}(m_{k,j}))\in\mathbb{C}^{M\times m_{k,j}}, \end{array} $$
(21)
$$\begin{array}{*{20}l} & J_{k} = \text{diag}(J_{k,1},J_{k,2},\dots,J_{k,{\mathcal{M}}_{k}})\in\mathbb{C}^{M_{k}\times M_{k}}, \end{array} $$
(22)
$$\begin{array}{*{20}l} & J_{k,j}=\left(\begin{array}{lccc} \lambda_{k} & 1 & & \\ & \lambda_{k} & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_{k} \end{array} \right) \in\mathbb{C}^{m_{k,j}\times m_{k,j}}. \end{array} $$
(23)

Without loss of generality, we may assume that \(m_{k,1}\ge m_{k,2}\ge \cdots \ge m_{k,{\mathcal {M}}_{k}}\).

Let \(m_{k}=\max \{m_{k,1},m_{k,2},\dots,m_{k,{\mathcal {M}}_{k}}\}\). Since \(m_{k,1}\ge m_{k,2}\ge \cdots \ge m_{k,{\mathcal {M}}_{k}}\), it is obvious that m k =mk,1. With the help of the Jordan canonical form of A as in (17), we get a proposition for the sequence \(\{\,f_{n}\}_{0}^{\infty }\) in (13).

Proposition 1.

Let u be the vector given by the linear combination of the eigenvectors and the generalized eigenvectors of A, namely, for some constants κk,j,i,

$$\begin{array}{*{20}l} \boldsymbol{u}=\sum\limits_{k=1}^{N}\sum\limits_{j=1}^{{\mathcal {M}}_{k}}\sum\limits_{i=1}^{m_{k,j}}\kappa_{k,j,i}\boldsymbol{v}_{k,j}(i). \end{array} $$
(24)

Moreover, for a vector w, let

$$\begin{array}{*{20}l} c_{k,i}=\sum\limits_{j=1}^{{\mathcal{M}}_{k}}\sum\limits_{i^{\prime}=i}^{m_{k,j}}\kappa_{k,j,i^{\prime}}\boldsymbol{v}^{H}_{k,j}(i^{\prime}-i+1)\boldsymbol{w}. \end{array} $$
(25)

Then, the sequence \(\{f_{n}\}_{0}^{\infty }\) in (13) can be expressed by

$$\begin{array}{*{20}l} f_{n}=\sum\limits_{k=1}^{N}\sum\limits_{i=1}^{m_{k}}\binom{n}{i-1}c_{k,i}\lambda_{k}^{n-i+1}, \end{array} $$
(26)

where the binomial coefficients are 0 if n<i−1. Also, for suitable u and w, it holds that

$$\begin{array}{*{20}l} c_{k,i}\ne 0,\quad i=1,2,\dots,m_{k},\quad k=1,2,\dots,N. \end{array} $$
(27)

Proof.

From V−1AV=J in (17), it holds that An=VJnV−1. By combining it with (13) and (24), we derive

$$\begin{array}{*{20}l} f_{n} &=\boldsymbol{w}^{H}VJ^{n}V^{-1}\boldsymbol{u} \\ &=\sum\limits_{k=1}^{N}\sum\limits_{j=1}^{{\mathcal{M}}_{k}}\sum\limits_{i=1}^{m_{k,j}} \kappa_{k,j,i}\boldsymbol{w}^{H}VJ^{n}V^{-1}\boldsymbol{v}_{k,j}(i). \end{array} $$
(28)

Let \(\rho_{k,j,i}\) be the column number of V in which \(\boldsymbol{v}_{k,j}(i)\) is arranged. Then it is obvious that \(V^{-1}\boldsymbol{v}_{k,j}(i)=\boldsymbol{e}_{k,j}(i)\), where \(\boldsymbol{e}_{k,j}(i)\) denotes the unit vector whose \(\rho_{k,j,i}\)th entry is 1 and whose other entries are 0. Thus, it follows that

$$\begin{array}{*{20}l} f_{n}=\sum\limits_{k=1}^{N}\sum\limits_{j=1}^{{\mathcal {M}}_{k}}\sum\limits_{i=1}^{m_{k,j}}\kappa_{k,j,i}\boldsymbol{w}^{H} V J^{n}\boldsymbol{e}_{k,j}(i). \end{array} $$
(29)

Since J is block diagonal, so are the matrix \(J^{n}\) and its blocks \((J_{k})^{n}\). It also turns out that \((J_{k,j})^{n}\) is upper triangular. Thus, \(J^{n}\boldsymbol{e}_{k,j}(i)\) is the \(\rho_{k,j,i}\)th column vector of \(J^{n}\), whose entries are zero except in the \(\rho_{k,j,1}\)th, \(\rho_{k,j,2}\)th, …, \(\rho_{k,j,i}\)th rows. The Jordan blocks \(J_{k,j}\) can be decomposed as

$$\begin{array}{*{20}l} & J_{k,j}=\lambda_{k}I_{m_{k,j}}+E_{m_{k,j}}, \end{array} $$
(30)
$$\begin{array}{*{20}l} & E_{m_{k,j}}=\left(\begin{array}{lccc} 0 & 1 & & \\ & 0 & \ddots & \\ & & \ddots & 1 \\ & & & 0 \end{array} \right) \in \mathbb{R}^{m_{k,j}\times m_{k,j}}. \end{array} $$
(31)

It is emphasized that \(E_{m_{k,j}}\) is a nilpotent matrix whose ith power becomes the zero matrix O for \(i\ge m_{k,j}\). Thus, \((J_{k,j})^{n}\) can be expressed as

$$\begin{array}{*{20}l} (J_{k,j})^{n} &=\sum\limits_{i^{\prime}=1}^{m_{k,j}}\binom{n}{i^{\prime}-1}\lambda_{k}^{n-i^{\prime}+1} (E_{m_{k,j}})^{i^{\prime}-1}, \end{array} $$
(32)

where \((E_{m_{k,j}})^{0}=I_{m_{k,j}}\). Let us introduce an mk,j-dimensional unit vector e(i) which is regarded as a part of ek,j(i). Then, by taking account that \((E_{m_{k,j}})^{i^{\prime }-1}\boldsymbol {e}(i)=\boldsymbol {e}(i-i^{\prime }+1)\) in (32), we derive

$$\begin{array}{*{20}l} J^{n}\boldsymbol{e}_{k,j}(i)=\sum\limits_{i^{\prime}=1}^{i}\binom{n}{i^{\prime}-1}\lambda_{k}^{n-i^{\prime}+1}\boldsymbol{e}_{k,j}(i-i^{\prime}+1). \end{array} $$
(33)

Since it holds that \(V\boldsymbol{e}_{k,j}(i-i^{\prime}+1)=\boldsymbol{v}_{k,j}(i-i^{\prime}+1)\), by combining this with (29) and (33), we have

$$\begin{array}{*{20}l} f_{n} &=\sum\limits_{k=1}^{N}\sum\limits_{j=1}^{{\mathcal{M}}_{k}}\sum\limits_{i=1}^{m_{k,j}}\sum\limits_{i^{\prime}=1}^{i} \kappa_{k,j,i}\boldsymbol{w}^{H}\boldsymbol{v}_{k,j}(i-i^{\prime}+1) \\ &\qquad \times \binom{n}{i^{\prime}-1}\lambda_{k}^{n-i^{\prime}+1}. \end{array} $$
(34)

By writing out the two inner summations, we get

$$\begin{array}{*{20}l} f_{n} &= \sum\limits_{k=1}^{N}\sum\limits_{j=1}^{{\mathcal{M}}_{k}} \left[ \kappa_{k,j,1}\boldsymbol{w}^{H}\boldsymbol{v}_{k,j}(1)\binom{n}{0}{\lambda_{k}^{n}}\right.\\ &\quad + \kappa_{k,j,2}\left(\boldsymbol{w}^{H}\boldsymbol{v}_{k,j}(2)\binom{n}{0}{\lambda_{k}^{n}} + \boldsymbol{w}^{H}\boldsymbol{v}_{k,j}(1)\binom{n}{1}\lambda_{k}^{n-1}\right)\\ &\quad + \kappa_{k,j,3}\left(\boldsymbol{w}^{H}\boldsymbol{v}_{k,j}(3)\binom{n}{0}{\lambda_{k}^{n}} + \boldsymbol{w}^{H}\boldsymbol{v}_{k,j}(2)\binom{n}{1}\lambda_{k}^{n-1} \right.\\ &\left.\quad + \boldsymbol{w}^{H}\boldsymbol{v}_{k,j}(1)\binom{n}{2}\lambda_{k}^{n-2}\right)\\ &\quad+ \cdots\\ &\quad +\kappa_{k,j,m_{k,j}}\left(\boldsymbol{w}^{H}\boldsymbol{v}_{k,j}(m_{k,j})\binom{n}{0}{\lambda_{k}^{n}}\right.\\ &\quad + \boldsymbol{w}^{H}\boldsymbol{v}_{k,j}(m_{k,j}-1)\binom{n}{1}\lambda_{k}^{n-1} \\ &\left.\left.\!\quad + \cdots + \boldsymbol{w}^{H}\boldsymbol{v}_{k,j}(1)\binom{n}{m_{k,j}-1}\lambda_{k}^{n-m_{k,j}+1}\right)\right]. \end{array} $$

Moreover, by paying our attention to the binomial coefficients, we can rewrite f n as

$$\begin{array}{*{20}l} f_{n} &= \sum\limits_{k=1}^{N}\left[ \binom{n}{0}{\lambda_{k}^{n}} \sum\limits_{j=1}^{{\mathcal{M}}_{k}}\right. \left(\vphantom{+ \cdots + \kappa_{k,j,m_{k,j}}\boldsymbol{w}^{H}\boldsymbol{v}_{k,j}(m_{k,j})}\kappa_{k,j,1}\boldsymbol{w}^{H}\boldsymbol{v}_{k,j}(1) + \kappa_{k,j,2}\boldsymbol{w}^{H}\boldsymbol{v}_{k,j}(2)\right.\\ &\left.\quad + \cdots + \kappa_{k,j,m_{k,j}}\boldsymbol{w}^{H}\boldsymbol{v}_{k,j}(m_{k,j})\right) \\ &\quad +\binom{n}{1}\lambda_{k}^{n-1} \sum_{j=1}^{{\mathcal{M}}_{k}} \left(\vphantom{+ \cdots + \kappa_{k,j,m_{k,j}}\boldsymbol{w}^{H}\boldsymbol{v}_{k,j}(m_{k,j}-1)}\kappa_{k,j,2}\boldsymbol{w}^{H}\boldsymbol{v}_{k}(j,1) + \kappa_{k,j,3}\boldsymbol{w}^{H}\boldsymbol{v}_{k}(j,2) \right.\\ &\left.\quad + \cdots + \kappa_{k,j,m_{k,j}}\boldsymbol{w}^{H}\boldsymbol{v}_{k,j}(m_{k,j}-1) \right)\\ &\quad + \cdots\\ &\left.\quad + \binom{n}{m_{k,j}-1}\lambda_{k}^{n-m_{k,j}+1} \sum\limits_{j=1}^{{\mathcal{M}}_{k}} \kappa_{k,j,m_{k,j}}\boldsymbol{w}^{H}\boldsymbol{v}_{k,j}(1) \right]\\ &= \sum\limits_{k=1}^{N} \left[ \binom{n}{0}{\lambda_{k}^{n}} \sum\limits_{j=1}^{{\mathcal{M}}_{k}}\sum\limits_{i=1}^{m_{k,j}}\kappa_{k,j,i}\boldsymbol{w}^{H}\boldsymbol{v}_{k,j}(i-1+1) \right.\\ &\quad +\binom{n}{1}\lambda_{k}^{n-1} \sum\limits_{j=1}^{{\mathcal{M}}_{k}}\sum\limits_{i=2}^{m_{k,j}}\kappa_{k,j,i}\boldsymbol{w}^{H}\boldsymbol{v}_{k,j}(i-2+1)\\ &\quad + \cdots \\ &\quad +\binom{n}{m_{k,j}-1}\lambda_{k}^{n-m_{k,j}+1}\\ &\left.\qquad \times \sum\limits_{j=1}^{{\mathcal{M}}_{k}}\sum\limits_{i=m_{k,j}}^{m_{k,j}}\kappa_{k,j,i}\boldsymbol{w}^{H}\boldsymbol{v}_{k,j}(i-m_{k,j}+1) \right]. \end{array} $$

From \(m_{k}\ge m_{k,j}\) and \(\boldsymbol{w}^{H}\boldsymbol{v}_{k,j}(i-i^{\prime}+1)=\boldsymbol{v}^{H}_{k,j}(i-i^{\prime}+1)\boldsymbol{w}\), it follows that

$$\begin{array}{*{20}l} f_{n} =&\sum\limits_{k=1}^{N}\sum\limits_{i^{\prime}=1}^{m_{k}}\left(\sum\limits_{j=1}^{{\mathcal{M}}_{k}}\sum\limits_{i=i^{\prime}}^{m_{k,j}} \kappa_{k,j,i}\boldsymbol{v}^{H}_{k,j}(i-i^{\prime}+1)\boldsymbol{w}\right) \\ &\quad \times \binom{n}{i^{\prime}-1}\lambda_{k}^{n-i^{\prime}+1}. \end{array} $$
(35)

The exchange of i for \(i^{\prime}\) in (35) brings us to (25) and (26).

For example, let us consider the case where the constants \(\kappa_{k,j,i}\) are all 1. Then u becomes the sum of all the eigenvectors and generalized eigenvectors. Moreover, let \(\boldsymbol{w}=V^{-H}\boldsymbol{\alpha}\) in (25), where α is the M-dimensional vector whose entries are all 1. Then it holds that \(\kappa _{k,j,i^{\prime}}\boldsymbol {v}_{k,j}^{H}(i^{\prime }-i+1)\boldsymbol {w}=\boldsymbol {e}_{k,j}^{\top }(i^{\prime }-i+1)\boldsymbol {\alpha }=1\). Thus, it is concluded that \(c_{k,i}\ne 0\). The above discussion shows that there exists at least one pair of u and w satisfying (27).
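Proposition 1 can be checked on a small example. The sketch below is ours, not from the paper: it uses the single 2-by-2 Jordan block \(A=\bigl(\begin{smallmatrix}2&1\\0&2\end{smallmatrix}\bigr)\), for which \(V=I_{2}\); taking \(\boldsymbol{u}=\boldsymbol{v}(1)+\boldsymbol{v}(2)\) and \(\boldsymbol{w}=(1,1)^{\top}\), formula (25) gives \(c_{1,1}=2\) and \(c_{1,2}=1\).

```python
from math import comb

# Illustrative single 2x2 Jordan block (not from the paper): A = [[2, 1], [0, 2]],
# so lambda_1 = 2, N = 1, m_1 = 2, and V = I_2. With u = v(1) + v(2) and
# w = (1, 1)^T, formula (25) gives c_{1,1} = 2 and c_{1,2} = 1.
def moment(n):
    """f_n = w^H A^n u computed directly from A^n = [[2^n, n 2^(n-1)], [0, 2^n]]."""
    off_diag = n * 2**(n - 1) if n >= 1 else 0
    return (2**n + off_diag) + 2**n      # w sums both entries of A^n u

# Compare with the expansion (26): f_n = C(n,0) c_{1,1} 2^n + C(n,1) c_{1,2} 2^(n-1).
for n in range(8):
    rhs = 2 * 2**n + (comb(n, 1) * 2**(n - 1) if n >= 1 else 0)
    assert moment(n) == rhs
print("expansion (26) confirmed for n = 0, ..., 7")
```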

Proposition 1 leads to a theorem concerning the generating function F(z) with the moments \(f_{n}=\boldsymbol{w}^{H}A^{n}\boldsymbol{u}\).

Theorem 3.

Let F(z) be the generating function with the moments \(f_{n}=\boldsymbol{w}^{H}A^{n}\boldsymbol{u}\). Then F(z) converges absolutely in the disk \(D:|z|<|\lambda_{1}|^{-1}\), and F(z) is expressed as

$$\begin{array}{*{20}l} F(z)=\sum\limits_{k=1}^{N}\sum\limits_{i=1}^{m_{k}}\frac{c_{k,i}z^{i-1}}{(1-\lambda_{k} z)^{i}}. \end{array} $$
(36)

In particular, if \(\lambda_{N}=0\), then F(z) is expressed as

$$\begin{array}{*{20}l} F(z) = &c_{N,1}+c_{N,2}z+\cdots+c_{N,m_{N}}z^{m_{N}-1} \\ &\quad +\sum\limits_{k=1}^{N-1}\sum\limits_{i=1}^{m_{k}}\frac{c_{k,i}z^{i-1}}{(1-\lambda_{k}z)^{i}}. \end{array} $$
(37)

Let us assume that (27) holds for suitable u and w. If \(\lambda_{N}\ne 0\), then F(z) has the finite poles \(\lambda _{1}^{-1},\lambda _{2}^{-1},\dots,\lambda _{N}^{-1}\) of orders \(m_{1},m_{2},\dots,m_{N}\), respectively, and the sum of the orders is \(m=m_{1}+m_{2}+\cdots+m_{N}\). If \(\lambda_{N}=0\), then F(z) has a pole of order \(m_{N}-1\) at infinity and the finite poles \(\lambda _{1}^{-1},\lambda _{2}^{-1},\dots,\lambda _{N-1}^{-1}\) of orders \(m_{1},m_{2},\dots,m_{N-1}\), respectively, and the sum of the orders of all the finite poles is \(m-m_{N}\).

Proof.

By substituting f n in (26) into F(z) in (2), we get

$$\begin{array}{*{20}l} F(z)&=\sum\limits_{k=1}^{N}\sum\limits_{i=1}^{m_{k}}c_{k,i}\left(\sum\limits_{n=0}^{\infty}z^{n}\binom{n}{i-1}\lambda_{k}^{n-i+1}\right)\\ &=\sum\limits_{k=1}^{N}\sum\limits_{i=1}^{m_{k}}c_{k,i}\left(\sum\limits_{n=i-1}^{\infty}z^{n}\binom{n}{i-1}\lambda_{k}^{n-i+1}\right). \end{array} $$
(38)

By letting \(n=n^{\prime}+i-1\), we derive

$$\begin{array}{*{20}l} F(z)=\sum\limits_{k=1}^{N}\sum\limits_{i=1}^{m_{k}}c_{k,i}z^{i-1}\left(\sum\limits_{n^{\prime}=0}^{\infty}\binom{n^{\prime}+i-1}{i-1}(\lambda_{k}z)^{n^{\prime}}\right). \end{array} $$
(39)

It is noted that, for |z|<1,

$$\begin{array}{*{20}l} \sum\limits_{n^{\prime}=0}^{\infty}\binom{n^{\prime}+i-1}{i-1}z^{n^{\prime}}=\frac{1}{(1-z)^{i}}. \end{array} $$
(40)

From (39) and (40), it turns out that F(z) converges absolutely in the disk \(D:|z|<|\lambda_{1}|^{-1}\). Simultaneously, we have (36) for \(z\in D\). It is obvious that (36) with \(\lambda_{N}=0\) becomes (37). Moreover, (36) and (37) immediately lead to the latter half concerning the poles of F(z).
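The negative binomial series identity (40) used above can be spot-checked numerically; the following one-off Python sketch (our illustration, with arbitrarily chosen values z = 0.5 and i = 3) truncates the series and compares it with the closed form.

```python
from math import comb

# Numerical spot-check of the series identity (40) at z = 0.5, i = 3
# (illustrative values): the truncated sum should approach (1 - z)^(-i) = 8.
z, i = 0.5, 3
partial = sum(comb(n + i - 1, i - 1) * z**n for n in range(80))
assert abs(partial - (1 - z)**(-i)) < 1e-9
print("identity (40) holds numerically at z = 0.5, i = 3")
```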

Let \(\psi_{A}(z)\) be the monic polynomial of smallest degree such that \(\psi_{A}(A)=O\); it is called the minimal polynomial of A. Let us recall here that the maximal dimension of the Jordan blocks \(J_{k,1},J_{k,2},\dots,J_{k,{\mathcal {M}}_{k}}\) corresponding to \(\lambda_{k}\) is \(m_{k}\). So, \(\psi_{A}(z)\) is representable as

$$\begin{array}{*{20}l} \psi_{A}(z)=(z-\lambda_{1})^{m_{1}}(z-\lambda_{2})^{m_{2}}\cdots(z-\lambda_{N})^{m_{N}}. \end{array} $$
(41)

Therefore, we have the main theorem in this section for the relationship between the minimal polynomial of a general matrix A and the characteristic polynomial of a tridiagonal matrix \(T_{l}^{(n)}\).

Theorem 4.

Let F(z) be given by the generating function with the moments \(f_{n}=\boldsymbol{w}^{H}A^{n}\boldsymbol{u}\). Let us assume that (6) and (27) hold for suitable u and w. If \(\lambda_{1}\ne 0,\lambda_{2}\ne 0,\dots,\lambda_{N}\ne 0\), then it holds that

$$\begin{array}{*{20}l} \det\left({zI}_{m}-T_{m}^{(n)}\right)=\psi_{A}(z),\quad n=0,1,\dots, \end{array} $$
(42)

otherwise,

$$\begin{array}{*{20}l} \det\left({zI}_{m-m_{N}}-T_{m-m_{N}}^{(n)}\right)=\frac{\psi_{A}(z)}{z^{m_{N}}},\quad n=0,1,\dots. \end{array} $$
(43)

Proof.

Note that the integers L, l, \(l_{k}\) and the complex numbers \(z_{k}\) associated with the tridiagonal matrix \(T_{l}^{(n)}\) in Theorem 2 are given in terms of the integers N, m, \(m_{k}\) and the complex numbers \(\lambda_{k}\) associated with a general matrix A. If \(\lambda_{N}\ne 0\), then it follows from the latter half of Theorem 3 that L=N, l=m, \(l_{0}=0\), \(l_{1}=m_{1},l_{2}=m_{2},\dots,l_{N}=m_{N}\) and \(z_{k}=\lambda _{k}^{-1}\). So, from (11) and (41), we derive (42). Similarly, if \(\lambda_{N}=0\), then L=N−1, \(l=m-m_{N}\), \(l_{0}=m_{N}-1\), \(l_{1}=m_{1},l_{2}=m_{2},\dots,l_{N-1}=m_{N-1}\) and \(z_{k}=\lambda _{k}^{-1}\). Thus, (11) and (41) lead to (43).

Incidentally, the editors in ([9], pp. 444–445) give a simple example with short comments concerning the minimal polynomial, the Jordan canonical form of A and the multiple poles of F(z).

4 Minimal polynomial of tridiagonal matrix

In this section, with the help of the Jordan canonical form, we clarify the relationship of the characteristic polynomial of the tridiagonal matrix \(T_{l}^{(n)}\) to the minimal one.

For simplicity, let us here adopt the following abbreviations for matrices \(T_{s}^{(n)}\),

$$\begin{array}{*{20}l} T_{s}=\left(\begin{array}{cccc} u_{1} & v_{1} & &\\ 1 & u_{2} & \ddots & \\ & \ddots & \ddots & v_{s-1}\\ & & 1 & u_{s} \end{array} \right),\quad s=0,1,\dots,l, \end{array} $$
(44)

where l=m if \(\lambda_{N}\ne 0\) or \(l=m-m_{N}\) if \(\lambda_{N}=0\). Let \(p_{0}(z)=1\) and \(p_{s}(z)=\det(zI_{s}-T_{s})\) for s=1,2,…,l. Then \(p_{l}(z)\) is just the characteristic polynomial of \(T_{l}\), namely,

$$\begin{array}{*{20}l} \phi_{T}(z) = (z-\lambda_{1})^{m_{1}}(z-\lambda_{2})^{m_{2}} \cdots (z-\lambda_{L})^{m_{L}}, \end{array} $$
(45)

where L=N if λ N ≠0 or L=N−1 if λ N =0. The following proposition gives the Jordan canonical form of the tridiagonal matrix T l .

Proposition 2.

There exists a nonsingular matrix P such that

$$\begin{array}{*{20}l} & P^{-1}(T_{l})^{\top}P=\hat{J}, \end{array} $$
(46)
$$\begin{array}{*{20}l} & \hat{J}= \text{diag} (J_{1,1},J_{2,1},\dots,J_{L,1})\in\mathbb{C}^{l\times l}, \end{array} $$
(47)

where J1,1,J2,1,…,JL,1 are of the same form as (23).

Proof.

The characteristic polynomials p0(z), p1(z), …, p l (z) satisfy

$$\begin{array}{*{20}l} \left\{ \begin{array}{lcc} {zp}_{0} (z)=u_{1} p_{0} (z)+p_{1} (z),\\ {zp}_{s} (z)=v_{s} p_{s-1}(z)+u_{s+1}p_{s} (z)+p_{s+1}(z),\\ \qquad s=1,2,\dots,l-1. \end{array} \right. \end{array} $$
(48)

This is easily derived by expanding \(\det(zI_{s}-T_{s})\) along its sth row. By taking the 0th, the 1st, …, the \((m_{k}-1)\)th derivatives with respect to z in (48), we get

$$\begin{array}{*{20}l} \left\{ \begin{array}{ll} zD^{i} p_{0}(z)+iD^{i-1}p_{0}(z)=u_{1} D^{i} p_{0} (z)+D^{i} p_{1} (z),\\ \qquad i=0,1,\dots,m_{k} -1,\\ zD^{i} p_{s} (z)+iD^{i-1}p_{s} (z) \\ \quad =v_{s} D^{i} p_{s-1}(z)+u_{s+1}D^{i} p_{s} (z)+D^{i} p_{s+1}(z),\\ \qquad i=0,1,\dots,m_{k} -1,\quad s=1,2,\dots,l-1, \end{array} \right. \end{array} $$
(49)

where \(D^{i}p_{s}(z)\) denotes the ith derivative of \(p_{s}(z)\) with respect to z. Let \(\boldsymbol {p}_{k,i}=(D^{i}p_{0}(\lambda _{k}),D^{i}p_{1}(\lambda _{k}),\dots,D^{i}p_{l-1}(\lambda _{k}))^{\top }\in \mathbb {C}^{l}\). Then, by substituting \(z=\lambda_{k}\) in (49) and by taking into account that \(D^{i}p_{l}(\lambda _{k})=D^{i}(z-\lambda _{1})^{m_{1}}(z-\lambda _{2})^{m_{2}}\cdots (z-\lambda _{L})^{m_{L}}|_{z=\lambda _{k}}=0\) for \(i=0,1,\dots,m_{k}-1\), we obtain

$$\begin{array}{*{20}l} \lambda_{k}\boldsymbol{p}_{k,i}+i\boldsymbol{p}_{k,i-1}=(T_{l})^{\top}\boldsymbol{p}_{k,i},\quad i=0,1,\dots,m_{k}-1. \end{array} $$
(50)

Moreover, it follows that

$$\begin{array}{*{20}l} \left\{ \begin{array}{ll} (T_{l})^{\top}P_{k,0}=\lambda_{k} P_{k,0},\\ (T_{l})^{\top}P_{k,i}=\lambda_{k} P_{k,i}+P_{k,i-1},\\ \qquad i=1,2,\dots,m_{k}-1, \end{array} \right. \end{array} $$
(51)

where Pk,i=(1/i!)pk,i. Thus, by letting \(P=(P_{1,0}\,P_{1,1}\,\cdots \,P_{1,m_{1}-1}\,|\,P_{2,0}\,P_{2,1}\cdots \,P_{2,m_{2}-1}\,|\cdots \,|\,P_{L,0}\,P_{L,1}\,\cdots \,P_{L,m_{L}-1})\in \mathbb {C}^{l\times l}\), we have \((T_{l})^{\top }P=P\hat {J}\).

Here, it remains to prove that P is nonsingular. Of course, \(P_{k,i}\ne \boldsymbol{0}\), since the \((i+1)\)th entry of \(P_{k,i}\) is \(D^{i}p_{i}(\lambda_{k})/i!=1\). Let \(W_{k,i}=\mathrm{Ker}((T_{l})^{\top}-\lambda_{k}I_{l})^{i}\) for \(i=1,2,\dots,m_{k}\), which is the generalized eigenspace of \((T_{l})^{\top}\) corresponding to \(\lambda_{k}\). Then it is obvious from (51) that \(((T_{l})^{\top}-\lambda_{k}I_{l})P_{k,0}=\boldsymbol{0}\) and \(P_{k,0}\in W_{k,1}\). Eq. (51) with i=1 also leads to \(((T_{l})^{\top}-\lambda_{k}I_{l})^{2}P_{k,1}=\boldsymbol{0}\) and \(P_{k,1}\in W_{k,2}\). Simultaneously, it is observed that \(P_{k,1}\notin W_{k,1}\). Indeed, let us assume that \(P_{k,1}\in W_{k,1}\), namely, \((T_{l})^{\top}P_{k,1}=\lambda_{k}P_{k,1}\). Then, from (51), we derive \(P_{k,0}=\boldsymbol{0}\), which contradicts \(P_{k,0}\ne \boldsymbol{0}\). Thus, it follows that \(P_{k,1}\notin W_{k,1}\). Similarly, by induction for \(i=2,3,\dots,m_{k}-1\) in \(P_{k,i}\), we have

$$\begin{array}{*{20}l} & P_{k,i} \notin W_{k,1},W_{k,2},\dots,W_{k,i},\quad P_{k,i} \in W_{k,i+1}, \\ &\qquad i=1,2,\dots,m_{k}-1. \end{array} $$
(52)

From (52), it turns out that Pk,i for i=0,1,…,m k −1 and k=1,2,…,L are linearly independent. Therefore, it is concluded that P is nonsingular and the Jordan canonical form of (T l ) is given by (46).

Proposition 2 suggests that the minimal polynomial of (T l ) becomes

$$\begin{array}{*{20}l} \psi_{T}(z)=(z-\lambda_{1})^{m_{1}}(z-\lambda_{2})^{m_{2}}\cdots (z-\lambda_{L})^{m_{L}}, \end{array} $$
(53)

which is equal to the characteristic polynomial of \(T_{l}\) in (45). If \(m_{1}=m_{2}=\cdots=m_{L}=1\), then it is obvious that \(T_{l}\) is diagonalizable; otherwise, \(T_{l}\) is not diagonalizable. This is because the multiplicity of each root of the minimal polynomial coincides with the maximal size of the corresponding Jordan blocks. To sum up, we have a theorem for the properties of the tridiagonal matrix \(T_{l}\).

Theorem 5.

The minimal polynomial of \(T_{l}\) is equal to the characteristic one. Also, \(T_{l}\) is a diagonalizable tridiagonal matrix if and only if it has no multiple eigenvalues.

5 Procedure for constructing tridiagonal matrix and its examples

In this section, based on the discussions in the previous sections, we first design a procedure for constructing a tridiagonal matrix with specified multiple eigenvalues. We next give four kinds of examples demonstrating that the resulting procedure can provide tridiagonal matrices with multiple eigenvalues. The examples were carried out on a computer with OS: Mac OS X 10.8.5, CPU: Intel Core i7 2 GHz, RAM: 8 GB, using the scientific computing software Wolfram Mathematica 9.0. In every example, all the entries of u are simply set to 1, and those of w are chosen in a simple manner. The reader will see that it is not difficult to choose u and w so as to satisfy (6) and (27).

Let us here consider the relationship among the five theorems in the previous sections. Theorem 2 shows that the eigenvalues of \(T_{l}^{(n)}\) in the tridiagonal form (10) are equal to the reciprocals of the poles of the generating function F(z), and that the multiplicities of the eigenvalues coincide with those of the poles of F(z). Theorems 3 and 4 claim that the minimal polynomial of a general matrix A, denoted by \(\psi_{A}(z)\), determines the denominator of F(z) involving \(f_{n}=\boldsymbol{w}^{H}A^{n}\boldsymbol{u}\), and that it coincides with the characteristic polynomial of \(T_{l}^{(n)}\), denoted by \(\phi_{T}(z)\), except for the factor corresponding to zero eigenvalues. With the help of Theorem 1, we thus realize that the eigenvalues of \(T_{l}^{(0)}\), whose entries involve \(q_{1}^{(0)},q_{2}^{(0)},\dots,q_{l}^{(0)}\) and \(e_{1}^{(0)},e_{2}^{(0)},\dots,e_{l-1}^{(0)}\), are the nonzero roots of the minimal polynomial \(\psi_{A}(z)\) in the case where \(q_{1}^{(0)},q_{2}^{(0)},\dots,q_{l}^{(0)}\) and \(e_{1}^{(0)},e_{2}^{(0)},\dots,e_{l-1}^{(0)}\) are given by the qd formula (4) under the initial settings \(e_{0}^{(n)}=0\) and \(q_{1}^{(n)}=f_{n+1}/f_{n}\) with \(f_{n}=\boldsymbol{w}^{H}A^{n}\boldsymbol{u}\). See also Figure 1 for the diagram for obtaining \(q_{s}^{(n)}\) and \(e_{s}^{(n)}\) by the qd formula (4). A procedure for constructing \(T=T_{l}^{(0)}\) with the same nonzero eigenvalues as A is therefore as follows.

  1. Set \(l=m\) if \(\lambda_{N}\neq0\), or \(l=m-m_{N}\) if \(\lambda_{N}=0\).

  2. Choose u and w as in (6) and (27).

  3. Compute \(f_{n}=\mathbf{w}^{H}A^{n}\mathbf{u}\) for n=0,1,…,2l−1.

  4. Set \(e_{0}^{(n)}=0\) for n=0,1,…,2l−3.

  5. Compute \(q_{1}^{(n)}=f_{n+1}/f_{n}\) for n=0,1,…,2l−2.

  6. Repeat (a) and (b) for s=2,3,…,l.

     (a) Compute \(e_{s-1}^{(n)}=q_{s-1}^{(n+1)}+e_{s-2}^{(n+1)}-q_{s-1}^{(n)}\) for n=0,1,…,2l−2s+1.

     (b) Compute \(q_{s}^{(n)} = q_{s-1}^{(n+1)}e_{s-1}^{(n+1)}/e_{s-1}^{(n)}\) for n=0,1,…,2l−2s.

  7. Construct a tridiagonal matrix by arranging \(q_{1}^{(0)},q_{2}^{(0)},\dots,q_{l}^{(0)}\) and \(e_{1}^{(0)},e_{2}^{(0)},\dots,e_{l-1}^{(0)}\).
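Steps 4–7 of the procedure can be sketched in exact rational arithmetic. The following is an illustrative implementation, not the authors' code: the function name is ours, and we assume the tridiagonal form (10) carries \(q_{1}^{(0)}, q_{2}^{(0)}+e_{1}^{(0)},\dots\) on the diagonal, ones on the subdiagonal, and \(q_{s}^{(0)}e_{s}^{(0)}\) on the superdiagonal, consistent with the first worked example below.

```python
from fractions import Fraction

def qd_tridiagonal(f, l):
    """Run steps 4-7 on the moment sequence f[0..2l-1] and return the
    l x l tridiagonal matrix T = T_l^{(0)} as nested lists of Fractions."""
    # step 5: q_1^{(n)} = f_{n+1}/f_n
    q = {1: [Fraction(f[n + 1]) / Fraction(f[n]) for n in range(2 * l - 1)]}
    # step 4: e_0^{(n)} = 0 (padded with extra zeros for index convenience)
    e = {0: [Fraction(0)] * (2 * l - 1)}
    for s in range(2, l + 1):  # step 6: the qd rhombus rules (4)
        e[s - 1] = [q[s - 1][n + 1] + e[s - 2][n + 1] - q[s - 1][n]
                    for n in range(2 * l - 2 * s + 2)]
        q[s] = [q[s - 1][n + 1] * e[s - 1][n + 1] / e[s - 1][n]
                for n in range(2 * l - 2 * s + 1)]
    # step 7: arrange q_s^{(0)}, e_s^{(0)} into the tridiagonal form (10):
    # diagonal (q_1, q_2 + e_1, ...), subdiagonal 1, superdiagonal q_s * e_s
    T = [[Fraction(0)] * l for _ in range(l)]
    for i in range(l):
        T[i][i] = q[i + 1][0] + (e[i][0] if i > 0 else Fraction(0))
        if i < l - 1:
            T[i][i + 1] = q[i + 1][0] * e[i + 1][0]
            T[i + 1][i] = Fraction(1)
    return T

# first example: A = diag(2,2,2,1,1,1), u = w = (1,...,1),
# so f_n = w^T A^n u = 3*2^n + 3 and l = 2
f = [3 * 2 ** n + 3 for n in range(4)]
T = qd_tridiagonal(f, 2)
print(T)  # [[3/2, 1/4], [1, 3/2]], as in the first example below
```

Exact `Fraction` arithmetic sidesteps the large relative errors noted in the Introduction for the construction direction of the qd formula in floating point.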

Figure 1. The qd diagram for a tridiagonal matrix construction.

According to Theorem 5, the minimal and the characteristic polynomials of the resulting tridiagonal matrix T are equal to each other. Moreover, T is diagonalizable if and only if it has no multiple eigenvalues.

To obtain T as a tridiagonal matrix with specified eigenvalues, we need to control the eigenvalues of A. The eigenvalues of a diagonal matrix or of a Jordan matrix are easy to specify.

First, in the procedure, let us consider the case where

$$\begin{array}{*{20}l} A=\text{diag}(2,2,2,1,1,1) \in\mathbb{R}^{6\times 6} \end{array} $$

which is a diagonal matrix with two eigenvalues 1 and 2, each of multiplicity 3. Obviously, the characteristic and the minimal polynomials are factorized as (z−1)^3(z−2)^3 and (z−1)(z−2), respectively. So, the integers l and m are immediately determined as l=2 and m=6. Moreover, by letting u=(1,1,1,1,1,1) and w=(1,1,1,1,1,1), we derive the tridiagonal matrix

$$\begin{array}{*{20}l} T=\left(\begin{array}{cc} \frac{3}{2} & \frac{1}{4}\\ 1 & \frac{3}{2} \end{array} \right) \in\mathbb{R}^{2\times 2} \end{array} $$

whose characteristic and minimal polynomials are both factorized as (z−1)(z−2). The tridiagonal matrix T is a diagonalizable matrix with the distinct eigenvalues 1 and 2.

Next, let us adopt as A a bidiagonal matrix, which can be regarded as a Jordan matrix, with the eigenvalue 2 of multiplicity 6, namely,

$$\begin{array}{*{20}l} A=\left(\begin{array}{lcccccc} 2 & 1 & & & &\\ & 2 & 1 & & &\\ & & 2 & 1 & &\\ & & & 2 & 1 &\\ & & & & 2 & 1\\ & & & & & 2 \end{array} \right) \in\mathbb{R}^{6\times 6}, \end{array} $$

in the procedure. Since the characteristic polynomial of A is equal to the minimal one, the integers l and m are determined as l=m=6. Then the procedure with u=(1,1,1,1,1,1) and w=(1,1,0,1,0,1) constructs a tridiagonal matrix, which cannot be symmetrized,

$$\begin{array}{*{20}l} T=\left(\begin{array}{cccccc} \frac{11}{4} & \frac{3}{16} & & & & \\ 1 & \frac{11}{12} & -\frac{4}{9} & & & \\ & 1 & \frac{10}{3} & 3 & & \\ & & 1 & 0 & -8 & \\ & & & 1 & \frac{29}{8} & -\frac{1}{64}\\ & & & & 1 & \frac{11}{8} \end{array} \right) \in\mathbb{R}^{6\times 6}. \end{array} $$

The characteristic and the minimal polynomials of A and T are all the same polynomial in z, which is factorized as (z−2)^6. So, the tridiagonal matrix T is not diagonalizable.
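As a consistency check for this example (ours, not part of the paper), the characteristic polynomial of the 6×6 matrix T above can be computed exactly with the standard three-term recurrence for tridiagonal determinants; the function name below is an assumption of this sketch.

```python
from fractions import Fraction as F

def charpoly_tridiag(d, sup, sub):
    """Coefficients of det(zI - T), constant term first, for tridiagonal T,
    via p_k(z) = (z - d_k) p_{k-1}(z) - sup_{k-1} sub_{k-1} p_{k-2}(z)."""
    p_prev, p = [F(1)], [-F(d[0]), F(1)]
    for k in range(1, len(d)):
        q = [F(0)] + p                              # z * p_{k-1}
        q = [q[i] - F(d[k]) * (p[i] if i < len(p) else 0)
             for i in range(len(q))]                # (z - d_k) * p_{k-1}
        c = F(sup[k - 1]) * F(sub[k - 1])
        for i in range(len(p_prev)):                # - sup*sub * p_{k-2}
            q[i] -= c * p_prev[i]
        p_prev, p = p, q
    return p

# diagonal, superdiagonal, and subdiagonal of T in the second example
d = [F(11, 4), F(11, 12), F(10, 3), F(0), F(29, 8), F(11, 8)]
sup = [F(3, 16), F(-4, 9), F(3), F(-8), F(-1, 64)]
coeffs = charpoly_tridiag(d, sup, [F(1)] * 5)
print(coeffs)  # coefficients of (z-2)^6: [64, -192, 240, -160, 60, -12, 1]
```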

Let us prepare the Jordan matrix

$$\begin{array}{*{20}l} A=\left(\begin{array}{cccccccc} 3 & 1 & & & & & & \\ & 3 & 1 & & & & & \\ & & 3 & & & & & \\ & & & 3 & 1 & & & \\ & & & & 3 & & & \\ & & & & & 3 & & \\ & & & & & & 2 & 1\\ & & & & & & & 2 \end{array} \right) \in\mathbb{R}^{8\times 8}. \end{array} $$

The matrix A has the multiple eigenvalues \(\lambda_{1}=3\), \(\lambda_{2}=3\), \(\lambda_{3}=3\), \(\lambda_{4}=3\), \(\lambda_{5}=3\), \(\lambda_{6}=3\), \(\lambda_{7}=2\), \(\lambda_{8}=2\). It is noted that \(|\lambda_{1}|=|\lambda_{2}|=|\lambda_{3}|=|\lambda_{4}|=|\lambda_{5}|=|\lambda_{6}|>|\lambda_{7}|=|\lambda_{8}|>0\). The characteristic and the minimal polynomials of A are factorized as (z−2)^2(z−3)^6 and (z−2)^2(z−3)^3, respectively. So, let l=5 and m=8 in the procedure. Then the settings u=(1,1,1,1,1,1,1,1) and w=(1,1,1,1,1,1,1,1) lead to a tridiagonal matrix, which cannot be symmetrized,

$$\begin{array}{*{20}l} T=\left(\begin{array}{ccccc} \frac{13}{4} & \frac{1}{16} & & & \\ 1 & \frac{17}{4} & -\frac{13}{2} & & \\ & 1 & -\frac{11}{26} & \frac{116}{169} & \\ & & 1 & \frac{1232}{377} & -\frac{13}{841}\\ & & & 1 & \frac{77}{29} \end{array} \right) \in\mathbb{R}^{5\times 5} \end{array} $$

whose characteristic and minimal polynomials are both factorized as (z−2)^2(z−3)^3, which is just the minimal polynomial of A. The tridiagonal matrix T is a non-diagonalizable matrix with the eigenvalues 2 and 3 of multiplicities 2 and 3, respectively.
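Step 3 of the procedure can be illustrated on this example with exact arithmetic. The sketch below is ours (the helper name `moments` is an assumption): it builds the 8×8 Jordan matrix above and checks that \(q_{1}^{(0)}=f_{1}/f_{0}\) reproduces the (1,1) entry 13/4 of T.

```python
from fractions import Fraction as F

def moments(A, u, w, count):
    """f_n = w^H A^n u for n = 0,...,count-1 (real data, so w^H = w^T),
    computed with exact rational arithmetic."""
    x = [F(c) for c in u]
    out = []
    for _ in range(count):
        out.append(sum(F(wi) * xi for wi, xi in zip(w, x)))
        x = [sum(F(A[i][j]) * x[j] for j in range(len(x)))
             for i in range(len(x))]  # x <- A x
    return out

# Jordan matrix with blocks J_3(3), J_2(3), J_1(3), J_2(2) (third example)
n = 8
A = [[0] * n for _ in range(n)]
for i in range(n):
    A[i][i] = 3 if i < 6 else 2
for i in (0, 1, 3, 6):
    A[i][i + 1] = 1
u = w = [1] * n
f = moments(A, u, w, 10)  # n = 0,...,2l-1 with l = 5
print(f[1] / f[0])        # q_1^{(0)} = 13/4, the (1,1) entry of T
```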

Finally, let A be the Jordan matrix with the complex eigenvalues 2+i and 2−i, each of multiplicity 2, and the distinct real eigenvalues 1 and 2, namely,

$$\begin{array}{*{20}l} A=\left(\begin{array}{cccccc} 2+i & 1 & & & &\\ & 2+i & & & &\\ & & 2-i & 1 & &\\ & & & 2-i & &\\ & & & & 2 & \\ & & & & & 1 \end{array} \right) \in\mathbb{C}^{6\times 6}, \end{array} $$

in the procedure. Taking into account that the characteristic and the minimal polynomials of A are equal to each other, let l=m=6. Under the settings u=(1,1,1,1,1,1) and w=(1,1,1,1,1,1), the resulting matrix T is a real tridiagonal matrix, which cannot be symmetrized,

$$\begin{array}{*{20}l} T=\left(\begin{array}{cccccc} \frac{13}{6} & -\frac{19}{36} & & & & \\ 1 & \frac{443}{114} & -\frac{1920}{361} & & & \\ & 1 & -\frac{363}{760} & -\frac{209}{1600} & & \\ & & 1 & \frac{1187}{440} & -\frac{240}{121} & \\ & & & 1 & \frac{37}{66} & \frac{11}{36}\\ & & & & 1 & \frac{13}{6} \end{array} \right) \in\mathbb{R}^{6\times 6}. \end{array} $$

The characteristic and the minimal polynomials of A and T are all the same polynomial in z, which is factorized as (z−2+i)^2(z−2−i)^2(z−2)(z−1). So, the tridiagonal matrix T is a non-diagonalizable matrix with the same complex multiple eigenvalues and distinct real eigenvalues as A.
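For this last example one can check numerically that, although A is complex, the moments \(f_{n}=\mathbf{w}^{H}A^{n}\mathbf{u}\) are all real, since the contributions of the conjugate Jordan blocks cancel; this is why T comes out real. The short sketch below is ours (variable names are assumptions) and also recovers the (1,1) entry \(q_{1}^{(0)}=13/6\) of T.

```python
# Fourth example: Jordan blocks J_2(2+i), J_2(2-i) plus simple
# eigenvalues 2 and 1; conjugate pairing makes every f_n real.
n = 6
lam = [2 + 1j, 2 + 1j, 2 - 1j, 2 - 1j, 2, 1]
A = [[0j] * n for _ in range(n)]
for i in range(n):
    A[i][i] = lam[i]
A[0][1] = A[2][3] = 1  # the two Jordan blocks
u = w = [1] * n        # w is real, so w^H = w^T
f, x = [], [complex(c) for c in u]
for _ in range(2 * n):  # f_0, ..., f_{2l-1} with l = 6
    f.append(sum(wi * xi for wi, xi in zip(w, x)))
    x = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
print([fn.imag for fn in f])  # all (numerically) zero
print(f[1] / f[0])            # q_1^{(0)} = 13/6, the (1,1) entry of T
```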

6 Conclusion

In this paper, we clarify that the qd recursion formula is applicable to constructing a tridiagonal matrix with specified multiple eigenvalues. We first investigate the denominator of the generating function associated with the sequence given from two suitable vectors and the powers of a general matrix A, by considering the Jordan canonical form of A. Accordingly, it is observed that the minimal polynomial of A coincides with the characteristic polynomial of a tridiagonal matrix T, denoted by \(\phi_{T}(z)\), or with the polynomial \(z^{m_{L}}\phi_{T}(z)\), where \(m_{L}\) is the multiplicity of the zero eigenvalue of A. Next, by taking account of the Jordan canonical form of T, we show that the characteristic and the minimal polynomials of T are equal to each other. We finally present a procedure for constructing a tridiagonal matrix with specified multiple eigenvalues, and give four examples illustrating the resulting procedure.


Acknowledgements

The authors would like to thank the reviewer for his/her careful reading and beneficial suggestions. This work is supported by JSPS KAKENHI Grant Number 23654032.


Correspondence to Kanae Akaiwa.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.



Cite this article

Akaiwa, K., Iwasaki, M., Kondo, K. et al. A tridiagonal matrix construction by the quotient difference recursion formula in the case of multiple eigenvalues. Pac. J. Math. Ind. 6, 10 (2014). https://doi.org/10.1186/s40736-014-0010-0
