University of Ljubljana
Institute of Mathematics, Physics and Mechanics
Department of Mathematics
Jadranska 19, 1000 Ljubljana, Slovenia

Preprint series, Vol. 47 (2009), 1086

ON THE SINGULAR TWO-PARAMETER EIGENVALUE PROBLEM

Andrej Muhič, Bor Plestenjak

ISSN 1318-4865

Ljubljana, March 18, 2009

ON THE SINGULAR TWO-PARAMETER EIGENVALUE PROBLEM*

ANDREJ MUHIČ† AND BOR PLESTENJAK‡

* Version: March 17, 2009.
† Institute of Mathematics, Physics and Mechanics, Jadranska 19, SI-1000 Ljubljana, Slovenia (andrej.muhic@fmf.uni-lj.si). Supported by the Ministry of Higher Education, Science and Technology of Slovenia.
‡ Department of Mathematics, University of Ljubljana, Jadranska 19, SI-1000 Ljubljana, Slovenia (bor.plestenjak@fmf.uni-lj.si). Supported in part by the Ministry of Higher Education, Science and Technology of Slovenia.

Abstract. In the 1960s Atkinson introduced an abstract algebraic setting for multiparameter eigenvalue problems. He showed that a nonsingular multiparameter eigenvalue problem is equivalent to the associated system of generalized eigenvalue problems. Many theoretical results and numerical methods for nonsingular multiparameter eigenvalue problems are based on this relation. We extend the above relation to singular two-parameter eigenvalue problems and show that the simple finite regular eigenvalues of both problems agree. This enables one to solve a singular two-parameter eigenvalue problem by computing the common regular eigenvalues of the associated system of two singular generalized eigenvalue problems.

Key words. Singular two-parameter eigenvalue problem, operator determinant, Kronecker canonical form, minimal reducing subspace

AMS subject classifications. 15A18, 15A69, 65F15.

1. Introduction. We consider the algebraic two-parameter eigenvalue problem

(1.1)    $W_1(\lambda, \mu)x_1 := (A_1 + \lambda B_1 + \mu C_1)x_1 = 0,$
         $W_2(\lambda, \mu)x_2 := (A_2 + \lambda B_2 + \mu C_2)x_2 = 0,$

where $A_i, B_i$, and $C_i$ are $n_i \times n_i$ matrices over $\mathbb{C}$, $\lambda, \mu \in \mathbb{C}$, and $x_i \in \mathbb{C}^{n_i}$. A pair $(\lambda, \mu)$ is an eigenvalue if it satisfies (1.1) for nonzero vectors $x_1, x_2$, and the tensor product $x_1 \otimes x_2$ is the corresponding (right) eigenvector. Similarly, $y_1 \otimes y_2$ is the corresponding left eigenvector if $y_i \neq 0$ and $y_i^* W_i(\lambda, \mu) = 0$ for $i = 1, 2$. The eigenvalues of (1.1) are the roots of the following system of two bivariate polynomials:

(1.2)    $p_1(\lambda, \mu) := \det(W_1(\lambda, \mu)) = 0,$
         $p_2(\lambda, \mu) := \det(W_2(\lambda, \mu)) = 0.$

A two-parameter eigenvalue problem can be expressed as two coupled generalized eigenvalue problems. On the tensor product space $S := \mathbb{C}^{n_1} \otimes \mathbb{C}^{n_2}$ of dimension $N := n_1 n_2$ we define the operator determinants

         $\Delta_0 = B_1 \otimes C_2 - C_1 \otimes B_2,$
(1.3)    $\Delta_1 = C_1 \otimes A_2 - A_1 \otimes C_2,$
         $\Delta_2 = A_1 \otimes B_2 - B_1 \otimes A_2;$

for details see, e.g., [1]. The problem (1.1) is then related to a coupled pair of generalized eigenvalue problems

(1.4)    $\Delta_1 z = \lambda \Delta_0 z,$
         $\Delta_2 z = \mu \Delta_0 z$

for decomposable tensors $z \in S$, $z = x_1 \otimes x_2$. Usually we assume that the two-parameter eigenvalue problem (1.1) is nonsingular, i.e., the corresponding operator determinant $\Delta_0$ is nonsingular. In this case the matrices $\Delta_0^{-1}\Delta_1$ and $\Delta_0^{-1}\Delta_2$ commute and the eigenvalues of (1.1) agree with the eigenvalues of (1.4). By applying this relation, a nonsingular two-parameter eigenvalue problem can be solved numerically using standard tools for generalized eigenvalue problems; for an algorithm see, e.g., [8]. However, several applications lead to singular two-parameter eigenvalue problems, where $\Delta_0$ is singular and (1.4) is a pair of singular generalized eigenvalue problems. Two such examples are model updating [3] and the quadratic two-parameter eigenvalue problem [12].
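The relation between (1.1) and (1.4) is easy to explore numerically. The following Python sketch (our own illustration; it uses NumPy and SciPy, random placeholder matrices, and assumes that $\Delta_0$ is nonsingular) builds the operator determinants (1.3) with Kronecker products and recovers the eigenvalue pairs $(\lambda, \mu)$ from the coupled problems (1.4).

import numpy as np
from scipy.linalg import eig

def delta_matrices(A1, B1, C1, A2, B2, C2):
    # Operator determinants (1.3) on the tensor product space of dimension n1*n2.
    D0 = np.kron(B1, C2) - np.kron(C1, B2)
    D1 = np.kron(C1, A2) - np.kron(A1, C2)
    D2 = np.kron(A1, B2) - np.kron(B1, A2)
    return D0, D1, D2

# Placeholder data: random matrices give a nonsingular Delta_0 and distinct
# eigenvalues almost surely, so the eigenvectors of the first problem in (1.4)
# are decomposable tensors and satisfy the second problem as well.
rng = np.random.default_rng(0)
n1, n2 = 3, 4
A1, B1, C1 = (rng.standard_normal((n1, n1)) for _ in range(3))
A2, B2, C2 = (rng.standard_normal((n2, n2)) for _ in range(3))
D0, D1, D2 = delta_matrices(A1, B1, C1, A2, B2, C2)

lam, Z = eig(D1, D0)                 # Delta_1 z = lambda Delta_0 z
mu = np.empty_like(lam)
for k in range(Z.shape[1]):
    z = Z[:, k]
    w = D0 @ z                       # nonzero, since Delta_0 is nonsingular here
    mu[k] = (w.conj() @ (D2 @ z)) / (w.conj() @ w)   # Delta_2 z = mu Delta_0 z

# Sanity check: each pair (lambda, mu) should make both W_1 and W_2 singular.
for l, m in zip(lam[:3], mu[:3]):
    s1 = np.linalg.svd(A1 + l * B1 + m * C1, compute_uv=False)[-1]
    s2 = np.linalg.svd(A2 + l * B2 + m * C2, compute_uv=False)[-1]
    print(l, m, s1, s2)              # s1 and s2 are tiny (numerically zero)

For a singular problem the call to eig above is not reliable, which is precisely the difficulty addressed in the rest of the paper.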
Apart from a few theoretical results and numerical methods in [3] and [12], which cover only very specific examples, the singular two-parameter eigenvalue problem is still open. We extend Atkinson's results from [1] and show that simple eigenvalues of the singular two-parameter eigenvalue problem (1.1) can be computed from the eigenvalues of the corresponding pair of singular generalized eigenvalue problems (1.4). This opens new possibilities in the study of singular two-parameter eigenvalue problems. The new results justify that the numerical method presented in [12] can be applied not only to the linearization of a quadratic two-parameter eigenvalue problem but also to a general singular two-parameter eigenvalue problem.

Definition 1.1. The normal rank of a two-parameter matrix pencil $W_i(\lambda, \mu)$ is

$\mathrm{nrank}(W_i(\lambda, \mu)) = \max_{(\lambda, \mu) \in \mathbb{C}^2} \mathrm{rank}(W_i(\lambda, \mu))$

for $i = 1, 2$. A pair $(\lambda_*, \mu_*) \in \mathbb{C}^2$ is a finite regular eigenvalue of the two-parameter eigenvalue problem (1.1) if $\mathrm{rank}(W_i(\lambda_*, \mu_*)) < \mathrm{nrank}(W_i(\lambda, \mu))$ for $i = 1, 2$. The geometric multiplicity of the eigenvalue $(\lambda_*, \mu_*)$ is equal to

$\prod_{i=1}^{2} \bigl(\mathrm{nrank}(W_i(\lambda, \mu)) - \mathrm{rank}(W_i(\lambda_*, \mu_*))\bigr).$

Throughout this paper we assume that the two-parameter eigenvalue problem (1.1) is regular, which means that both matrix pencils $W_1(\lambda, \mu)$ and $W_2(\lambda, \mu)$ have full normal rank, i.e., $\mathrm{nrank}(W_i(\lambda, \mu)) = n_i$ for $i = 1, 2$. This is equivalent to the condition that neither of the polynomials $p_1$ and $p_2$ is identically zero. We also assume that $p_1$ and $p_2$ do not have a nontrivial common divisor, because this would lead to infinitely many eigenvalues. If the greatest common divisor of $p_1$ and $p_2$ is a scalar, then (1.1) has (counting with multiplicities) $k \le N$ finite regular eigenvalues, where $k \le \mathrm{rank}(\Delta_0)$. If the problem (1.1) is nonsingular, then $k = N$; if (1.1) is singular, then $k < N$ and the remaining $N - k$ eigenvalues, which belong to the singular part, can be considered as infinite eigenvalues.

In addition, we assume that neither $p_1$ nor $p_2$ has a factor that depends on exactly one of the variables $\lambda$ or $\mu$. If this is not true, then we can fix it by applying a linear substitution

$\begin{bmatrix} \lambda \\ \mu \end{bmatrix} = M \begin{bmatrix} \lambda' \\ \mu' \end{bmatrix}$

on the variables of (1.1), where $M$ is a nonsingular $2 \times 2$ matrix. As a result, for each $\lambda$ there exists a $\mu$ (and vice versa) such that $p_i(\lambda, \mu) \neq 0$ for $i = 1, 2$.

Definition 1.2. The normal rank of a square matrix pencil $A - \lambda B$ is

$\mathrm{nrank}(A - \lambda B) = \max_{\lambda \in \mathbb{C}} \mathrm{rank}(A - \lambda B).$

A scalar $\lambda_* \in \mathbb{C}$ is a finite regular eigenvalue of the matrix pencil if $\mathrm{rank}(A - \lambda_* B) < \mathrm{nrank}(A - \lambda B)$. The geometric multiplicity of $\lambda_*$ is $\mathrm{nrank}(A - \lambda B) - \mathrm{rank}(A - \lambda_* B)$.

In the next section we introduce the Kronecker canonical form and other auxiliary results. In Section 3 we show that all simple eigenvalues of a singular two-parameter eigenvalue problem (1.1) agree with the finite regular eigenvalues of the associated pair of generalized eigenvalue problems (1.4). In Section 4 we give examples of small two-parameter eigenvalue problems that support the theory, and in the final section we review how to numerically solve a singular two-parameter eigenvalue problem using the algorithm for the computation of the common regular subspace of a singular matrix pencil presented in [12].
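The normal rank in Definitions 1.1 and 1.2 is attained for generic parameter values, so it can be estimated by evaluating the rank at a few random points. The following Python sketch (the function names, the sampling heuristic and the toy pencil are ours, not part of the paper) illustrates the rank test of Definition 1.1 for a single pencil.

import numpy as np

def normal_rank(A, B, C, samples=5, seed=0):
    # Probabilistic estimate of nrank(A + lambda*B + mu*C): the maximal rank is
    # attained for generic (lambda, mu), so a few random samples suffice.
    rng = np.random.default_rng(seed)
    return max(np.linalg.matrix_rank(A + l * B + m * C)
               for l, m in rng.standard_normal((samples, 2)))

def drops_rank(A, B, C, lam, mu):
    # Rank test of Definition 1.1 for a single pencil W_i; a pair (lam, mu) is a
    # finite regular eigenvalue of (1.1) only if both W_1 and W_2 drop rank there.
    return np.linalg.matrix_rank(A + lam * B + mu * C) < normal_rank(A, B, C)

# Toy pencil W(lambda, mu) = diag(lambda - 1, mu - 2): normal rank 2, and the
# rank drops for lambda = 1 (with any mu), e.g. at (1, 5).
A = np.diag([-1.0, -2.0]); B = np.diag([1.0, 0.0]); C = np.diag([0.0, 1.0])
print(normal_rank(A, B, C), drops_rank(A, B, C, 1.0, 5.0))   # prints: 2 True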
2. Auxiliary results. In this section we review the Kronecker canonical form and Kronecker chains of a matrix pencil. More about the Kronecker canonical form and its numerical computation can be found in, e.g., [4], [5], [7], and [14].

Definition 2.1. Let $A - \lambda B \in \mathbb{C}^{m \times n}$ be a matrix pencil. Then there exist nonsingular matrices $P \in \mathbb{C}^{m \times m}$ and $Q \in \mathbb{C}^{n \times n}$ such that

$P^{-1}(A - \lambda B)Q = \widetilde{A} - \lambda \widetilde{B} = \mathrm{diag}(\widetilde{A}_1 - \lambda \widetilde{B}_1, \dots, \widetilde{A}_k - \lambda \widetilde{B}_k)$

is the Kronecker canonical form. Each block $\widetilde{A}_i - \lambda \widetilde{B}_i$, $i = 1, \dots, k$, must be of one of the following forms: $J_p(\alpha)$, $N_p$, $L_p$, or $L_p^T$, where

$J_p(\alpha) = \begin{bmatrix} \alpha - \lambda & 1 & & \\ & \ddots & \ddots & \\ & & \ddots & 1 \\ & & & \alpha - \lambda \end{bmatrix}, \qquad N_p = \begin{bmatrix} 1 & -\lambda & & \\ & \ddots & \ddots & \\ & & \ddots & -\lambda \\ & & & 1 \end{bmatrix},$

$L_p = \begin{bmatrix} -\lambda & 1 & & \\ & \ddots & \ddots & \\ & & -\lambda & 1 \end{bmatrix}, \qquad L_p^T = \begin{bmatrix} -\lambda & & \\ 1 & \ddots & \\ & \ddots & -\lambda \\ & & 1 \end{bmatrix}$

are of sizes $p \times p$, $p \times p$, $p \times (p+1)$, and $(p+1) \times p$, and represent a finite regular block, an infinite regular block, a right singular block, and a left singular block, respectively.

To each Kronecker block of the matrix pencil $A - \lambda B$ we can associate a Kronecker chain of linearly independent vectors as follows (see, e.g., [11]):
a) A finite regular block $J_p(\alpha)$ is associated with vectors $u_1, \dots, u_p$ that satisfy $(A - \alpha B)u_1 = 0$ and $(A - \alpha B)u_{i+1} = Bu_i$, $i = 1, \dots, p - 1$.
b) An infinite regular block $N_p$ is associated with vectors $u_1, \dots, u_p$ that satisfy $Bu_1 = 0$ and $Bu_{i+1} = Au_i$, $i = 1, \dots, p - 1$.
c) A right singular block $L_p$ is associated with vectors $u_1, \dots, u_{p+1}$ that satisfy $Au_1 = 0$, $Au_{i+1} = Bu_i$, $i = 1, \dots, p$, and $Bu_{p+1} = 0$.
d) A left singular block $L_p^T$, $p \ge 1$, is associated with vectors $u_1, \dots, u_p$ that satisfy $Au_i = Bu_{i+1}$, $i = 1, \dots, p - 1$.

The union of the Kronecker chains for all Kronecker blocks is a basis for $\mathbb{C}^n$. We say that a subspace $\mathcal{M} \subseteq \mathbb{C}^n$ is a reducing subspace for the pencil $A - \lambda B$ if $\dim(A\mathcal{M} + B\mathcal{M}) = \dim(\mathcal{M}) - s$, where $s$ is the number of the right singular blocks $L_p$ in the Kronecker canonical form of $A - \lambda B$. The vectors from the Kronecker chains of all right singular blocks $L_p$ form a basis for the minimal reducing subspace $\mathcal{R}(A, B)$, which is a subset of all reducing subspaces. The minimal reducing subspace is unique and can be computed numerically in a stable way from the generalized upper triangular form (GUPTRI), see, e.g., [4, 5].

Definition 2.2. Let $\lambda_*$ be a finite regular eigenvalue of a matrix pencil $A - \lambda B$ and let $(A - \lambda_* B)z = 0$. We say that $z$ is a regular eigenvector if $z$ does not belong to the minimal reducing subspace $\mathcal{R}(A, B)$. Otherwise, we say that $z$ is a singular eigenvector.

It is trivial to construct a basis for the kernel of the tensor product $A \otimes D$ from the kernels of $A$ and $D$. The task is much harder if we take a difference of two tensor products, which is the form of the operator determinants (1.3). Košir showed in [11] that the kernel of an operator determinant $\Delta = A \otimes D - B \otimes C$ can be constructed from the Kronecker chains of the matrix pencils $A - \lambda B$ and $C - \mu D$.

Theorem 2.3 ([11, Theorem 4]). A basis for the kernel of $\Delta = A \otimes D - B \otimes C$ is the union of sets of linearly independent vectors that belong to the following pairs of Kronecker blocks of the pencils $A - \lambda B$ and $C - \mu D$, respectively:
a) $J_{p_1}(\alpha)$ and $J_{p_2}(\alpha)$,
b) $N_{p_1}$ and $N_{p_2}$,
c) $L_{p_1}$ and $L_{p_2}$,
d) $L_{p_1}$ and $L_{p_2}^T$, where $p_1 < p_2$,
f) $L_{p_1}$ and $J_{p_2}(\alpha)$,
g) $J_{p_1}(\alpha)$ and $L_{p_2}$.
Each pair is associated with a set of linearly independent vectors. The construction for a pair from points a) and b) is as follows: if $u_1, \dots, u_{p_1}$ and $v_1, \dots, v_{p_2}$ are the associated Kronecker chains for the pencils $A - \lambda B$ and $C - \mu D$, respectively, then the vectors

$z_j = \sum_{i=1}^{j} u_i \otimes v_{j+1-i}, \qquad j = 1, \dots, \min(p_1, p_2),$

form a set of linearly independent vectors for this pair of Kronecker blocks.

In the above theorem we omitted the constructions for all pairs of Kronecker blocks that are not relevant for our case. For a complete description see [11].
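As a small sanity check of the construction in Theorem 2.3, the following Python sketch (a toy example of our own: both pencils consist of a single block $J_2(\alpha)$ with $\alpha = 2$, so $A$ and $C$ are Jordan blocks and $B = D = I$) builds $z_1$ and $z_2$ from the Kronecker chains and verifies that they lie in the kernel of $\Delta = A \otimes D - B \otimes C$.

import numpy as np

alpha = 2.0
# Pencil A - lambda*B consisting of a single block J_2(alpha): A is a Jordan
# block at alpha and B = I, so (A - alpha*B)u1 = 0 and (A - alpha*B)u2 = B u1.
A = np.array([[alpha, 1.0], [0.0, alpha]]); B = np.eye(2)
# Pencil C - mu*D with the same single block J_2(alpha).
C = A.copy(); D = np.eye(2)

u = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]   # Kronecker chain for A - lambda*B
v = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]   # Kronecker chain for C - mu*D

Delta = np.kron(A, D) - np.kron(B, C)
z1 = np.kron(u[0], v[0])                            # z_1 = u_1 (x) v_1
z2 = np.kron(u[0], v[1]) + np.kron(u[1], v[0])      # z_j = sum_i u_i (x) v_{j+1-i}
print(np.linalg.norm(Delta @ z1), np.linalg.norm(Delta @ z2))   # both are 0.0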
We will also need the following relations between a submatrix of a matrix and its complementary submatrix in the inverse matrix.

Lemma 2.4. Let $A$ be a nonsingular $n \times n$ matrix, and let $A$ and $A^{-1}$ be partitioned as

$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}, \qquad A^{-1} = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix},$

where $A_{11}$ and $B_{11}$ are $r \times r$ matrices, $1 \le r < n$. Then
a) $\dim(\mathrm{Ker}(A_{11})) = \dim(\mathrm{Ker}(B_{22}))$,
b) $\det(A_{11}) = \det(B_{22}) \det(A)$.

Proof. Point a) is the well-known nullity theorem by Fiedler and Markham [6]. If $A_{11}$ is singular, then point a) yields that $B_{22}$ is singular as well, so $\det(A_{11}) = \det(B_{22}) = 0$ and point b) is true. If $A_{11}$ is nonsingular, then we can write

$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} = \begin{bmatrix} I & 0 \\ A_{21}A_{11}^{-1} & I \end{bmatrix} \begin{bmatrix} A_{11} & 0 \\ 0 & S \end{bmatrix} \begin{bmatrix} I & A_{11}^{-1}A_{12} \\ 0 & I \end{bmatrix},$

where $S = A_{22} - A_{21}A_{11}^{-1}A_{12}$ is the Schur complement of $A_{11}$. The proof now follows from the relations $\det(A) = \det(A_{11})\det(S)$ and $B_{22} = S^{-1}$. □

3. Delta matrices and simple eigenvalues. In the nonsingular case the eigenvalues of (1.1) agree with the eigenvalues of the associated pair of generalized eigenvalue problems (1.4), see, e.g., [1]. In this section we show that in a similar way the finite regular eigenvalues of (1.1) are related to the finite regular eigenvalues of (1.4).

Definition 3.1. A pair $(\lambda_*, \mu_*) \in \mathbb{C}^2$ is a finite regular eigenvalue of the matrix pencils $\Delta_1 - \lambda\Delta_0$ and $\Delta_2 - \mu\Delta_0$ if the following is true:
a) $\lambda_*$ is a finite regular eigenvalue of $\Delta_1 - \lambda\Delta_0$,
b) $\mu_*$ is a finite regular eigenvalue of $\Delta_2 - \mu\Delta_0$,
c) there exists a common regular eigenvector $z$, i.e., $z \neq 0$ such that $(\Delta_1 - \lambda_*\Delta_0)z = 0$, $(\Delta_2 - \mu_*\Delta_0)z = 0$, and $z \notin \mathcal{R}(\Delta_i, \Delta_0)$ for $i = 1, 2$.
The geometric multiplicity of $(\lambda_*, \mu_*)$ is $\dim(\mathcal{N}) - \dim(\mathcal{N} \cap \mathcal{R}(\Delta_1, \Delta_0) \cap \mathcal{R}(\Delta_2, \Delta_0))$, where $\mathcal{N} = \mathrm{Ker}(\Delta_1 - \lambda_*\Delta_0) \cap \mathrm{Ker}(\Delta_2 - \mu_*\Delta_0)$.

In order to obtain a general result, instead of the linear two-parameter eigenvalue problem (1.1) we consider a nonlinear two-parameter eigenvalue problem

(3.1)    $T_1(\lambda, \mu)x_1 = 0,$
         $T_2(\lambda, \mu)x_2 = 0,$

where $T_i(\cdot, \cdot) : \mathbb{C} \times \mathbb{C} \to \mathbb{C}^{n_i \times n_i}$ is differentiable for $i = 1, 2$. If (3.1) is satisfied for nonzero vectors $x_1$ and $x_2$, then $(\lambda, \mu)$ is an eigenvalue and $x_1 \otimes x_2$ is the corresponding right eigenvector. For an algebraically simple eigenvalue $(\lambda, \mu)$ of the linear problem (1.1) with right eigenvector $x_1 \otimes x_2$ and left eigenvector $y_1 \otimes y_2$, one obtains (Corollary 3.3)

$(y_1 \otimes y_2)^* \Delta_0 (x_1 \otimes x_2) = \det \begin{bmatrix} y_1^* B_1 x_1 & y_1^* C_1 x_1 \\ y_2^* B_2 x_2 & y_2^* C_2 x_2 \end{bmatrix} \neq 0.$

Let us remark that the result in Corollary 3.3 was obtained for the nonsingular multiparameter eigenvalue problem by Košir in [10, Lemma 3]. The connection (3.9) between the Jacobian matrix of the polynomial system (1.2) and the matrix

$\begin{bmatrix} y_1^* B_1 x_1 & y_1^* C_1 x_1 \\ y_2^* B_2 x_2 & y_2^* C_2 x_2 \end{bmatrix}$

was established for the nonsingular right definite two-parameter eigenvalue problem in [9, Proposition 13].

Lemma 3.4. Let $A(\cdot) : \mathbb{C} \to \mathbb{C}^{n \times n}$ be a matrix polynomial and let $\lambda_*$ be its eigenvalue, i.e., $\det(A(\lambda_*)) = 0$, with the corresponding right and left eigenvectors $x$ and $y$, respectively. If

(3.11)   $y^* A'(\lambda_*) x \neq 0,$

then $\lambda_*$ is a finite regular eigenvalue.

Proof. It suffices to show that the eigenvector $x$ is not part of the singular subspace of $A(\lambda)$, which is composed of vector polynomials $u(\lambda)$ such that $A(\lambda)u(\lambda) = 0$ for all $\lambda$. Suppose that there exists a polynomial $u(\lambda)$ such that

(3.12)   $A(\lambda)u(\lambda) = 0$ for all $\lambda$, and $u(\lambda_*) = x$.

If we differentiate (3.12) at $\lambda = \lambda_*$ then we obtain $A(\lambda_*)u'(\lambda_*) + A'(\lambda_*)x = 0$. When we multiply this equality by $y^*$ we get $y^* A'(\lambda_*) x = 0$, which contradicts the assumption (3.11). So, such a polynomial $u(\lambda)$ does not exist and $x$ is not in the singular subspace. Therefore, the rank of $A(\lambda)$ drops at $\lambda = \lambda_*$ and $\lambda_*$ is a finite regular eigenvalue. □
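For a linear pencil $A(\lambda) = A - \lambda B$ we have $A'(\lambda) = -B$, so condition (3.11) is simply $y^* B x \neq 0$. The following Python sketch illustrates the test on a small hand-built singular pencil (our own toy example, not one from the paper): for $A = \mathrm{diag}(2, 0)$ and $B = \mathrm{diag}(1, 0)$ the pencil has normal rank 1, Kronecker blocks $J_1(2)$, $L_0$ and $L_0^T$, and the finite regular eigenvalue $\lambda_* = 2$.

import numpy as np

A = np.diag([2.0, 0.0])
B = np.diag([1.0, 0.0])
lam_star = 2.0                      # the only finite regular eigenvalue

# Here A - lam_star*B is the zero matrix, so every vector is an eigenvector;
# e1 is a regular eigenvector, while e2 spans the minimal reducing subspace R(A, B).
x_reg = np.array([1.0, 0.0])
x_sing = np.array([0.0, 1.0])
y = np.array([1.0, 0.0])            # a left eigenvector, y^*(A - lam_star*B) = 0

print(y.conj() @ B @ x_reg)         # 1.0 -> (3.11) holds, lam_star is finite regular
print(y.conj() @ B @ x_sing)        # 0.0 -> the test is inconclusive for this direction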
Theorem 3.5. Let us assume that all finite eigenvalues of a regular two-parameter eigenvalue problem (1.1) are algebraically simple. If $(\lambda_*, \mu_*)$ is a finite regular eigenvalue of (1.1), then $(\lambda_*, \mu_*)$ is a finite regular eigenvalue of the associated pair of generalized eigenvalue problems (1.4).

Proof. Let $(\lambda_*, \mu_*)$ be a finite regular eigenvalue of (1.1) and let $z = x_1 \otimes x_2$ and $w = y_1 \otimes y_2$ be the corresponding right and left eigenvector, respectively. It follows from Corollary 3.3 that $w^* \Delta_0 z \neq 0$. Now we can apply Lemma 3.4 to conclude that $\lambda_*$ is a finite regular eigenvalue of $\Delta_1 - \lambda\Delta_0$. We can show the same for the eigenvalue $\mu_*$ of the matrix pencil $\Delta_2 - \mu\Delta_0$. It follows that $(\lambda_*, \mu_*)$ is a finite regular eigenvalue of the pencils $\Delta_1 - \lambda\Delta_0$ and $\Delta_2 - \mu\Delta_0$ with the common regular eigenvector $z$. □
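Spelled out for the first pencil (a short worked instantiation of Lemma 3.4, using the notation of the proof above): for $A(\lambda) = \Delta_1 - \lambda\Delta_0$ we have $A'(\lambda) = -\Delta_0$, so condition (3.11) for the right and left eigenvectors $z = x_1 \otimes x_2$ and $w = y_1 \otimes y_2$ reads

$w^* A'(\lambda_*) z = -w^* \Delta_0 z = -\det \begin{bmatrix} y_1^* B_1 x_1 & y_1^* C_1 x_1 \\ y_2^* B_2 x_2 & y_2^* C_2 x_2 \end{bmatrix} \neq 0,$

which is exactly the quantity that Corollary 3.3 guarantees to be nonzero for an algebraically simple eigenvalue; the same argument with $A(\mu) = \Delta_2 - \mu\Delta_0$ covers $\mu_*$.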
In order to establish a bidirectional link between the eigenvalues of the two-parameter eigenvalue problem (1.1) and the eigenvalues of the associated pair of generalized eigenvalue problems (1.4), we have to prove the relation in the opposite direction as well. We do this in the following theorem.

Theorem 3.6. Let us assume that all finite eigenvalues of a regular two-parameter eigenvalue problem (1.1) are algebraically simple. Let $(\lambda_*, \mu_*)$ be a finite regular eigenvalue of the associated pair of generalized eigenvalue problems (1.4) such that $\lambda_*$ and $\mu_*$ are nondefective eigenvalues of the pencils $\Delta_1 - \lambda\Delta_0$ and $\Delta_2 - \mu\Delta_0$, respectively. Then $(\lambda_*, \mu_*)$ is a finite regular eigenvalue of the regular two-parameter eigenvalue problem (1.1).

Proof. Let $z$ be a common regular eigenvector for the eigenvalue $(\lambda_*, \mu_*)$. From (1.3) we can write

(3.13)   $\lambda_* \Delta_0 - \Delta_1 = W_1(\lambda_*, 0) \otimes C_2 - C_1 \otimes W_2(\lambda_*, 0).$

It follows from Theorem 2.3 and (3.13) that $z$ is a linear combination of vectors associated with appropriate pairs of Kronecker blocks of the pencils $W_1(\lambda_*, 0) - \alpha_1 C_1$ and $W_2(\lambda_*, 0) - \alpha_2 C_2$. Since $z$ is regular, at least one of these vectors has to be regular. Among all combinations of Kronecker blocks that are listed in Theorem 2.3, the possible ones are either of type a) $(J_{p_1}(\alpha), J_{p_2}(\alpha))$ or of type b) $(N_{p_1}, N_{p_2})$. Namely, in all other combinations one of the Kronecker blocks is of type $L_p$. Suppose, without loss of generality, that $L_p$ belongs to the pencil $W_1(\lambda_*, 0) - \alpha_1 C_1$. Then there exists a polynomial vector $u(\alpha)$ of order $p$ such that $W_1(\lambda_*, -\alpha)u(\alpha) = (W_1(\lambda_*, 0) - \alpha C_1)u(\alpha) = 0$ for all $\alpha$. But then $\det(W_1(\lambda_*, -\alpha)) = 0$ for all $\alpha$, which contradicts the assumption that for each $\lambda$ there exists a $\mu$ such that $\det(W_1(\lambda, \mu)) \neq 0$.

Suppose that the pencil $W_i(\lambda_*, 0) - \alpha_i C_i$ has a Kronecker block $N_{p_i}$ for $i = 1, 2$. The two Kronecker blocks are associated with the Kronecker chains $u_1, \dots, u_{p_1}$ and $v_1, \dots, v_{p_2}$ such that

$C_1 u_1 = 0, \qquad C_1 u_{i+1} = W_1(\lambda_*, 0) u_i, \quad i = 1, \dots, p_1 - 1,$

and

$C_2 v_1 = 0, \qquad C_2 v_{i+1} = W_2(\lambda_*, 0) v_i, \quad i = 1, \dots, p_2 - 1.$

It follows from Theorem 2.3 that the vectors $z_1, \dots, z_p$, where $p = \min(p_1, p_2)$ and

$z_j = \sum_{i=1}^{j} u_i \otimes v_{j+1-i}, \qquad j = 1, \dots, p,$

are in the basis for $\mathrm{Ker}(\Delta_1 - \lambda_* \Delta_0)$. For $z_1 = u_1 \otimes v_1$ it is easy to see that

(3.14)   $\Delta_1 z_1 = \Delta_0 z_1 = 0.$

Therefore, $z_1$ is not a regular eigenvector of $\Delta_1 - \lambda \Delta_0$ at $\lambda = \lambda_*$. If $p > 1$ then, for $j = 2, \dots, p$,

(3.15)   $\Delta_0 z_j = (B_1 \otimes C_2 - C_1 \otimes B_2) \sum_{i=1}^{j} u_i \otimes v_{j+1-i}$
         $\qquad\quad = (B_1 \otimes W_2(\lambda_*, 0)) \sum_{i=1}^{j-1} u_i \otimes v_{j-i} - (W_1(\lambda_*, 0) \otimes B_2) \sum_{i=2}^{j} u_{i-1} \otimes v_{j+1-i}$
         $\qquad\quad = \bigl(B_1 \otimes W_2(\lambda_*, 0) - W_1(\lambda_*, 0) \otimes B_2\bigr) \sum_{i=1}^{j-1} u_i \otimes v_{j-i} = -\Delta_2 z_{j-1}.$

Suppose that the basis for the kernel of $\Delta_1 - \lambda_* \Delta_0$ is a union of sets of vectors that belong to $m$ pairs of Kronecker blocks of the type $(N_{p_1}, N_{p_2})$ only. Then it follows from (3.14) and (3.15) that there exist vectors $z_{k1}, \dots, z_{k p_k}$ for $k = 1, \dots, m$ such that

(3.16)   $\Delta_0 z_{k1} = 0$

and

(3.17)   $\Delta_0 z_{kj} = -\Delta_2 z_{k, j-1}, \qquad j = 2, \dots, p_k.$

By Theorem 2.3 we can expand the common regular eigenvector $z \in \mathrm{Ker}(\Delta_1 - \lambda_* \Delta_0) \cap \mathrm{Ker}(\Delta_2 - \mu_* \Delta_0)$ in this basis as

$z = \sum_{j=1}^{m} \sum_{k=1}^{p_j} \zeta_{jk} z_{jk}.$

Using the relation (3.17) we define a chain of vectors $z^{(0)}, \dots, z^{(P)}$ such that

$\Delta_2 \Bigl( \sum_{j=1}^{m} \Bigl( \sum_{k=1}^{p_j} \zeta_{jk} z_{jk} + \mu_* \sum_{k=2}^{p_j} \zeta_{jk} z_{j,k-1} \Bigr) \Bigr) = \Delta_2 z^{(0)} = 0,$

$\Delta_0 z^{(0)} = -\Delta_2 \Bigl( \sum_{j=1}^{m} \Bigl( \sum_{k=2}^{p_j} \zeta_{jk} z_{j,k-1} + \mu_* \sum_{k=3}^{p_j} \zeta_{jk} z_{j,k-2} \Bigr) \Bigr) = -\Delta_2 z^{(1)},$

$\vdots$

$\Delta_0 z^{(P-1)} = -\Delta_2 \Bigl( \sum_{j=1}^{m} \Bigl( \sum_{k=P+1}^{p_j} \zeta_{jk} z_{j,k-P} + \mu_* \sum_{k=P+2}^{p_j} \zeta_{jk} z_{j,k-P-1} \Bigr) \Bigr) = -\Delta_2 z^{(P)},$

where $P = \max\{\, k - 1 : \zeta_{jk} \neq 0,\ j = 1, \dots, m,\ k = 1, \dots, p_j \,\}$. The chain ends with $\Delta_0 z^{(P)} = 0$, which follows from (3.16) and

$z^{(P)} = \sum_{j \,:\, p_j \ge P+1} \zeta_{j, P+1} z_{j1}.$

The relations $\Delta_2 z^{(0)} = 0$, $\Delta_0 z^{(0)} = -\Delta_2 z^{(1)}$, $\dots$, $\Delta_0 z^{(P-1)} = -\Delta_2 z^{(P)}$, $\Delta_0 z^{(P)} = 0$ show that $z^{(0)}, \dots, z^{(P)}$ belong to the right singular subspace of the pencil $\Delta_2 - \mu \Delta_0$ (see, e.g., [7, Section 12.3]). It follows that $z$, which is a linear combination of the vectors $z^{(0)}, \dots, z^{(P)}$, belongs to the singular part of $\Delta_2 - \mu \Delta_0$. We conclude that it is not possible to express the common regular eigenvector $z$ by vectors coming solely from combinations of the type $(N_{p_1}, N_{p_2})$.

From the above discussion we see that the only option for the existence of a common regular eigenvector is that each pencil $W_i(\lambda_*, 0) - \alpha_i C_i$ has a Kronecker block $J_{p_i}(\alpha)$ for $i = 1, 2$. Then $\det(W_i(\lambda_*, -\alpha)) = 0$ for $i = 1, 2$ and $(\lambda_*, -\alpha)$ is a finite eigenvalue of the two-parameter eigenvalue problem (1.1). As we assume that all finite eigenvalues of (1.1) are algebraically simple, $p_1 = p_2 = 1$ and it follows from Theorem 3.5 that $(\lambda_*, -\alpha)$ is an eigenvalue of the associated pair of generalized eigenvalue problems (1.4).

Let $m_1$ be the geometric multiplicity of $\lambda_*$ as an eigenvalue of the pencil $\Delta_1 - \lambda \Delta_0$. Then there exist $\alpha_1, \dots, \alpha_{m_1}$ and linearly independent vectors $x_{11} \otimes x_{21}, \dots, x_{1m_1} \otimes x_{2m_1}$ such that $(\lambda_*, -\alpha_i)$ is an eigenvalue of (1.1) and $x_{1i} \otimes x_{2i}$ is the corresponding eigenvector for $i = 1, \dots, m_1$. By repeating the above steps for the pencil $\Delta_2 - \mu \Delta_0$ we can show that there exist $\rho_1, \dots, \rho_{m_2}$ and linearly independent vectors $u_{11} \otimes u_{21}, \dots, u_{1m_2} \otimes u_{2m_2}$ such that $(\rho_i, \mu_*)$ is an eigenvalue of (1.1) and $u_{1i} \otimes u_{2i}$ is the corresponding eigenvector for $i = 1, \dots, m_2$, where $m_2$ is the geometric multiplicity of $\mu_*$ as an eigenvalue of $\Delta_2 - \mu \Delta_0$.

Suppose that $\lambda_* \neq \rho_i$ for $i = 1, \dots, m_2$. It follows that the vectors $x_{11} \otimes x_{21}, \dots, x_{1m_1} \otimes x_{2m_1}, u_{11} \otimes u_{21}, \dots, u_{1m_2} \otimes u_{2m_2}$ are linearly independent and they are not elements of the minimal reducing subspace $\mathcal{R}(\Delta_1, \Delta_0)$. There exists a basis $\mathcal{B}$ for $S$ that contains all vectors $x_{1i}$