Also available at http://amc.imfm.si
ISSN 1855-3966 (printed edn.), ISSN 1855-3974 (electronic edn.)

ARS MATHEMATICA CONTEMPORANEA 3 (2010) 99–108

Reducibility of semigroups and nilpotent commutators with idempotents of rank two

Matjaž Omladič *
Department of Mathematics, University of Ljubljana, Ljubljana, Slovenia

Heydar Radjavi †
Department of Pure Mathematics, University of Waterloo, Waterloo, Canada

Received 13 February 2010, accepted 20 April 2010, published online 3 May 2010

Abstract

Let $f$ be a noncommutative polynomial in two variables. Let $\mathcal{S}$ be a multiplicative semigroup of linear operators on a finite-dimensional vector space and $T$ a fixed linear operator such that $f(T, S)$ is nilpotent for all $S$ in $\mathcal{S}$. In [3] the authors ask what can be said about the invariant subspace structure of $\mathcal{S}$ under this and other related conditions. In particular, they study the question under the condition that $[S, T]^2 = 0$, where $T$ is a given idempotent of rank one. In this paper we extend some of the results given there to the case where $T$ is a given idempotent of rank two.

Keywords: Reducibility, semigroups, commutators, nilpotent operators.

Math. Subj. Class.: 15A30, 47A15

* Supported by a grant from the Slovenian Research Agency – ARRS.
† Supported by a grant from NSERC, Canada.
E-mail addresses: matjaz@omladic.net (Matjaž Omladič), hradjavi@math.uwaterloo.ca (Heydar Radjavi)
Copyright © 2010 DMFA Slovenije

1 Introduction

Several authors have studied the effect of polynomial conditions on reducibility and (simultaneous) triangularizability of semigroups of operators in the following sense. Let $\mathcal{S}$ be a (multiplicative) semigroup of linear operators on a finite-dimensional vector space $V$ over an algebraically closed field. (The semigroup may have to satisfy more conditions to get certain results; e.g., it may be required that it be a group or an algebra.) Let $f$ be a noncommutative polynomial in two variables. One may assume conditions such as $f(S, T) = 0$ for all $S$ and $T$ in $\mathcal{S}$, or $\operatorname{tr} f(S, T) = 0$, or $f(S, T)$ is nilpotent for all pairs. When does such a condition imply that $\mathcal{S}$ is abelian, (simultaneously) diagonalizable, triangularizable, or at least reducible, i.e., that it has a common nontrivial invariant subspace?

Guralnick proved in [1] that if the polynomial $xy - yx$ is nilpotent on $\mathcal{S}$, then $\mathcal{S}$ is triangularizable. In particular, if $\mathcal{S}$ is a bounded group of operators on a complex vector space, then this condition implies that $\mathcal{S}$ is actually commutative (and therefore diagonalizable). For an extension of this result to infinite dimensions see [5]. For results on more general polynomials see [2].

In this paper we consider polynomial conditions involving operators from outside the semigroup $\mathcal{S}$. For example, if $T$ is a fixed operator (not necessarily from the semigroup) and $f(S, T) = 0$ for all $S$ in $\mathcal{S}$, what can we conclude about the structure of invariant subspaces of $\mathcal{S}$? Perhaps not surprisingly, even for simple polynomials this question seems much harder than the kind of questions mentioned above. In [3] the authors study reducibility conditions on semigroups satisfying a nilpotency condition with respect to an idempotent of rank one. Assuming higher ranks in the condition studied there seems to yield a much harder problem. The main purpose of this paper is to study a similar condition with an idempotent of rank two. In order to do so we have to develop some techniques which appear to be new in the field.
Throughout the paper we will consider only complex vector spaces, although most of our results extend easily to the case of an arbitrary algebraically closed field.

2 Singly generated semigroups

In this section we fix an idempotent $P \in M_n(\mathbb{C})$ of rank two. We will consider commutators of $P$ with elements $S$ of a semigroup $\mathcal{S} \subset M_n(\mathbb{C})$ of matrices. We continue the study, started in [3], of the condition
$$[P, S]^2 = 0 \tag{2.1}$$
satisfied by all $S \in \mathcal{S}$. We need to introduce some new techniques. Observe that the dimension $n$ of the underlying space has to be at least 2 for idempotents of rank two to exist, and it has to be at least 3 in order that this condition is not always satisfied. However, in dimension $n = 3$ the idempotent $Q = I - P$ of rank one yields a condition equivalent to (2.1), so it is reasonable to assume that $n$ is no smaller than 4, since the case of an idempotent of rank one has already been covered in [3].

Write the matrix $S$ in block form with respect to the decomposition induced by the idempotent $P$ as
$$S = \begin{pmatrix} A & B \\ C & D \end{pmatrix},$$
so that
$$[P, S]^2 = \begin{pmatrix} -BC & 0 \\ 0 & -CB \end{pmatrix},$$
and condition (2.1) on $S$ implies that $BC = 0$ and $CB = 0$. So, we can conclude:

Lemma 2.1. If condition (2.1) holds for a given matrix $S$, then either
(a) at least one of the operators $B$ and $C$ is zero, or
(b) the operators $B$ and $C$ are both of rank one.

Proof. If (a) is not true, then $B$ and $C$ are both nonzero. However, since their product $CB$ is zero, neither of them is of full rank two: if $C$ were of rank two it would be injective and $CB = 0$ would force $B = 0$, while if $B$ were of rank two it would be surjective and $CB = 0$ would force $C = 0$.
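As a quick illustration (not part of the paper's argument), the following Python sketch checks the block identity for $[P, S]^2$ and case (b) of Lemma 2.1 in the smallest interesting dimension $n = 4$; all concrete blocks below are arbitrary choices with $BC = CB = 0$, not matrices taken from the text.

```python
import numpy as np

# Illustrative check of [P, S]^2 = diag(-BC, -CB) and of condition (2.1)
# for n = 4.  All concrete matrices are arbitrary choices, not from the paper.
P = np.diag([1.0, 1.0, 0.0, 0.0])          # idempotent of rank two

A = np.array([[2.0, 1.0], [0.0, 3.0]])     # northwest block
D = np.array([[1.0, 0.0], [1.0, 2.0]])     # southeast block

# Rank-one corners B = x u^tr and C = v y^tr with u^tr v = 0 and y^tr x = 0,
# so that BC = 0 and CB = 0 (case (b) of Lemma 2.1).
x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])
u, v = np.array([0.0, 1.0]), np.array([1.0, 0.0])
B, C = np.outer(x, u), np.outer(v, y)

S = np.block([[A, B], [C, D]])
K = P @ S - S @ P                          # the commutator [P, S]

expected = np.block([[-B @ C, np.zeros((2, 2))],
                     [np.zeros((2, 2)), -C @ B]])
assert np.allclose(K @ K, expected)        # the block identity above
assert np.allclose(K @ K, 0)               # condition (2.1) holds here
```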
If $S \in \mathcal{S}$ satisfies condition (b) of this lemma, then we call it (just for the purpose of this paper) a matrix of rank one corners. Let now $S$ be an invertible matrix of rank one corners satisfying condition (2.1). Then we can write $B = xu^{tr}$ and $C = vy^{tr}$ for some nonzero two-dimensional columns $x$ and $y$ and $(n-2)$-dimensional columns $u$ and $v$. If we write the matrix $S^2$ in the same block form
$$S^2 = \begin{pmatrix} \tilde{A} & \tilde{B} \\ \tilde{C} & \tilde{D} \end{pmatrix},$$
we get that $\tilde{B} = Axu^{tr} + xu^{tr}D$ and $\tilde{C} = Dvy^{tr} + vy^{tr}A$. The fact that $S$ is invertible has a few simple implications that will be used frequently in the sequel. First, $CB = 0$ implies that $AB \neq 0$, and similarly $BD \neq 0$. If, in addition, $S^2$ also satisfies condition (2.1), then the products $Ax$, $u^{tr}D$, $Dv$, and $y^{tr}A$ are all nonzero. These follow immediately from the next proposition.

Proposition 2.2. Let $S$ be an invertible matrix such that $S$ and $S^2$ both satisfy condition (2.1). Let $S$ have rank one corners and denote by $xu^{tr}$, respectively $vy^{tr}$, its northeast, respectively southwest, corner. Then either

(b1) the vector $\begin{pmatrix} x \\ 0 \end{pmatrix} \in \mathrm{Im}\,P$ is a right eigenvector of $S$, the row vector $(y^{tr}\ \ 0) \perp \mathrm{Ker}\,P$ is a left eigenvector of $S$, and $y^{tr}x = 0$, or

(b2) the vector $\begin{pmatrix} 0 \\ v \end{pmatrix} \in \mathrm{Ker}\,P$ is a right eigenvector of $S$, and the row vector $(0\ \ u^{tr}) \perp \mathrm{Im}\,P$ is a left eigenvector of $S$.

In both cases the northwest corner and the southeast corner of $S$ are invertible matrices.

Proof. The matrix $S^2$ also satisfies either (a) or (b) of the lemma. If in the first case $\tilde{B}$ is zero, then, in particular, there exists a scalar $\alpha$ such that $Ax = \alpha x$. The fact that $y^{tr}A = \beta y^{tr}$ for some scalar $\beta$ then follows from the fact that $y^{tr}$ is the (up to a multiplicative constant) unique row vector perpendicular to $x$. Since $CB = 0$, we have that $Cx = 0$ and $y^{tr}B = 0$, and consequently $\begin{pmatrix} x \\ 0 \end{pmatrix}$ is a right eigenvector of $S$ and $(y^{tr}\ \ 0)$ is a left eigenvector of $S$. The case $\tilde{C} = 0$ is obtained from the previous case by going to the transpose of the matrix $S$. In either of the cases we have shown that assertion (b1) holds.

Now, if $\tilde{B}$ and $\tilde{C}$ are both nonzero, then they are of rank one by Lemma 2.1. If $Ax$ and $x$ are parallel, i.e. linearly dependent, we proceed as above. A similar conclusion follows if $y^{tr}$ and $y^{tr}A$ are parallel, so that we are again in case (b1). It remains to treat the case that both pairs of vectors are linearly independent. In this case we get that $u^{tr}$ and $u^{tr}D$ are parallel and also that $v$ and $Dv$ are parallel, so that $u^{tr}$ is a left eigenvector of $D$ and $v$ is a right eigenvector of $D$. The fact that these are also eigenvectors of $S$ now follows from the equality $BC = 0$, and (b2) holds.

It remains to verify the last assertion of the proposition. In case (b1) it is clear that $\alpha$ and $\beta$, the respective eigenvalues corresponding to the two eigenvectors, are nonzero, since they are eigenvalues of the invertible matrix $S$. Since they are also (both) eigenvalues of $A$ (in the above notation), it follows that $A$ is invertible. Invertibility of $D$ now follows by, say, a standard exercise in determinant analysis. Write the matrix of $S$ in a basis in which $A$ is upper triangular with $\alpha$ and $\beta$ on the diagonal. The determinant of $S$ can now be computed by expanding on the first column and the second row to get the product of $\alpha$, $\beta$, and the determinant of $D$. So, $S$ is invertible if and only if $D$ is invertible. Case (b2) goes similarly. We first observe that the two eigenvalues are nonzero and then write the matrix of $S$ so that the corresponding eigenvectors become the first column, respectively the last row, of the second block. Next, we expand the determinant of $S$ first on this column and then on this row. Finally, we observe that the only nonzero term in the expansion is the determinant of a block-diagonal matrix, the blocks being equal to $A$, respectively to the rest of $D$ after crossing out the first of its columns and the last of its rows.

Let $P \in M_n(\mathbb{C})$ be an idempotent of rank two such that condition (2.1) is satisfied for all $S \in \mathcal{S}$, where $\mathcal{S} \subset M_n(\mathbb{C})$ is a one-generated semigroup of invertible matrices. Apply Lemma 2.1 and Proposition 2.2 to the generator $S$ of the semigroup. Case (b) of the lemma yields either (b1) of the proposition, implying case 1 below, or (b2), implying case 2. Case (a) yields the other two possibilities. Thus, at least one of the following conditions holds true:

1. The generator satisfies condition (b1) of the proposition.
2. The generator satisfies condition (b2) of the proposition.
3. The semigroup leaves $\mathrm{Im}\,P$ invariant.
4. The semigroup leaves $\mathrm{Ker}\,P$ invariant.

Within this paper we will say that a one-generated semigroup (or its generator) is of type $i$ if it satisfies the $i$-th condition of this list. Note that a generator can be of more than one type at the same time. In particular, the generator can be simultaneously of type 1 and 2, or it can be simultaneously of type 3 and 4. However, if the generator is of either type 1 or 2, it has rank one corners and can be of neither type 3 nor type 4. Moreover, if it is of either type 3 or 4, it does not have rank one corners and can be of neither type 1 nor type 2. On the other hand, the type of the generator does not determine the type of its powers. For example, the $4 \times 4$ matrix
$$S = \begin{pmatrix} I & N \\ N & -I \end{pmatrix},$$
where $I$ is the $2 \times 2$ identity and $N$ is a $2 \times 2$ nilpotent of rank one, generates a group of two elements. If $P$ is the projection onto the first block along the second one, the group satisfies condition (2.1), and $S$ is simultaneously of type 1 and 2, while its square is simultaneously of type 3 and 4.
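The following Python sketch (an illustration added here, not part of the paper) verifies this example numerically; the concrete choice of $N$ is an assumption made only for the check.

```python
import numpy as np

# Numerical check of the example above: S = [[I, N], [N, -I]] with N a
# rank-one 2x2 nilpotent (a concrete N is chosen here for illustration).
I2 = np.eye(2)
N = np.array([[0.0, 1.0], [0.0, 0.0]])     # N^2 = 0, rank one
Z = np.zeros((2, 2))
S = np.block([[I2, N], [N, -I2]])
P = np.block([[I2, Z], [Z, Z]])            # projection onto the first block

def comm_sq(X):
    K = P @ X - X @ P
    return K @ K

# Both S and S^2 satisfy condition (2.1).
assert np.allclose(comm_sq(S), 0)
assert np.allclose(comm_sq(S @ S), 0)

# S^2 = I, so S generates a two-element group; S^2 is block diagonal and
# leaves both Im P and Ker P invariant (types 3 and 4), ...
assert np.allclose(S @ S, np.eye(4))

# ... while S itself is of types 1 and 2: it has right eigenvectors
# (1,0,0,0)^tr in Im P and (0,0,1,0)^tr in Ker P.
e_im, e_ker = np.array([1.0, 0, 0, 0]), np.array([0, 0, 1.0, 0])
assert np.allclose(S @ e_im, e_im)         # eigenvalue 1
assert np.allclose(S @ e_ker, -e_ker)      # eigenvalue -1
```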
3 Two-generated semigroups

In this section we consider an idempotent $P \in M_n(\mathbb{C})$ of rank two and two invertible elements $S$ and $T$ of a semigroup $\mathcal{S} \subset M_n(\mathbb{C})$ satisfying condition (2.1) for all $S \in \mathcal{S}$. Each of these two elements and their products is of one of the four types introduced in the previous section. We will try to reduce the number of all these possibilities. We assume that $S$ has rank one corners, so that we can write it in the form
$$S = \begin{pmatrix} A & xu^{tr} \\ vy^{tr} & D \end{pmatrix}.$$

If $S$ is of type 1, then its $k$-th power can be computed as
$$S^k = \begin{pmatrix} A^k + \lambda_k xy^{tr} & xu_k^{tr} \\ v_k y^{tr} & D^k \end{pmatrix} \tag{3.1}$$
for a sequence of scalars $\lambda_k$ and sequences of $(n-2)$-dimensional columns $u_k$ and $v_k$, for any $k \in \mathbb{N}$. This can be shown inductively:
$$S^{k+1} = S^k \cdot S = \begin{pmatrix} (A^k + \lambda_k xy^{tr})A + x(u_k^{tr}v)y^{tr} & A^k xu^{tr} + xu_k^{tr}D \\ v_k y^{tr}A + D^k vy^{tr} & D^k \cdot D \end{pmatrix}.$$
Actually, this proof also yields the formulas for computing the two sequences of vectors, $u_k^{tr} = u^{tr} \sum_{i+j=k-1} \alpha^i D^j$ and $v_k = \sum_{i+j=k-1} \beta^i D^j v$, where $\alpha$ and $\beta$ are the eigenvalues of $A$ defined by $Ax = \alpha x$ and $y^{tr}A = \beta y^{tr}$; i.e., they are of the form of a polynomial in $D$ multiplied by $u^{tr}$ on the left, respectively by $v$ on the right. Recursive formulas for the sequence $\lambda_k$ are also obtainable from the above.

If $S$ is of type 2, then its $k$-th power can be computed as
$$S^k = \begin{pmatrix} A^k & x_k u^{tr} \\ vy_k^{tr} & D^k + \mu_k vu^{tr} \end{pmatrix} \tag{3.2}$$
for a sequence of scalars $\mu_k$ and sequences of 2-dimensional columns $x_k$ and $y_k$, for any $k \in \mathbb{N}$. This can again be shown inductively:
$$S^{k+1} = S^k \cdot S = \begin{pmatrix} A^k \cdot A & A^k xu^{tr} + x_k u^{tr}D \\ vy_k^{tr}A + D^k vy^{tr} & v(y_k^{tr}x)u^{tr} + (D^k + \mu_k vu^{tr})D \end{pmatrix},$$
and this proof also yields formulas for computing the two sequences of vectors, $x_k = \sum_{i+j=k-1} \gamma^i A^j x$ and $y_k^{tr} = y^{tr} \sum_{i+j=k-1} \delta^i A^j$, where $\gamma$ and $\delta$ are the eigenvalues of $D$ defined by $u^{tr}D = \gamma u^{tr}$ and $Dv = \delta v$; i.e., they are of the form of a polynomial in $A$ multiplied by $x$ on the right, respectively by $y^{tr}$ on the left. Recursive formulas for the sequence $\mu_k$ are also obtainable from the above.

Consider another operator $T$ that also has rank one corners and write it similarly as
$$T = \begin{pmatrix} \tilde{A} & \tilde{x}\tilde{u}^{tr} \\ \tilde{v}\tilde{y}^{tr} & \tilde{D} \end{pmatrix}.$$
The powers of $T$ can be written either in the form (3.1) or (3.2), depending on whether $T$ is of type 1 or 2. For any given indices $j, k \in \mathbb{N}$ the northeast corner of the product $S^j T^k$ is of one of the following four types, depending on whether the operator $S$, respectively $T$, is of type 1, respectively 2:

NE11: $\tilde{x}\tilde{u}_k^{tr}\tilde{D}^{-k} + (A^{-j} - \lambda_j A^{-j}xy^{tr}A^{-j})xu_j^{tr}$.
NE12: $\tilde{x}_k\tilde{u}^{tr}(\tilde{D}^{-k} - \tilde{\mu}_k\tilde{D}^{-k}\tilde{v}\tilde{u}^{tr}\tilde{D}^{-k}) + (A^{-j} - \lambda_j A^{-j}xy^{tr}A^{-j})xu_j^{tr}$.
NE21: $\tilde{x}\tilde{u}_k^{tr}\tilde{D}^{-k} + A^{-j}x_j u^{tr}$.
NE22: $\tilde{x}_k\tilde{u}^{tr}(\tilde{D}^{-k} - \tilde{\mu}_k\tilde{D}^{-k}\tilde{v}\tilde{u}^{tr}\tilde{D}^{-k}) + A^{-j}x_j u^{tr}$.

We have labelled these cases in a self-explanatory way: NE$pq$ means the northeast corner of $S^j T^k$ whenever $S$ is of type $p$ and $T$ is of type $q$, for $p, q = 1, 2$. Note that here we do not give the actual northeast corner blocks of the product $S^j T^k$, but equivalent matrices obtained by multiplying these blocks on the left and on the right by the inverses of the respective diagonal blocks of $S^j$ and $T^k$. Similarly, in these cases the southwest corner of the same product is of one of the following four types (labelled as above):

SW11: $D^{-j}v_j y^{tr} + \tilde{v}_k\tilde{y}^{tr}(\tilde{A}^{-k} - \tilde{\lambda}_k\tilde{A}^{-k}\tilde{x}\tilde{y}^{tr}\tilde{A}^{-k})$.
SW12: $D^{-j}v_j y^{tr} + \tilde{v}\tilde{y}_k^{tr}\tilde{A}^{-k}$.
SW21: $(D^{-j} - \mu_j D^{-j}vu^{tr}D^{-j})vy_j^{tr} + \tilde{v}_k\tilde{y}^{tr}(\tilde{A}^{-k} - \tilde{\lambda}_k\tilde{A}^{-k}\tilde{x}\tilde{y}^{tr}\tilde{A}^{-k})$.
SW22: $(D^{-j} - \mu_j D^{-j}vu^{tr}D^{-j})vy_j^{tr} + \tilde{v}\tilde{y}_k^{tr}\tilde{A}^{-k}$.

In the proof of these facts we need the following lemma:

Lemma 3.1. If $A$ is an invertible matrix and $N$ is a square-zero nilpotent such that the products $NA$ and $AN$ are both scalar multiples of $N$, then $(A+N)^{-1} = A^{-1} - A^{-1}NA^{-1}$.

Proof. This follows by a straightforward computation, after observing that the products $NA^{-1}$ and $A^{-1}N$ are also scalar multiples of $N$.
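Here is a brief numerical illustration of Lemma 3.1 (added for this presentation, not part of the paper). The diagonal matrix $A$ and the nilpotent $N$ below are arbitrary choices satisfying the hypotheses; in the text the lemma is applied with $N$ a rank-one nilpotent of the form $\lambda_j xy^{tr}$.

```python
import numpy as np

# Sanity check of Lemma 3.1: A invertible, N^2 = 0, and NA, AN scalar
# multiples of N imply (A + N)^{-1} = A^{-1} - A^{-1} N A^{-1}.
# A and N are arbitrary matrices chosen to satisfy these hypotheses.
A = np.diag([2.0, 3.0])                    # invertible
N = np.array([[0.0, 1.0], [0.0, 0.0]])     # N^2 = 0

assert np.allclose(N @ N, 0)
assert np.allclose(N @ A, 3.0 * N)         # NA = 3N
assert np.allclose(A @ N, 2.0 * N)         # AN = 2N

Ainv = np.linalg.inv(A)
assert np.allclose(np.linalg.inv(A + N), Ainv - Ainv @ N @ Ainv)
```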
It is now easy to obtain the above eight assertions. To prove NE12, say, one has to multiply the first block-row of the matrix (3.1), with index $k$ replaced by $j$, by the second block-column of the matrix (3.2), with the matrices $x_k$, $u$ and so on replaced respectively by $\tilde{x}_k$, $\tilde{u}$ and so on. The result is then multiplied by the inverses of the matrices $A^j + \lambda_j xy^{tr}$ from the left-hand side and $\tilde{D}^k + \tilde{\mu}_k\tilde{v}\tilde{u}^{tr}$ from the right-hand side. In this way each of the above eight expressions becomes a sum of two rank-one operators, one depending only on the $j$-th power of the operator $S$ and the other one depending only on the $k$-th power of the operator $T$. Using the previously established eigenvector-eigenvalue relations we can simplify these expressions slightly further into:

NE11': $\tilde{x}\tilde{u}_k^{tr}\tilde{D}^{-k} + \alpha^{-j}xu_j^{tr}$.
NE12': $\tilde{x}_k\tilde{u}^{tr}\tilde{\gamma}^{-k} + \alpha^{-j}xu_j^{tr}$.
NE21': $\tilde{x}\tilde{u}_k^{tr}\tilde{D}^{-k} + A^{-j}x_j u^{tr}$.
NE22': $\tilde{x}_k\tilde{u}^{tr}\tilde{\gamma}^{-k} + A^{-j}x_j u^{tr}$.
SW11': $D^{-j}v_j y^{tr} + \tilde{v}_k\tilde{y}^{tr}\tilde{\beta}^{-k}$.
SW12': $D^{-j}v_j y^{tr} + \tilde{v}\tilde{y}_k^{tr}\tilde{A}^{-k}$.
SW21': $\delta^{-j}vy_j^{tr} + \tilde{v}_k\tilde{y}^{tr}\tilde{\beta}^{-k}$.
SW22': $\delta^{-j}vy_j^{tr} + \tilde{v}\tilde{y}_k^{tr}\tilde{A}^{-k}$.

The maximal possible rank of any of these corners is two, in which case we will say that the corner is of full rank. If $S$ and $T$ are both of type 1 and the two eigenvectors of their respective northwest corners are parallel, and may therefore be assumed with no loss of generality to be equal, i.e. $x = \tilde{x}$ and $y^{tr} = \tilde{y}^{tr}$, we will say that they are simultaneously of type 1. Similarly, if they are both of type 2 and the two eigenvectors of their southeast corners are parallel, and may therefore be assumed with no loss of generality to be equal, i.e. $u^{tr} = \tilde{u}^{tr}$ and $v = \tilde{v}$, we will say that they are simultaneously of type 2. The following proposition says that these are the only two cases to be considered. Observe that in the proof of the following proposition we only need condition (2.1) to be satisfied by the elements of the subsemigroup of $\mathcal{S}$ generated by the two elements $S$ and $T$ of $\mathcal{S}$.

Proposition 3.2. Let a semigroup $\mathcal{S} \subset M_n(\mathbb{C})$ of invertible matrices satisfy condition (2.1) for all $S \in \mathcal{S}$ and a fixed idempotent $P \in M_n(\mathbb{C})$ of rank two, and let elements $S$ and $T$ of $\mathcal{S}$ have rank one corners. Then they are either simultaneously of type 1 or simultaneously of type 2.

Proof. We treat the four possible cases separately.

Case I. The elements $S$ and $T$ of the semigroup are both of type 1, but not necessarily simultaneously. In the notation above we have in this case that either $x$ is not parallel to $\tilde{x}$ or $y^{tr}$ is not parallel to $\tilde{y}^{tr}$. By considerations as at the beginning of the proof of Proposition 2.2 these two conditions are actually equivalent, so that neither $x$ is parallel to $\tilde{x}$ nor $y^{tr}$ is parallel to $\tilde{y}^{tr}$. We consider the northeast corner, respectively the southwest corner, of the products $ST$ and $ST^2$. None of these corners can be zero, and consequently all of them are of rank one by Lemma 2.1. It follows that $\tilde{v}$ is parallel to $D^{-1}v$ and also $\tilde{v}_2$ is parallel to $D^{-1}v$, so that they are parallel to each other and $\tilde{v}$ must be an eigenvector of $\tilde{D}$. Alternatively, one can use the above arguments on $TS$, respectively $TS^2$, in place of $ST$, respectively $ST^2$, to conclude that $D^{-1}v$ is parallel to $D^{-2}v_2$, implying that $v$ is an eigenvector of $D$. The fact that $\tilde{v}$ is parallel to $v$ is now clear since the southwest corner of $ST$ is of rank one.
An analogous consideration with columns and rows interchanged yields the rest of what we wanted to show: that $u^{tr}$ is parallel to $\tilde{u}^{tr}$ and that they are left eigenvectors of both $D$ and $\tilde{D}$. So, $S$ and $T$ are simultaneously of type 2.

Case II. The element $S$ is of type 1 and the element $T$ is of type 2. We first suppose that there is an index $j \in \mathbb{N}$ such that $\tilde{u}^{tr}$ is not parallel to $u_j^{tr}$. Under this assumption, for any index $k \in \mathbb{N}$ either (1) $\tilde{x}_k$ is not parallel to $x$, or (2) $\tilde{x}_k$ is parallel to $x$. If (1) is true, then the northeast corner of the product $S^j T^k$ is of full rank, implying that its southwest corner is zero, so that, in particular, $\tilde{y}_k^{tr}\tilde{A}^{-k}$ is parallel to $y^{tr}$. If, on the other hand, (2) is true, then the northeast corner of this product is of rank one. The fact that the product of the two corners is zero (together with the fact that the product of the corresponding corners of $S^j$ is zero) now implies that, again, $\tilde{y}_k^{tr}\tilde{A}^{-k}$ is parallel to $y^{tr}$. Therefore, the latter conclusion is true for all $k \in \mathbb{N}$, forcing the operator $T$ to be also of type 1 and reducing this case to Case I.

Next, we suppose that there is an index $j \in \mathbb{N}$ such that $D^{-j}v_j$ is not parallel to $\tilde{v}$. Under this assumption, for any index $k \in \mathbb{N}$ either (1) $\tilde{y}_k^{tr}\tilde{A}^{-k}$ is not parallel to $y^{tr}$, or (2) $\tilde{y}_k^{tr}\tilde{A}^{-k}$ is parallel to $y^{tr}$. If (1) is true, then the southwest corner of the product $S^j T^k$ is of full rank, implying that its northeast corner is zero, so that, in particular, $\tilde{x}_k$ is parallel to $x$. If, on the other hand, (2) is true, then the southwest corner of this product is of rank one. The fact that the product of the two corners is zero now implies that, again, $\tilde{x}_k$ is parallel to $x$. Therefore, the latter conclusion is true for all $k \in \mathbb{N}$, forcing again the operator $T$ to be also of type 1 and reducing this case to Case I.

In order to finish the proof of Case II it remains to treat the possibility that for all indices $j \in \mathbb{N}$ we have that $u_j^{tr}$ is parallel to $\tilde{u}^{tr}$ and that also $D^{-j}v_j$ is parallel to $\tilde{v}$. This implies that $\tilde{u}^{tr}$, being a left eigenvector of $\tilde{D}$, is also a left eigenvector of $D$, and similarly that $\tilde{v}$, being a right eigenvector of $\tilde{D}$, is also a right eigenvector of $D$. So, the operators $S$ and $T$ are simultaneously of type 2.

Case III. The element $S$ is of type 2 and the element $T$ is of type 1. This case can be brought to Case II by considering the semigroup made of the transposed matrices of the semigroup $\mathcal{S}$. Observe that the new semigroup has the opposite multiplication, while the types of corresponding elements do not change.

Case IV. The elements $S$ and $T$ of the semigroup are both of type 2, but not necessarily simultaneously. We first study the case that $\tilde{u}^{tr}$ is not parallel to $u^{tr}$ and consider the product $S^j T^k$ for $j \in \mathbb{N}$ and $k \in \mathbb{N}$. Just as in the proof of Case II we show that, given any $j \in \mathbb{N}$, the vector $y_j^{tr}$ is parallel to $\tilde{y}_k^{tr}\tilde{A}^{-k}$ for all $k \in \mathbb{N}$. Consequently, $T$ is of type 1, reducing our case to Case III. Next, if $\tilde{v}$ is not parallel to $v$, we can reduce the problem to the previous one by going to the transposes. And finally, if both $\tilde{u}^{tr}$ is parallel to $u^{tr}$ and $\tilde{v}$ is parallel to $v$, then $S$ and $T$ are simultaneously of type 2.

4 Semigroups of invertible elements

For a given idempotent $P$ of rank two let condition (2.1) hold for all $S$ in a semigroup $\mathcal{S}$ of invertible matrices, and let $\mathcal{C}$ be the set of all $S \in \mathcal{S}$ with rank one corners.

Lemma 4.1. All elements of $\mathcal{C}$ are either simultaneously of type 1 or simultaneously of type 2.
Proof. First assume that $\mathcal{C}$ contains an operator $S \in \mathcal{C}$ that is of type $p$ and not of type $q$, for $p \neq q \in \{1, 2\}$. Furthermore, choose an operator $T \in \mathcal{C}$ different from $S$. By Proposition 3.2, $T$ has to be simultaneously of the same type as $S$. Since $S$ is not of type $q$, they are simultaneously of type $p$. Now, since $T$ is arbitrary, the lemma follows.

It remains to treat the case that all the members of $\mathcal{C}$ are of both types. We want to show that there exists a type $p \in \{1, 2\}$ such that they are all simultaneously of type $p$. Assume the contrary. Then there are operators $S, T \in \mathcal{C}$ such that they are (by Proposition 3.2) simultaneously of type $p$ but not simultaneously of type $q$ for some $p \neq q \in \{1, 2\}$; and furthermore, there exists an operator $R \in \mathcal{C}$ which is (again by Proposition 3.2) simultaneously of type $q$ with $S$ and simultaneously of type $q$ with $T$. This brings us to the conclusion that $S$ and $T$ are simultaneously of type $q$, in contradiction with the above.

So, we can speak about the type of the set $\mathcal{C}$ and write $\mathcal{C} = \mathcal{C}_1$ if the members of $\mathcal{C}$ are of type 1 and $\mathcal{C} = \mathcal{C}_2$ in the other case. In the proof of the following theorem we need an easy and well-known fact that we state here for reference.

Lemma 4.2. Let $S$ be invertible. If $TS$ and $S$ have a joint right eigenvector $x$, then $x$ is also an eigenvector of $T$. If $ST$ and $S$ have a joint left eigenvector $y^{tr}$, then $y^{tr}$ is also an eigenvector of $T$.

Proof. From $Sx = \alpha x$ and $TSx = \beta x$ for some scalars $\alpha$ and $\beta$ we get $\alpha \neq 0$ and $Tx = \alpha^{-1}T(Sx) = \alpha^{-1}\beta x$. The second claim follows from the first one by going to the transposes.

Theorem 4.3. Let a semigroup $\mathcal{S} \subset M_n(\mathbb{C})$ of invertible matrices satisfy condition (2.1) for all $S \in \mathcal{S}$ and a fixed idempotent $P \in M_n(\mathbb{C})$ of rank two, and denote by $\mathcal{C}$ the set of elements with rank one corners. Then either:

(a) $\mathcal{C}$ is empty and either
• $\mathrm{Im}\,P$ is invariant for $\mathcal{S}$, or
• $\mathrm{Ker}\,P$ is invariant for $\mathcal{S}$; or

(b) $\mathcal{C}$ is nonempty and either
• $\mathcal{S}$ has a joint right eigenvector in $\mathrm{Im}\,P$ and a joint left eigenvector perpendicular to $\mathrm{Ker}\,P$, or
• it has a joint right eigenvector in $\mathrm{Ker}\,P$ and a joint left eigenvector perpendicular to $\mathrm{Im}\,P$.

Proof. If $\mathrm{Im}\,P$ is invariant for $\mathcal{S}$, then $\mathcal{C}$ is empty and we are done. If $\mathrm{Ker}\,P$ is invariant for $\mathcal{S}$, we are done as well. In both cases (a) holds. Let us assume from now on that (a) is not true. We will first show that in this case $\mathcal{C}$ is nonempty. Assume the contrary. Then there is an $S \in \mathcal{S}$ of the form
$$S = \begin{pmatrix} A & B \\ 0 & D \end{pmatrix}$$
and a $T \in \mathcal{S}$ of the form
$$T = \begin{pmatrix} \tilde{A} & 0 \\ \tilde{C} & \tilde{D} \end{pmatrix}$$
such that $B$ and $\tilde{C}$ are both nonzero. Since $A$ and $\tilde{A}$ are invertible, it follows easily by Lemma 2.1 that $TS \in \mathcal{C}$, so that $\mathcal{C}$ is nonempty, contradicting the above.

By Lemma 4.1 we have that either $\mathcal{C} = \mathcal{C}_1$ or $\mathcal{C} = \mathcal{C}_2$. Consider the case that $\mathcal{S}$ contains operators $S$ and $T$ of the form
$$S = \begin{pmatrix} A & B \\ 0 & D \end{pmatrix} \quad\text{and}\quad T = \begin{pmatrix} \tilde{A} & 0 \\ \tilde{C} & \tilde{D} \end{pmatrix}$$
with $B$ and $\tilde{C}$ both nonzero. It turns out that they have to be of rank one, say $B = xu^{tr}$ and $\tilde{C} = vy^{tr}$, so that
$$ST = \begin{pmatrix} A\tilde{A} + xu^{tr}vy^{tr} & xu^{tr}\tilde{D} \\ Dvy^{tr} & D\tilde{D} \end{pmatrix} \quad\text{and}\quad TS = \begin{pmatrix} \tilde{A}A & \tilde{A}xu^{tr} \\ vy^{tr}A & vy^{tr}xu^{tr} + \tilde{D}D \end{pmatrix}.$$
If in this case $\mathcal{C} = \mathcal{C}_1$, then $x$ is the common right eigenvector of all northwest corners of elements of $\mathcal{C}$, because it is a column of the northeast corner of $ST \in \mathcal{C}_1$. So, it is an eigenvector of $A\tilde{A} + xu^{tr}vy^{tr}$ and consequently also an eigenvector of $A\tilde{A}$. Moreover, since $\tilde{A}x$ is a column of the northeast corner of $TS \in \mathcal{C}_1$, it has to be parallel to $x$, so that $x$ is also an eigenvector of $\tilde{A}$. By Lemma 4.2, $x$ is also an eigenvector of $A$. A similar argument proves that $y^{tr}$ is a left eigenvector of both $A$ and $\tilde{A}$.
The fact that $y^{tr}x = 0$ then implies that the vector
$$\mathbf{x} = \begin{pmatrix} x \\ 0 \end{pmatrix},$$
which is a common right eigenvector of all elements of $\mathcal{C}$, is also a right eigenvector of both $S$ and $T$. Similarly, $\mathbf{y}^{tr} = (y^{tr}\ \ 0)$ is a common left eigenvector of all elements of $\mathcal{C}$ together with $S$ and $T$. The case that $\mathcal{C} = \mathcal{C}_2$ goes similarly.

If $\mathcal{S} \setminus \mathcal{C}$ contains an operator $T$ of block-diagonal form
$$T = \begin{pmatrix} A & 0 \\ 0 & D \end{pmatrix},$$
it follows easily that $T$ has the same right eigenvector and the same left eigenvector as all the members of $\mathcal{C}$. Finally, it remains to treat the cases that either all the members of $\mathcal{S} \setminus \mathcal{C}$ are block upper-triangular or they are all block lower-triangular. In either case the elements of $\mathcal{C}$ have a common right eigenvector and a common left eigenvector. In order to finish the proof it suffices to show that in both cases the two eigenvectors are common to the elements $T \in \mathcal{S} \setminus \mathcal{C}$ as well. Now, assume that all the members of $\mathcal{S} \setminus \mathcal{C}$ are block upper-triangular, choose a $T$ of this kind and an $S \in \mathcal{C}$. Then neither $ST$ nor $TS$ is block upper-triangular, so they both belong to $\mathcal{C}$. It follows by Lemma 4.2 that $T$ has the same right eigenvector as $TS$ and $S$ and the same left eigenvector as $ST$ and $S$, and we are done. The other case follows by going to the transposes.

References

[1] R. M. Guralnick, Triangularization of sets of matrices, Linear and Multilinear Algebra 9 (1980), 133–140.
[2] H. Radjavi, Polynomial conditions on operator semigroups, J. Operator Theory 53 (2005), 197–220.
[3] H. Radjavi and M. Omladič, Nilpotent commutators and reducibility of semigroups, Linear and Multilinear Algebra 57 (2009), 307–317.
[4] H. Radjavi and P. Rosenthal, Simultaneous Triangularization, Springer-Verlag, New York, 2000.
[5] H. Radjavi, P. Rosenthal and V. Shulman, Operator semigroups with quasinilpotent commutators, Proc. Amer. Math. Soc. 128 (2000), 2413–2420.