IMFM Institute of Mathematics, Physics and Mechanics, Jadranska 19, 1000 Ljubljana, Slovenia
Preprint series, Vol. 52 (2014), 1199
ISSN 2232-2094

ON THE SPECTRUM AND THE SPECTRAL MAPPING THEOREM IN MAX ALGEBRA

VLADIMIR MÜLLER, ALJOŠA PEPERKO

Ljubljana, May 20, 2014

Abstract. We give a new description of the spectrum in max algebra of a given non-negative matrix A via local spectral radii and obtain a new block triangular form of A related to its Frobenius normal form. Related results for the usual spectrum of complex matrices and for the distinguished spectrum of non-negative matrices are also obtained. As an application we provide a new proof of the spectral mapping theorem in max algebra and also generalize it to the setting of power series in max algebra. Given a non-negative bounded infinite matrix A, we show that the Bonsall cone spectral radius of the map x ↦ A ⊗ x, with respect to the cone l∞+, is included in its max algebra approximate point spectrum. Moreover, the spectral mapping theorem with respect to the point and approximate point spectrum in max algebra is investigated. The corresponding results for more general max and max-plus type kernel operators and for tropical Bellman operators are obtained.

Math. Subj. Classification (2010): 15A18, 15A80, 15A60, 15B48, 47A60.

Key words: non-negative matrices; max algebra; eigenvalues; distinguished eigenvalues; spectral mapping theorem in max algebra; power series in max algebra; continuity; Bonsall's cone spectral radius; approximate point spectrum; max-plus kernel operators; Bellman operators; tropical algebra

1.
Introduction

The algebraic system max algebra and its isomorphic versions (max-plus algebra, tropical algebra) provide an attractive way of describing a class of non-linear problems appearing for instance in manufacturing and transportation scheduling, information technology, discrete event-dynamic systems, combinatorial optimization, mathematical physics, DNA analysis, ... (see e.g. [12], [6], [17], [7], [10], [33] and the references cited there). The usefulness of max algebra arises from the fact that these non-linear problems become linear when described in the max algebra language. Moreover, max algebra techniques were recently used to solve certain linear algebra problems (see e.g. [15], [30]). In particular, tropical polynomial methods improved the accuracy of the numerical computation of the eigenvalues of a matrix polynomial (see e.g. [1], [2], [18], [19], [3], [11] and the references cited there).

Date: April 18, 2014.

The max algebra consists of the set of non-negative numbers with sum a ⊕ b = max{a, b} and the standard product ab, where a, b ≥ 0. Denote by Ω either the set {1, 2, ..., n} or N. Let A = [A_ij] be a non-negative matrix, i.e., A_ij ≥ 0 for all i, j ∈ Ω. A non-negative matrix A is called bounded if sup{A_ij : i, j ∈ Ω} < ∞. Let R^{n×n} (C^{n×n}) be the set of all n × n real (complex) matrices, R_+^{n×n} the set of all n × n non-negative matrices and M_+^{∞×∞} the set of all non-negative bounded matrices in the case Ω = N. The operations between matrices and vectors in the max algebra are defined by analogy with the usual linear algebra. The product of non-negative bounded matrices A and B in the max algebra is denoted by A ⊗ B, where (A ⊗ B)_ij = sup_{k∈Ω} A_ik B_kj, and the sum A ⊕ B in the max algebra is defined by (A ⊕ B)_ij = max{A_ij, B_ij}. The notation A_⊗^2 means A ⊗ A, and A_⊗^k denotes the k-th max power of A. If x = (x_i)_{i∈Ω} is a non-negative bounded (i.e., ||x|| = sup_{i∈Ω} x_i < ∞) vector, then the notation A ⊗ x means (A ⊗ x)_i = sup_{j∈Ω} A_ij x_j.
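For readers who wish to experiment, the operations just defined are easy to sketch for finite matrices. The following is an illustration only (not part of the paper's results); matrices and vectors are plain nested Python lists:

```python
def max_sum(A, B):
    # (A ⊕ B)_ij = max(A_ij, B_ij)
    return [[max(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def max_prod(A, B):
    # (A ⊗ B)_ij = max_k A_ik * B_kj  (the usual sum is replaced by max)
    return [[max(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def max_apply(A, x):
    # (A ⊗ x)_i = max_j A_ij * x_j
    return [max(a * xj for a, xj in zip(row, x)) for row in A]
```

Note that, unlike the usual matrix product, ⊗ remains associative and distributes over ⊕, which is the sense in which the non-linear problems above become "linear".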
The usual associative and distributive laws hold in this algebra. The role of the spectral radius of A ∈ R_+^{n×n} in max algebra is played by the maximum cycle geometric mean r_⊗(A), which is defined by

(1)  r_⊗(A) = max{(A_{i1,ik} A_{ik,ik-1} ... A_{i3,i2} A_{i2,i1})^{1/k} : k ∈ N and i1, ..., ik ∈ {1, ..., n}}

and is equal to

r_⊗(A) = max{(A_{i1,ik} ... A_{i3,i2} A_{i2,i1})^{1/k} : k ≤ n and i1, ..., ik ∈ {1, ..., n} mutually distinct}.

A digraph G(A) = (N(A), E(A)) associated to A ∈ R_+^{n×n} is defined by setting N(A) = {1, ..., n} and letting (i, j) ∈ E(A) whenever A_ij > 0. When this digraph contains at least one cycle, one distinguishes critical cycles, on which the maximum in (1) is attained. A graph with just one node and no edges will be called trivial. A bit unusually, but in consistency with [12], [13], [20], [14], a matrix A ∈ R_+^{n×n} is called irreducible if G(A) is trivial (A is the 1 × 1 zero matrix) or strongly connected (for each i, j ∈ N(A) there is a path in G(A) that starts in i and ends in j).

There are many different descriptions of the maximum cycle geometric mean r_⊗(A) (see e.g. [16], [12], [31], [32], [30] and the references cited there). It is known that r_⊗(A) is the largest max eigenvalue of A, i.e., r_⊗(A) = max{λ : λ ∈ σ_⊗(A)}, where the spectrum in max algebra σ_⊗(A) is the set of all max eigenvalues λ ≥ 0 for which there exists x ∈ R_+^n, x ≠ 0, with A ⊗ x = λx. Moreover, if A is irreducible, then r_⊗(A) is the unique max eigenvalue and every max eigenvector is positive (see e.g. [7, Theorem 2], [12], [6], [9]). Also, the max version of the Gelfand formula holds for any A ∈ R_+^{n×n}, i.e.,

(2)  r_⊗(A) = lim_{m→∞} ||A_⊗^m||^{1/m}

for an arbitrary vector norm || · || on R^{n×n} (see e.g. [31] and the references cited there). As outlined e.g. in [27], due to (2) a natural generalization (and unification with the usual spectral radius) in infinite dimensions is the Bonsall cone spectral radius, which we recall in Section 4.
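The two descriptions of r_⊗(A) and the Gelfand-type formula (2) can be checked against each other numerically. The following brute-force sketch (an illustration only, feasible for small n) enumerates cycles of mutually distinct indices as in the second description of (1), and approximates the limit in (2) at a finite max power:

```python
from itertools import permutations

def mcgm(A):
    # r⊗(A): maximum over cycles (i1, ..., ik) of distinct indices, k <= n,
    # of the cycle geometric mean (A_{i1 ik} ... A_{i2 i1})^(1/k)
    n = len(A)
    best = 0.0
    for k in range(1, n + 1):
        for cyc in permutations(range(n), k):
            p = 1.0
            for s in range(k):
                p *= A[cyc[(s + 1) % k]][cyc[s]]
            best = max(best, p ** (1.0 / k))
    return best

def gelfand(A, m=40):
    # r⊗(A) = lim_m ||A_⊗^m||^(1/m), approximated at a finite m,
    # with ||.|| taken as the maximum entry
    n = len(A)
    P = [row[:] for row in A]
    for _ in range(m - 1):
        P = [[max(P[i][k] * A[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return max(max(row) for row in P) ** (1.0 / m)
```

For instance, for A = [[0, 2], [8, 0]] the only non-trivial cycle is (1, 2) with geometric mean (8 · 2)^{1/2} = 4, and the finite Gelfand approximation agrees.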
An eigenproblem for max type kernel operators and its isomorphic versions (and an eigenproblem for more general maps) has already received a lot of attention (see e.g. [4], [27], [29], [21], [34] and the references cited there). The results can be applied in different contexts, for instance in optimal control problems (here the max eigenvectors correspond to stationary solutions of the dynamic programming equations and the max eigenvalues correspond to the maximal ergodic rewards per time unit), in the study of discrete event systems, in statistical mechanics, in the study of delay systems, ... (see e.g. [4], [27], [13] and the references cited there). However, beyond the study of the eigenproblem there seems to be a lack of a more general functional analytic spectral theory in max algebra (and, more generally, in idempotent mathematics) in the literature, even though the need for it has already been explicitly expressed by the idempotent community.

The paper is organized as follows. In Section 2 we give a new description of the spectrum in max algebra of a given non-negative matrix A via local spectral radii (Theorem 2.6) and obtain a new block triangular description of A related to its Frobenius normal form. Consequently we provide a new proof of the spectral theorem in max algebra (Corollary 2.9). Related results for the usual spectrum of complex matrices and for the distinguished spectrum of non-negative matrices are also obtained (Proposition 2.11 and Theorem 2.13). In Section 3 we apply the results of Section 2 to obtain a new proof of the spectral mapping theorem in max algebra (Theorem 3.3) and we also generalize it to the setting of power series in max algebra (Theorem 3.7) by applying the continuity properties of the spectrum in max algebra (Proposition 3.6). Given a non-negative bounded infinite matrix A, we introduce in Section 4 the notion of the approximate point spectrum in max algebra.
In particular, we show that the Bonsall cone spectral radius of the map x ↦ A ⊗ x, with respect to the cone l∞+, is included in its max algebra approximate point spectrum (Theorem 4.2). Moreover, the spectral mapping theorem with respect to the point and approximate point spectrum in max algebra is investigated. In particular, we prove that in both cases the spectral mapping theorem is valid for polynomials without an absolute term (Theorem 5.4 and Corollary 5.6). Also the corresponding results for more general max and max-plus type kernel operators and for tropical Bellman operators are obtained.

2. Spectrum in max algebra for n × n matrices

In this section we prove a new description of the spectrum σ_⊗(A) in max algebra and a corresponding block triangular decomposition of a given matrix A ∈ R_+^{n×n}. For x ∈ R_+^n we define the local spectral radius in max algebra at x by

r_x(A) = lim sup_{k→∞} ||A_⊗^k ⊗ x||^{1/k}

(we show later that in fact the limit lim_{k→∞} ||A_⊗^k ⊗ x||^{1/k} always exists). Let e_1, ..., e_n be the standard basis of R^n. Observe that ||A_⊗^k ⊗ e_j|| is the largest entry of the j-th column of the matrix A_⊗^k, i.e.,

||A_⊗^k ⊗ e_j|| = max{A_{jk,jk-1} ... A_{j2,j1} A_{j1,j} : 1 ≤ j1, ..., jk ≤ n}.

The following result describes r_{e_j}(A) for all j ∈ {1, ..., n}.

Lemma 2.1. Let A ∈ R_+^{n×n}, j ∈ {1, ..., n}. Then r_{e_j}(A) is the maximum of all t ≥ 0 with the following property (*): there exist a ≥ 0, b ≥ 1 and mutually distinct indices i_0 := j, i_1, ..., i_a, i_{a+1}, ..., i_{a+b-1} ∈ {1, ..., n} such that

∏_{s=0}^{a-1} A_{i_{s+1},i_s} ≠ 0 and ∏_{s=a}^{a+b-1} A_{i_{s+1},i_s} = t^b,

where we set i_{a+b} = i_a.

Proof. Let t, i_0, ..., i_{a+b-1} satisfy (*). Then

r_{e_j}(A) = lim sup_{k→∞} ||A_⊗^k ⊗ e_j||^{1/k} ≥ lim sup_{m→∞} ||A_⊗^{a+mb} ⊗ e_j||^{1/(a+mb)}
≥ lim sup_{m→∞} (∏_{s=0}^{a-1} A_{i_{s+1},i_s} · (∏_{s=a}^{a+b-1} A_{i_{s+1},i_s})^m)^{1/(a+mb)} = lim sup_{m→∞} (t^{mb} ∏_{s=0}^{a-1} A_{i_{s+1},i_s})^{1/(a+mb)} = t.

Conversely, let c be the maximum of all t with property (*). Then it is not difficult to show that ||A_⊗^k ⊗ e_j|| ≤ ||A||^n · c^{k-n} for all k ∈ N.
Hence r_{e_j}(A) ≤ c. □

The following result follows directly from Lemma 2.1 and the definition (1) of r_⊗(A).

Corollary 2.2. If A ∈ R_+^{n×n}, then r_⊗(A) = max_{j=1,...,n} r_{e_j}(A).

Remark 2.3. Lemma 2.1 states that (using the terminology of e.g. [13], [12]) for each j ∈ {1, ..., n} the radius t = r_{e_j}(A) equals the maximum of the cycle geometric means over those (t-critical) cycles from which the node j is accessible.

Now we could already deduce Theorem 2.6 below by applying the known spectral mapping theorem (see Corollary 2.9 below). However, we choose to prove Theorem 2.6 independently and thus consequently provide a new proof of Corollary 2.9. First we prove the following result.

Theorem 2.4. Let A ∈ R_+^{n×n}, x = (x_1, ..., x_n) ∈ R_+^n, x ≠ 0. Then:
(i) the limit lim_{k→∞} ||A_⊗^k ⊗ x||^{1/k} exists;
(ii) r_x(A) = max{r_{e_j}(A) : 1 ≤ j ≤ n, x_j ≠ 0}.

Proof. We show first that lim_{k→∞} ||A_⊗^k ⊗ e_j||^{1/k} exists for each j = 1, ..., n. By Lemma 2.1 there exist i_0 = j, i_1, ..., i_{a+b-1} satisfying property (*) with t = r_{e_j}(A). Let k ∈ N. Write k = a + db + z with d ∈ N_0 and 0 ≤ z < b. Then

||A_⊗^k ⊗ e_j|| ≥ ∏_{s=0}^{a-1} A_{i_{s+1},i_s} · (∏_{s=a}^{a+b-1} A_{i_{s+1},i_s})^d · ∏_{s=a}^{a+z-1} A_{i_{s+1},i_s}
= ∏_{s=0}^{a-1} A_{i_{s+1},i_s} · t^{db} · ∏_{s=a}^{a+z-1} A_{i_{s+1},i_s}.

It follows that lim inf_{k→∞} ||A_⊗^k ⊗ e_j||^{1/k} ≥ t = r_{e_j}(A), and therefore r_{e_j}(A) = lim_{k→∞} ||A_⊗^k ⊗ e_j||^{1/k} for all j = 1, ..., n. Moreover, for each k ∈ N we have x_j ||A_⊗^k ⊗ e_j|| ≤ ||A_⊗^k ⊗ x|| ≤ (max_j x_j) · max{||A_⊗^k ⊗ e_j|| : x_j ≠ 0}, and the nonzero constants x_j disappear after taking k-th roots and letting k → ∞. So

lim_{k→∞} ||A_⊗^k ⊗ x||^{1/k} = lim_{k→∞} max{||A_⊗^k ⊗ e_j||^{1/k} : x_j ≠ 0} = max{lim_{k→∞} ||A_⊗^k ⊗ e_j||^{1/k} : x_j ≠ 0} = max{r_{e_j}(A) : 1 ≤ j ≤ n, x_j ≠ 0},

which completes the proof. □

A set C ⊆ R_+^n is called a max cone if x ⊕ y ∈ C and λx ∈ C for all x, y ∈ C and λ ≥ 0. For a set S ⊆ R_+^n we denote by ∨S the max cone generated by S, i.e., ∨S is the set of all x ∈ R_+^n for which there exist k = k(x) ∈ N, s_1, ..., s_k ∈ S and λ_1, ..., λ_k ≥ 0 such that x = λ_1 s_1 ⊕ ... ⊕ λ_k s_k. A max cone C is invariant for A ∈ R_+^{n×n} if A ⊗ x ∈ C for all x ∈ C. We have the following result.
Corollary 2.5. Let A ∈ R_+^{n×n}, c ≥ 0. Then {x ∈ R_+^n : r_x(A) ≤ c} = ∨{e_j : r_{e_j}(A) ≤ c} is a max cone invariant for A.

Proof. The equality follows from the previous theorem. Clearly r_{A⊗x}(A) = r_x(A) for all x ∈ R_+^n, so the max cone is invariant for A. □

Now we are in a position to prove the following description of the spectrum in max algebra.

Theorem 2.6. If A ∈ R_+^{n×n}, then

σ_⊗(A) = {t : there exists j ∈ {1, ..., n} with t = r_{e_j}(A)}.

Proof. If t ∈ σ_⊗(A), then there exists a nonzero x ∈ R_+^n such that A ⊗ x = tx. So r_x(A) = lim_{k→∞} ||A_⊗^k ⊗ x||^{1/k} = t. By Theorem 2.4 there exists j ∈ {1, ..., n} with r_{e_j}(A) = t.

Conversely, let j ∈ {1, ..., n} and c := r_{e_j}(A). Let M = ∨{e_s : r_{e_s}(A) ≤ c}. By Corollary 2.5 the max cone M is invariant for A. For the restriction A|M of A to M we have, by Corollary 2.2, that r_⊗(A|M) = max{r_{e_s}(A|M) : e_s ∈ M} = c. So there exists a max Perron-Frobenius eigenvector x ∈ M ⊆ R_+^n with A ⊗ x = cx. □

Remark 2.7. In [27] the equality r_⊗(A) = max{r_x(A) : x ∈ R_+^n} is generalized to the setting of cone preserving maps in Banach spaces.

It follows from the proof of Theorem 2.6 that there exists a permutation matrix P such that the matrix P^T A P equals

(3)
[ A_d  0        0        ...  0    0   ]
[ *    A_{d-1}  0        ...  0    0   ]
[ *    *        A_{d-2}  ...  0    0   ]
[ *    *        *        ...  A_2  0   ]
[ *    *        *        ...  *    A_1 ]

where A_1, ..., A_d are square matrices, d is the cardinality of σ_⊗(A) and r_⊗(A_1) < r_⊗(A_2) < ... < r_⊗(A_d) = r_⊗(A) are the max eigenvalues of the matrix A. Moreover, for each i = 1, ..., d the set M_i = ∨{e_j : r_{e_j}(A) ≤ r_⊗(A_i)} is a max cone invariant for A such that r_⊗(A|M_i) = r_⊗(A_i).

In what follows we describe the relation of our results to known results in terms of a Frobenius normal form of a matrix A (see e.g. [13], [12], [20], [14] and the references cited there). Each A_i for i = 1, ..., d can be transformed by simultaneous permutations of the rows and columns to a Frobenius normal form (FNF) ([8], [12], [13], [20])

[ A_{l_i}^{[i]}  0              ...  0          ]
[ *              A_{l_i-1}^{[i]} ... 0          ]
[ *              *              ...  A_1^{[i]}  ]
where A_1^{[i]}, ..., A_{l_i}^{[i]} are irreducible square submatrices of A_i. This gives a FNF of the matrix A, denoted by

(4)
[ B_l  0        ...  0   ]
[ *    B_{l-1}  ...  0   ]
[ *    *        ...  B_1 ]

For example, the decomposition (3) of the identity matrix I = [1 0; 0 1] has only one block I, but its FNF has two blocks equal to [1]. In general, the diagonal blocks in a FNF are determined uniquely (up to a simultaneous permutation of their rows and columns); however, their order is not determined uniquely.

The matrices B_1, ..., B_l from (4) correspond to the sets of nodes N_1, ..., N_l of the strongly connected components of the digraph G(A) = (N(A), E(A)). Note that in (4) an edge from a node of N_μ to a node of N_ν in G(A) may exist only if μ ≤ ν. The reduced graph, denoted by R(A), is the digraph whose nodes correspond to N_μ for μ = 1, ..., l and whose set of edges is {(μ, ν) : there exist k ∈ N_μ and j ∈ N_ν such that A_kj > 0}. By a class of A we mean a node μ (or also the corresponding set N_μ) of the reduced graph R(A). A class μ is trivial if B_μ is the 1 × 1 zero matrix. Class μ accesses class ν, denoted μ → ν, if μ = ν or if there exists a μ-ν path in R(A) (a path that starts in μ and ends in ν). A node j of G(A) is accessed by a class μ, denoted by μ → j, if j belongs to a class ν such that μ → ν.

The following result, which describes the max eigenvalues r_{e_j}(A) via the access relation, follows from Lemma 2.1.

Corollary 2.8. Let A ∈ R_+^{n×n} and let B_1, ..., B_l be from (4). Then we have

r_{e_j}(A) = max{r_⊗(B_μ) : μ → j}

for all j = 1, ..., n. For each j = 1, ..., n we have r_{e_j}(A) = r_⊗(B_ν) for some class ν (but not vice versa in general).

Now the following result (sometimes called the spectral theorem in max algebra; [13, Theorem 3.1], [12, Theorem 4.5.4], [20, Corollary 4.2(i)], [17], [10], [9]) follows from Corollary 2.8 and Theorem 2.6.

Corollary 2.9. Let A ∈ R_+^{n×n} and let B_1, ..., B_l be from (4). Then

σ_⊗(A) = {r_⊗(B_ν) : r_⊗(B_ν) = max{r_⊗(B_μ) : μ → ν}}.
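The access-relation description is effectively computable: find the strongly connected components of G(A), attach to each class the cycle geometric mean of its diagonal block, and propagate maxima along the access relation. The following brute-force sketch is an illustration only (naive cycle enumeration and a Boolean transitive closure, feasible for small n; it is not the paper's algorithm):

```python
from itertools import permutations

def mcgm(A):
    # maximum cycle geometric mean of a small matrix, by enumeration
    n = len(A)
    best = 0.0
    for k in range(1, n + 1):
        for cyc in permutations(range(n), k):
            p = 1.0
            for s in range(k):
                p *= A[cyc[(s + 1) % k]][cyc[s]]
            best = max(best, p ** (1.0 / k))
    return best

def spectrum_by_classes(A):
    # r_{e_j}(A) = max{ r⊗(B_mu) : class mu accesses node j }, and
    # sigma_⊗(A) = { r⊗(B_nu) : nu is a spectral class }
    n = len(A)
    reach = [[i == j or A[i][j] > 0 for j in range(n)] for i in range(n)]
    for k in range(n):                      # Boolean transitive closure
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    comp = [frozenset(j for j in range(n) if reach[i][j] and reach[j][i])
            for i in range(n)]              # strongly connected component of i
    classes = set(comp)
    rad = {C: mcgm([[A[i][j] for j in sorted(C)] for i in sorted(C)])
           for C in classes}
    r_ej = [max(rad[C] for C in classes if any(reach[i][j] for i in C))
            for j in range(n)]
    spec = {rad[C] for C in classes
            if rad[C] == max(rad[D] for D in classes
                             if any(reach[i][j] for i in D for j in C))}
    return r_ej, spec
```

On a small lower block triangular test matrix the two outputs also confirm Corollary 2.2, max_j r_{e_j}(A) = r_⊗(A).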
If r_⊗(B_ν) = max{r_⊗(B_μ) : μ → ν} is satisfied, then the class ν is called spectral. Thus r_⊗(B_ν) ∈ σ_⊗(A) if the class ν is spectral (but not necessarily vice versa, as is well known). The following example illustrates the results obtained above.

Example 2.10. Let

A =
[ 2 0 0 0 0 ]
[ 1 3 0 0 0 ]
[ 0 0 1 3 0 ]
[ 0 0 1 2 0 ]
[ 0 0 0 0 1 ]

Then A is in the forms (3) and (4), where

A_3 = [2 0; 1 3], A_2 = [1 3; 1 2], A_1 = B_1 = [1], B_4 = [2], B_3 = [3], B_2 = A_2.

The spectral classes correspond to B_3, B_2, B_1, and r_{e_1}(A) = r_{e_2}(A) = 3, r_{e_3}(A) = r_{e_4}(A) = 2 and r_{e_5}(A) = 1.

To conclude this section we state some related linear algebra results for the spectrum of complex matrices and for the distinguished spectrum of non-negative matrices (see e.g. [20], [14]). First we describe the set {|λ| : λ ∈ σ(A)} of a given matrix A ∈ C^{n×n}, where σ(A) denotes the usual spectrum of A, via the usual local spectral radii. Recall that for x ∈ C^n the local spectral radius ρ_x(A) at x is defined by

ρ_x(A) = lim sup_{k→∞} ||A^k x||^{1/k},

where A^k denotes the usual power of A.

Proposition 2.11. If A ∈ C^{n×n}, then

{|λ| : λ ∈ σ(A)} = {t : there exists x ∈ C^n with t = ρ_x(A)}.

Proof. Since the inclusion ⊆ is obvious, it remains to prove the reverse inclusion. For a given x ∈ C^n denote ρ_x(A) = t, and let M denote the linear span of the set {y : ρ_y(A) ≤ t}. Then M is an invariant subspace and the usual spectral radius satisfies ρ(A|M) = t. Thus there exists λ ∈ σ(A) such that |λ| = t. This completes the proof. □

Proposition 2.11 implies the following known result (see e.g. [27]).

Corollary 2.12. If A ∈ C^{n×n}, then the usual spectral radius satisfies ρ(A) = max{ρ_x(A) : x ∈ C^n}.

Recall that λ ∈ σ(A) of A ∈ R_+^{n×n} is called a distinguished eigenvalue if there exists x ∈ R_+^n, x ≠ 0, such that Ax = λx. Let us denote the set of all distinguished eigenvalues of A by σ_D(A), which is a non-empty set since ρ(A) ∈ σ_D(A). The following result is an
analogue of Theorem 2.6 in non-negative linear algebra (linear algebra over the semiring of non-negative reals equipped with the usual sum and product).

Theorem 2.13. If A ∈ R_+^{n×n}, then σ_D(A) = {t : there exists j ∈ {1, ..., n} with t = ρ_{e_j}(A)}.

Proof. ⊆: If Ax = λx for some x ≥ 0, x ≠ 0, then λ = ρ_x(A). It is easy to see that ρ_x(A) = max{ρ_{e_j}(A) : x_j ≠ 0}, which proves the inclusion ⊆.

⊇: Let t = ρ_{e_j}(A) for some j = 1, ..., n. Let M denote the subcone generated by the set {e_i : ρ_{e_i}(A) ≤ t}. Then AM ⊆ M and ρ(A|M) = t. Since ρ(A|M) = max{ρ_x(A|M) : x ∈ M}, there exists x ∈ M, x ≠ 0, such that Ax = tx. This completes the proof. □

Remark 2.14. From the proof of Theorem 2.13 we can obtain a block triangular decomposition of A ∈ R_+^{n×n} which is an analogue of (3), and consequently a FNF of A, in a similar manner as above. We omit the details. There are several similarities, but also some differences, in the descriptions of σ_⊗(A) and σ_D(A) via the access relation (we refer the reader to [20], [14], [13], [12]).

3. Spectral mapping theorem in max algebra for n × n matrices

The spectral mapping theorem for polynomials (see (5) below) in max algebra was established in [20, Theorem 3.6]. Applying Theorem 2.6 we give a different proof of this result (Theorem 3.3). Moreover, we extend it to power series in max algebra (Theorem 3.7).

Let P+ be the set of all polynomials with non-negative coefficients,

P+ = {p = Σ_{j=0}^{deg p} α_j z^j : α_j ≥ 0, j = 0, ..., deg p}.

For p, q ∈ P+, p = Σ_{j=0}^{deg p} α_j z^j, q = Σ_{j=0}^{deg q} β_j z^j, define p ⊕ q, p ⊗ q ∈ P+ by

p ⊕ q = Σ_{j=0}^{max{deg p, deg q}} max{α_j, β_j} z^j,

p ⊗ q = Σ_{j=0}^{deg p + deg q} max{α_i β_{j-i} : 0 ≤ i ≤ j} z^j.

These algebraic operations on P+ satisfy the familiar laws. The proof of the following result is elementary and we omit it.

Proposition 3.1. Let p, q, h ∈ P+. Then we have
(i) p ⊕ p = p,
(ii) p ⊕ q = q ⊕ p,
(iii) (p ⊕ q) ⊕ h = p ⊕ (q ⊕ h),
(iv) p ⊗ q = q ⊗ p,
(v) (p ⊗ q) ⊗ h = p ⊗ (q ⊗ h),
(vi) p ⊗ (q ⊕ h) = (p ⊗ q) ⊕ (p ⊗ h).
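The operations ⊕ and ⊗ on P+ act directly on coefficient lists. A minimal sketch (illustration only; a polynomial is a Python list of coefficients indexed by degree):

```python
def poly_max_sum(p, q):
    # (p ⊕ q)_j = max(alpha_j, beta_j), the shorter list padded with 0
    d = max(len(p), len(q))
    p = p + [0.0] * (d - len(p))
    q = q + [0.0] * (d - len(q))
    return [max(a, b) for a, b in zip(p, q)]

def poly_max_prod(p, q):
    # (p ⊗ q)_j = max{ alpha_i * beta_{j-i} : 0 <= i <= j }
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] = max(out[i + j], a * b)
    return out
```

For example, p = 1 ⊕ 2z and q = 3 ⊕ z^2 give p ⊕ q = 3 ⊕ 2z ⊕ z^2 and p ⊗ q = 3 ⊕ 6z ⊕ z^2 ⊕ 2z^3; the idempotency p ⊕ p = p of Proposition 3.1(i) is immediate from the coordinatewise maximum.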
Let p ∈ P+, p = Σ_{j=0}^{deg p} α_j z^j, and t ≥ 0. Write p_⊗(t) = max{α_j t^j : 0 ≤ j ≤ deg p}. For A ∈ R_+^{n×n} define

p_⊗(A) = ⊕_{j=0}^{deg p} α_j A_⊗^j.

The proof of the following fact is again straightforward and is omitted.

Proposition 3.2. Let p, q ∈ P+ and A ∈ R_+^{n×n}. Then
(i) (p ⊕ q)_⊗(A) = p_⊗(A) ⊕ q_⊗(A),
(ii) (p ⊗ q)_⊗(A) = p_⊗(A) ⊗ q_⊗(A).

Thus the mapping p ↦ p_⊗(A) naturally defines a polynomial functional calculus for matrices A ∈ R_+^{n×n}. Let A ∈ R_+^{n×n} and p ∈ P+. It follows from [20, Theorem 3.6] that

(5)  σ_⊗(p_⊗(A)) = p_⊗(σ_⊗(A)).

In [20] the equality (5) was deduced from the existence theorem for common max eigenvectors of commuting matrices in max algebra ([20, Theorem 3.5], [12]) applied to the matrices A and p_⊗(A). Applying the results of the previous section we give a different proof of (5).

Theorem 3.3. Let A ∈ R_+^{n×n} and p ∈ P+. Then

(6)  r_⊗(p_⊗(A)) = p_⊗(r_⊗(A))

and

(7)  σ_⊗(p_⊗(A)) = p_⊗(σ_⊗(A)).

Proof. Let t ∈ σ_⊗(A). There exists x ∈ R_+^n, x ≠ 0, with A ⊗ x = tx. So A_⊗^j ⊗ x = t^j x for all j, and so p_⊗(A) ⊗ x = p_⊗(t)x. It follows that p_⊗(σ_⊗(A)) ⊆ σ_⊗(p_⊗(A)), and so p_⊗(r_⊗(A)) ≤ r_⊗(p_⊗(A)).

Let us denote b = r_⊗(p_⊗(A)). To prove (6) it remains to show that p_⊗(r_⊗(A)) ≥ b if b > 0. Let us consider a critical cycle for p_⊗(A), i.e., let i_0, i_1, ..., i_{k-1}, i_k = i_0 ∈ {1, ..., n} be such that

∏_{s=0}^{k-1} (p_⊗(A))_{i_{s+1},i_s} = b^k.

For s = 0, ..., k-1 we have

(p_⊗(A))_{i_{s+1},i_s} = max{(α_j A_⊗^j)_{i_{s+1},i_s} : 0 ≤ j ≤ deg p}.

Choose j(s) ∈ {0, 1, ..., deg p} such that (p_⊗(A))_{i_{s+1},i_s} = (α_{j(s)} A_⊗^{j(s)})_{i_{s+1},i_s}. Since i_{s+1} ≠ i_s we have j(s) ≠ 0. Choose i_{s,0} = i_s, i_{s,1}, ..., i_{s,j(s)} = i_{s+1} such that

(A_⊗^{j(s)})_{i_{s+1},i_s} = ∏_{m=0}^{j(s)-1} A_{i_{s,m+1},i_{s,m}}.

Consider the cycle i_0 = i_{0,0}, i_{0,1}, ..., i_{0,j(0)-1}, i_{0,j(0)} = i_1 = i_{1,0}, i_{1,1}, ..., i_{1,j(1)} = i_2, i_{2,1}, ..., i_k = i_0 of length Σ_{s=0}^{k-1} j(s).
We have

r_⊗(A)^{Σ_{s=0}^{k-1} j(s)} ≥ ∏_{s=0}^{k-1} ∏_{m=0}^{j(s)-1} A_{i_{s,m+1},i_{s,m}} = ∏_{s=0}^{k-1} (A_⊗^{j(s)})_{i_{s+1},i_s} = ∏_{s=0}^{k-1} (p_⊗(A))_{i_{s+1},i_s} / α_{j(s)} = b^k / ∏_{s=0}^{k-1} α_{j(s)}.

Hence

∏_{s=0}^{k-1} (α_{j(s)} r_⊗(A)^{j(s)}) ≥ b^k,

and since α_{j(s)} r_⊗(A)^{j(s)} ≤ p_⊗(r_⊗(A)) for each s, we get p_⊗(r_⊗(A))^k ≥ b^k, which proves (6).

It remains to prove that σ_⊗(p_⊗(A)) ⊆ p_⊗(σ_⊗(A)). Suppose on the contrary that there exists s ∈ σ_⊗(p_⊗(A)) \ p_⊗(σ_⊗(A)). We will show that this implies r_{e_j}(p_⊗(A)) ≠ s for all j ∈ {1, ..., n}, which contradicts Theorem 2.6. Let L = {j ∈ {1, ..., n} : p_⊗(r_{e_j}(A)) < s} and X_L = ∨{e_j : j ∈ L}. For j ∉ L we have p_⊗(r_{e_j}(A)) > s by Theorem 2.6. Therefore

r_{e_j}(p_⊗(A)) ≥ max{r_{e_j}(α_m A_⊗^m) : 0 ≤ m ≤ deg p} ≥ max{α_m (r_{e_j}(A))^m : 0 ≤ m ≤ deg p} = p_⊗(r_{e_j}(A)) > s.

On the other hand, X_L is invariant for A. So for j ∈ L,

r_{e_j}(p_⊗(A)) ≤ r_⊗(p_⊗(A|X_L)) = p_⊗(r_⊗(A|X_L)) < s

by (6) and Theorem 2.6. So r_{e_j}(p_⊗(A)) ≠ s for all j ∈ {1, ..., n}, and so s ∉ σ_⊗(p_⊗(A)) by Theorem 2.6. This contradiction completes the proof. □

Remark 3.4. Alternatively, the inequality p_⊗(r_⊗(A)) ≥ r_⊗(p_⊗(A)) can also be proved in the following way. It follows from [12, Theorems 5.3.4 and 5.3.2] that

(8)  r_⊗(A) = max{maper(B)^{1/k} : B ∈ P_k(A), k = 1, ..., n},

where P_k(A) is the set of all principal submatrices of A of order k. The permanent maper(B) in max algebra is defined by

maper(B) = max_{σ∈P_k} B_{1σ(1)} ... B_{kσ(k)},

where P_k denotes the set of all permutations of the set {1, 2, ..., k}. By (8) there exist k ∈ {1, ..., n}, σ ∈ P_k and j(r) ∈ {1, ..., deg p} for each r ∈ {1, ..., k} such that

r_⊗(p_⊗(A))^k = α_{j(1)} ... α_{j(k)} ∏_{r=1}^{k} (A_⊗^{j(r)})_{r,σ(r)}.

Since σ is a product of cyclic permutations, it follows that

r_⊗(p_⊗(A))^k ≤ α_{j(1)} ... α_{j(k)} r_⊗(A)^{j(1)+...+j(k)}.

Thus there exists j(r) ∈ {1, ..., deg p} such that

r_⊗(p_⊗(A)) ≤ α_{j(r)} r_⊗(A)^{j(r)} ≤ p_⊗(r_⊗(A)),

which reproves the desired inequality. A reformulation of the equality (6) therefore asserts that
max_{B∈P_k(p_⊗(A)), k=1,...,n} maper(B)^{1/k} = p_⊗( max_{C∈P_l(A), l=1,...,n} maper(C)^{1/l} ).

In what follows we generalize Theorem 3.3 by considering power series instead of polynomials. Let A+ = {f = Σ_{j=0}^∞ α_j z^j : α_j ≥ 0, j = 0, 1, ...}. For f, g ∈ A+, f = Σ_{j=0}^∞ α_j z^j, g = Σ_{j=0}^∞ β_j z^j, define

f ⊕ g = Σ_{j=0}^∞ max{α_j, β_j} z^j,

f ⊗ g = Σ_{j=0}^∞ max{α_i β_{j-i} : 0 ≤ i ≤ j} z^j.

These operations satisfy the same rules as for polynomials, i.e., an analogue of Proposition 3.1 holds for the set A+. For f ∈ A+, f = Σ_{j=0}^∞ α_j z^j, write R_f = lim inf_{j→∞} α_j^{-1/j}. For 0 ≤ t < R_f write f_⊗(t) = sup{α_j t^j : j = 0, 1, ...} (note that for t < R_f we have sup_j α_j t^j < ∞). Let A ∈ R_+^{n×n}, f = Σ_{j=0}^∞ α_j z^j ∈ A+, r_⊗(A) < R_f. Define

f_⊗(A) = ⊕_{j=0}^∞ α_j A_⊗^j.

Since r_⊗(A) = lim_{j→∞} ||A_⊗^j||^{1/j}, this definition makes sense, and f ↦ f_⊗(A) defines an analytic functional calculus for A ∈ R_+^{n×n} with properties analogous to those of the polynomial functional calculus (an analogue of Proposition 3.2 holds for the set A+).

The spectral mapping theorem remains valid for power series, as we will prove below in Theorem 3.7. For f ∈ A+, f = Σ_{j=0}^∞ α_j z^j, and k ∈ N denote by S_k the k-th partial sum S_k = Σ_{j=0}^k α_j z^j. Clearly, for A ∈ R_+^{n×n} with r_⊗(A) < R_f we have f_⊗(A) = lim_{k→∞} (S_k)_⊗(A). Moreover, the sequence of partial sums is non-decreasing, (S_0)_⊗(A) ≤ (S_1)_⊗(A) ≤ ... .

It is known that the spectral radius in max algebra A ↦ r_⊗(A) is continuous on R_+^{n×n} (see e.g. [30], [33], [26], [22]). However, as the following example shows, the spectrum in max algebra A ↦ σ_⊗(A) is in general not continuous.

Example 3.5. Let

A_k = [1 0; k^{-1} 2] and A = [1 0; 0 2].

Then σ_⊗(A_k) = {2} for all k ∈ N, ||A_k - A|| → 0 as k → ∞, and σ_⊗(A) = {1, 2}.

The next result summarizes the continuity properties of the spectrum in max algebra.

Proposition 3.6. (i) The spectrum in max algebra A ↦ σ_⊗(A) is upper semi-continuous on R_+^{n×n}.
(ii) If A, A_k ∈ R_+^{n×n}, ||A_k - A|| → 0 as k → ∞ and A_1 ≤ A_2 ≤ ..., then σ_⊗(A_k) → σ_⊗(A).

Proof. (i) Let A, A_k ∈ R_+^{n×n} be such that ||A_k - A|| → 0 as k → ∞. Let λ_k ∈ σ_⊗(A_k), λ_k → λ.
For each k ∈ N there exists an eigenvector x_k ∈ R_+^n such that ||x_k|| = 1 and A_k ⊗ x_k = λ_k x_k. By a compactness argument there exists a convergent subsequence of the sequence (x_k). Clearly its limit x satisfies ||x|| = 1 and A ⊗ x = λx, which proves (i).

(ii) Let A, A_k ∈ R_+^{n×n} be such that ||A_k - A|| → 0 as k → ∞ and A_1 ≤ A_2 ≤ ... . Let λ ∈ σ_⊗(A). By Theorem 2.6 there exists j ∈ {1, ..., n} such that λ = r_{e_j}(A). Clearly there exists k_0 such that for all k ≥ k_0 and i, l ∈ {1, ..., n},

(A_k)_{il} ≠ 0 if and only if A_{il} ≠ 0.

Then λ_k = r_{e_j}(A_k) ∈ σ_⊗(A_k) and λ_k = r_{e_j}(A_k) → r_{e_j}(A) = λ as k → ∞, which completes the proof. □

Now the spectral mapping theorem for power series in max algebra follows readily from Theorem 3.3 and Proposition 3.6(ii).

Theorem 3.7. Let f ∈ A+ and A ∈ R_+^{n×n} be such that r_⊗(A) < R_f. Then

σ_⊗(f_⊗(A)) = f_⊗(σ_⊗(A)) and so r_⊗(f_⊗(A)) = f_⊗(r_⊗(A)).

4. Approximate point spectrum in max algebra and Bonsall's cone spectral radius

Recall that by M_+^{∞×∞} we denote the set of all non-negative bounded infinite matrices (i.e., ||A|| = sup_{i,j∈N} A_ij < ∞ for all A ∈ M_+^{∞×∞}). Similarly, l∞+ denotes the cone (and also max cone) of non-negative bounded sequences. For x, y ∈ l∞+ we write ||x - y|| = sup_{i∈N} |x_i - y_i|. Let A ∈ M_+^{∞×∞}. It is not hard to see that ||A ⊗ x - A ⊗ y|| ≤ ||A|| · ||x - y|| for all x, y ∈ l∞+. The map g_A : l∞+ → l∞+, g_A : x ↦ A ⊗ x, is therefore a well defined continuous max linear map. We also have

||A|| = sup{||A ⊗ x|| : ||x|| ≤ 1, x ∈ l∞+} = sup{||A ⊗ x|| / ||x|| : x ≠ 0, x ∈ l∞+} = sup_{j∈N} ||A ⊗ e_j||.

Since the map g_A : l∞+ → l∞+ is monotone, positively homogeneous and continuous, the spectral radius of A ∈ M_+^{∞×∞} in max algebra,

r_⊗(A) := lim_{k→∞} ||A_⊗^k||^{1/k} = inf_{k∈N} ||A_⊗^k||^{1/k},

by definition equals the Bonsall cone spectral radius r_⊗(g_A) of the map g_A with respect to the cone l∞+ (see e.g. [27], [5], [28], [22]). Let A ∈ M_+^{∞×∞} and x ∈ l∞+.
As in the finite-dimensional case we define the local spectral radius at x by r_x(A) = lim sup_{k→∞} ||A_⊗^k ⊗ x||^{1/k} (in general the limit does not exist). Clearly r_x(A) ≤ r_⊗(A) for all x ∈ l∞+. Moreover, r_y(A) = r_⊗(A) for y = (1, 1, ...). Let {e_j : j ∈ N} be the standard basis vectors. The following example shows that in general sup_j r_{e_j}(A) ≠ r_⊗(A), and so Theorem 2.6 is not true for infinite matrices.

Example 4.1. Consider the direct sum A = ⊕_{n=1}^∞ S_n, where S_n is the n-dimensional left shift, i.e., S_n ⊗ e_1 = 0 and S_n ⊗ e_j = e_{j-1} for all 2 ≤ j ≤ n. Then r_{e_j}(A) = 0 for each basis element e_j, but r_⊗(A) = 1.

The point spectrum in max algebra of A ∈ M_+^{∞×∞} is the set of all t ≥ 0 for which there exists x ∈ l∞+, ||x|| = 1, with A ⊗ x = tx. The approximate point spectrum is the set of all t ≥ 0 for which there exists a sequence (x_k) ⊆ l∞+ of unit vectors such that lim_{k→∞} ||A ⊗ x_k - tx_k|| = 0. We denote the point and the approximate point spectrum in max algebra simply by σ_p(A) and σ_ap(A), respectively. Obviously σ_p(A) ⊆ σ_ap(A). Also, σ_ap(A) is always closed and nonempty, which follows from Theorem 4.2. It is proved in [27] that under certain compactness assumptions we have r_⊗(A) = max{t : t ∈ σ_p(A)}. Next we show that we always have r_⊗(A) = max{t : t ∈ σ_ap(A)}.

Theorem 4.2. Let A ∈ M_+^{∞×∞} and let sup{r_{e_j}(A) : j ∈ N} ≤ t ≤ r_⊗(A). Then t ∈ σ_ap(A). In particular, r_⊗(A) ∈ σ_ap(A). Moreover, r_⊗(A) = max{t : t ∈ σ_ap(A)}.

Proof. First we prove that r_⊗(A) ≥ t for all t ∈ σ_ap(A). If t ∈ σ_ap(A), then there exists a sequence (x_k) of unit vectors such that lim_{k→∞} ||A ⊗ x_k - tx_k|| = 0. By induction it follows that lim_{k→∞} ||A_⊗^j ⊗ x_k - t^j x_k|| = 0 for all j ∈ N. Indeed,

||A_⊗^j ⊗ x_k - t^j x_k|| ≤ ||A_⊗^j ⊗ x_k - A_⊗^{j-1} ⊗ (t x_k)|| + ||A_⊗^{j-1} ⊗ (t x_k) - t^j x_k||
≤ ||A_⊗^{j-1}|| · ||A ⊗ x_k - t x_k|| + t ||A_⊗^{j-1} ⊗ x_k - t^{j-1} x_k|| → 0 as k → ∞

by the induction assumption. It follows that ||A_⊗^j|| ≥ lim_{k→∞} ||A_⊗^j ⊗ x_k|| = t^j, and so r_⊗(A) ≥ t.

Let sup{r_{e_j}(A) : j ∈ N} ≤ t ≤ r_⊗(A) and ε > 0.
We will construct u ∈ l∞+, ||u|| = 1, such that ||A ⊗ u - tu|| ≤ ε.

If t = 0, then r_{e_j}(A) = lim_{k→∞} ||A_⊗^k ⊗ e_j||^{1/k} = 0 for all j ∈ N. Let ε > 0. Then there exists n ≥ 0 such that A_⊗^n ⊗ e_1 ≠ 0 and ||A_⊗^{n+1} ⊗ e_1|| ≤ ε ||A_⊗^n ⊗ e_1||. So it is sufficient to take u = A_⊗^n ⊗ e_1 / ||A_⊗^n ⊗ e_1||. Then ||u|| = 1 and ||A ⊗ u|| ≤ ε. Hence 0 ∈ σ_ap(A).

So without loss of generality we may assume that t = 1. Clearly ||A_⊗^j|| ≥ r_⊗(A)^j ≥ 1 for all j. Let K = ||A||. Let ε > 0. Choose m_0 ∈ N such that (1 + ε)^{m_0} ≥ 2ε^{-1} and n_0 ∈ N such that (1 + ε)^{n_0} > 2K^{m_0}(1 + ε)^{m_0}. Let i_0 ∈ N and n ≥ n_0 satisfy ||A_⊗^n ⊗ e_{i_0}|| ≥ 1/2. For k ≥ 0 set a_k = ||A_⊗^k ⊗ e_{i_0}||.

Claim. There exist numbers β_k ≥ a_k and m ≥ m_0 such that β_m = a_m, β_0 ≥ ε^{-1} and

|β_k^{-1} - β_{k+1}^{-1}| ≤ ε / max{β_k, β_{k+1}} for all k ≥ 0.

Proof of the Claim. Set γ_k = a_n(1 + ε)^{|k-n|}. Then γ_n = a_n. Let m satisfy a_m/γ_m = max_k {a_k/γ_k} (such an m exists since lim_{k→∞} a_k/γ_k = 0). In particular we have a_m/γ_m ≥ a_n/γ_n = 1, so a_m ≥ a_n ≥ 1/2.

Set β_k = a_m(1 + ε)^{|k-m|}. Clearly β_m = a_m. For each k ≥ 0 we have {β_k, β_{k+1}} = {a_m(1 + ε)^r, a_m(1 + ε)^{r+1}} for some r (r = min{|k - m|, |k + 1 - m|}), and so

|β_k^{-1} - β_{k+1}^{-1}| = 1/(a_m(1 + ε)^r) - 1/(a_m(1 + ε)^{r+1}) = ε/(a_m(1 + ε)^{r+1}) = ε / max{β_k, β_{k+1}}.

We show that m ≥ m_0. Suppose the contrary. We have a_m ≤ K^m and γ_m = a_n(1 + ε)^{n-m}, so

a_m/γ_m ≤ K^{m_0}(1 + ε)^{m_0} / (a_n(1 + ε)^{n_0}) ≤ 2K^{m_0}(1 + ε)^{m_0} / (1 + ε)^{n_0} < 1,

a contradiction. So m ≥ m_0 and β_0 = a_m(1 + ε)^m ≥ (1 + ε)^{m_0}/2 ≥ ε^{-1}.

Finally, for each k we have a_k/γ_k ≤ a_m/γ_m. So

a_k ≤ (a_m/γ_m) γ_k = a_m(1 + ε)^{|k-n| - |m-n|} ≤ a_m(1 + ε)^{|k-m|} = β_k.

Continuation of the proof of Theorem 4.2. Let u = ⊕_{k≥0} β_k^{-1} A_⊗^k ⊗ e_{i_0}. Since β_k ≥ a_k for all k and β_m = a_m, we have ||u|| = 1. We show that ||A ⊗ u - u|| ≤ ε. Fix j ∈ N. We have

u_j = max_{k≥0, i_1,...,i_{k-1}∈N} β_k^{-1} A_{j,i_{k-1}} A_{i_{k-1},i_{k-2}} ... A_{i_1,i_0}

and

(A ⊗ u)_j = max_{k≥1, i_1,...,i_{k-1}∈N} β_{k-1}^{-1} A_{j,i_{k-1}} A_{i_{k-1},i_{k-2}} ... A_{i_1,i_0}.

For k ≥ 1 and i_1, ..., i_{k-1} ∈ N we have

|β_k^{-1} - β_{k-1}^{-1}| A_{j,i_{k-1}} ... A_{i_1,i_0} ≤ (ε / max{β_{k-1}, β_k}) a_k ≤ ε a_k / β_k ≤ ε.

For k = 0 we have β_0^{-1}(e_{i_0})_j ≤ β_0^{-1} ≤ ε. So ||A ⊗ u - u|| = sup_j |(A ⊗ u - u)_j| ≤ ε. Hence 1 ∈ σ_ap(A).
□

Remark 4.3. In the proof of Theorem 4.2 we can clearly take a finite max sum in the definition of u, so we can take u with finite support. Therefore we can conclude, for example, that u ∈ c_0. Note that for the left shift A from Example 4.1 we have σ_p(A) = σ_ap(A) = [0, 1]. On the other hand, for its restriction A|c_0 to c_0 we have σ_p(A|c_0) = [0, 1) and σ_ap(A|c_0) = [0, 1].

We conclude this section with some remarks on Corollary 2.5 and Theorem 2.6 in the infinite setting. A max cone C ⊆ l∞+ is called σ-complete if ⊕_{n∈N} x_n ∈ C for all x_n ∈ C (n ∈ N). For a set S ⊆ l∞+ we denote by ∨_σ S the σ-complete max cone generated by S, i.e., ∨_σ S is the set of all x ∈ l∞+ for which there exist s_1, s_2, ... ∈ S and α_1, α_2, ... ≥ 0 such that x = ⊕_{n∈N} α_n s_n. The proof of the following result is straightforward and is omitted.

Proposition 4.4. Let A ∈ M_+^{∞×∞}, c ≥ 0. Then ∨_σ{e_j : r_{e_j}(A) ≤ c} is a σ-complete max cone invariant for A.

Corollary 4.5. If A ∈ M_+^{∞×∞}, then {r_{e_j}(A) : j ∈ N} ⊆ σ_ap(A).

Proof. Let t = r_{e_j}(A) for some j ∈ N. Let M = ∨_σ{e_s : r_{e_s}(A) ≤ t}. By Proposition 4.4 the set M is a σ-complete max cone invariant for A. For the restriction A|M of A to M we have, by Theorem 4.2, that sup{r_{e_i}(A) : e_i ∈ M} ≤ t ≤ r_⊗(A|M) implies t ∈ σ_ap(A), which completes the proof. □

5. Spectral mapping theorem in the infinite case

Let q ∈ P+, q = Σ_{j=0}^{deg q} α_j z^j, be a polynomial with non-negative coefficients. We write q_⊗(A) = ⊕_{j=0}^{deg q} α_j A_⊗^j. For t ≥ 0 write q_⊗(t) = max{α_j t^j : 0 ≤ j ≤ deg q}. Both the point and the approximate point spectrum satisfy the spectral mapping property for polynomials without the absolute term (see Theorem 5.4 below). One inclusion is simple, as the next lemma shows.

Lemma 5.1. Let A ∈ M_+^{∞×∞} and let q ∈ P+, q = Σ_{j=0}^{deg q} α_j z^j, be a polynomial with non-negative coefficients. Then q_⊗(σ_p(A)) ⊆ σ_p(q_⊗(A)) and q_⊗(σ_ap(A)) ⊆ σ_ap(q_⊗(A)).

Proof. Let t ∈ σ_p(A) and let x be a nonzero vector satisfying A ⊗ x = tx. Then A_⊗^j ⊗ x = t^j x for all j, and so q_⊗(A) ⊗ x = q_⊗(t)x. Hence q_⊗(t) ∈ σ_p(q_⊗(A)).

Let t ∈ σ_ap(A).
Then there exists a sequence (x_k) ⊆ ℓ_+^∞ of unit vectors with lim_{k→∞} ‖A ⊗ x_k − t x_k‖ = 0. Then for each j = 0, 1, …, deg q we have

‖a_j A_⊗^j ⊗ x_k − a_j t^j x_k‖ → 0,

and so

‖q_⊗(A) ⊗ x_k − q_⊗(t) x_k‖ ≤ max{‖a_j A_⊗^j ⊗ x_k − a_j t^j x_k‖ : j = 0, 1, …, deg q} → 0.

So q_⊗(σ_ap(A)) ⊆ σ_ap(q_⊗(A)). □

For the opposite inclusion (for polynomials without the absolute term) we need the following lemma.

Lemma 5.2. Let A ∈ M_+^{∞×∞} and q ∈ P_+, q = ⊕_{j=1}^{deg q} a_j z^j. Suppose that q_⊗(1) = 1, i.e., a_j ≤ 1 for all j and there exists m, 1 ≤ m ≤ deg q, with a_m = 1. Let δ > 0 and let x ∈ ℓ_+^∞ satisfy ‖x‖ = 1 and ‖q_⊗(A) ⊗ x − x‖ < δ. Then

‖A_⊗^m ⊗ x − x‖ ≤ 3δ · deg q · max{1, ‖q_⊗(A)‖^{deg q}}².

Proof. Write B = q_⊗(A), n = deg q and K = max{1, ‖B‖^n}. For every r we have

(A_⊗^m ⊗ x − x)_r ≤ (q_⊗(A) ⊗ x − x)_r < δ.

So it is sufficient to show (x − A_⊗^m ⊗ x)_r ≤ 3nK²δ for all r.

Fix r = r_0. Find r_1, …, r_m such that

x_{r_s} − B_{r_s,r_{s+1}} x_{r_{s+1}} < δ  (s = 0, 1, …, m − 1).

For s = 0, …, m − 1 there exists j_s, 1 ≤ j_s ≤ n, such that B_{r_s,r_{s+1}} = a_{j_s}(A_⊗^{j_s})_{r_s,r_{s+1}}. So there exist k, k′, 0 ≤ k < k′ ≤ m, such that

Σ_{s=0}^{k−1} j_s ≡ Σ_{s=0}^{k′−1} j_s (mod m),

i.e., Σ_{s=k}^{k′−1} j_s = ma for some a ∈ ℕ, a = m^{−1} Σ_{s=k}^{k′−1} j_s ≤ n m^{−1}(k′ − k) ≤ n.

We have

x_{r_0} − (B_⊗^k)_{r_0,r_k} x_{r_k} ≤ x_{r_0} − B_{r_0,r_1} B_{r_1,r_2} ⋯ B_{r_{k−1},r_k} x_{r_k}
≤ (x_{r_0} − B_{r_0,r_1} x_{r_1}) + B_{r_0,r_1}(x_{r_1} − B_{r_1,r_2} x_{r_2}) + ⋯ + B_{r_0,r_1} ⋯ B_{r_{k−2},r_{k−1}}(x_{r_{k−1}} − B_{r_{k−1},r_k} x_{r_k})
≤ kKδ.

Similarly,

x_{r_k} − (B_⊗^{k′−k})_{r_k,r_{k′}} x_{r_{k′}} ≤ x_{r_k} − B_{r_k,r_{k+1}} B_{r_{k+1},r_{k+2}} ⋯ B_{r_{k′−1},r_{k′}} x_{r_{k′}} ≤ (k′ − k)Kδ.

So

x_{r_k} − (A_⊗^{ma})_{r_k,r_{k′}} x_{r_{k′}} ≤ x_{r_k} − a_{j_k} ⋯ a_{j_{k′−1}} (A_⊗^{ma})_{r_k,r_{k′}} x_{r_{k′}}
≤ x_{r_k} − a_{j_k} ⋯ a_{j_{k′−1}} (A_⊗^{j_k})_{r_k,r_{k+1}} ⋯ (A_⊗^{j_{k′−1}})_{r_{k′−1},r_{k′}} x_{r_{k′}}
= x_{r_k} − B_{r_k,r_{k+1}} ⋯ B_{r_{k′−1},r_{k′}} x_{r_{k′}} ≤ (k′ − k)Kδ.

Thus

x_{r_0} − (B_⊗^k ⊗ A_⊗^{ma} ⊗ x)_{r_0} ≤ (x_{r_0} − (B_⊗^k)_{r_0,r_k} x_{r_k}) + (B_⊗^k)_{r_0,r_k}(x_{r_k} − (A_⊗^{ma})_{r_k,r_{k′}} x_{r_{k′}})
≤ kKδ + K²(k′ − k)δ ≤ nK²δ.

Hence

x_{r_0} − (A_⊗^{ma} ⊗ x)_{r_0} ≤ (x_{r_0} − (A_⊗^{ma} ⊗ B_⊗^k ⊗ x)_{r_0}) + ((A_⊗^{ma} ⊗ B_⊗^k ⊗ x)_{r_0} − (A_⊗^{ma} ⊗ x)_{r_0})
≤ nK²δ + ‖A_⊗^{ma}‖ (‖B_⊗^k ⊗ x − B_⊗^{k−1} ⊗ x‖ + ‖B_⊗^{k−1} ⊗ x − B_⊗^{k−2} ⊗ x‖ + ⋯ + ‖B ⊗ x − x‖)
≤ nK²δ + kK²δ ≤ 2nK²δ.
Thus there exist s_0 = r_0, s_1, …, s_a such that

x_{s_0} − (A_⊗^m)_{s_0,s_1} (A_⊗^m)_{s_1,s_2} ⋯ (A_⊗^m)_{s_{a−1},s_a} x_{s_a} ≤ 2nK²δ

and

x_{r_0} − (A_⊗^m ⊗ x)_{r_0} ≤ x_{s_0} − (A_⊗^m)_{s_0,s_1} x_{s_1}
= (x_{s_0} − (A_⊗^m)_{s_0,s_1} (A_⊗^m)_{s_1,s_2} ⋯ (A_⊗^m)_{s_{a−1},s_a} x_{s_a}) + (A_⊗^m)_{s_0,s_1} ((A_⊗^m)_{s_1,s_2} ⋯ (A_⊗^m)_{s_{a−1},s_a} x_{s_a} − x_{s_1})
≤ 2nK²δ + aK²δ ≤ 3nK²δ.

Since r_0 was arbitrary, we have ‖x − A_⊗^m ⊗ x‖ ≤ 3nK²δ. □

Corollary 5.3. Let A ∈ M_+^{∞×∞} and q ∈ P_+, q = ⊕_{j=1}^{deg q} a_j z^j, with q_⊗(1) = 1. Then:

(i) if 1 ∈ σ_ap(q_⊗(A)) then 1 ∈ σ_ap(A);
(ii) if 1 ∈ σ_p(q_⊗(A)) then 1 ∈ σ_p(A).

Proof. (i) Let 1 ≤ m ≤ deg q be such that a_m = 1 and let 1 ∈ σ_ap(q_⊗(A)). Let ε > 0. By Lemma 5.2 there exists a unit vector x ∈ ℓ_+^∞ such that ‖A_⊗^m ⊗ x − x‖ < ε. Let y = ⊕_{j=0}^{m−1} A_⊗^j ⊗ x. Then ‖y‖ ≥ ‖x‖ = 1 and

‖A ⊗ y − y‖ = ‖⊕_{j=1}^{m} A_⊗^j ⊗ x − ⊕_{j=0}^{m−1} A_⊗^j ⊗ x‖ ≤ ‖A_⊗^m ⊗ x − x‖ < ε.

So 1 ∈ σ_ap(A), which proves (i). The proof of (ii) is similar. □

Now we can prove the spectral mapping theorem for the point spectrum and for the approximate point spectrum (for polynomials without the absolute term).

Theorem 5.4. Let A ∈ M_+^{∞×∞} and q ∈ P_+, q = ⊕_{j=1}^{deg q} a_j z^j, q ≠ 0. Then:

(i) σ_ap(q_⊗(A)) = q_⊗(σ_ap(A));
(ii) σ_p(q_⊗(A)) = q_⊗(σ_p(A)).

Proof. (i) The inclusion σ_ap(q_⊗(A)) ⊇ q_⊗(σ_ap(A)) was proved in Lemma 5.1. Conversely, let s ∈ σ_ap(q_⊗(A)) and let t ≥ 0 satisfy s = q_⊗(t) (note that the function t ↦ q_⊗(t) is injective and continuous, with q_⊗(0) = 0 and lim_{t→∞} q_⊗(t) = ∞). We show that t ∈ σ_ap(A).

If t = 0 = s, then we have 0 ∈ σ_ap(q_⊗(A)). Since q ≠ 0, there exists m ∈ {1, …, deg q} such that a_m ≠ 0. Since a_m A_⊗^m ≤ q_⊗(A), we have 0 ∈ σ_ap(A_⊗^m). For each ε > 0 there exists a unit vector x with ‖A_⊗^m ⊗ x‖ < ε^m. So there exists j, 0 ≤ j ≤ m − 1, with ‖A_⊗^{j+1} ⊗ x‖ < ε‖A_⊗^j ⊗ x‖, and y := (A_⊗^j ⊗ x)/‖A_⊗^j ⊗ x‖ satisfies ‖y‖ = 1 and ‖A ⊗ y‖ < ε. So 0 ∈ σ_ap(A).

So we may assume that t > 0 and s > 0. Set A′ = A/t and q′ = ⊕_{j=1}^{deg q} (a_j t^j / s) z^j. Then

q′_⊗(A′) = ⊕_{j=1}^{deg q} (a_j / s) A_⊗^j = q_⊗(A)/s,

1 ∈ σ_ap(q′_⊗(A′)) and q′_⊗(1) = 1.
By Corollary 5.3 it follows that 1 ∈ σ_ap(A′), and so t ∈ σ_ap(A), which proves (i). The proof of (ii) is similar. □

Given A ∈ M_+^{∞×∞} and a > 0 it is easy to see that σ_ap(aI ⊕ A) ⊆ [a, ∞) and σ_p(aI ⊕ A) ⊆ [a, ∞). Moreover, we have the following result.

Proposition 5.5. Let A ∈ M_+^{∞×∞}, a > 0 and q(z) = a ⊕ z. Then

(9) σ_ap(q_⊗(A)) ∩ (a, ∞) = q_⊗(σ_ap(A)) ∩ (a, ∞)

and

σ_p(q_⊗(A)) ∩ (a, ∞) = q_⊗(σ_p(A)) ∩ (a, ∞).

Proof. ⊇: The inclusions q_⊗(σ_ap(A)) ⊆ σ_ap(q_⊗(A)) and q_⊗(σ_p(A)) ⊆ σ_p(q_⊗(A)) were proved in Lemma 5.1.

⊆: Write B = q_⊗(A) = aI ⊕ A and let s ∈ σ_ap(B), s > a. Let 0 < ε < s − a. Take x ∈ ℓ_+^∞, ‖x‖ = 1, such that ‖B ⊗ x − sx‖ < ε. For each r,

(A ⊗ x − sx)_r ≤ (B ⊗ x − sx)_r < ε.

We show that (sx − A ⊗ x)_r ≤ sε/(s − a). We distinguish two cases. Suppose first that x_r > ε/(s − a). Then there exists r′ with B_{r,r′} x_{r′} > s x_r − ε, so either s x_r − A_{r,r′} x_{r′} < ε, or r′ = r and (s − a) x_r < ε. Since the second case is not possible, we have (sx − A ⊗ x)_r ≤ s x_r − A_{r,r′} x_{r′} < ε. If x_r ≤ ε/(s − a), then

(sx − A ⊗ x)_r ≤ s x_r ≤ sε/(s − a).

So

‖A ⊗ x − sx‖ = sup_r |(A ⊗ x − sx)_r| ≤ max{ε, sε/(s − a)} = sε/(s − a).

Letting ε → 0 we get s ∈ σ_ap(A), which proves (9). Similarly, if s > a, s ∈ σ_p(B) and B ⊗ x = sx for some non-zero x ∈ ℓ_+^∞, then it is easy to see that A ⊗ x = sx and s ∈ σ_p(A). □

The following result follows from Lemma 5.1, Theorem 5.4 and Proposition 5.5.

Corollary 5.6. Let A ∈ M_+^{∞×∞} and q ∈ P_+, q = ⊕_{j=0}^{deg q} a_j z^j. Then

(10) q_⊗(σ_p(A)) ⊆ σ_p(q_⊗(A)) ⊆ q_⊗(σ_p(A)) ∪ {a_0}

and

q_⊗(σ_ap(A)) ⊆ σ_ap(q_⊗(A)) ⊆ q_⊗(σ_ap(A)) ∪ {a_0}.

Remark 5.7. In general nothing better than (10) can be said for the point spectrum. Consider, for example, the right shift A given by A_{i+1,i} = 1 (i ≥ 1) and A_{i,j} = 0 otherwise, for which σ_p(A) = ∅, together with the polynomial p(z) = z ⊕ 1. Then x = (1, 1, …) satisfies (A ⊕ I) ⊗ x = x. Moreover, σ_p(I ⊕ A) = {1} and the corresponding max eigencone consists of all x = (x_i)_{i∈ℕ} ∈ ℓ_+^∞ such that x_i ≤ x_{i+1} for all i ∈ ℕ. We do not know whether this may happen for the approximate point spectrum.
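The right shift of Remark 5.7 is easy to experiment with in a finite truncation (Python sketch; the helper names are ours, and the truncation is only illustrative, since the n × n shift, unlike its infinite counterpart, also has 0 as a point eigenvalue). For p(z) = z ⊕ 1 one checks that p_⊗(A) = I ⊕ A fixes exactly the nondecreasing vectors:

```python
# Finite truncation of the right shift from Remark 5.7: A[i][j] = 1 iff i = j + 1.
n = 6
A = [[1.0 if i == j + 1 else 0.0 for j in range(n)] for i in range(n)]

def max_matvec(A, x):
    """Max-algebra product: (A ⊗ x)_i = max_j A[i][j] * x[j]."""
    return [max(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def p_of_A(x):
    """p(z) = z ⊕ 1, so p_⊗(A) ⊗ x = (A ⊗ x) ⊕ x."""
    return [max(a, b) for a, b in zip(max_matvec(A, x), x)]

x_up = [1.0, 1.0, 2.0, 2.0, 3.0, 3.0]    # nondecreasing: fixed by I ⊕ A
x_down = [3.0, 2.0, 1.0, 1.0, 1.0, 1.0]  # not nondecreasing: not fixed

fixed_up = (p_of_A(x_up) == x_up)        # True
fixed_down = (p_of_A(x_down) == x_down)  # False
```

So 1 is a point eigenvalue of I ⊕ A with a large eigencone even though it does not come from a point eigenvalue of A; this is exactly the role played by the absolute term a_0 in (10).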
We conclude the paper with some remarks on generalizations of the results of Sections 4 and 5. The matrices in these sections may be of any size, not necessarily countable. More precisely, these results hold for max type kernel operators

(Af)(x) = sup_{y∈Ω} a(x, y) f(y).

Here Ω is a non-empty set and the kernel a : Ω × Ω → [0, ∞) is bounded (i.e., ‖A‖_∞ = sup_{x,y∈Ω} a(x, y) < ∞). Thus A acts on the max cone ℓ_+^∞(Ω) of bounded functions f : Ω → [0, ∞) (i.e., ‖f‖_∞ = sup_{x∈Ω} f(x) < ∞). The proofs are similar to the ones above and we omit the details (for example, the standard vectors e_n are replaced by the functions e_y = χ_{{y}}).

It is well known that max algebra is an idempotent semifield isomorphic to the max-plus algebra ℝ_max (the set ℝ ∪ {−∞} equipped with the operations a ⊕ b = max{a, b} and a ⊗ b = a + b) and to the min-plus algebra or tropical algebra ℝ_min (ℝ ∪ {∞} equipped with the operations a ⊕ b = min{a, b} and a ⊗ b = a + b) via the maps x ↦ log x and x ↦ −log x, respectively. The results above can thus be reformulated and applied also in these settings, i.e., to max-plus type operators

(Bg)(x) = sup_{y∈Ω} (b(x, y) + g(y))

on an idempotent semimodule of bounded (from above) functions g : Ω → ℝ_max (see e.g. [12], [27], [23] and the references cited there) and to their tropical versions, known also as Bellman operators, which arise in numerous applications to optimal control problems, discrete mathematics, turnpike theory, mathematical economics, games and controlled Markov processes, the theory of generalized solutions of the Hamilton-Jacobi-Bellman differential equations, the theory of continuously observed and controlled quantum systems, etc. (see e.g. [21], [25], [24] and the references cited there).

Acknowledgments. The first author was supported by grant No. 14-07880S of GA CR and by RVO: 67985840. The second author was also supported by the Slovenian Research Agency.

References

[1] M. Akian, R.
Bapat and S. Gaubert, Perturbation of eigenvalues of matrix pencils and optimal assignment problem, C. R. Acad. Sci. Paris, Sér. I 339 (2004), 103-108, E-print: arXiv:math.SP/0402438.
[2] M. Akian, R. Bapat and S. Gaubert, Min-plus methods in eigenvalue perturbation theory and generalised Lidskii-Vishik-Ljusternik theorem, 2005, E-print: arXiv:math.SP/0402090.
[3] M. Akian, S. Gaubert and M. Sharify, Log-majorization of the moduli of the eigenvalues of a matrix polynomial by tropical roots, 2013, E-print: arXiv:1304.2967.
[4] M. Akian, S. Gaubert and C. Walsh, Discrete max-plus spectral theory, in: Idempotent Mathematics and Mathematical Physics (G.L. Litvinov and V.P. Maslov, eds.), Contemporary Mathematics, vol. 377, AMS, 2005, pp. 53-77, E-print: arXiv:math.SP/0405225.
[5] M. Akian, R.D. Nussbaum and S. Gaubert, A Collatz-Wielandt characterization of the spectral radius of order-preserving homogeneous maps on cones, 2011, E-print: arXiv:1112.5968.
[6] F.L. Baccelli, G. Cohen, G.-J. Olsder and J.-P. Quadrat, Synchronization and Linearity, John Wiley, Chichester, New York, 1992.
[7] R.B. Bapat, A max version of the Perron-Frobenius theorem, Linear Algebra Appl. 275-276 (1998), 3-18.
[8] R.B. Bapat and T.E.S. Raghavan, Nonnegative Matrices and Applications, Cambridge University Press, Cambridge, 1997.
[9] R.B. Bapat, D.P. Stanford and P. van den Driessche, The Eigenproblem in Max Algebra, DMS-631-IR, University of Victoria, Victoria, British Columbia, 1993.
[10] R.B. Bapat, D.P. Stanford and P. van den Driessche, Pattern properties and spectral inequalities in max algebra, SIAM J. Matrix Anal. Appl. 16 (1995), 964-976.
[11] D.A. Bini, V. Noferini and M. Sharify, Locating the eigenvalues of matrix polynomials, SIAM J. Matrix Anal. Appl. 34(4) (2013), 1708-1727, E-print: arXiv:1206.3632.
[12] P. Butkovic, Max-linear Systems: Theory and Algorithms, Springer-Verlag, London, 2010.
[13] P. Butkovic, S. Gaubert, R.A.
Cuninghame-Green, Reducible spectral theory with applications to the robustness of matrices in max-algebra, SIAM J. Matrix Anal. Appl. 31(3) (2009), 1412-1431.
[14] P. Butkovic, H. Schneider, S. Sergeev and B.-S. Tam, Two cores of a non-negative matrix, Linear Algebra Appl. 439 (2013), 1929-1954.
[15] L. Elsner and P. van den Driessche, Bounds for the Perron root using max eigenvalues, Linear Algebra Appl. 428 (2008), 2000-2005.
[16] L. Elsner, C.R. Johnson and J.A. Dias Da Silva, The Perron root of a weighted geometric mean of nonnegative matrices, Linear and Multilinear Algebra 24 (1988), 1-13.
[17] S. Gaubert, Théorie des systèmes linéaires dans les dioïdes, Thèse, École des Mines de Paris, 1992.
[18] S. Gaubert and M. Sharify, Tropical scaling of polynomial matrices, Lecture Notes in Control and Information Sciences 389 (2009), 291-303.
[19] S. Gaubert and M. Sharify, Location of the roots of a polynomial by using tropical algebra, preprint, 2012.
[20] R.D. Katz, H. Schneider and S. Sergeev, On commuting matrices in max algebra and in nonnegative matrix algebra, Linear Algebra Appl. 436(2) (2012), 276-292.
[21] V.N. Kolokoltsov and V.P. Maslov, Idempotent Analysis and Its Applications, Kluwer Acad. Publ., 1997.
[22] B. Lemmens and R.D. Nussbaum, Continuity of the cone spectral radius, 2011, E-print: arXiv:1107.4532.
[23] G.L. Litvinov, The Maslov dequantization, idempotent and tropical mathematics: A brief introduction, J. Math. Sci. (N.Y.) 140, no. 3 (2007), 426-444, E-print: arXiv:math/0507014.
[24] G.L. Litvinov, V.P. Maslov and G.B. Shpiz, Idempotent functional analysis: An algebraic approach, Math. Notes 69, no. 5-6 (2001), 696-729, E-print: arXiv:math.FA/0009128.
[25] G.L. Litvinov and V.P. Maslov (eds.), Idempotent Mathematics and Mathematical Physics, Contemp. Math., vol. 377, Amer. Math. Soc., Providence, RI, 2005.
[26] Y.Y. Lur and W.W.
Yang, Continuity of the generalized spectral radius in max algebra, Linear Algebra Appl. 430 (2009), 2301-2311.
[27] J. Mallet-Paret and R.D. Nussbaum, Eigenvalues for a class of homogeneous cone maps arising from max-plus operators, Discrete and Continuous Dynamical Systems 8, no. 3 (2002), 519-562.
[28] J. Mallet-Paret and R.D. Nussbaum, Generalizing the Krein-Rutman theorem, measures of noncompactness and the fixed point index, J. Fixed Point Theory Appl. 7 (2010), 103-143.
[29] V.P. Maslov and S.P. Samborskii, Idempotent Analysis, Advances in Soviet Mathematics, vol. 13, Amer. Math. Soc., Providence, 1992.
[30] V. Müller and A. Peperko, Generalized spectral radius and its max algebra version, Linear Algebra Appl. 439 (2013), 1006-1016.
[31] A. Peperko, On the max version of the generalized spectral radius theorem, Linear Algebra Appl. 428 (2008), 2312-2318.
[32] A. Peperko, Inequalities for the spectral radius of non-negative functions, Positivity 13 (2009), 255-272.
[33] A. Peperko, On the continuity of the generalized spectral radius in max algebra, Linear Algebra Appl. 435 (2011), 902-907.
[34] G.B. Shpiz, An eigenvector existence theorem in idempotent analysis, Mathematical Notes 82, no. 3-4 (2007), 410-417.

Vladimir Müller
Institute of Mathematics, Czech Academy of Sciences
Žitná 25
115 67 Prague, Czech Republic
e-mail: muller@math.cas.cz

Aljoša Peperko
Faculty of Mechanical Engineering
University of Ljubljana
Aškerčeva 6
SI-1000 Ljubljana, Slovenia
and
Institute of Mathematics, Physics and Mechanics
Jadranska 19
SI-1000 Ljubljana, Slovenia
e-mail: aljosa.peperko@fmf.uni-lj.si