ARS MATHEMATICA CONTEMPORANEA 16 (2019) 49-58
ISSN 1855-3966 (printed edn.), ISSN 1855-3974 (electronic edn.)
https://doi.org/10.26493/1855-3974.1547.454 (Also available at http://amc-journal.eu)

On the parameters of intertwining codes *

Stephen P. Glasby, Cheryl E. Praeger

Centre for Mathematics of Symmetry and Computation, University of Western Australia, 35 Stirling Highway, Crawley 6009, Australia

Received 8 December 2017, accepted 6 March 2018, published online 13 September 2018

Abstract

Let $F$ be a field and let $F^{r\times s}$ denote the space of $r\times s$ matrices over $F$. Given equinumerous subsets $\mathcal{A} = \{A_i \mid i\in I\}\subseteq F^{r\times r}$ and $\mathcal{B} = \{B_i \mid i\in I\}\subseteq F^{s\times s}$, we call the subspace $C(\mathcal{A},\mathcal{B}) := \{X\in F^{r\times s}\mid A_iX = XB_i \text{ for } i\in I\}$ an intertwining code. We show that if $C(\mathcal{A},\mathcal{B})\neq\{0\}$, then for each $i\in I$ the characteristic polynomials of $A_i$ and $B_i$ share a nontrivial factor. We give an exact formula for $k = \dim(C(\mathcal{A},\mathcal{B}))$, together with upper and lower bounds. This generalizes previous work. Finally, we construct intertwining codes with large minimum distance when the field is not 'too small'. We give examples of codes where $d = rs/k = 1/R$ is large, where the minimum distance, dimension, and rate of the linear code $C(\mathcal{A},\mathcal{B})$ are denoted by $d$, $k$, and $R = k/(rs)$, respectively.

Keywords: Linear code, dimension, distance.

Math. Subj. Class.: 94B65, 60C05

1 Introduction

Let $F$ be a field and let $F^{r\times s}$ denote the space of $r\times s$ matrices over $F$. Given equinumerous subsets $\mathcal{A} = \{A_i \mid i\in I\}\subseteq F^{r\times r}$ and $\mathcal{B} = \{B_i \mid i\in I\}\subseteq F^{s\times s}$, we call the subspace $C(\mathcal{A},\mathcal{B}) := \{X\in F^{r\times s}\mid A_iX = XB_i \text{ for } i\in I\}$ an intertwining code. The parameters of this linear code are denoted $[n,k,d]$ where $n = rs$, $k := \dim(C(\mathcal{A},\mathcal{B}))$ and $d$ is the minimum distance of $C(\mathcal{A},\mathcal{B})$. Given $u,v\in F^n$, the Hamming distance $d(u,v) = |\{i\mid u_i\neq v_i\}|$ is the number of coordinates in which $u$ and $v$ differ, and a subspace $C\leq F^n$ has minimum (Hamming) distance $d(C) := \min\{d(u,v)\mid u,v\in C,\ u\neq v\}$, which equals $\min\{d(0,w)\mid w\in C,\ w\neq 0\}$.
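As a small illustration of these definitions (our example, not from the paper), the code $C(A,B)$ and its parameters $[n,k,d]$ can be computed by brute force over a tiny field. Here we take $F = \mathrm{GF}(2)$, $r = s = 2$ and $A = B = N_2$, the $2\times 2$ nilpotent Jordan block, so $C(A,A)$ is a centralizer code; the function name and setup are ours:

```python
from itertools import product
import math

def intertwining_code(A, B, q=2):
    """Brute-force C(A, B) = {X : AX = XB} over GF(q), q prime.

    Returns the list of codewords, each flattened to a length r*s tuple.
    Only practical for tiny r, s, q.
    """
    r, s = len(A), len(B)
    def matmul(M, N):
        rows, inner, cols = len(M), len(N), len(N[0])
        return [[sum(M[i][k] * N[k][j] for k in range(inner)) % q
                 for j in range(cols)] for i in range(rows)]
    code = []
    for entries in product(range(q), repeat=r * s):
        X = [list(entries[i * s:(i + 1) * s]) for i in range(r)]
        if matmul(A, X) == matmul(X, B):
            code.append(entries)
    return code

# A = B = N_2, the 2x2 nilpotent Jordan block, so C(A, A) is a centralizer code.
N2 = [[0, 1], [0, 0]]
C = intertwining_code(N2, N2)
k = round(math.log(len(C), 2))                         # dimension over GF(2)
d = min(sum(1 for e in w if e) for w in C if any(w))   # minimum Hamming weight
print(k, d)  # prints: 2 1  -- the parameters are [n, k, d] = [4, 2, 1]
```

The codewords here are exactly the matrices $aI + bN_2$, which is consistent with the centralizer-code description below.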
*The authors acknowledge the contribution of Robin Chapman, who emailed us in August 2016 a proof of Theorem 2.8. His formula for $\dim(C(N_\lambda,N_\mu))$ involved a double sum which can be reduced to a single sum using Theorem 3.1. The authors gratefully acknowledge the support of the Australian Research Council Discovery Grant DP160102323.
E-mail addresses: Stephen.Glasby@uwa.edu.au (Stephen P. Glasby), Cheryl.Praeger@uwa.edu.au (Cheryl E. Praeger)
This work is licensed under https://creativecommons.org/licenses/by/4.0/

If $|I| = 1$ we write $C(A,B)$ instead of $C(\{A\},\{B\})$. Centralizer codes [1] have the form $C(A,A)$ and twisted centralizer codes [2, 3] have the form $C(A,\alpha A)$ where $A\in F^{r\times r}$ and $\alpha\in F$. Intertwining codes $C(\mathcal{A},\mathcal{B})$ are more general still, so our dimension formula (Theorem 2.8) has particularly wide applicability. Furthermore, the greater abundance of intertwining codes turns out to help us construct intertwining codes with large minimum distance, cf. Theorem 4.3 and [3, Theorem 3.2]. Intertwining codes have the advantage of a short description, and fast matrix multiplication algorithms give rise to efficient syndrome computations which, in turn, may be used for decoding as described in [3, §3]. Given representations $g_i\mapsto A_i$ and $g_i\mapsto B_i$ of a group algebra $F\langle g_i\mid i\in I\rangle$, the elements of $C(\mathcal{A},\mathcal{B})$ are homomorphisms between the associated modules. Hence Lemma 2.2 generalizes the fact that irreducible representations with distinct characters are inequivalent. An exact formula for $k := \dim(C(\mathcal{A},\mathcal{B}))$ is given in Theorems 2.8 and 2.9 of Section 2. The formula for $k$ is simplified by an identity involving partitions proved in Section 3. Simpler upper and lower bounds for $k$ are given in Section 5. In Theorem 4.3 in Section 4, we give an algorithm to construct $\mathcal{A}$, $\mathcal{B}$ for which the minimum distance is $d(C(\mathcal{A},\mathcal{B})) = \lfloor r/k\rfloor s$. These examples have $dR \leq 1$ where $R = k/(rs)$ is the rate of $C(\mathcal{A},\mathcal{B})$.
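The dimension formulas announced here (Theorems 2.8, 2.9 and 3.1 below) can be sanity-checked numerically, since $\dim(C(A,B))$ is the nullity of the Kronecker-product matrix $I_s\otimes A - B^{\mathsf T}\otimes I_r$ (a fact recorded at the end of Section 5). The following sketch, with an illustrative pair of partitions chosen by us, compares the linear-algebra count against both the double-sum formula of Theorem 2.9 and the single-sum formula of Theorem 3.1; all helper names are ours:

```python
import numpy as np

def jordan_nilpotent(parts):
    """Block-diagonal N_lambda: one nilpotent Jordan block per part of lambda."""
    n = sum(parts)
    M = np.zeros((n, n))
    i = 0
    for p in parts:
        M[i:i + p, i:i + p] = np.eye(p, k=1)  # 1s on the superdiagonal
        i += p
    return M

def dim_C(A, B):
    """dim C(A, B): AX = XB  <=>  (I_s (x) A - B^T (x) I_r) vec(X) = 0."""
    r, s = A.shape[0], B.shape[0]
    M = np.kron(np.eye(s), A) - np.kron(B.T, np.eye(r))  # column-major vec(X)
    return r * s - np.linalg.matrix_rank(M)

def conjugate(parts):
    """Conjugate partition; parts are listed in decreasing order."""
    return [sum(1 for p in parts if p >= i) for i in range(1, parts[0] + 1)]

lam, mu = [3, 2, 1, 1], [4, 2, 1]                  # partitions of r = 7 and s = 7
d_lin = dim_C(jordan_nilpotent(lam), jordan_nilpotent(mu))
d_29 = sum(min(a, b) for a in lam for b in mu)                    # Theorem 2.9
d_31 = sum(x * y for x, y in zip(conjugate(lam), conjugate(mu)))  # Theorem 3.1
print(d_lin, d_29, d_31)  # all three agree (17 for this choice)
```

Here the conjugates are $\lambda' = (4,2,1)$ and $\mu' = (3,2,1,1)$, so the single sum is $4\cdot 3 + 2\cdot 2 + 1\cdot 1 = 17$.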
Corollary 4.4 to Theorem 4.3 shows that there exist matrices $A\in F^{r\times r}$ and $B\in F^{s\times s}$ such that the intertwining code $C(A,B)$ has dimension $\min\{r,s\}$ and minimum distance $\max\{r,s\}$. We wonder how much this result can be improved.

2 A formula for $\dim_F(C(\mathcal{A},\mathcal{B}))$

Throughout this section $\mathcal{A} = \{A_i\mid i\in I\}\subseteq F^{r\times r}$ and $\mathcal{B} = \{B_i\mid i\in I\}\subseteq F^{s\times s}$ for a field $F$. The idea underlying this section is to use the Jordan form over the algebraic closure $\overline{F}$ of $F$ to compute $\dim_F(C(\mathcal{A},\mathcal{B}))$. To implement this idea we must simultaneously conjugate each $A_i\in\mathcal{A}$, and each $B_i\in\mathcal{B}$, into Jordan form. This is always possible when $|I| = 1$. Let $\mathrm{GL}(r,F)$ denote the general linear group of $r\times r$ invertible matrices over $F$. An ordered pair $(R,S)\in\mathrm{GL}(r,F)\times\mathrm{GL}(s,F)$ acts on $F^{r\times s}$ via $X^{(R,S)} = R^{-1}XS$. Clearly
\[(X^{(R_1,S_1)})^{(R_2,S_2)} = X^{(R_1R_2,\,S_1S_2)},\qquad (XS_1)^{(R,S)} = X^{(R,S)}S_1^S,\qquad\text{and}\qquad (R_1X)^{(R,S)} = R_1^R\,X^{(R,S)}.\]

Lemma 2.1. If $(R,S)\in\mathrm{GL}(r,F)\times\mathrm{GL}(s,F)$, then $C(\mathcal{A},\mathcal{B})^{(R,S)} = R^{-1}C(\mathcal{A},\mathcal{B})S = C(\mathcal{A}^R,\mathcal{B}^S)$ where $\mathcal{A}^R := \{R^{-1}A_iR\mid i\in I\}$ and $\mathcal{B}^S := \{S^{-1}B_iS\mid i\in I\}$.

Proof. For each $i\in I$, the equation $A_iX = XB_i$ is equivalent to $A_i^R X^{(R,S)} = (A_iX)^{(R,S)} = (XB_i)^{(R,S)} = X^{(R,S)}B_i^S$. □

Let $c_A(t) = \det(tI - A)$ be the characteristic polynomial of $A$.

Lemma 2.2. If $C(\mathcal{A},\mathcal{B})\neq\{0\}$, then $\gcd(c_{A_i}(t), c_{B_i}(t))\neq 1$ for all $i\in I$.

Proof. Suppose that for some $i\in I$ we have $\gcd(c_{A_i}(t), c_{B_i}(t)) = 1$. Then there exist polynomials $f(t), g(t)$ such that $f(t)c_{A_i}(t) + g(t)c_{B_i}(t) = 1$. Evaluating this equation at $t = B_i$, and noting that $c_{B_i}(B_i) = 0$, shows $f(B_i)c_{A_i}(B_i) = I$. Hence $c_{A_i}(B_i)$ is invertible. For $X\in C(\mathcal{A},\mathcal{B})$ we have $A_iX = XB_i$. Thus
\[\Bigl(\sum_{k\geq 0} a_kA_i^k\Bigr)X = X\Bigl(\sum_{k\geq 0} a_kB_i^k\Bigr)\]
for all $a_k\in F$, and therefore $c_{A_i}(A_i)X = Xc_{A_i}(B_i)$. Since $c_{A_i}(A_i) = 0$, post-multiplying by $c_{A_i}(B_i)^{-1}$ shows that $X = 0$, and hence $C(\mathcal{A},\mathcal{B}) = \{0\}$. □

Henceforth when we wish to emphasize the field $F$, we write $C_F(\mathcal{A},\mathcal{B})$. Lemma 3.1 of [2], in essence, says $C_{\overline{F}}(\mathcal{A},\mathcal{B}) = C_F(\mathcal{A},\mathcal{B})\otimes_F\overline{F}$.
This immediately yields Lemma 2.3.

Lemma 2.3. If $K$ is an extension field of $F$, then $\dim_F(C_F(\mathcal{A},\mathcal{B})) = \dim_K(C_K(\mathcal{A},\mathcal{B}))$. In particular, $\dim_F(C_F(\mathcal{A},\mathcal{B})) = \dim_{\overline{F}}(C_{\overline{F}}(\mathcal{A},\mathcal{B}))$ where $\overline{F}$ is the algebraic closure of $F$.

Lemma 2.3 allows us to assume that $F$ is algebraically closed, which we shall do for the rest of this section. Given $A\in F^{r\times r}$ and $B\in F^{s\times s}$, define $A\oplus B$ to be the block diagonal matrix $\left(\begin{smallmatrix}A&0\\0&B\end{smallmatrix}\right)$, and define $\mathcal{A}\oplus\mathcal{B}$ to be $\{A_i\oplus B_i\mid i\in I\}\subseteq F^{(r+s)\times(r+s)}$.

Lemma 2.4. If $\mathcal{A}^1\subseteq F^{r_1\times r_1},\dots,\mathcal{A}^m\subseteq F^{r_m\times r_m}$ and $\mathcal{B}^1\subseteq F^{s_1\times s_1},\dots,\mathcal{B}^n\subseteq F^{s_n\times s_n}$ are subsets, all with the same finite cardinality, then
\[C\Bigl(\bigoplus_{i=1}^m \mathcal{A}^i,\ \bigoplus_{j=1}^n \mathcal{B}^j\Bigr) = \bigoplus_{i=1}^m\bigoplus_{j=1}^n C(\mathcal{A}^i,\mathcal{B}^j).\]

Proof. Write $X = (X_{ij})$ as a block matrix where $X_{ij}$ has size $r_i\times s_j$. The condition $X\in C(\bigoplus_{i=1}^m\mathcal{A}^i,\ \bigoplus_{j=1}^n\mathcal{B}^j)$ is equivalent to $X_{ij}\in C(\mathcal{A}^i,\mathcal{B}^j)$ for each $i,j$. □

Corollary 2.5. Suppose that $\mathcal{A}^1,\dots,\mathcal{A}^m$ and $\mathcal{B}^1,\dots,\mathcal{B}^n$ are as in Lemma 2.4, and suppose that for $i\neq j$ the characteristic polynomials of matrices in $\mathcal{A}^i$ are coprime to the characteristic polynomials of matrices in $\mathcal{B}^j$. Then
\[C\Bigl(\bigoplus_{i=1}^m\mathcal{A}^i,\ \bigoplus_{j=1}^n\mathcal{B}^j\Bigr) = \bigoplus_{i=1}^{\min\{m,n\}} C(\mathcal{A}^i,\mathcal{B}^i).\]

Proof. Use Lemma 2.4, and note that $C(\mathcal{A}^i,\mathcal{B}^j) = \{0\}$ for $i\neq j$ by Lemma 2.2. □

Remark 2.6. Let $F$ be a finite field. Standard arguments, for example [6, p. 168], can be used to relate $\dim_{\overline{F}}(C_{\overline{F}}(A,B))$ to data computed over $F$. This remark and Remark 2.10 explain the details. Let $p_1(t), p_2(t),\dots$ enumerate the (monic) irreducible polynomials over $F$ and write $c_A(t) = \prod_{i\geq 1} p_i(t)^{k_i}$ and $c_B(t) = \prod_{i\geq 1} p_i(t)^{\ell_i}$, respectively. This gives rise to the $A$-invariant primary decomposition $F^r = \bigoplus_{i\geq 1}\ker(p_i(A)^{k_i})$, and the $B$-invariant decomposition $F^s = \bigoplus_{i\geq 1}\ker(p_i(B)^{\ell_i})$. Let $A_i$ be the restriction of $A$ to $\ker(p_i(A)^{k_i})$ and $B_i$ the restriction of $B$ to $\ker(p_i(B)^{\ell_i})$. Corollary 2.5 shows that $\dim(C(A,B)) = \sum_{i\geq 1}\dim(C(A_i,B_i))$. The second ingredient involves partitions and is described in Remark 2.10.

It is straightforward to see that $C(\mathcal{A},\mathcal{B}) = \bigcap_{i\in I} C(A_i,B_i)$ where $C(A_i,B_i)$ means $C(\{A_i\},\{B_i\})$.
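A toy check of Lemma 2.4 and Corollary 2.5 (our example, with $|I| = 1$): for diagonal blocks, only the pairs of blocks sharing an eigenvalue contribute to $\dim(C(A,B))$. The sketch below computes $\dim(C(A,B))$ as the nullity of $I_s\otimes A - B^{\mathsf T}\otimes I_r$; the matrices and the function name are ours:

```python
import numpy as np

def dim_C(A, B):
    # AX = XB  <=>  (I_s (x) A - B^T (x) I_r) vec(X) = 0, column-major vec(X)
    r, s = A.shape[0], B.shape[0]
    M = np.kron(np.eye(s), A) - np.kron(B.T, np.eye(r))
    return r * s - np.linalg.matrix_rank(M)

# A = A1 (+) A2 and B = B1 (+) B2, with eigenvalues 0 and 1.
A1, A2 = np.array([[0.]]), np.array([[1.]])
B1, B2 = np.array([[0.]]), np.eye(2)
A = np.diag([0., 1.])        # A1 (+) A2
B = np.diag([0., 1., 1.])    # B1 (+) B2
lhs = dim_C(A, B)
rhs = sum(dim_C(X, Y) for X in (A1, A2) for Y in (B1, B2))  # Lemma 2.4
print(lhs, rhs)  # both 3; only the (A1, B1) and (A2, B2) pairs contribute
```

The cross terms $C(A_1,B_2)$ and $C(A_2,B_1)$ vanish, as Lemma 2.2 predicts, since the characteristic polynomials $t$ and $t-1$ are coprime.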
Recall that a matrix $A\in F^{r\times r}$ is nilpotent if and only if $A^r = 0$. We say that $A$ is $\alpha$-potent, where $\alpha\in F$, if $(A-\alpha I)^r = 0$. The following lemma and theorem reduce our deliberations from $\alpha$-potent matrices to nilpotent matrices. For $\mathcal{A} = \{A_i\mid i\in I\}\subseteq F^{r\times r}$, let $\mathcal{A}-\alpha I_r$ denote the set $\{A_i-\alpha I_r\mid i\in I\}$.

Lemma 2.7. If $\mathcal{A}\subseteq F^{r\times r}$, $\mathcal{B}\subseteq F^{s\times s}$ and $\alpha\in F$, then $C(\mathcal{A},\mathcal{B}) = C(\mathcal{A}-\alpha I_r,\ \mathcal{B}-\alpha I_s)$.

Proof. For $i\in I$, $A_iX = XB_i$ holds if and only if $(A_i-\alpha I_r)X = X(B_i-\alpha I_s)$. □

A partition $\lambda$ of $r$, written $\lambda\vdash r$, is a sequence $\lambda = (\lambda_1,\lambda_2,\dots)$ of integers satisfying $\lambda_1\geq\lambda_2\geq\cdots\geq 0$ and $\lambda_1+\lambda_2+\cdots = r$. We call $\lambda_i$ the $i$th part of $\lambda$, and we usually omit parts of size zero. Let $N_r$ be the $r\times r$ nilpotent matrix with all entries $0$ except for an entry $1$ in position $(i,i+1)$ for $1\leq i < r$. Let $N_\lambda = \bigoplus_i N_{\lambda_i}$ where $\lambda\vdash r$. Every nilpotent $r\times r$ matrix is conjugate to some $N_\lambda$ for a unique $\lambda\vdash r$. Furthermore, if an $r\times r$ matrix $R$ has eigenvalues $\rho_1,\dots,\rho_m$ and associated generalized eigenspaces of dimensions $r_1,\dots,r_m$ where $r_1+\cdots+r_m = r$, then $R$ has Jordan form $\bigoplus_{i=1}^m(\rho_iI_{r_i}+N_{\lambda^i})$ where $\lambda^i$ is a partition of $r_i$ (not a part of a partition).

Theorem 2.8. Suppose $A\in F^{r\times r}$, $B\in F^{s\times s}$ and $\gcd(c_A(t), c_B(t))$ has distinct roots $\zeta_1,\dots,\zeta_m$ in $\overline{F}$. Suppose that the sizes of the Jordan blocks of $A$ associated with the generalized $\zeta_i$-eigenspace of $A$ determine a partition $\alpha_i$, and the sizes of the Jordan blocks of $B$ associated with the generalized $\zeta_i$-eigenspace of $B$ determine a partition $\beta_i$. Then
\[\dim(C(A,B)) = \sum_{i=1}^m \dim(C(N_{\alpha_i}, N_{\beta_i})).\]

Proof. By Lemma 2.3 we may assume that $F = \overline{F}$. Let $A_i$ be the restriction of $A$ to its generalized $\zeta_i$-eigenspace $\{v\mid v(A-\zeta_iI)^k = 0\text{ for some }k\geq 0\}$. Then $A_i$ is $\zeta_i$-potent, and so determines a partition $\alpha_i$. Similarly, the restriction $B_i$ of $B$ to the generalized $\zeta_i$-eigenspace determines a partition $\beta_i$. By Corollary 2.5 and Lemma 2.7, we have
\[\dim(C(A,B)) = \sum_{i=1}^m\dim(C(A_i,B_i)) = \sum_{i=1}^m\dim(C(N_{\alpha_i}, N_{\beta_i})). \qquad\square\]

Theorem 2.9.
Given partitions $\lambda$ of $r$ and $\mu$ of $s$, the dimension of $C(N_\lambda,N_\mu)$ equals
\[\dim(C(N_\lambda,N_\mu)) = \sum_{i,j\geq 1}\min\{\lambda_i,\mu_j\}.\]

Proof. As $\lambda\vdash r$ and $\mu\vdash s$, we have $\sum\lambda_i = r$ and $\sum\mu_j = s$. Lemma 2.4 shows that $C(N_\lambda,N_\mu) = \bigoplus_{i,j} C(N_{\lambda_i}, N_{\mu_j})$. Taking dimensions, it suffices to show $\dim(C(N_{\lambda_i}, N_{\mu_j})) = \min\{\lambda_i,\mu_j\}$. This can be shown by solving $N_{\lambda_i}X = XN_{\mu_j}$ for $X$ and counting the number of free variables. Alternatively, $F^{\lambda_i}$ is a uniserial $\langle N_{\lambda_i}\rangle$-module with 1-dimensional quotient modules, and similarly for $F^{\mu_j}$. As their largest common quotient module is $F^{\min\{\lambda_i,\mu_j\}}$, we have $\dim(C(N_{\lambda_i},N_{\mu_j})) = \min\{\lambda_i,\mu_j\}$. □

Remark 2.10. Suppose $|F| = q$ is finite. Following on from Remark 2.6, it suffices to consider the case where $c_A(t) = p(t)^{r/d}$ and $c_B(t) = p(t)^{s/d}$, where $p(t)$ is irreducible over $F$ of degree $d$. The field $K := F[t]/(p(t))$ has order $q^d$. In this case the structure of the modules $F^r = K^{r/d}$ and $F^s = K^{s/d}$ is determined by partitions $\lambda\vdash r/d$ and $\mu\vdash s/d$. It turns out that $A$ is conjugate (see below) to $N_{\lambda,p} := \mathrm{diag}(N_{\lambda_1,p}, N_{\lambda_2,p},\dots)\in F^{r\times r}$ where
\[N_{m,p} = \begin{pmatrix} C(p) & I & & \\ & C(p) & \ddots & \\ & & \ddots & I \\ & & & C(p)\end{pmatrix}\in F^{dm\times dm}\]
and $C(p)\in F^{d\times d}$ is the companion matrix of $p(t)$. Now $C(p)$ is conjugate in $\mathrm{GL}(d,K)$ to $\mathrm{diag}(\zeta_1,\dots,\zeta_d)$ where $\zeta_1,\dots,\zeta_d$ are the (distinct) roots of $p(t)$ in $K$. It follows from Theorems 2.8 and 2.9 that
\[\dim(C(A,B)) = \dim(C(N_{\lambda,p}, N_{\mu,p})) = d\,\dim(C(N_\lambda,N_\mu)).\tag{2.1}\]
As an example, suppose $A$ is cyclic and $c_A(t) = p(t)^3$ where $d = \deg(p) = 3$. In this case $r = 9$ and $\lambda = (3)$. Write $p(t) = t^3+p_2t^2+p_1t+p_0 = (t-\zeta_1)(t-\zeta_2)(t-\zeta_3)$. Then $A$ is conjugate in $\mathrm{GL}(9,F)$, by [5], to
\[\begin{pmatrix} C(p) & N & 0\\ 0 & C(p) & N\\ 0 & 0 & C(p)\end{pmatrix}\]
where
\[C(p) = \begin{pmatrix} 0 & 0 & -p_0\\ 1 & 0 & -p_1\\ 0 & 1 & -p_2\end{pmatrix}\quad\text{and}\quad N = \begin{pmatrix} 0&0&0\\0&0&0\\0&0&1\end{pmatrix}.\]
As $p(t)$ is separable, [5, Theorem 1] implies that $A$ is conjugate in $\mathrm{GL}(9,F)$ to
\[\begin{pmatrix} C(p)&I&0\\ 0&C(p)&I\\ 0&0&C(p)\end{pmatrix}.\]
Hence $A$ is conjugate in $\mathrm{GL}(9,K)$ to
\[\begin{pmatrix} D(\zeta_1)&0&0\\ 0&D(\zeta_2)&0\\ 0&0&D(\zeta_3)\end{pmatrix}\quad\text{where}\quad D(\zeta) = \begin{pmatrix}\zeta&1&0\\0&\zeta&1\\0&0&\zeta\end{pmatrix}.\]
This explains the factor of $d = \deg(p(t))$ in equation (2.1) and relates the generalized Jordan form of $A$ over $F$ to the Jordan form of $A$ over $K$.

3 Conjugate partitions

In this section we simplify the formula in Theorem 2.9 for $\dim(C(N_\lambda,N_\mu))$. We prove an identity in Lemma 3.2 involving partitions which replaces multiple sums by a single sum. In order to state the simpler dimension formula we need to define 'conjugate partitions'. The conjugate of $\lambda\vdash r$ is the partition $\lambda' = (\lambda_1',\lambda_2',\dots)$ of $r$ whose parts satisfy $\lambda_i' = |\{j\mid \lambda_j\geq i\}|$ for each $i$. The Young diagram of $\lambda'$ is obtained from that of $\lambda$ by swapping rows and columns, as shown in Figure 1.

Figure 1: Young diagrams for $\lambda = (5,3,3,1)$ and $\lambda' = (4,3,3,1,1)$.

For the following result, note that the number of nonzero parts of $\lambda'$ is $\lambda_1$, and $r = \sum_{i=1}^{\lambda_1}\lambda_i'$.

Theorem 3.1. Given partitions $\lambda$ of $r$ and $\mu$ of $s$, the dimension of $C(N_\lambda,N_\mu)$ equals
\[\dim(C(N_\lambda,N_\mu)) = \sum_{j\geq 1}\lambda_j'\mu_j' = \sum_{j=1}^{\min\{\lambda_1,\mu_1\}}\lambda_j'\mu_j'.\]

To prove Theorem 3.1 we need a technical lemma which we have not been able to find in the literature; see [4]. Lemma 3.2 below says $\sum_{j\geq 1}\lambda_j = \sum_{j\geq 1}\lambda_j'$ when $k = 1$. We only need the case $k = 2$ for the proof of Theorem 3.1; however, the argument for $k > 2$ is not much harder.

Lemma 3.2. If $\lambda,\mu,\dots,\omega$ are $k$ partitions and $\lambda',\mu',\dots,\omega'$ are their conjugates, then
\[\sum_{i=1}^{\lambda_1'}\sum_{j=1}^{\mu_1'}\cdots\sum_{k=1}^{\omega_1'}\min\{\lambda_i,\mu_j,\dots,\omega_k\} = \sum_{j=1}^{\min\{\lambda_1,\mu_1,\dots,\omega_1\}}\lambda_j'\mu_j'\cdots\omega_j'.\tag{3.1}\]

Proof. By permuting the partitions $\lambda,\mu,\dots,\omega$ if necessary, we can assume that $\lambda_1\leq\mu_1\leq\cdots\leq\omega_1$. If $\lambda_1 = 0$, then $\lambda\vdash 0$ and both sides of (3.1) are zero. If $\lambda_1 = 1$, then
\[\sum_{i=1}^{\lambda_1'}\sum_{j=1}^{\mu_1'}\cdots\sum_{k=1}^{\omega_1'} 1 = \lambda_1'\mu_1'\cdots\omega_1' = \sum_{j=1}^{\min\{\lambda_1,\dots,\omega_1\}}\lambda_j'\mu_j'\cdots\omega_j',\]
so (3.1) holds. Suppose now that $\lambda_1 > 1$.
We use induction on $\lambda_1$. Let $\bar\lambda$ be the partition of $(\sum_{j\geq 1}\lambda_j)-\lambda_1'$ obtained by deleting the first column of the Young diagram of $\lambda$. Since $1 < \lambda_1\leq\mu_1\leq\cdots\leq\omega_1$, we define $\bar\mu,\dots,\bar\omega$ similarly. It is clear that $\bar\lambda_i = \lambda_i-1$ for $1\leq i\leq\lambda_1'$ and $\bar\lambda_i' = \lambda_{i+1}'$ for $i\geq 1$, and similarly for $\bar\mu,\dots,\bar\omega$. As $\bar\lambda_1 < \lambda_1$, induction shows
\[\sum_{i=1}^{\bar\lambda_1'}\sum_{j=1}^{\bar\mu_1'}\cdots\sum_{k=1}^{\bar\omega_1'}\min\{\bar\lambda_i,\bar\mu_j,\dots,\bar\omega_k\} = \sum_{j=1}^{\min\{\bar\lambda_1,\bar\mu_1,\dots,\bar\omega_1\}}\bar\lambda_j'\bar\mu_j'\cdots\bar\omega_j'.\]
Note that $\bar\lambda_i = 0$ for each $i\in[\bar\lambda_1'+1,\lambda_1']$ since $\bar\lambda_1' = \lambda_2'$, so the upper limit $\bar\lambda_1'$ of the sum $\sum_{i=1}^{\bar\lambda_1'}$ can be replaced by $\lambda_1'$. Similarly, the upper limits $\bar\mu_1',\dots,\bar\omega_1'$ can be replaced by $\mu_1',\dots,\omega_1'$. Hence, since $\bar\lambda_i = \lambda_i-1,\dots,\bar\omega_k = \omega_k-1$, we have
\[\sum_{i=1}^{\lambda_1'}\sum_{j=1}^{\mu_1'}\cdots\sum_{k=1}^{\omega_1'}\min\{\lambda_i-1,\mu_j-1,\dots,\omega_k-1\} = \sum_{j=1}^{\min\{\lambda_1,\mu_1,\dots,\omega_1\}-1}\lambda_{j+1}'\mu_{j+1}'\cdots\omega_{j+1}'.\]
Re-indexing the right sum, and using $\sum_{i=1}^{\lambda_1'}\sum_{j=1}^{\mu_1'}\cdots\sum_{k=1}^{\omega_1'}(-1) = -\lambda_1'\mu_1'\cdots\omega_1'$, gives
\[-\lambda_1'\mu_1'\cdots\omega_1' + \sum_{i=1}^{\lambda_1'}\sum_{j=1}^{\mu_1'}\cdots\sum_{k=1}^{\omega_1'}\min\{\lambda_i,\mu_j,\dots,\omega_k\} = \sum_{j=2}^{\min\{\lambda_1,\mu_1,\dots,\omega_1\}}\lambda_j'\mu_j'\cdots\omega_j'.\]
Adding $\lambda_1'\mu_1'\cdots\omega_1'$ to both sides completes the inductive proof of (3.1). □

Proof of Theorem 3.1. Apply Theorem 2.9 and Lemma 3.2 with $k = 2$. □

4 Minimum distances

In Section 2 a formula is given for $k := \dim(C(\mathcal{A},\mathcal{B}))$, where we suppress mention of the field $F$ in our notation. In this section we choose $\mathcal{A}$ and $\mathcal{B}$ to maximize the value of the minimum distance $d := d(C(\mathcal{A},\mathcal{B}))$ as a function of $k$. We focus on the case when $|\mathcal{A}| = |\mathcal{B}| = 1$. The action of $\mathrm{GL}(r,F)\times\mathrm{GL}(s,F)$ on $C(\mathcal{A},\mathcal{B})$ fixes $k = \dim(C(\mathcal{A},\mathcal{B}))$ but can change $d$ wildly, e.g. from $1$ to $rs$, as setting $k = 1$ in Theorem 4.3 illustrates. Let $E_{ij}$ denote the $r\times s$ matrix with all entries $0$, except the $(i,j)$ entry which is $1$.

Lemma 4.1. Suppose $r,s,k\in\mathbb{Z}$ where $1\leq k\leq\min\{r,s\}$, and suppose $F$ is a field with $|F|\geq k+\min\{1,r-k\}+\min\{1,s-k\}$. Fix pairwise distinct scalars $\zeta_1,\dots,\zeta_k,\alpha,\beta\in F$ and set
\[A_0 := \mathrm{diag}(\zeta_1,\dots,\zeta_k,\alpha,\dots,\alpha)\in F^{r\times r}\quad\text{and}\quad B_0 := \mathrm{diag}(\zeta_1,\dots,\zeta_k,\beta,\dots,\beta)\in F^{s\times s}.\]
Then $C(A_0,B_0) = \langle E_{11},\dots,E_{kk}\rangle$ has dimension $k$ and minimum distance $1$.

Proof. Note first that if $k = \min\{r,s\}$, then $A_0$ has no $\alpha$s, or $B_0$ has no $\beta$s. Thus the assumption $|F|\geq k+\min\{1,r-k\}+\min\{1,s-k\}$ ensures that distinct scalars $\zeta_1,\dots,\zeta_k,\alpha,\beta$ in $F$ exist. Using a direct calculation of $C(A_0,B_0)$, or Corollary 2.5, shows that $C(A_0,B_0) = \langle E_{11},\dots,E_{kk}\rangle$. Since $d(0,E_{11}) = 1$, we have $d(C(A_0,B_0)) = 1$. □

We now seek $R\in\mathrm{GL}(r,F)$ and $S\in\mathrm{GL}(s,F)$ such that $R^{-1}\langle E_{11},\dots,E_{kk}\rangle S$ has large minimum distance. For brevity, we write $T := R^{-1}$. Denote the $i$th row of a matrix $A$ by $A_{i*}$, and its $j$th column by $A_{*j}$.

Lemma 4.2. Suppose $r,s,k\in\mathbb{Z}$ where $k\leq\min\{r,s\}$. Fix $S\in F^{s\times s}$ and $T\in F^{r\times r}$, and define $X^{(1)},\dots,X^{(k)}\in F^{r\times s}$ by $X^{(\ell)} = T_{*\ell}S_{\ell*}$ for $1\leq\ell\leq k$. Then $TE_{\ell\ell}S = X^{(\ell)}$ for $1\leq\ell\leq k$.

Proof. Recall $\delta_{ij}$ is $1$ if $i = j$ and $0$ otherwise. The $(i,j)$ entry of $E_{\ell\ell}$ is $\delta_{i\ell}\delta_{\ell j}$. The $(i',j')$ entry of $T_{*\ell}S_{\ell*}$ is $t_{i'\ell}s_{\ell j'}$. This agrees with the $(i',j')$ entry of $TE_{\ell\ell}S$, namely $\sum_{i=1}^r\sum_{j=1}^s t_{i'i}\,\delta_{i\ell}\delta_{\ell j}\,s_{jj'} = t_{i'\ell}s_{\ell j'}$. □

Theorem 4.3. Suppose $r,s,k\in\mathbb{Z}$ where $1\leq k\leq\min\{r,s\}$, and suppose $F$ is a field with $|F|\geq k+2$. Then there exist $A\in F^{r\times r}$ and $B\in F^{s\times s}$ such that the linear code $C(A,B)$ has dimension $k$ and minimum distance $d = \lfloor r/k\rfloor s$.

Proof. By Lemma 4.1 there exist diagonal matrices $A_0\in F^{r\times r}$ and $B_0\in F^{s\times s}$ such that $C(A_0,B_0) = \langle E_{11},\dots,E_{kk}\rangle$ has dimension $k$. We seek invertible matrices $R\in\mathrm{GL}(r,F)$ and $S\in\mathrm{GL}(s,F)$ such that $A = A_0^R$ and $B = B_0^S$ give $C(A,B) = \langle E_{11},\dots,E_{kk}\rangle^{(R,S)}$ with minimum distance $d = \lfloor r/k\rfloor s$. Let $X^{(\ell)} = R^{-1}E_{\ell\ell}S$ for $1\leq\ell\leq k$. The $r\times s$ matrices $X^{(\ell)}$, $1\leq\ell\leq k$, will have a form which makes it clear that $d = \lfloor r/k\rfloor s$. First, we partition the set $\{1,\dots,r\}$ of rows into the following $k$ subsets:
\[I_1 = \bigl\{1,\dots,\lfloor\tfrac{r}{k}\rfloor\bigr\},\quad I_2 = \bigl\{\lfloor\tfrac{r}{k}\rfloor+1,\dots,2\lfloor\tfrac{r}{k}\rfloor\bigr\},\quad\dots,\quad I_k = \bigl\{(k-1)\lfloor\tfrac{r}{k}\rfloor+1,\dots,r\bigr\}.\]
Choose the $i$th row of the matrix $X^{(\ell)}$ to be zero if $i\notin I_\ell$, and to be a vector with all $s$ entries nonzero otherwise.
Since $\lfloor\frac{r}{k}\rfloor = |I_\ell|\leq|I_k|$ for $\ell\leq k$, it follows that for $1\leq\ell\leq k$,
\[d(0,X^{(\ell)}) = \sum_{i\in I_\ell} s = |I_\ell|\,s\geq\Bigl\lfloor\frac{r}{k}\Bigr\rfloor s,\]
with equality if $\ell < k$. The choice of these matrices is such that for each nonzero $X$ in the span $\langle X^{(1)},\dots,X^{(k)}\rangle$ we also have $d(0,X)\geq d(0,X^{(\ell)})$ for some $\ell$, and hence $\langle X^{(1)},\dots,X^{(k)}\rangle$ has minimum distance $d = \lfloor r/k\rfloor s$.

It is well known that if the first few rows of a square matrix are linearly independent, then the remaining rows can be chosen so that the matrix is invertible. A similar remark holds if the first few columns are linearly independent. Our construction uses $k$ linearly independent $1\times s$ row vectors $u_1,\dots,u_k$ which give the first $k$ rows of $S\in\mathrm{GL}(s,F)$, and $k$ linearly independent $r\times 1$ column vectors $v^{(1)},\dots,v^{(k)}$ which give the first $k$ columns of $R^{-1}\in\mathrm{GL}(r,F)$. The pair $(R,S)$ will be used to construct $A$ and $B$. Henceforth suppose that $1\leq\ell\leq k$. Since $|F|\geq 3$, we may choose $\gamma\in F\setminus\{1,\,1-s\}$. Let $J$ be the $s\times s$ matrix with all entries $1$. Then the $s\times s$ matrix $S' = (\gamma-1)I+J$ is invertible as $\det(S') = (\gamma-1)^{s-1}(\gamma+s-1)$ is nonzero. Let $u_\ell = (1,\dots,1,\gamma,1,\dots,1)$, with $\gamma$ in position $\ell$, be the $\ell$th row of $S'$. Since $u_1,\dots,u_k$ are linearly independent, there exists an invertible matrix $S\in\mathrm{GL}(s,F)$ with $S_{\ell*} = u_\ell$. Of course $S = S'$ is one possibility. Similarly, let $v^{(\ell)}$ be the $r\times 1$ column vector with
\[v^{(\ell)}_i = \begin{cases}1 & \text{if } i\in I_\ell,\\ 0 & \text{if } i\notin I_\ell.\end{cases}\]
As $v^{(1)},\dots,v^{(k)}$ are linearly independent, there exists an $r\times r$ invertible matrix, which we call $R^{-1}$, whose first $k$ columns are $v^{(1)},\dots,v^{(k)}$. Lemma 4.2 shows that $R^{-1}E_{\ell\ell}S = X^{(\ell)}$ for $1\leq\ell\leq k$. Hence $C(A_0^R, B_0^S) = C(A_0,B_0)^{(R,S)} = \langle X^{(1)},\dots,X^{(k)}\rangle$ has minimum distance $\lfloor r/k\rfloor s$ as desired. □

Corollary 4.4. If $|F|\geq\min\{r,s\}+2$, then there exist matrices $A\in F^{r\times r}$ and $B\in F^{s\times s}$ such that $C(A,B)$ has dimension $\min\{r,s\}$ and minimum distance $\max\{r,s\}$.

Proof. Since $AX = XB$ if and only if $X^{\mathsf T}A^{\mathsf T} = B^{\mathsf T}X^{\mathsf T}$, we see that $C(B^{\mathsf T},A^{\mathsf T})$ equals $C(A,B)^{\mathsf T}$.
Because $C(A,B)$ and $C(A,B)^{\mathsf T}$ have the same dimension and minimum distance, we may assume that $r\leq s$. If $|F|\geq r+2$, then applying Theorem 4.3 with $k = r$ gives the desired result. □

Remark 4.5. Suitable matrices $A$ and $B$ in Theorem 4.3 are constructed by first choosing the diagonal matrices $A_0$ and $B_0$ in Lemma 4.1, and then taking $A = R^{-1}A_0R$ and $B = S^{-1}B_0S$ where $R$ and $S$ are constructed in the proof of Theorem 4.3. It is desirable for a code to have both a high rate, viz. $R = k/n$, and a high distance $d$. Can the product $Rd$ be a constant for intertwining codes? By setting $r = s = k$ in Theorem 4.3 we obtain a rate of $R = 1/r$ and a distance of $d = r$, so the answer is affirmative. It is natural to ask how the maximum value of $Rd$ for an intertwining code depends on $(r,s,F)$. We wonder whether there is a sequence $C_1,C_2,\dots$ of intertwining codes over a field $F$ with parameters $[r_is_i, k_i, d_i]$ for which $R_id_i = \frac{k_id_i}{r_is_i}$ approaches infinity.

5 Upper and lower bounds for $\dim_F(C(A,B))$

Denote the rank and nullity of $A\in F^{r\times r}$ by $\mathrm{Rk}(A)$ and $\mathrm{Null}(A)$, respectively. Note that $\mathrm{Rk}(A)+\mathrm{Null}(A) = r$ and $\mathrm{Null}(N_\lambda) = \lambda_1'$. In this section we bound $k = \dim(C(A,B))$ in terms of the rank and nullity of $A$ and $B$. If $\lambda\vdash r$ and $\mu\vdash s$, Theorem 3.1 implies that
\[\lambda_1'\mu_1'\leq\sum_{j\geq 1}\lambda_j'\mu_j' = \dim(C(N_\lambda,N_\mu))\leq\Bigl(\sum_{j\geq 1}\lambda_j'\Bigr)\Bigl(\sum_{j\geq 1}\mu_j'\Bigr) = rs.\tag{5.1}\]
View $A\in F^{r\times r}$ as acting on an $r$-dimensional vector space over the algebraic closure $\overline{F}$. Let the $\alpha$-eigenspace, and the generalized $\alpha$-eigenspace, of $A$ have dimensions $k_{A,\alpha}$ and $m_{A,\alpha}$, respectively. Then $c_A(t) = \prod_\alpha(t-\alpha)^{m_{A,\alpha}}$ where $m_{A,\alpha}\neq 0$ for only finitely many $\alpha\in\overline{F}$, and $0\leq k_{A,\alpha}\leq m_{A,\alpha}$. The following result generalizes [2, Theorems 2.8 and 4.7].

Theorem 5.1. If $A\in F^{r\times r}$ and $B\in F^{s\times s}$, then
(a) $\sum_\alpha k_{A,\alpha}k_{B,\alpha}\leq\dim(C(A,B))\leq\sum_\alpha m_{A,\alpha}m_{B,\alpha}$, and
(b) $(r-\mathrm{Rk}(A))(s-\mathrm{Rk}(B))\leq\dim(C(A,B))\leq(r-\mathrm{Rk}(A))(s-\mathrm{Rk}(B))+\mathrm{Rk}(A)\,\mathrm{Rk}(B)$.

Proof. Part (a) follows immediately from Theorem 2.8 and (5.1).

(b) The lower bound follows from part (a) since $r-\mathrm{Rk}(A) = \mathrm{Null}(A) = k_{A,0}$.
For the upper bound, note that $A$ is similar to a diagonal direct sum $N_\lambda\oplus A'$ where $N_\lambda$ is nilpotent of size $m_{A,0}$ and $A'$ is invertible of size $r-m_{A,0}$. Similarly, $B$ is similar to $N_\mu\oplus B'$ where $N_\mu$ is nilpotent of size $m_{B,0}$ and $B'$ is invertible of size $s-m_{B,0}$. It follows from Theorem 2.8 that $\dim(C(A,B)) = \dim(C(N_\lambda,N_\mu))+\dim(C(A',B'))$. Further, by Theorem 3.1, $\dim(C(N_\lambda,N_\mu)) = \sum_{i\geq 1}\lambda_i'\mu_i'$ where, as usual, $\lambda'$ and $\mu'$ denote conjugate partitions. We use the observation: if $0\leq x\leq a$ and $0\leq y\leq b$, then
\[(a-x)(b-y)+xy\leq ab\tag{5.2}\]
to show that
\[\begin{aligned}\dim(C(A,B)) &= \lambda_1'\mu_1'+\sum_{i\geq 2}\lambda_i'\mu_i'+\dim(C(A',B'))\\ &\leq \lambda_1'\mu_1'+(m_{A,0}-\lambda_1')(m_{B,0}-\mu_1')+(r-m_{A,0})(s-m_{B,0})\\ &\leq \lambda_1'\mu_1'+(r-\lambda_1')(s-\mu_1').\end{aligned}\]
The result follows since $\lambda_1' = \mathrm{Null}(N_\lambda) = \mathrm{Null}(A) = r-\mathrm{Rk}(A)$ and $\mu_1' = s-\mathrm{Rk}(B)$. □

The Singleton bound $d+k\leq n+1$ implies that if $d$ is close to $n = rs$, then $k$ is small, and the lower bound of Theorem 5.1(b) implies that $A$ or $B$ has high rank. Setting $k = 1$ in Theorem 4.3 shows that this bound is attained for intertwining codes. The code $C(A,B)$ is the row nullspace of $A^{\mathsf T}\otimes I_s - I_r\otimes B$ and the column nullspace of $A\otimes I_s - I_r\otimes B^{\mathsf T}$, where ${}^{\mathsf T}$ denotes transpose.

References

[1] A. Alahmadi, S. Alamoudi, S. Karadeniz, B. Yildiz, C. Praeger and P. Solé, Centraliser codes, Linear Algebra Appl. 463 (2014), 68-77, doi:10.1016/j.laa.2014.08.024.

[2] A. Alahmadi, S. P. Glasby and C. E. Praeger, On the dimension of twisted centralizer codes, Finite Fields Appl. 48 (2017), 43-59, doi:10.1016/j.ffa.2017.07.005.

[3] A. Alahmadi, S. P. Glasby, C. E. Praeger, P. Solé and B. Yildiz, Twisted centralizer codes, Linear Algebra Appl. 524 (2017), 235-249, doi:10.1016/j.laa.2017.03.011.

[4] S. P. Glasby, Lemmas involving two partitions of integers, MathOverflow, 2017, https://mathoverflow.net/q/258722.

[5] D. W. Robinson, Classroom notes: The generalized Jordan canonical form, Amer. Math. Monthly 77 (1970), 392-395, doi:10.2307/2316152.

[6] R.
Stong, Some asymptotic results on finite vector spaces, Adv. in Appl. Math. 9 (1988), 167-199, doi:10.1016/0196-8858(88)90012-7.