International Journal of Management, Knowledge and Learning, 9(2), 223–235

On Some Applications of Matrix Partial Orders in Statistics

Iva Golubić
University of Ljubljana, Faculty of Electrical Engineering, Slovenia

Janko Marovt
University of Maribor, Faculty of Economics and Business, Slovenia

In statistics, various partial orders have proved useful in a number of settings. Three of the best-known partial orders defined on (sub)sets of real or complex matrices are the Löwner, the minus, and the star partial orders. Two further matrix partial orders that are related to the star partial order are the left-star and the right-star partial orders. In this paper we review some applications of these partial orders in statistics.

Keywords: matrix partial order, generalized matrix inverse, preserver, statistics, linear model

Introduction

Mathematics is essential for all (serious) branches of science, including natural science, engineering, medicine, finance, and, in the last few decades, also the social sciences. One can argue that a particular practice becomes a scientific discipline when it starts to obey the postulates of mathematics and adopts the mathematical language and the mathematical (especially analytical) way of thinking. Mathematics and statistics are becoming increasingly important in the daily operations of various organizations; e.g., for modern knowledge management (i.e. the process of creating, sharing, using, and managing the knowledge and information of an organization) the use of mathematics and statistics is crucial (Munje et al., 2020; Phusavat et al., 2009; Priestley & McGrath, 2019). Linear algebra is a branch of mathematics that, especially through its subbranch of matrix theory, encompasses results used in various fields of science and practice. We cannot imagine modern (micro and macro) economics and econometrics without the use of matrices. Matrices are useful, for example, in observing the relationships between individual industries and in calculating the quantities needed to meet the demand for goods produced in the industries of an economy. We can also use matrices in linear programming in management, for example to adjust production processes by solving optimization problems such as the calculation of minimum production costs. In this paper we present particular relations between matrices that have many applications in statistics, and thus in other scientific fields, and give an overview of these applications.

Let F denote the field of all real or complex numbers, i.e. F = R or F = C, and Mm,n(F) the set of all m×n matrices over F. If m = n, then we write Mn(F) instead of Mn,n(F). Let A∗ ∈ Mn,m(F) denote the conjugate transpose of A ∈ Mm,n(F) (if A ∈ Mm,n(R), then A∗ = At, the transpose of A). A generalized inverse or a pseudoinverse of A ∈ Mm,n(F) is a matrix that has some properties of the usual inverse (of A ∈ Mn(F) with nonzero determinant) but not necessarily all of them. One of the best-known examples of a generalized inverse is the Moore-Penrose inverse. We say that X ∈ Mn,m(F) is the Moore-Penrose inverse of A ∈ Mm,n(F) when the following four matrix equations are satisfied:

AXA = A, XAX = X, (AX)∗ = AX, and (XA)∗ = XA. (1)

It turns out (Mitra et al., 2010) that every A ∈ Mm,n(F) has a Moore-Penrose inverse X = A† and that A† is unique.
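The four equations in (1) are easy to check numerically. The following minimal sketch (ours, not from the paper; it assumes numpy is available) computes the Moore-Penrose inverse of a rectangular real matrix and verifies (1) up to floating-point tolerance.

```python
import numpy as np

# Illustrative matrix: 2x3, so no usual inverse exists.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
X = np.linalg.pinv(A)  # the (unique) Moore-Penrose inverse A†

ok = (np.allclose(A @ X @ A, A) and       # AXA = A
      np.allclose(X @ A @ X, X) and       # XAX = X
      np.allclose((A @ X).T, A @ X) and   # (AX)* = AX (real case: transpose)
      np.allclose((X @ A).T, X @ A))      # (XA)* = XA
print(ok)  # True
```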
Another example of a pseudoinverse satisfies only the first equation in (1). Namely, we say that X = A− is an inner generalized inverse of A ∈ Mm,n(F) if A = AA−A. Again, every A ∈ Mm,n(F) has an inner generalized inverse A−; however, A− is not necessarily unique. There are many applications of these pseudoinverses. For example, if A ∈ Mm,n(F), c ∈ Mm,1(F) = Fm, and x is the n×1 vector of variables, then the system

Ax = c (2)

of m linear equations with n variables has a solution if and only if AA−c = c for some inner generalized inverse A− of A. Moreover, if the system (2) has a solution and if A− is an inner generalized inverse of A, then for every vector y ∈ Fn,

xy = A−c + (I − A−A)y, (3)

where I ∈ Mn(F) is the identity matrix, is a solution of (2), and for every solution x∗ of (2) there exists a vector y such that x∗ = xy (Schott, 2005).

Both of the above classes of generalized inverses induce partial orders on Mm,n(F) (i.e. relations that are reflexive, antisymmetric, and transitive). We say that A ∈ Mm,n(F) is dominated by (or is below) B ∈ Mm,n(F) with respect to the minus partial order and write A ≤− B when

A−A = A−B and AA− = BA− (4)

for some inner generalized inverse A− of A. For A,B ∈ Mm,n(F) we write A ≤∗ B when

A∗A = A∗B and AA∗ = BA∗ (5)

and name the relation ≤∗ the star partial order. It turns out that both relations (4) and (5) are indeed partial orders (Drazin, 1978; Hartwig, 1980). Moreover, the star partial order may also be defined by a generalized inverse. Namely, it is easy to see that for A,B ∈ Mm,n(F) we have A ≤∗ B if and only if A†A = A†B and AA† = BA†, where A† is the Moore-Penrose inverse of A.

Two partial orders that are 'related' to the minus and the star partial orders are the left-star and the right-star partial orders (Baksalary & Mitra, 1991). Let Im A denote the image (i.e. the column space) of A ∈ Mm,n(F). For A,B ∈ Mm,n(F) we say that A is dominated by B with respect to the left-star partial order and write A ∗≤ B when

A∗A = A∗B and Im A ⊆ Im B. (6)

Similarly, we define the right-star partial order: for A,B ∈ Mm,n(F) we write A ≤∗ B (with the star written on the right) when

AA∗ = AB∗ and Im A∗ ⊆ Im B∗. (7)

It is known (Mitra et al., 2010) that for A,B ∈ Mm,n(F) the star order implies both the left-star and the right-star order, and that each of A ∗≤ B and A ≤∗ B implies A ≤− B. The converse implications do not hold in general.

Another well-known partial order may be defined on a certain subset of Mn(F). We say that A ∈ Mn(F) is Hermitian (or symmetric when A ∈ Mn(R)) if A = A∗. A Hermitian matrix A ∈ Mn(F) is said to be positive semidefinite if x∗Ax ≥ 0 for every x ∈ Fn. Positive semidefinite matrices have become fundamental computational objects in many areas of statistics, engineering, quantum information, and applied mathematics. They appear as variance-covariance matrices in statistics, as elements of the search space in convex and semidefinite programming, as kernels in machine learning, as density matrices in quantum information, and as diffusion tensors in medical imaging. It is known (Christensen, 1996) that every variance-covariance matrix is positive semidefinite, and that every real positive semidefinite matrix is a variance-covariance matrix of some multivariate distribution. Let now A,B ∈ Mn(F) be Hermitian matrices. We say that A is dominated by B with respect to the Löwner partial order and write A ≤L B if

B − A is positive semidefinite. (8)

There are many applications of the above partial orders, especially in statistics.
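Two of these orders are straightforward to test numerically. The sketch below (an illustration of ours, assuming numpy; the matrices are made up) checks the star order via its Moore-Penrose characterization and the Löwner order via the eigenvalues of B − A.

```python
import numpy as np

def star_leq(A, B, tol=1e-10):
    """A <=* B  iff  A†A = A†B and AA† = BA†."""
    Ad = np.linalg.pinv(A)
    return (np.allclose(Ad @ A, Ad @ B, atol=tol) and
            np.allclose(A @ Ad, B @ Ad, atol=tol))

def loewner_leq(A, B, tol=1e-10):
    """A <=_L B  iff  B - A is positive semidefinite (A, B Hermitian)."""
    return bool(np.all(np.linalg.eigvalsh(B - A) >= -tol))

A = np.diag([1.0, 0.0])
B = np.diag([1.0, 3.0])
print(star_leq(A, B))      # True: both conditions in (5) hold
print(loewner_leq(A, B))   # True: B - A = diag(0, 3) is PSD
print(loewner_leq(B, A))   # False: the Löwner relation is only a partial order
```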
Let us present an example of such an application (Mitra et al., 2010). Let A,B ∈ Mn(R) be two positive semidefinite matrices and let A ≤L B. Write

A = [A11 A12; A21 A22] and B = [B11 B12; B21 B22],

where Aij and Bij are of the same order for all i, j ∈ {1,2} and A11 is an r×r, r < n, matrix. Then (Sengupta & Jammalamadaka, 2003)

A11 − A12A−22A21 ≤L B11 − B12B−22B21. (9)

Here A−22 and B−22 denote inner generalized inverses of A22 and B22, respectively; the two sides of (9) are the generalized Schur complements of A22 in A and of B22 in B. Consider now a tribal population on which several anthropometric measurements are made. Let y1 be the vector of measurements on the face and y2 the vector of measurements on the remaining part of the body. Let the random vector y = (y1, y2)t have the multivariate normal distribution N(μ,V1) in population 1 and N(τ,V2) in population 2. Here μ and τ are the mean vectors and V1 and V2 are the variance-covariance matrices (also known as dispersion or covariance matrices). Suppose y has a smaller dispersion in population 1 than in population 2. The 'smaller dispersion' condition may be expressed in terms of the Löwner partial order ≤L, i.e. V1 ≤L V2. By (9) and by properties of variance-covariance (dispersion) matrices (Sengupta & Jammalamadaka, 2003, p. 59) we have the following: the conditional dispersion of the facial measurements given the measurements of the rest of the body, namely V(y1 | y2), is also smaller in population 1 than in population 2.

In the following two sections more applications of the above partial orders in statistics will be presented; we will focus our attention on linear models. In the next section (Linear Models) we will recall the notion of a linear model and then use matrix partial orders to compare different linear models. In the last decades many authors have studied preserver problems, which concern the question of determining or describing the general form of all transformations of a given structure X which preserve a quantity attached to the elements of X, or a distinguished set of elements of X, or a given relation among the elements of X, etc. It has recently been observed (Dolinar et al., 2020; Golubić & Marovt, in press, 2020; Guillot et al., 2015) that a motivation for the study of preserver problems that concern the above partial orders on certain (sub)sets of real matrices (i.e. X is a subset of Mn(R)) comes from statistics. Let S be a subset of Mn(F) and let ≤G be one of the above orders (the minus, the star, the left-star, the right-star, or the Löwner partial order) on S. We say that the map Φ: S → S preserves the partial order ≤G in both directions (or is a bi-preserver of ≤G) when for every A,B ∈ S,

A ≤G B if and only if Φ(A) ≤G Φ(B). (10)

In the last section (Preservers of Partial Orders) we will recall some recent results that were motivated by statistics and that (under some additional assumptions) describe the form of maps Φ with the property (10).
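Before moving on to linear models, here is a brief numerical probe of inequality (9), a sketch of ours with made-up matrices; it uses the Moore-Penrose inverse as one convenient inner generalized inverse and checks that the generalized Schur complements of two Löwner-comparable positive semidefinite matrices are again Löwner-comparable.

```python
import numpy as np

def schur_complement(M, r):
    """Generalized Schur complement M11 - M12 M22^- M21 of the trailing block."""
    M11, M12 = M[:r, :r], M[:r, r:]
    M21, M22 = M[r:, :r], M[r:, r:]
    return M11 - M12 @ np.linalg.pinv(M22) @ M21

def loewner_leq(A, B, tol=1e-10):
    return bool(np.all(np.linalg.eigvalsh(B - A) >= -tol))

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])          # positive semidefinite
B = A + np.ones((3, 3))                  # B - A is PSD, hence A <=_L B
print(loewner_leq(A, B))                                            # True
print(loewner_leq(schur_complement(A, 2), schur_complement(B, 2)))  # True, as (9) predicts
```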
Linear Models

One of the simplest models used to illustrate how an observed quantity y can be explained by a number of other quantities x1, x2, . . ., xp−1 is the linear model

y = β0 + β1x1 + β2x2 + · · · + βp−1xp−1 + ε,

where β0, β1, . . ., βp−1 are constants (real numbers) and ε is an error term that accounts for uncertainties. We refer to y as a response variable and to x1, x2, . . ., xp−1 as explanatory variables. For a set of n observations of the response and explanatory variables, the explicit form of the equations is

yi = β0 + β1xi,1 + β2xi,2 + · · · + βp−1xi,p−1 + εi, i = 1, 2, . . ., n,

where for each i, yi is the i-th observation of the response, xi,j is the i-th observation of the j-th explanatory variable (j = 1, 2, . . ., p−1), and εi is the unobservable error corresponding to this observation. These equations can be written in the following matrix form:

y = Xβ + ε.

Here y = (y1, y2, . . ., yn)t, β = (β0, β1, . . ., βp−1)t, ε = (ε1, ε2, . . ., εn)t, and X ∈ Mn,p(R) is the matrix whose i-th row is (1, xi,1, . . ., xi,p−1). We call y the response vector (also known as the observation vector) and X the model matrix (also known as the design or regressor matrix). In order to complete the description of the model, some assumptions about the nature of the errors have to be made. It is assumed that E(ε) = 0 and V(ε) = σ2D, i.e. the errors have zero mean and their covariances are known up to a scalar (real number). Here V denotes the variance-covariance matrix. The nonnegative parameter σ2 and the vector of parameters (real numbers) β are unspecified, and D is a known n×n (real, positive semidefinite) matrix. We denote this linear model by the triplet (y, Xβ, σ2D). (It follows that E(y) = Xβ and V(y) = σ2D.) It is known (Mitra et al., 2010, Lemma 15.2.1) that the response vector satisfies y ∈ Im(X:D) with probability 1, where Im(X:D) denotes the image (i.e. the column space) of the partitioned matrix (X:D).

Remark. An assumption that the errors follow the multivariate normal distribution is often added to the model. Moreover, the matrix X above, in which all the elements of the first column equal 1, is in fact a special case of a linear model matrix; such model matrices are used in multiple regression analysis. Models (y, Xβ, σ2D) where the elements in the first column of the model matrix do not necessarily all equal 1 and the probability distribution of the errors is not necessarily normal are (usually) called general linear models. In the continuation we will deal with general linear models; however, for the sake of simplicity we will use the term "linear model" instead of "general linear model".

Classical inference problems related to the linear model (y, Xβ, σ2D) usually concern a linear parametric function (LPF), sβ (here s is a 1×p real vector), which we try to estimate by a linear function of the response, zy (here z is a 1×n real vector). For accurate estimation of sβ, it is desirable that the estimator is not systematically away from the 'true' value of the parameter. We say that the statistic zy is a linear unbiased estimator (LUE) of sβ if E(zy) = sβ for all possible values of β. An LPF is said to be estimable if it has an LUE. Let (y, Xβ, σ2D) be a linear model and let A be a real matrix with p columns. We say that a vector LPF, Aβ, is estimable if there exists a real matrix C such that E(Cy) = Aβ for all β ∈ Rp. It turns out (Mitra et al., 2010, Theorem 15.2.4) that if A is a real matrix with p columns, then Aβ is estimable if and only if

Im At ⊆ Im Xt. (11)

The best linear unbiased estimator (BLUE) of an estimable vector LPF is defined as the LUE having the smallest variance-covariance matrix. Here, the "smallest variance-covariance matrix" condition is expressed in terms of the Löwner order ≤L: let Aβ be estimable. Then Ly is said to be the BLUE of Aβ if (i) E(Ly) = Aβ for all β ∈ Rp and (ii) V(Ly) ≤L V(My) for all β ∈ Rp and all My satisfying E(My) = Aβ.
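The estimability criterion (11) reduces to a rank computation: Im At ⊆ Im Xt holds exactly when stacking the rows of A under X does not increase the rank. A hedged sketch of ours (the one-way layout design below is made up):

```python
import numpy as np

def is_estimable(A, X):
    """True iff Im(A^t) ⊆ Im(X^t), i.e. rank([X; A]) == rank(X) -- condition (11)."""
    return np.linalg.matrix_rank(np.vstack([X, A])) == np.linalg.matrix_rank(X)

# Rank-deficient one-way layout: intercept column plus two group dummies.
X = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])                       # rank 2, not 3
print(is_estimable(np.array([[0.0, 1.0, -1.0]]), X))  # True: the group contrast is estimable
print(is_estimable(np.array([[0.0, 1.0,  0.0]]), X))  # False: a single group effect is not
```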
Let us consider two linear models L1 = (y1, X1β, σ2D1) and L2 = (y2, X2β, σ2D2) where the number p of columns of X1 and X2 is fixed but arbitrary while the number n of rows may vary from model to model. Then we say that L1 is at least as good as L2 if for any LUE, a2y2, of a parameter kβ there exists an LUE a1y1 of this parameter such that V(a1y1) ≤ V(a2y2) (here a1, a2, k are appropriate vectors, and V denotes the variance). If this condition is satisfied, then we write L1 ⪰ L2. With the following result, proved in Stępniak (1985), Stępniak showed that two linear models L1 and L2 may be compared by considering certain matrices that are induced by the matrices Xi and Di, i ∈ {1,2}, and comparing them via the Löwner partial order.

Theorem 1. Let L1 = (y1, X1β, σ2D1) and L2 = (y2, X2β, σ2D2) be two linear models. Then L1 ⪰ L2 is equivalent to M2 ≤L M1 where Mi, i ∈ {1,2}, are the positive semidefinite matrices defined as Mi = Xti(Di + XiXti)−Xi. Here (Di + XiXti)− is an inner generalized inverse of Di + XiXti.

With the next two results (Mitra et al., 2010, Theorems 15.3.6, 15.3.7) we consider linear models whose model matrices are related to each other under the minus partial order or the left-star partial order. Let L1 = (y, X1θ, σ2D) and L2 = (y, X2β, σ2D) be two linear models and suppose X1 ≤− X2. Note that for any two matrices A,B ∈ Mm,n(F) we have A ≤− B if and only if B − A ≤− B (Mitra et al., 2010, Theorem 3.3.16). Let A = X2 − X1. It follows that then A ≤− X2 and therefore by (4) there exists an inner generalized inverse A− of A such that A−A = A−X2 and AA− = X2A−. Since then A = AA−A = AA−X2 and thus At = Xt2(AA−)t, we may conclude that Im At ⊆ Im Xt2 (i.e. Aβ is by (11) estimable in the model L2). Let the model L2 be constrained by the linear constraints Aβ = 0 on the parametric vector β ∈ Rp. Observe that on the one hand A = X2 − X1 and AA− = X2A− imply

X1 = X2 − A = X2 − AA−A = X2 − X2A−A = X2(I − A−A), (12)

and on the other hand, the vectors (I − A−A)θ, where θ ∈ Rp is arbitrary, are by (3) exactly the solutions of the system Aβ = 0 of linear equations (where β is the vector of variables). So, by (12), for each β ∈ Rp with Aβ = 0 there exists θ ∈ Rp such that X2β = X1θ, and for each θ ∈ Rp there exists a solution β ∈ Rp of Aβ = 0 such that X1θ = X2β. It follows that the model L1 is the model L2 constrained by Aβ = 0. We may conclude that if L1 = (y, X1θ, σ2D) and L2 = (y, X2β, σ2D) are two linear models with X1 ≤− X2, then there exists a matrix A such that Aβ is estimable in the model L2 and L1 is the model L2 constrained by Aβ = 0. We presented the above argument as an example of how purely linear algebraic techniques can lead to a result that has implications in statistics. It turns out that the converse of the proved implication is true as well (Mitra et al., 2010, proof of Theorem 15.3.6).

Theorem 2. Let L1 = (y, X1θ, σ2D) and L2 = (y, X2β, σ2D) be any two linear models. Then X1 ≤− X2 if and only if there exists a matrix A with Im At ⊆ Im Xt2 such that L1 is the model L2 constrained by Aβ = 0.
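Returning to Theorem 1 for a moment, the comparison lends itself to direct computation. The sketch below is ours (the models are made up, and we use the Moore-Penrose inverse as a convenient inner generalized inverse of Di + XiXti); it compares a model with uncorrelated errors to a noisier one with the same design and confirms L1 ⪰ L2.

```python
import numpy as np

def model_M(X, D):
    """M = X^t (D + X X^t)^- X from Theorem 1, with pinv as the inner g-inverse."""
    return X.T @ np.linalg.pinv(D + X @ X.T) @ X

def loewner_leq(A, B, tol=1e-10):
    return bool(np.all(np.linalg.eigvalsh(B - A) >= -tol))

X = np.ones((3, 1))            # three observations of a single parameter
M1 = model_M(X, np.eye(3))     # L1: error dispersion D1 = I
M2 = model_M(X, 4 * np.eye(3)) # L2: same design, four times the error dispersion
print(loewner_leq(M2, M1))     # True: L1 is at least as good as L2
```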
The following result gives an interpretation of the left-star order in Gauss-Markov linear models, i.e. linear models (y, Xβ, σ2D) where D = I is the identity matrix.

Theorem 3. Let L1 = (y, X1β, σ2I) and L2 = (y, X2β, σ2I). Then X1 ∗≤ X2 if and only if:
(i) the linear models L1 and L = (y, (X2 − X1)β, σ2I) have no common estimable linear function of β;
(ii) X1β is estimable under the model L2;
(iii) the BLUE of X1β under the model L1 is also its BLUE under L2, and the variance-covariance matrix of the BLUE of X1β under the model L1 is the same as under the model L2.

With the next result we give another application of the minus partial order (Baksalary & Puntanen, 1990, p. 366).

Theorem 4. Consider a linear model (y, Xβ, σ2D). Then the statistic Fy is the BLUE of Xβ if and only if the following conditions hold: (i) FX = X; (ii) Im(FD) ⊆ Im X; (iii) V(Fy) ≤− V(y).

Note that V(Fy) and V(y) are positive semidefinite matrices. It is thus natural to ask if there are some characterizations (i.e. equivalent definitions) of the minus partial order on the cone of all positive semidefinite matrices. Observe first that if A = 0 is the n×n zero matrix, then ACA = A for every C ∈ Mn(F). Take A− = 0 to conclude by (4) that 0 ≤− B for every B ∈ Mn(F).

Theorem 5. Let A,B ∈ Mn(F) be positive semidefinite and A ≠ 0. Then A ≤− B if and only if there exists an invertible matrix S ∈ Mn(F) such that

A = S [Ir 0; 0 0] S∗ and B = S [Is 0; 0 0] S∗,

where Ir and Is are r×r and s×s, s ≤ n, identity matrices, respectively, and r < s if A ≠ B, and r = s otherwise. (When s = n, the zero blocks in the formula for B are absent.)

This purely linear algebraic result (Golubić & Marovt, in press, Theorem 4.1) may now be used with Theorem 4 to obtain the following corollary.

Corollary 1. Let (y, Xβ, σ2D) be a linear model. Then a statistic Fy with V(Fy) ≠ V(y) is the BLUE of Xβ if and only if the following conditions hold: (i) FX = X; (ii) Im(FD) ⊆ Im X; (iii) there exists an invertible matrix S ∈ Mn(R) such that

V(Fy) = S [Ir 0; 0 0] St and V(y) = S [Is 0; 0 0] St,

where Ir is an r×r identity matrix and Is is an s×s identity matrix with r < s ≤ n.

We conclude this section with another corollary of Theorem 5. Note that for a positive semidefinite matrix A ∈ Mn(R), the matrix WtAW ∈ Mm(R) is still positive semidefinite for any matrix W ∈ Mn,m(R). The following result (Golubić & Marovt, in press, Corollary 4.3) thus follows directly from Theorem 5 and (Baksalary et al., 1992, Theorem 1).

Corollary 2. Let A = A1 + A2 + · · · + Ak where Ai ∈ Mn(R) are positive semidefinite matrices, i = 1, 2, . . ., k. Let the n×1 random vector x follow a multivariate normal distribution with mean μ and variance-covariance matrix V, and let W = (V:μ) be the n×(n+1) partitioned matrix. Consider the quadratic forms Q = xtAx and Qi = xtAix, i = 1, 2, . . ., k. Then the following statements are equivalent.
(i) Qi, i = 1, 2, . . ., k, are mutually independent and distributed as chi-squared variables;
(ii) Q is distributed as a chi-squared variable and there exist invertible matrices Si ∈ Mn+1(R) such that

WtAiW = Si [Iri 0; 0 0] Sti and WtAW = Si [Is 0; 0 0] Sti

for every i = 1, 2, . . ., k, where Iri are ri×ri identity matrices and Is is an s×s identity matrix with ri ≤ s ≤ n+1. (Here Iri = 0 if WtAiW = 0 for some i ∈ {1, 2, . . ., k}.)
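The minus order appearing in Theorem 5 can also be tested numerically through its well-known rank characterization: A ≤− B precisely when rank(B − A) = rank(B) − rank(A) (cf. Hartwig, 1980). The sketch below is ours; S is an arbitrary invertible matrix we chose, and A, B are built exactly as in Theorem 5 with r = 1 and s = 2.

```python
import numpy as np

def minus_leq(A, B):
    """Rank-subtractivity test: A <=^- B iff rank(B - A) == rank(B) - rank(A)."""
    rank = np.linalg.matrix_rank
    return rank(B - A) == rank(B) - rank(A)

S = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])             # invertible (det = 3)
A = S @ np.diag([1.0, 0.0, 0.0]) @ S.T      # r = 1
B = S @ np.diag([1.0, 1.0, 0.0]) @ S.T      # s = 2, same S as for A
print(minus_leq(A, B))                      # True: A <=^- B, as Theorem 5 predicts
print(minus_leq(A, np.eye(3)))              # False: rank(B - A) != rank(B) - rank(A) here
```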
Preservers of Partial Orders

The first example of a solution to a preserver problem dates back to the year 1897, when Frobenius described the form of all bijective linear maps Φ: Mn(F) → Mn(F) that preserve the determinant, i.e. det Φ(A) = det A for every A ∈ Mn(F). Since then many authors have studied various preserver problems (see the monograph by Molnár, 2007, and the references therein). Let Hn(F) denote the set of all Hermitian (i.e. symmetric in the real case) matrices in Mn(F), and let H+n(F) be the cone of all positive semidefinite matrices in Hn(F). Note that if A ∈ Mm,n(F), then A∗A ∈ H+n(F). We say that two matrices A,B ∈ Mm,n(F) are ordered as A ≤N B if and only if A∗A ≤L B∗B, i.e. B∗B − A∗A ∈ H+n(F). The relation ≤N has many applications in statistics, e.g. in the study of probability measures, in linear estimation theory, in the analysis of the power of a binary hypothesis test, etc. (Jensen, 1984, Part 2). In some of these applications order-preserving maps are used; e.g., in (Jensen, 1984, Application 3) the author uses maps Φ: Mm,n(R) → R defined by Φ(A) = ϕ(AtA), A ∈ Mm,n(R), where ϕ: H+n(R) → R is an order-preserving map in one direction with respect to the Löwner partial order, i.e. A ≤L B implies ϕ(A) ≤ ϕ(B) for every A,B ∈ H+n(R). It is thus natural to study and try to characterize transformations on H+n(R) that have a 'Löwner order-preserving property' (i.e. maps that preserve the Löwner partial order either in one direction or in both directions, the latter in the sense of (10)), and perhaps have some additional properties. Moreover, in modern high-dimensional probability theory and statistics, transformations are often applied to the entries of variance-covariance matrices in order to obtain regularized estimators with attractive properties (sparsity, good condition number, etc.); see Bickel and Levina (2008). The resulting matrices often serve as ingredients in statistical procedures that require these matrices to be positive semidefinite (Guillot et al., 2015).

Motivated by applications in quantum information theory and quantum statistics, Molnár studied preservers that are connected to certain structures of bounded linear operators which appear in the mathematical foundations of quantum mechanics, i.e. he studied automorphisms of the underlying quantum structures or, in other words, quantum mechanical symmetries. From one of Molnár's results (Molnár, 2001, Theorem 1) it follows that bijective maps Φ: H+n(C) → H+n(C), n ≥ 2, where A ≤L B if and only if Φ(A) ≤L Φ(B), A,B ∈ H+n(C), are of the form

Φ(A) = TAT∗, A ∈ H+n(C), (13)

where T ∈ Mn(C) is an invertible matrix. Motivated by possible applications in statistics, the authors studied in (Golubić & Marovt, in press) bi-preservers on H+n(R) of the Löwner partial order. They showed that an analogue of Molnár's result (13) holds also in the real matrix case.

Theorem 6. Let n ≥ 2 be an integer. Then ϕ: H+n(R) → H+n(R) is a surjective bi-preserver of the Löwner partial order ≤L if and only if there exists an invertible matrix S ∈ Mn(R) such that ϕ(A) = SASt for every A ∈ H+n(R).
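The congruence form in Theorem 6 is easy to see in action: B − A is positive semidefinite exactly when S(B − A)St is, so A ↦ SASt preserves ≤L in both directions. A small numerical confirmation of ours, with an arbitrary invertible S:

```python
import numpy as np

def loewner_leq(A, B, tol=1e-10):
    return bool(np.all(np.linalg.eigvalsh(B - A) >= -tol))

S = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])                  # invertible (det = 7)
phi = lambda A: S @ A @ S.T                      # the form given by Theorem 6

A = np.diag([1.0, 1.0, 0.0])
B = np.diag([2.0, 1.0, 1.0])                     # A <=_L B
print(loewner_leq(A, B), loewner_leq(phi(A), phi(B)))   # True True
print(loewner_leq(B, A), loewner_leq(phi(B), phi(A)))   # False False
```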
The following observation, connected to the theory of comparison of linear models, was presented in (Golubić & Marovt, in press).

Remark. Let L1 = (y1, X1β, σ2D1) and L2 = (y2, X2β, σ2D2) be two linear models. Here X1 ∈ Mn,p(R), X2 ∈ Mm,p(R), D1 ∈ H+n(R), and D2 ∈ H+m(R). Recall that L1 ⪰ L2 means that the model L1 is at least as good as the model L2 and that by Theorem 1, L1 ⪰ L2 if and only if M2 ≤L M1 where Mi = Xti(Di + XiXti)−Xi, i ∈ {1,2}. Moreover, Stępniak noted in (Stępniak, 1985) that when Im Xi ⊆ Im Di, i ∈ {1,2}, we may replace Xti(Di + XiXti)−Xi with Xti D−i Xi. When Di = Xi, i ∈ {1,2}, these matrices may be further simplified to Mi = Xti D−i Xi = Dti D−i Di = Di D−i Di = Di. For models L1 = (y1, D1β, σ2D1) and L2 = (y2, D2β, σ2D2) we thus have L1 ⪰ L2 if and only if D2 ≤L D1.

Let n > 1. For a random n×1 vector of observed quantities yi, an unspecified n×1 vector βi, and an unspecified nonnegative scalar σ2i, let ℒi be the set of all linear models Li = (yi, Dβi, σ2iD) where D ∈ H+n(R) may vary from model to model. Define a map ψ: ℒ1 → ℒ2 by ψ((y1, Dβ1, σ21D)) = (y2, ϕ(D)β2, σ22ϕ(D)) where ϕ: H+n(R) → H+n(R) is a surjective map. Suppose L1a ⪰ L1b if and only if ψ(L1a) ⪰ ψ(L1b) for every L1a, L1b ∈ ℒ1. This assumption may be reformulated as D1b ≤L D1a if and only if ϕ(D1b) ≤L ϕ(D1a), D1a, D1b ∈ H+n(R), and therefore Theorem 6 completely determines the form of any such map ψ.

Let A,B ∈ Hn(F). Since then A∗A = A∗B if and only if (A∗A)∗ = (A∗B)∗, i.e. if and only if A2 = BA, which is equivalent to AA∗ = BA∗, we may conclude (compare (5) with (6) and (7)) that the star, the left-star, and the right-star partial orders are the same partial order on Hn(F). They are, however, different from the minus partial order even on H+n(F) (see Golubić & Marovt, 2020, for a counterexample). Motivated by applications of the minus and the left-star partial orders in linear model theory (see Theorems 3 and 4), the authors characterized in (Golubić & Marovt, in press, 2020) the surjective, additive minus and star partial order bi-preservers on H+n(R), n ≥ 3. We present here a result concerning the star partial order bi-preservers. Recall that A ∈ Mn(R) is called an orthogonal matrix when AtA = AAt = I, i.e. when At = A−1, where A−1 denotes the usual inverse of an invertible matrix A.

Theorem 7. Let n ≥ 3 be an integer. Then ϕ: H+n(R) → H+n(R) is a surjective, additive bi-preserver of the star partial order if and only if there exist an orthogonal matrix R ∈ Mn(R) and a real number λ > 0 such that ϕ(A) = λRARt for every A ∈ H+n(R).

We end the paper with the remark that in a very recent paper (Dolinar et al., 2020) the forms of general (not necessarily additive) surjective bi-preservers of the left-star partial order and the right-star partial order on Mn(F), n ≥ 3, were described. The results, which are expressed by using the Moore-Penrose inverse, are rather technical and hence we omit them. Nevertheless, we mention that it was noted in (Dolinar et al., 2020) that given the model M = (y, Xβ, σ2I) one might rather work with the transformed model M̂ = (y, X̂β, σ2I) because the matrix X̂ ∈ Mn(R) has more attractive properties than X ∈ Mn(R) (e.g. elements of X that are very close to zero are transformed to zero), and thus it is natural to demand that the transformed model still retains most of the properties of the original model (e.g. has similar relations to other transformed models). Thus, in view of Theorem 3, it is interesting to know which transformations on Mn(R) preserve the left-star partial order in both directions.
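The map in Theorem 7 is again easy to confirm numerically: conjugating by an orthogonal R and scaling by λ > 0 leaves both conditions in (5) intact. A hedged sketch of ours (R is a rotation we picked, and star_leq uses the Moore-Penrose characterization of the star order):

```python
import numpy as np

def star_leq(A, B, tol=1e-10):
    """A <=* B iff A†A = A†B and AA† = BA†."""
    Ad = np.linalg.pinv(A)
    return (np.allclose(Ad @ A, Ad @ B, atol=tol) and
            np.allclose(A @ Ad, B @ Ad, atol=tol))

t = 0.3                                    # any angle gives an orthogonal R
R = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])
lam = 2.5
phi = lambda A: lam * R @ A @ R.T          # the form given by Theorem 7

A = np.diag([1.0, 0.0, 0.0])
B = np.diag([1.0, 2.0, 0.0])               # A <=* B in H_3^+(R)
print(star_leq(A, B), star_leq(phi(A), phi(B)))   # True True
```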
We believe that preservers of various relations on sets of matrices hold great potential for applications in statistics, and we hope that our review of 'preserver results' might encourage some statisticians and/or mathematicians to find further connections between certain bi-preservers and statistics.

References

Baksalary, J. K., Hauke, J., & Styan, G. P. H. (1992, March). On some distributional properties of quadratic forms in normal variables and some associated matrix partial orderings. Paper presented at the International Symposium on Multivariate Analysis and its Applications, Hong Kong.

Baksalary, J. K., & Mitra, S. K. (1991). Left-star and right-star partial orderings. Linear Algebra and its Applications, 149, 73–89.

Baksalary, J. K., & Puntanen, S. (1990). Characterizations of the best linear unbiased estimator in the general Gauss-Markov model with the use of matrix partial orderings. Linear Algebra and its Applications, 127, 363–370.

Bickel, P. J., & Levina, E. (2008). Covariance regularization by thresholding. Annals of Statistics, 36(6), 2577–2604.

Christensen, R. (1996). Plane answers to complex questions: The theory of linear models. Springer.

Dolinar, G., Halicioglu, S., Harmanci, A., Kuzma, B., Marovt, J., & Ungor, B. (2020). Preservers of the left-star and right-star partial orders. Linear Algebra and its Applications, 587, 70–91.

Drazin, M. P. (1978). Natural structures on semigroups with involution. Bulletin of the American Mathematical Society, 84, 139–141.

Golubić, I., & Marovt, J. (2020). Monotone transformations on the cone of all positive semidefinite real matrices. Mathematica Slovaca, 70(3), 733–744.

Golubić, I., & Marovt, J. (In press). Preservers of partial orders on the set of all variance-covariance matrices. Filomat.

Guillot, D., Khare, A., & Rajaratnam, B. (2015). Complete characterization of Hadamard powers preserving Loewner positivity, monotonicity, and convexity. Journal of Mathematical Analysis and Applications, 425(1), 489–507.

Hartwig, R. E. (1980). How to partially order regular elements. Japanese Journal of Mathematics, 25, 1–13.

Jensen, D. R. (1984). Invariant ordering and order preservation: Inequalities in statistics and probability. IMS Lecture Notes, 5, 26–34.

Mitra, S. K., Bhimasankaram, P., & Malik, S. B. (2010). Matrix partial orders, shorted operators and applications. World Scientific.

Molnár, L. (2001). Order-automorphisms of the set of bounded observables. Journal of Mathematical Physics, 42, 5904–5909.

Molnár, L. (2007). Selected preserver problems on algebraic structures of linear operators and on function spaces (Lecture Notes in Mathematics, No. 1895). Springer.

Munje, P. N., Tsakeni, M., & Jita, L. C. (2020). School heads of departments' roles in advancing science and mathematics through the distributed leadership framework. International Journal of Learning, Teaching and Educational Research, 19(9), 39–57.

Phusavat, K., Anussornnitisarn, P., Tangkapaisarn, P., & Kess, P. (2009). Knowledge management and performance improvement: The roles of statistics and mathematics. International Journal of Innovation and Learning, 6(3), 306–322.

Priestley, J., & McGrath, R. J. (2019). The evolution of data science: A new mode of knowledge production. International Journal of Knowledge Management, 15(2), 97–109.

Schott, J. R. (2005). Matrix analysis for statistics. Wiley.
Sengupta, D., & Jammalamadaka, S. R. (2003). Linear models: An integrated approach. World Scientific.

Stępniak, C. (1985). Ordering of positive semidefinite matrices with application to comparison of linear models. Linear Algebra and its Applications, 70, 67–71.

Iva Golubić is a lecturer of mathematics and statistics at the University of Applied Sciences Velika Gorica and Algebra University College in Croatia. Her fields of interest are linear algebra and functional analysis with applications in linear models. ig4290@student.uni-lj.si

Janko Marovt is an associate professor of mathematics at the University of Maribor. He is additionally employed at the Institute of Mathematics, Physics, and Mechanics (IMFM), Ljubljana. His research interests are functional analysis and linear algebra with applications. He is an author or coauthor of more than 30 research papers. janko.marovt@um.si