Image Anal Stereol 2014;33:95-105 doi:10.5566/ias.v33.p95-105 Original Research Paper

STRUCTURE TENSOR IMAGE FILTERING USING RIEMANNIAN L1 AND L∞ CENTER-OF-MASS

JESÚS ANGULO
MINES ParisTech, CMM-Centre for Mathematical Morphology, 35 rue St Honoré, 77305 Fontainebleau Cedex, France
e-mail: jesus.angulo@mines-paristech.fr
(Received October 27, 2013; revised January 1, 2014; accepted March 13, 2014)

ABSTRACT

Structure tensor images are obtained by a Gaussian smoothing of the dyadic product of the gradient image. These images give at each pixel an n × n symmetric positive definite matrix, an element of SPD(n), representing the local orientation and the edge information. Processing such images requires appropriate algorithms working on the Riemannian manifold of SPD(n) matrices. This contribution deals with structure tensor image filtering based on $L_p$ geometric averaging. In particular, the $L_1$ center-of-mass (Riemannian median or Fermat-Weber point) and the $L_\infty$ center-of-mass (Riemannian circumcenter) can be obtained for structure tensors using recently proposed algorithms. Our contribution in this paper is to study the interest of the $L_1$ and $L_\infty$ Riemannian estimators for structure tensor image processing. In particular, we compare both for two image analysis tasks: (i) structure tensor image denoising; (ii) anomaly detection in structure tensor images.

Keywords: Riemannian averaging, Riemannian center-of-mass, structure tensor, tensor image denoising, tensor image enhancement, tensor-valued images.

INTRODUCTION

Given a 2D scalar image $u : \Omega \subset \mathbb{Z}^2 \to \mathbb{R}$, the associated structure tensor image represents the local orientation and the edge information of $u(x,y)$ (Förstner and Gülch, 1987; Knutsson, 1989). More precisely, the structure tensor image $f(x,y)$ is just a regularization of the first fundamental form of the image $u(x,y)$. Hence, it involves a simple computation based on first derivatives of $u(x,y)$, followed by a Gaussian smoothing of the dyadic product $\nabla u \nabla u^T$:

$$u(x,y) \mapsto f(x,y) = G_\sigma * \big( \nabla u(x,y)\, \nabla u(x,y)^T \big) = G_\sigma * \begin{pmatrix} \left( \frac{\partial u}{\partial x} \right)^2 & \frac{\partial u}{\partial x} \frac{\partial u}{\partial y} \\[4pt] \frac{\partial u}{\partial x} \frac{\partial u}{\partial y} & \left( \frac{\partial u}{\partial y} \right)^2 \end{pmatrix}, \qquad (1)$$

where

$$\nabla u(x,y) = \left( \frac{\partial u}{\partial x}, \frac{\partial u}{\partial y} \right)^T$$

is the 2D spatial intensity gradient and $G_\sigma$ stands for a Gaussian smoothing with standard deviation $\sigma$. We note that $f(x,y)$ can be understood as the local covariance matrix of the set of gradient vectors around point $(x,y)$. We should also remark that if $\sigma$ is very small, $f(x,y)$ is a rank-1 tensor at any point. For the sake of simplicity, we focus here on the case of 2D gray-level images; however, the structure tensor can easily be extended to 3D images, including color and multispectral-valued ones. By its robustness against illumination changes as well as its invariance to some geometric image transformations, the structure tensor is a versatile method used frequently in computer vision for corner detection, optical flow estimation, segmentation, stereo matching, etc.

Structure tensor images are just an example of the so-called tensor-valued images; namely, a spatially structured matrix field $f : \Omega \to \mathrm{SPD}(n)$, where the support space is $\Omega \subset \mathbb{Z}^2, \mathbb{Z}^3$ and SPD(n) is the space of (real) $n \times n$ symmetric positive definite matrices. Besides the structure tensor, SPD(n)-valued images appear nowadays in various image processing fields and applications, for instance in diffusion tensor magnetic resonance imaging (DT-MRI) (Basser et al., 1994) or in radar imaging based on covariance matrix estimation (Barbaresco, 2011).

Fig. 1 gives an example of a structure tensor image, where the SPD(2) element at each pixel is depicted by the corresponding ellipse of semi-axes $1/\sqrt{\lambda_1}$ and $1/\sqrt{\lambda_2}$, where $\lambda_1$ and $\lambda_2$, $\lambda_1 \geq \lambda_2$, are the eigenvalues of the matrix and the ellipse orientation represents the corresponding eigenvectors $e_1$ and $e_2$. As shown, the tensor structure information can also be visualized by the image of tensor energy (i.e., sum of eigenvalues) and the image of anisotropy (i.e., ratio of eigenvalues).
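To make the computation in Eq. 1 concrete, the following minimal Python sketch builds the structure tensor field of a gray-level image. It assumes NumPy and SciPy are available; the function name, the test image and the value of $\sigma$ are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor(u, sigma):
    """Structure tensor field of Eq. 1: Gaussian smoothing of grad(u) grad(u)^T."""
    uy, ux = np.gradient(u.astype(float))     # derivatives along y (rows) and x (cols)
    fxx = gaussian_filter(ux * ux, sigma)     # smoothed (du/dx)^2
    fxy = gaussian_filter(ux * uy, sigma)     # smoothed (du/dx)(du/dy)
    fyy = gaussian_filter(uy * uy, sigma)     # smoothed (du/dy)^2
    f = np.empty(u.shape + (2, 2))            # one 2x2 SPD(2) matrix per pixel
    f[..., 0, 0], f[..., 0, 1] = fxx, fxy
    f[..., 1, 0], f[..., 1, 1] = fxy, fyy
    return f

# Illustrative input: a binary image with a diagonal edge.
u = np.fromfunction(lambda i, j: (i > j).astype(float), (64, 64))
f = structure_tensor(u, sigma=3.0)
lam = np.linalg.eigvalsh(f)                   # per-pixel eigenvalues (ascending)
energy = lam.sum(axis=-1)                     # tensor energy: lambda_1 + lambda_2
anisotropy = lam[..., 1] / (lam[..., 0] + 1e-12)   # ratio of eigenvalues
```

The per-pixel eigendecomposition directly provides the energy and anisotropy images used for visualization in Fig. 1.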
Fig. 1. Example of structure tensor image: (a) original image $u(x,y) \in F(\Omega, \mathbb{R})$; (b) corresponding structure tensor image $f(x,y) \in F(\Omega, \mathrm{SPD}(2))$ (computed with $\sigma = 15$); (c) image of tensor energy $\lambda_1 + \lambda_2$ (sum of eigenvalues); (d) image of anisotropy $\lambda_1/\lambda_2$ (ratio of eigenvalues). Note that the ellipses in (b) are normalized in "size", and the latter is given by the color lookup table.

Motivation: Riemannian representation of structure tensor images

The value at each pixel of a structure tensor image can be viewed as a point belonging to the non-Euclidean space underlying SPD(n) (Bhatia, 2007), which is just the interior of a convex cone, with apex 0 and whose boundary consists of all rank-deficient symmetric positive semi-definite matrices. In order to visualize this cone, let us consider the case of a tensor $M \in \mathrm{SPD}(2)$, i.e.,

$$M = \begin{pmatrix} a & c \\ c & b \end{pmatrix}, \quad \text{such that } a, b > 0 \text{ and } ab - c^2 > 0.$$

The cone SPD(n) is a differentiable manifold endowed with a Riemannian structure, where the base point-dependent inner product is defined by $\langle A, B \rangle_P = \operatorname{tr}\big( P^{-1} A P^{-1} B \big)$. This inner product leads to a natural Riemannian metric on SPD(n) whose line element is $ds^2 = \operatorname{tr}\big( P^{-1} dP\, P^{-1} dP \big)$. Note that this metric is invariant under congruent transformations, i.e., $P \mapsto L P L^T$, and under inversion, i.e., $P \mapsto P^{-1}$. From the metric, the unique geodesic parameterized by arc length, $t \mapsto \gamma(t)$, joining two elements $P, Q \in \mathrm{SPD}(n)$, is defined as

$$\gamma(t) = P^{1/2} \big( P^{-1/2} Q P^{-1/2} \big)^t P^{1/2}, \qquad (2)$$

where $\gamma(0) = P$ and $\gamma(1) = Q$. Similarly, the geodesic (metric length) distance between $P, Q \in \mathrm{SPD}(n)$ is given by

$$d(P,Q) = \big\| \log\big( P^{-1/2} Q P^{-1/2} \big) \big\|_F = \sqrt{ \operatorname{tr} \log^2 \big( P^{-1} Q \big) }. \qquad (3)$$

The tangent space of a manifold $\mathcal{M}$ at a point $p \in \mathcal{M}$ is denoted by $T_p\mathcal{M}$. Let $(\mathcal{M}, g)$ be a Riemannian manifold, where the Riemannian metric $g$ on $\mathcal{M}$ is a family of (positive definite) inner products, $\langle \cdot, \cdot \rangle_p : T_p\mathcal{M} \times T_p\mathcal{M} \to \mathbb{R}$, which varies smoothly with respect to $p \in \mathcal{M}$. The notions of exponential and logarithmic maps are extremely powerful in Riemannian manifolds; see the diagram in Fig. 2a.
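The geodesic Eq. 2 and the distance Eq. 3 can be evaluated directly with standard matrix function routines; the sketch below assumes SciPy, and the helper names and test matrices are illustrative.

```python
import numpy as np
from scipy.linalg import sqrtm, inv, logm, fractional_matrix_power

def spd_geodesic(P, Q, t):
    """Point gamma(t) of Eq. 2 on the geodesic from P (t=0) to Q (t=1)."""
    Ph = sqrtm(P)                  # P^{1/2}
    Phi = inv(Ph)                  # P^{-1/2}
    return Ph @ fractional_matrix_power(Phi @ Q @ Phi, t) @ Ph

def spd_distance(P, Q):
    """Affine-invariant distance of Eq. 3: ||log(P^{-1/2} Q P^{-1/2})||_F."""
    Phi = inv(sqrtm(P))
    return np.linalg.norm(logm(Phi @ Q @ Phi), 'fro')

# Illustrative SPD(2) matrices.
P = np.array([[2.0, 0.5], [0.5, 1.0]])
Q = np.array([[1.0, -0.3], [-0.3, 3.0]])
M = spd_geodesic(P, Q, 0.5)        # Riemannian midpoint of P and Q
```

Since the geodesic is parameterized proportionally to arc length, the midpoint satisfies $d(P, M) = d(M, Q) = d(P, Q)/2$.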
The exponential operator $\operatorname{Exp}_p$ maps a point of $T_p\mathcal{M}$ into a point in $\mathcal{M}$. The exponential map is injective on a zero-centered ball $B$ in $T_p\mathcal{M}$ of some non-zero (possibly infinite) radius. Thus, for a point $q$ in the image of $B$ under $\operatorname{Exp}_p$, there exists a unique vector $v \in T_p\mathcal{M}$ corresponding to a minimal length path under the exponential map from $p$ to $q$. Exponential maps may be associated to a manifold with the help of geodesic curves. The exponential map $\operatorname{Exp}_p : T_p\mathcal{M} \to \mathcal{M}$ associated to any geodesic $\gamma_v$ emanating from $p$ with tangent at the origin $v \in T_p\mathcal{M}$ is defined as $\operatorname{Exp}_p(v) = \gamma_v(1)$. The geodesic has constant speed $\| d\gamma_v/dt \|(t) = \|v\|$, and thus the exponential map preserves distances from the initial point: $d(p, \operatorname{Exp}_p(v)) = \|v\|$. The inverse operator, named the logarithm map, $\operatorname{Exp}_p^{-1} = \operatorname{Log}_p$, maps a point $q \in \mathcal{M}$ to its associated tangent vector $v \in T_p\mathcal{M}$. Thus, for a point $q$ in the domain of $\operatorname{Log}_p$, the geodesic distance between $p$ and $q$ is given by $d(p,q) = \| \operatorname{Log}_p(q) \|$.

Fig. 2. (a) Exponential and logarithmic maps in Riemannian manifolds. (b) Set of sample points in a Riemannian manifold $\mathcal{M}$.

The exponential map and its inverse map from the cone SPD(n) onto the tangent vector space $T_P \mathrm{SPD}(n)$ at a given matrix $P$ are respectively defined in closed form as (Moakher, 2005; Fletcher et al., 2009; Fiori and Toshihisa, 2009)

$$\operatorname{Exp}_P : \operatorname{Sym}(n) \longrightarrow \mathrm{SPD}(n), \quad M \mapsto \operatorname{Exp}_P(M) = P^{1/2} \exp(M)\, P^{1/2}, \qquad (4)$$

$$\operatorname{Log}_P : \mathrm{SPD}(n) \longrightarrow \operatorname{Sym}(n), \quad Q \mapsto \operatorname{Log}_P(Q) = \log\big( P^{-1/2} Q P^{-1/2} \big). \qquad (5)$$

Note that it is assumed that the tangent space to SPD(n) at $P$ is identified with the linear vector space associated to $\operatorname{Sym}(n)$ ($n \times n$ symmetric matrices), i.e., $T_P \mathrm{SPD}(n) \cong \operatorname{Sym}(n)$.

Aim: Riemannian averaging of structure tensor images

In this context, the goal of this work is to show how to process tensor-valued images using algorithms based on the Riemannian nature of SPD(n). Filtering tensor images $f \in F(\Omega, \mathrm{SPD}(n))$ involves in our case a method to compute the average of a set of samples in SPD(n).
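The closed forms Eq. 4 and Eq. 5 translate directly into code; the following sketch assumes SciPy, with illustrative helper names, and checks that the two maps are mutually inverse.

```python
import numpy as np
from scipy.linalg import sqrtm, inv, expm, logm

def exp_map(P, M):
    """Exp_P(M) = P^{1/2} exp(M) P^{1/2} of Eq. 4, from Sym(n) into SPD(n)."""
    Ph = sqrtm(P)
    return Ph @ expm(M) @ Ph

def log_map(P, Q):
    """Log_P(Q) = log(P^{-1/2} Q P^{-1/2}) of Eq. 5, from SPD(n) into Sym(n)."""
    Phi = inv(sqrtm(P))
    return logm(Phi @ Q @ Phi)

# Illustrative SPD(2) matrices: the two maps are mutually inverse, and the
# Frobenius norm of Log_P(Q) equals the geodesic distance d(P, Q) of Eq. 3.
P = np.array([[2.0, 0.4], [0.4, 1.5]])
Q = np.array([[1.2, -0.2], [-0.2, 0.8]])
V = log_map(P, Q)                  # tangent vector in Sym(2)
Q_back = exp_map(P, V)             # recovers Q
```

Note that $\operatorname{Exp}_P(0) = P$, so the zero tangent vector maps back to the base point.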
More formally, let $\{A_i\}_{i=1}^N$ be a finite set of $N$ SPD(n) matrices; our aim is to compute their Riemannian $L_p$ center-of-mass. This setting is a particular case of the problem of $L_p$ averaging of a discrete set of sample points in a Riemannian manifold; see the diagram in Fig. 2b. Let $\mathcal{M}$ be a Riemannian manifold and let $d(x,y)$ be the Riemannian distance function on $\mathcal{M}$. Given $N$ points $x_1, x_2, \cdots, x_N \in \mathcal{M}$ and corresponding positive real weights $\omega_1, \omega_2, \cdots, \omega_N$, with $\sum_{1 \leq i \leq N} \omega_i = 1$, the Riemannian $L_p$ center of mass, with $p \in [1, +\infty)$, is defined as the minimizer of the sum of $p$-powered distances:

$$c_p = \arg\min_{x \in \mathcal{M}} \sum_{i=1}^{N} \omega_i\, d^p(x, x_i). \qquad (6)$$

The general definition Eq. 6 includes two cases of well-known Riemannian statistics. The classical geometric mean (Karcher-Fréchet barycenter) is the minimizer of the sum-of-squared-distances function:

$$\mu = \arg\min_{x \in \mathcal{M}} \sum_{i=1}^{N} \omega_i\, d^2(x, x_i), \qquad (7)$$

and the geometric median (Fermat-Weber point) is the minimizer of the sum-of-distances function:

$$m = \arg\min_{x \in \mathcal{M}} \sum_{i=1}^{N} \omega_i\, d(x, x_i). \qquad (8)$$

Additionally, the particular case $p = +\infty$, known as the Riemannian circumcenter (or 1-center or minimax center), corresponds to the minimizer of the max-of-distances function:

$$c_\infty = \arg\min_{x \in \operatorname{supp}_{\mathcal{M}}(\{x_i\})} \max_{1 \leq i \leq N} d(x, x_i), \qquad (9)$$

where $\operatorname{supp}_{\mathcal{M}}(\{x_i\})$ is the closure of the convex hull on $\mathcal{M}$ of $\{x_i\}_{i=1}^N$.

To have an appropriate definition of the Riemannian center-of-mass, it should be assumed that the points $x_i \in \mathcal{M}$ lie in a convex set $U \subset \mathcal{M}$, i.e., any two points in $U$ are connected by a unique shortest geodesic lying entirely in $U$. The diameter of $U$, denoted $\operatorname{diam}(U)$, is the maximal distance between any two points in $U$. We notice that the squared geodesic distance function and the geodesic distance function in $U$ are convex. Existence and uniqueness of the geometric mean Eq. 7 and the geometric median Eq. 8 have been widely considered: both exist and are unique if the sectional curvatures of $\mathcal{M}$ are nonpositive, or if the sectional curvatures of $\mathcal{M}$ are bounded above by $\Delta > 0$ and $\operatorname{diam}(U) < \pi/(2\sqrt{\Delta})$
(Karcher, 1977; Kendall, 1984; Fletcher et al., 2009). More recently, the existence and uniqueness of the Riemannian $L_p$ center of mass, $1 \leq p \leq \infty$, have been studied in Afsari (2010). We can also find more recent results on existence and uniqueness, including practical algorithms, for $L_2$ (Bhattacharya and Patrangenaru, 2003; Le, 2004), for $L_1$ (Fletcher et al., 2009; Yang, 2010), for general $L_p$ (Afsari, 2010; Afsari et al., 2013) and for $L_\infty$ (Arnaudon and Nielsen, 2013). We can also mention some results on stochastic algorithms (which avoid computing the gradient to be minimized) (Arnaudon et al., 2012; Bonnabel, 2013).

Related work

Our contribution here is to study the interest of the $L_1$ and $L_\infty$ Riemannian center-of-mass for structure tensor image processing. In particular, we compare both estimators for two image analysis tasks: (i) structure tensor image denoising; (ii) anomaly detection in structure tensor images.

There are several proposals in the literature that intend to process the tensor images obtained from structure tensor computation. Tensor filtering can be achieved by PDE approaches (Tschumperlé and Deriche, 2002) or by frequency filtering techniques (Larrey-Ruiz et al., 2011). Extensions of diffusion filtering to matrix-valued images have also been widely studied in the literature (Burgeth et al., 2007). The latter approach is also related to its discrete counterpart, which involves the computation of local adaptive neighborhood filters for matrix fields (Pizarro et al., 2008). From an alternative viewpoint, regularization can be done intrinsically in the structure tensor computation. The underlying idea consists in locally adapting the Gaussian kernel (Nagel and Gehrke, 1998), defining another adaptively shaped filter (Köthe, 2003), or other adaptive tensor computations (Brox et al., 2005).
The notion of nonlinear structure tensor involves replacing the Gaussian smoothing of the classical structure tensor by a discontinuity-preserving nonlinear diffusion (Brox et al., 2006). To the best of our knowledge, the Riemannian $L_p$ center-of-mass has not been previously used as a filtering approach for structure tensor images.

Paper organization

The rest of the paper is organized as follows. The Methods Section presents the algorithms for computing the $L_1$ and $L_\infty$ Riemannian center-of-mass of a set of tensors, which are the basic ingredients for the regularization and enhancement of structure tensor images. All details for implementing those algorithms are given. The Results Section discusses the performance of $L_1$ and $L_\infty$ Riemannian structure tensor image processing for the problems of denoising and anomaly detection. In the case of denoising, the comparison includes a quantitative assessment. The case of anomaly detection deals exclusively with a qualitative comparison. Finally, the Section on conclusions and perspectives closes the paper.

METHODS

We discuss in this section the algorithms for computing the $L_1$ and $L_\infty$ Riemannian center-of-mass of a set of tensors, which are the basic ingredients for filtering the structure tensor images. We start with a reminder of the $L_2$ case, which is used as a baseline for comparison with the other estimators.

RIEMANNIAN MEAN OF SPD(n) MATRICES

Given a manifold $\mathcal{M}$, the Fréchet-Karcher flow (Fréchet, 1948; Karcher, 1977) is an intrinsic gradient flow on $\mathcal{M}$ that converges to the $L_2$ center-of-mass, called the Fréchet-Karcher barycenter. In the discrete case, the $L_2$ center of mass of a finite set of $N$ points on $\mathcal{M}$ is given by the iterative algorithm

$$\mu^{k+1} = \operatorname{Exp}_{\mu^k}\!\Big( \beta \sum_{i=1}^{N} \omega_i \operatorname{Log}_{\mu^k}(x_i) \Big),$$

where $\operatorname{Exp}_\mu(\cdot)$ is the exponential map and $\operatorname{Log}_\mu(a) \in T_\mu\mathcal{M}$ is the tangent vector at $\mu \in \mathcal{M}$ of the geodesic from $\mu$ to $a$, and where $\beta > 0$ is the step parameter of the gradient descent. Using the expressions of the exponential Eq. 4 and logarithmic Eq. 5 maps of tensors, the geometric mean of a set $\{A_i\}_{i=1}^N$ of $N$ SPD(n) matrices, with weights $\{w_i\}_{i=1}^N$, can be computed by the following Fréchet-Karcher gradient flow

$$\bar{A}_{k+1} = \bar{A}_k^{1/2} \exp\!\Big( \beta \sum_{i=1}^{N} w_i \log\big( \bar{A}_k^{-1/2} A_i \bar{A}_k^{-1/2} \big) \Big) \bar{A}_k^{1/2}, \qquad (10)$$

where $\beta > 0$ is the step parameter of the gradient descent. We can typically use a constant step size, i.e., it is fixed to $\beta = 1/N$ for any $k$. In order to guarantee a fast convergence of the algorithm Eq. 10 to the unique minimum, it is useful to have an initialization close to the final average. Hence, we propose to initialize with the arithmetic mean tensor.

L1 RIEMANNIAN CENTER-OF-MASS OF SPD(n) MATRICES

The Fermat-Weber point, or geometric median Eq. 8, can also be particularized to tensors. Indeed, for any Riemannian manifold $\mathcal{M}$, the gradient of the Riemannian sum-of-distances function is given by

$$\nabla f(x)\big|_{x \in U,\, x \neq x_i} = -\sum_{i=1}^{N} w_i \frac{\operatorname{Log}_x(x_i)}{d(x, x_i)} = -\sum_{i=1}^{N} w_i \frac{\operatorname{Log}_x(x_i)}{\| \operatorname{Log}_x(x_i) \|} .$$

With this result, the classical Weiszfeld-Ostresh algorithm (Weiszfeld, 1937; Ostresh, 1978) for iteratively computing the median was extended in Fletcher et al. (2009) to Riemannian manifolds as:

$$m^{k+1} = \operatorname{Exp}_{m^k}\!\left( \beta \Big( \sum_{i \in I_k} w_i \frac{\operatorname{Log}_{m^k}(x_i)}{\| \operatorname{Log}_{m^k}(x_i) \|} \Big) \Big( \sum_{i \in I_k} \frac{w_i}{\| \operatorname{Log}_{m^k}(x_i) \|} \Big)^{-1} \right),$$

where $I_k = \{ i \in [1,N] : m^k \neq x_i \}$. Now, by straightforward substitution of the expressions Eq. 4 and Eq. 5, one obtains the geometric median of a finite set $\{A_i\}_{i=1}^N$ of $N$ SPD(n) matrices, with weights $\{w_i\}_{i=1}^N$, using the Riemannian Weiszfeld-Ostresh algorithm as follows

$$\bar{A}_{k+1} = \bar{A}_k^{1/2} \exp\!\left( \beta \Big( \sum_{i \in N_k} w_i \frac{\log\big( \bar{A}_k^{-1/2} A_i \bar{A}_k^{-1/2} \big)}{\big\| \log\big( \bar{A}_k^{-1/2} A_i \bar{A}_k^{-1/2} \big) \big\|} \Big) \Big( \sum_{i \in N_k} \frac{w_i}{\big\| \log\big( \bar{A}_k^{-1/2} A_i \bar{A}_k^{-1/2} \big) \big\|} \Big)^{-1} \right) \bar{A}_k^{1/2}, \qquad (11)$$

where $N_k = \{ n \in [1,N] : \bar{A}_k \neq A_n \}$ and $0 \leq \beta \leq 2$. The step size is fixed to $\beta = 1$. It was proven in Fletcher et al. (2009) that the Riemannian Weiszfeld-Ostresh algorithm converges to the geometric median, $\lim_{k \to \infty} m^k = m$, in the case of a negatively curved manifold such as SPD(n), if $0 \leq \beta \leq 2$ and if the set of points is not too dispersed. Even in the case of very spread data, we have observed, as suggested in Fletcher et al. (2009), that for $\beta = 1$ convergence is always obtained in SPD(n).

Similarly to the case of the Euclidean $L_p$ center-of-mass, the Riemannian median is theoretically a more robust estimator than the Riemannian mean. More formally, the robustness is related to the notion of breakdown point (Lopuhaä and Rousseeuw, 1991). The finite sample breakdown point of an estimator is the fraction of data that can be given arbitrary values without making the estimator arbitrarily bad: the minimal proportion of data that can be corrupted before the statistic becomes unbounded; typically it is some function of the sample size $N$. Let $X = (x_1, x_2, \cdots, x_N)$, $x_i \in \mathbb{R}^d$; the breakdown point of an estimator $\theta$ is defined as

$$\varepsilon^*(\theta, X, D) = \min_{1 \leq k \leq n} \left\{ \frac{k}{n} : \sup_{Y_k} D\big( \theta(X), \theta(Y_k) \big) = \infty \right\},$$

where $D$ is a metric on the estimator space and the set $Y_k$ contains $(n-k)$ points of the set $X$ and $k$ arbitrary points from $\mathbb{R}^d$. Let $U$ be a convex subset of $\mathcal{M}$ with $\operatorname{diam}(U) < +\infty$, and let $X = \{x_1, x_2, \cdots, x_N\}$ be a collection of points in $U$. Then the Riemannian median has a breakdown point (Fletcher et al., 2009):

$$\varepsilon^*(m, X) = (N-1)/(2N),$$

which means that half of the data needs to be corrupted in order to corrupt this estimator. It should be compared with the breakdown point of the Riemannian mean, $\varepsilon^*(\mu, X) = 1/N$.

Fig. 3. Comparison of $L_p$ Riemannian center-of-mass in SPD(2), visualized as ellipses. In both examples the original tensors are in blue, the geometric mean $L_2$ in green, the geometric median $L_1$ in red and the Riemannian circumcenter $L_\infty$ in black.

Let us give a first illustration of the notion of robustness. Fig. 3 depicts two examples of the comparison of $L_p$ Riemannian center-of-mass in SPD(2). In Fig. 3a, the Riemannian mean (in green) and the Riemannian median (in red) are computed from two tensors. Compare now the results of both with those obtained with the set of four tensors given in Fig. 3b.
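The gradient flow Eq. 10 and the Weiszfeld-Ostresh iteration Eq. 11 (with uniform weights) can be sketched as follows, assuming SciPy; the iteration counts, the warm start and the test tensors are illustrative choices, not prescribed by the paper.

```python
import numpy as np
from scipy.linalg import sqrtm, inv, expm, logm

def log_map(P, Q):
    """Log_P(Q) of Eq. 5, symmetrized against numerical round-off."""
    Phi = inv(sqrtm(P))
    L = logm(Phi @ Q @ Phi)
    return np.real(L + L.T) / 2

def exp_map(P, M):
    """Exp_P(M) of Eq. 4, symmetrized against numerical round-off."""
    Ph = sqrtm(P)
    S = Ph @ expm(M) @ Ph
    return np.real(S + S.T) / 2

def riemannian_mean(As, n_iter=50, beta=1.0):
    """Frechet-Karcher gradient flow of Eq. 10 with uniform normalized weights."""
    w = 1.0 / len(As)
    X = sum(As) * w                        # arithmetic-mean initialization
    for _ in range(n_iter):
        G = sum(w * log_map(X, A) for A in As)   # Riemannian gradient step
        X = exp_map(X, beta * G)
    return X

def riemannian_median(As, n_iter=50, beta=1.0, eps=1e-12):
    """Weiszfeld-Ostresh iteration of Eq. 11 with uniform weights."""
    w = 1.0 / len(As)
    X = riemannian_mean(As, n_iter=10)     # warm start near the data
    for _ in range(n_iter):
        num, den = np.zeros_like(X), 0.0
        for A in As:
            L = log_map(X, A)
            d = np.linalg.norm(L, 'fro')
            if d > eps:                    # exclude samples equal to the estimate
                num += (w / d) * L
                den += w / d
        X = exp_map(X, beta * num / den)
    return X

# Illustrative SPD(2) tensors.
A1 = np.array([[2.0, 0.3], [0.3, 1.0]])
A2 = np.array([[1.0, -0.2], [-0.2, 2.0]])
A3 = np.array([[1.5, 0.0], [0.0, 1.5]])
mu = riemannian_mean([A1, A2, A3])
med = riemannian_median([A1, A2, A3])
```

With normalized weights, the step $\beta = 1$ plays the role of the constant step $\beta = 1/N$ discussed above; at convergence of the mean, the tangent-space gradient $\sum_i w_i \operatorname{Log}_\mu(A_i)$ vanishes.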
One of the two new tensors is an outlier with respect to the three others. We observe that the Riemannian median is less deformed by the outlier than the Riemannian mean.

L∞ RIEMANNIAN CENTER-OF-MASS OF SPD(n) MATRICES

Given a discrete set of $N$ samples $x_1, x_2, \cdots, x_N$, with each $x_i \in \mathbb{R}^n$, the circumcenter (Sylvester point, 1-center or minimax center) is defined as

$$c_\infty = \arg\min_{x \in \mathbb{R}^n} \max_{1 \leq i \leq N} \| x_i - x \|_2 ,$$

and corresponds to finding the unique smallest enclosing ball in $\mathbb{R}^n$ that contains all the given points. Computing the smallest enclosing ball in Euclidean spaces is intractable in high dimension, but efficient approximation algorithms have been proposed. The Bădoiu and Clarkson (2003) algorithm leads to a fast and simple approximation (of known precision $\varepsilon$ after $\lceil 1/\varepsilon^2 \rceil$ iterations, using the notion of core-set, but independent of the dimensionality $n$): initialize the minimax center $c_\infty^1$ with an arbitrary point of $\{x_i\}_{1 \leq i \leq N}$, then iteratively update the center

$$c_\infty^{k+1} = c_\infty^{k} + \frac{f_k - c_\infty^{k}}{k+1} ,$$

where $f_k$ is the farthest point of the set $\{x_i\}_{1 \leq i \leq N}$ from $c_\infty^k$.

For the $L_\infty$ Riemannian center-of-mass (minimum enclosing ball) there is no canonical algorithm generalizing the gradient descent algorithms considered for $p \in [1, \infty)$. In a recent work, Arnaudon and Nielsen (2013) introduced an extended version of the Euclidean algorithm (Bădoiu and Clarkson, 2003) for the circumcenter in Riemannian manifolds. Let us consider a discrete set $\{x_i\}_{i=1}^N \subset \mathcal{M}$ on a manifold $\mathcal{M}$.
– Initialize the center $\bar{x}_\infty^1$ with a point of the set, i.e., $\bar{x}_\infty^1 = x_1$;
– Iteratively update the current minimax center as

$$\bar{x}_\infty^{k+1} = \operatorname{Geodesic}\Big( \bar{x}_\infty^k, f_k, \frac{1}{1+k} \Big),$$

where $f_k$ denotes the farthest point of the set from $\bar{x}_\infty^k$, and $\operatorname{Geodesic}(p,q,t)$ denotes the intermediate point $m$ on the geodesic passing through $p$ and $q$ such that $\operatorname{dist}(p,m) = t \operatorname{dist}(p,q)$. The convergence of this algorithm in nonpositive sectional curvature manifolds such as SPD(n) is guaranteed (Arnaudon and Nielsen, 2013).
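The Euclidean Bădoiu-Clarkson iteration described above can be sketched in a few lines; the point set and iteration count below are illustrative.

```python
import numpy as np

def euclidean_circumcenter(X, n_iter=1000):
    """Badoiu-Clarkson approximation of arg min_c max_i ||x_i - c||."""
    c = X[0].astype(float).copy()     # arbitrary starting sample
    for k in range(1, n_iter + 1):
        f = X[np.argmax(np.linalg.norm(X - c, axis=1))]   # farthest point
        c = c + (f - c) / (k + 1)     # move a fraction 1/(k+1) towards it
    return c

# Illustrative point set: the corners of the unit square,
# whose minimax center is the center of the square.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
c = euclidean_circumcenter(X)
```

The step sizes $1/(k+1)$ shrink, so the iterate oscillates between the farthest samples while converging towards the minimax center.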
The geometric circumcenter of a finite set $\{A_i\}_{i=1}^N$ of $N$ SPD(n) matrices can be computed using the closed expressions of the geodesic Eq. 2 and the distance Eq. 3, as the following instantiation of the Arnaudon-Nielsen algorithm:
– Initialization: $\bar{A}_1 = A_1$;
– Iteratively update:
1. Obtain the farthest SPD(n) matrix from the current estimate:
$$Q_k = \arg\max_{A_i,\, 1 \leq i \leq N} \big\| \log\big( \bar{A}_k^{-1/2} A_i \bar{A}_k^{-1/2} \big) \big\| ;$$
2. Compute the geodesic distance from the current center estimate to the farthest point:
$$\operatorname{dist}(\bar{A}_k, Q_k) = \big\| \log\big( \bar{A}_k^{-1/2} Q_k \bar{A}_k^{-1/2} \big) \big\| ;$$
3. Find the cut of the geodesic
$$\gamma(t) = \bar{A}_k^{1/2} \big( \bar{A}_k^{-1/2} Q_k \bar{A}_k^{-1/2} \big)^t \bar{A}_k^{1/2}$$
at the value $t = \frac{1}{1+k}$, which gives the SPD(n) matrix $\bar{A}_{k+1}$, so that
$$\operatorname{dist}(\bar{A}_k, \bar{A}_{k+1}) = \frac{1}{1+k} \operatorname{dist}(\bar{A}_k, Q_k) .$$

We note that the complexity of this iterative algorithm is a little higher than for the gradient descent algorithms of the Riemannian mean and median. Nevertheless, all these algorithms only require a small number of SPD(n) matrix operations: product, inversion, power to a real exponent, matrix exponential and matrix logarithm, which are available in many scientific computing languages.

One can compare in Fig. 3 the Riemannian circumcenter (in black) with the Riemannian mean and median for both sets of tensors. We should remark that the $L_\infty$ estimator corresponds geometrically to the center of the minimal enclosing geodesic ball which contains the tensors and hence, by definition, is very sensitive to outliers. In fact, it can be seen as an average between the extreme and distant tensors.

Fig. 4. Comparison of structure tensor image filtering: (a) original image $f(x,y) \in F(\Omega, \mathrm{SPD}(2))$ (computed with $\sigma = 15$); (b)-(d) Riemannian $L_p$ averaged structure tensor images $\operatorname{Aver}_{W,L_p}(f)$: (b) geometric mean $L_2$; (c) geometric median $L_1$; (d) Riemannian circumcenter $L_\infty$. The averaging window is $W = 3 \times 3$ for the three examples.
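The instantiation above can be sketched as follows, reusing the geodesic Eq. 2 and the distance Eq. 3; the sketch assumes SciPy, and the test tensors and iteration count are illustrative. For two tensors the circumcenter is, by symmetry, close to their geodesic midpoint.

```python
import numpy as np
from scipy.linalg import sqrtm, inv, logm, fractional_matrix_power

def spd_geodesic(P, Q, t):
    """Geodesic of Eq. 2 cut at parameter t."""
    Ph = sqrtm(P)
    Phi = inv(Ph)
    return Ph @ fractional_matrix_power(Phi @ Q @ Phi, t) @ Ph

def spd_distance(P, Q):
    """Affine-invariant distance of Eq. 3."""
    Phi = inv(sqrtm(P))
    return np.linalg.norm(logm(Phi @ Q @ Phi), 'fro')

def spd_circumcenter(As, n_iter=200):
    """Arnaudon-Nielsen iteration for the Riemannian minimax center."""
    X = As[0]                                            # initialize with a sample
    for k in range(1, n_iter + 1):
        Qk = max(As, key=lambda A: spd_distance(X, A))   # farthest tensor
        X = spd_geodesic(X, Qk, 1.0 / (1 + k))           # cut geodesic at 1/(1+k)
    return X

# Illustrative pair of tensors; their circumcenter is their geodesic midpoint.
A1 = np.diag([4.0, 1.0])
A2 = np.diag([1.0, 4.0])
c = spd_circumcenter([A1, A2])    # close to the midpoint diag(2, 2)
```

Each iteration costs one farthest-point search plus a single geodesic cut, all expressed with the SPD(n) matrix operations listed above.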
RESULTS

Having an algorithm to compute the $L_p$ center-of-mass of a set of SPD(n) matrices, it can naturally be used for filtering structure tensor images $f(x,y) \in F(\Omega, \mathrm{SPD}(2))$ by simply computing the average in the local neighborhood associated to each pixel $(x,y)$ of the image, i.e.,

$$\operatorname{Aver}_{W,L_p}(f)(x,y) = \bar{A} : \bar{A} = \operatorname{Aver}_{L_p}\big( \{ f(u,v) \}_{(u,v) \in W(x,y)} \big), \qquad (12)$$

where $W(x,y)$ is the set of pixels belonging to the window $W$ centered at pixel $(x,y)$, $W$ typically being a square window. Each pixel neighborhood is processed independently of the others, and consequently the computation can be done in parallel.

A comparison of $L_2$, $L_1$ and $L_\infty$ Riemannian structure tensor image filtering is shown in Fig. 4, using the same window $W = 3 \times 3$ pixels for the three cases. Obviously, by computing the center-of-mass in such a small neighborhood (i.e., only nine tensors are averaged), the obtained structure tensor images are quite similar. However, if we compare the corresponding tensor energy and tensor anisotropy images, depicted in Fig. 5, we observe that the three estimators behave differently in terms of image filtering. As is classical in image processing, the median-based filter produces less blurring than the mean-based one, which yields a better edge-preserving regularization. The result of the circumcenter can be compared with the effect of morphological filters, in the sense that there is neither blurring nor edge deformation, but a suppression of structures smaller than the filtering window and an enhancement of those bigger than $W$.

STRUCTURE TENSOR IMAGE DENOISING

We now consider the performance of Riemannian filtering for structure tensor denoising. Given a tensor image $f(x,y) \in F(\Omega, \mathrm{SPD}(2))$, we first simulate a new tensor image $\tilde{f}(x,y)$ by adding noise. We have considered two sets of experiments according to the type of simulated noise.
– "Gaussian" noise: it is obtained by simulating a decoupled componentwise i.i.d.
Gaussian noise for the eigenvalues of each SPD(2) pixel value, the corresponding $\sigma$ being a percentage of the dynamic range of the eigenvalues and $\mu$ equal to the empirical mean of the eigenvalues. In addition, for each SPD(2) pixel, a random rotation drawn from a Gaussian distribution with $\mu = 0$ and deviation $\sigma$ is also included.
– Impulse noise: the simulation mechanism involves replacing pixel values by an outlier tensor with a given probability $Pr$.

The simulated image $\tilde{f}$ is then denoised by Riemannian $L_p$ averaging $\operatorname{Aver}_{W,L_p}(\tilde{f})$.

Fig. 5. From (a) to (d), corresponding images of tensor energy $\lambda_1 + \lambda_2$ from the images of Fig. 4; from (e) to (h), corresponding images of tensor anisotropy $\lambda_1/\lambda_2$ from the images of Fig. 4.

In order to quantitatively assess the denoising effect of the different estimators, we introduce the notion of Mean Riemannian Error (MRE), defined as

$$\mathrm{MRE} = \frac{1}{\#\Omega} \sum_{(x,y) \in \Omega} d\big( f(x,y), \operatorname{Aver}_{W,L_p}(\tilde{f})(x,y) \big),$$

which is basically the Riemannian tensor distance between each pixel of the original noise-free image and the corresponding pixel of the denoised image, averaged over all the image pixels.

Table 1 summarizes the denoising performance of $\operatorname{Aver}_{W,L_p}(\tilde{f})$ ($W = 3 \times 3$) for the Riemannian mean, median and circumcenter. We have also included the results for the simple arithmetic mean of SPD(2) matrices, denoted Euclidean $L_2$. For the "Gaussian" noise we have considered four values of $\sigma$: 1%, 5%, 10% and 50%, and four probabilities $Pr$ for the impulse noise: 0.01, 0.05, 0.1 and 0.5. The values of the MRE correspond to the average of ten realizations of the test image.

The results for the "Gaussian" noise are a bit surprising. We have, as expected, that the Riemannian mean performs better than the Riemannian median (except for very low noise). However, the Riemannian circumcenter performs better than both, and the difference is particularly significant for high levels of noise. We explain this effect by the fact that, for this kind of noise, the corrupted tensors are evenly distributed around the original tensors, and consequently an estimate based on the center of the minimal enclosing geodesic ball is rather steady with respect to the level of noise. Obviously, in the case of unevenly distributed noise, we can expect a bad behavior of the $L_\infty$ estimator.

Table 1. Denoising performance of Riemannian center-of-mass averaging $\operatorname{Aver}_{W,L_p}(\tilde{f})$ ($W = 3 \times 3$) quantified by the MRE: (a) "Gaussian" noise, and (b) impulse noise. Values correspond to the average of ten realizations.

(a)
                σ = 1%   σ = 5%   σ = 10%   σ = 50%
Noisy             3.01     7.35     10.06     17.51
Euclidean L2      3.61     8.06     10.87     18.79
Riemannian L2     3.43     7.27      9.82     17.17
Riemannian L1     3.21     7.30      9.94     17.49
Riemannian L∞     3.27     6.90      9.33     16.16

(b)
                Pr = 0.01   Pr = 0.05   Pr = 0.1   Pr = 0.5
Noisy               0.20        0.94       1.91      10.04
Euclidean L2        1.98        4.31       6.79      15.95
Riemannian L2       1.66        2.16       2.67      10.18
Riemannian L1       1.08        1.22       1.63      12.01
Riemannian L∞       2.20        4.25       6.15       9.31

Fig. 6. Examples of structure tensor image impulse denoising: (a) original structure tensor image $f(x,y) \in F(\Omega, \mathrm{SPD}(2))$ (computed with $\sigma = 15$); (b)-(e)-(h) noisy images $\tilde{f}_1$, $\tilde{f}_2$, $\tilde{f}_3$ with respectively $Pr = 0.01$, $Pr = 0.1$ and $Pr = 0.5$ probability of impulse noise; (c)-(f)-(i) denoised images by Riemannian $L_1$ averaging; (d)-(g)-(j) denoised images by Riemannian $L_\infty$ averaging. The averaging window is $W = 3 \times 3$ for all the cases.

For the case of the impulse noise, besides the quantitative results given in Table 1, we have also included some images in Fig. 6. As we observe, this kind of outlier-based impulse noise is appropriate for assessing the robustness of $L_1$ against the other $L_p$ estimators. Before the breakdown point, which corresponds here to $Pr \approx$
0.5, the Riemannian median filter yields a significantly better performance. Then, for extremely noisy situations, the Riemannian circumcenter produces a better estimate.

Before closing this study of structure tensor denoising, we should point out that the noise has been added to the structure tensor images. Such a problem is different from the case where the noise is present in the initial gray-level image. In fact, the smoothing effect during the computation of the structure tensor helps to deal with this scalar noise.

ANOMALY DETECTION IN STRUCTURE TENSOR IMAGES

Let us consider the original images (a) in Fig. 7 and Fig. 8. Both cases correspond to a regular texture including a zone of irregularity. Defects in such regular textures can be detected as anomaly areas according to the structure tensor. The goal of these experiments is to show how an image $f(x,y) \in F(\Omega, \mathrm{SPD}(2))$ associated with a regular texture can be processed by Riemannian center-of-mass averaging $\operatorname{Aver}_{W,L_p}(f)$ in order to enhance the potential defect areas.

The first case study, given in Fig. 7, corresponds to anisotropy anomaly detection. First of all, one needs to choose the parameter $\sigma$ for the Gaussian smoothing of the gradient required in the structure tensor computation. It typically depends on the scale of the regular pattern of the texture; here we fix $\sigma = 15$. Then, one should select the size of the averaging window; here $W = 7 \times 7$. The latter parameter is related to the scale of the irregular zone. The figure compares the results of $\operatorname{Aver}_{W,L_p}(f)$ for the Riemannian median and the Riemannian circumcenter, together with the corresponding tensor anisotropy images.

Fig. 7. Anisotropy anomaly detection by Riemannian center-of-mass averaging $\operatorname{Aver}_{W,L_p}(f)$: (a) original texture image $u(x,y)$; (b) original structure tensor image $f(x,y)$ (computed with $\sigma = 15$); (c) $L_1$, $W = 7 \times 7$; (d) $L_\infty$, $W = 7 \times 7$; (e)-(f)-(g) corresponding tensor anisotropy images $\lambda_1/\lambda_2$.

Fig. 8. Energy anomaly detection by Riemannian center-of-mass averaging $\operatorname{Aver}_{W,L_p}(f)$: (a) original texture image $u(x,y)$; (b) original structure tensor image $f(x,y)$ (computed with $\sigma = 15$); (c) $L_1$, $W = 7 \times 7$; (d) $L_\infty$, $W = 7 \times 7$; (e)-(f)-(g) corresponding tensor energy images $\lambda_1 + \lambda_2$.

As we can observe, the $L_1$ estimator produces a strong enhancement of the defect by "rounding" the corresponding area. As the window is square, the $L_\infty$ estimator regularizes according to this geometry, with a better adjustment to the underlying zone of irregularity.

Fig. 8 provides a second case study. Anisotropy is not significantly degraded in this example, but the gradient magnitude is lowered, which consequently yields a scenario for tensor energy anomaly detection. We have considered the same scale parameters as in the previous example. As we can observe from the results, the behaviors of the Riemannian median and the Riemannian circumcenter are similar to the previous case.

Finally, we note that a boundary effect appears in Figs. 7 and 8 (e.g., left in Fig. 7g, bottom in Fig. 8f or top in Fig. 8g) due to the fact that the regular texture is cropped to a bounded window. As usual in these cases, one needs to remove from the analysis an image border of size equal to the filter size.

CONCLUSIONS

Riemannian averaging is a mathematically sound and useful tool for processing structure tensor images. The geometric median for tensor image filtering inherits the properties of scalar median image filtering: robustness against impulse noise and a structure-rounding effect. The latter is related to mean curvature motion (Guichard and Morel, 2003). The Riemannian circumcenter is potentially relevant for very noisy images and produces a limited blurring effect (boundaries are not shifted).
For the particular problem of anomaly enhancement/detection, instead of processing the spectral information (tensor energy and tensor anisotropy) from the original structure tensor image, it seems more useful to first apply $L_1$/$L_\infty$ processing to the structure tensor image and then to use the spectral information of the processed images.

From a computational viewpoint, both iterative algorithms have a complexity which depends linearly on the number of pixels as well as on the size of the averaging window. In order to have fast algorithms, an efficient implementation of the exponential and logarithm of symmetric positive definite matrices is required. More precisely, only an efficient implementation of classical linear algebra tools is needed. Faster algorithms can be based on an approximated version of the definition, founded on the fact that when the averaging window moves from one pixel to one of its neighbors, only a limited number of new tensors are involved with respect to the set of tensors averaged at the previous pixel.

In this study we have only considered tensor filtering by a fixed kernel averaging for all the image neighborhoods. Obviously, the use of adaptive kernels, as is done in Burgeth et al. (2007) for Euclidean tensor averaging, would improve the results in terms of object edge preservation. The algorithm for the $L_1$ estimator can be used for this purpose by considering weights which represent the adaptivity, associated typically to bilateral kernels. That has been done for quaternion-valued images in Angulo (2013). The formulation of morphological operators for structure tensor images has been considered in Angulo (2012) under different frameworks. However, work on structure tensor morphology should be pursued in order to fully exploit the Riemannian structure of such images.

ACKNOWLEDGEMENTS

The topic of this paper was presented at the 11th European Congress of Stereology and Image Analysis, July 9–12, 2013 in Kaiserslautern, Germany.

REFERENCES

Afsari B (2010).
Riemannian Lp center of mass: existence, uniqueness and convexity. Proc Am Math Soc 139(2):655–73.

Afsari B, Tron R, Vidal R (2013). On the convergence of gradient descent for finding the Riemannian center-of-mass. SIAM J Control Optim 51(3):2230–60.

Angulo J (2012). Supremum/infimum and nonlinear averaging of positive definite symmetric matrices. In: Nielsen F, Bhatia R, Eds. Matrix Information Geometry. Berlin, Heidelberg: Springer. Ch 1:3–34.

Angulo J (2013). Riemannian Lp averaging on the Lie group of nonzero quaternions. Adv Appl Clifford Algebras, in press.

Arnaudon M, Dombry C, Phan A, Yang L (2012). Stochastic algorithms for computing means of probability measures. Stoch Proc Appl 122:1437–55.

Arnaudon M, Nielsen F (2013). On approximating the Riemannian 1-center. Comput Geom 46(1):93–104.

Barbaresco F (2011). Geometric radar processing based on Fréchet distance: Information geometry versus optimal transport theory. In: Rohling H, Ed. Proc Int Radar Symp (IRS'11). Leipzig, Sept 7–9. pp 663–8.

Basser PJ, Mattiello J, LeBihan D (1994). MR diffusion tensor spectroscopy and imaging. Biophys J 66:259–67.

Bhatia R (2007). Positive definite matrices. Princeton University Press.

Bhattacharya R, Patrangenaru V (2003). Large sample theory of intrinsic and extrinsic sample means on manifolds I. Ann Stat 31(1):1–29.

Bonnabel S (2013). Stochastic gradient descent on Riemannian manifolds. IEEE T Autom Control 58:2217–29.

Bădoiu M, Clarkson KL (2003). Smaller core-sets for balls. In: Proc 14th Ann ACM-SIAM Symp Discrete Algorithms. pp 801–2.

Brox T, Weickert J, Burgeth B, Mrázek P (2006). Nonlinear structure tensors. Image Vision Comput 24(1):41–55.

Brox T, van den Boomgaard R, Lauze F, van de Weijer J, Weickert J, Mrázek P, Kornprobst P (2005). Adaptive structure tensors and their applications. In: Weickert J, Hagen H, Eds. Visualization and Image Processing of Tensor Fields. Berlin, Heidelberg: Springer. pp 17–47.

Burgeth B, Didas S, Florack L, Weickert J (2007).
A generic approach to diffusion filtering of matrix-fields. Computing 81:179–97.

Fiori S, Tanaka T (2009). An algorithm to compute averages on matrix Lie groups. IEEE T Signal Proces 57(12):4734–43.

Fletcher PT, Venkatasubramanian S, Joshi S (2009). The geometric median on Riemannian manifolds with application to robust atlas estimation. NeuroImage 45(1):S143–52.

Förstner W, Gülch E (1987). A fast operator for detection and precise location of distinct points, corners and centres of circular features. In: Proc ISPRS Intercommission Conf Fast Processing of Photogrammetric Data. pp 281–304.

Fréchet M (1948). Les éléments aléatoires de nature quelconque dans un espace distancié. Ann Inst H Poincaré 10:215–310.

Guichard F, Morel JM (2003). A note on two classical enhancement filters and their associated PDE's. Int J Comput Vision 52:153–60.

Karcher H (1977). Riemannian center of mass and mollifier smoothing. Commun Pur Appl Math 30:509–41.

Kendall DG (1984). Shape manifolds, Procrustean metrics, and complex projective spaces. Bull Lond Math Soc 16:81–121.

Knutsson H (1989). Representing local structure using tensors. In: Proc 6th Scand Conf Image Anal. Oulu, June 1989. pp 244–51.

Köthe U (2003). Edge and junction detection with an improved structure tensor. In: Michaelis B, Krell G, Eds. Pattern Recognition. Lect Not Comput Sci 2781:25–32.

Larrey-Ruiz J, Verdu-Monedero R, Morales-Sanchez J, Angulo J (2011). Frequency domain regularization of d-dimensional structure tensor-based directional fields. Image Vision Comput 29:620–30.

Le H (2004). Estimation of Riemannian barycenters. LMS J Comput Math 7:193–200.

Lopuhaä HP, Rousseeuw PJ (1991). Breakdown points of affine equivariant estimators of multivariate location and covariance matrices. Ann Stat 19(1):229–48.

Moakher M (2005). A differential geometric approach to the geometric mean of symmetric positive-definite matrices. SIAM J Matrix Anal Appl 26:735–47.
Nagel HH, Gehrke A (1998). Spatiotemporally adaptive estimation and segmentation of OF-fields. In: Burkhardt H, Neumann B, Eds. Proc 5th Eur Conf Comput Vision (ECCV'98). Freiburg, June 2–6. Lect Not Comput Sci 1407:86–102.

Ostresh LM (1978). On the convergence of a class of iterative methods for solving the Weber location problem. Oper Res 26:597–609.

Pizarro L, Burgeth B, Didas S, Weickert J (2008). A generic neighbourhood filtering framework for matrix fields. In: Forsyth D, Torr P, Zisserman A, Eds. Proc 10th Eur Conf Comput Vision (ECCV'08). Marseille, Oct 12–18. Lect Not Comput Sci 5304:521–32.

Tschumperlé D, Deriche R (2002). Orthonormal vector sets regularization with PDE's and applications. Int J Comput Vision 50(3):237–52.

Weiszfeld E (1937). Sur le point pour lequel la somme des distances de n points donnés est minimum. Tohoku Math J 43:355–86.

Yang L (2010). Riemannian median and its estimation. LMS J Comput Math 13:461–79.