Image Anal Stereol 2012;31:109-120
Original Research Paper

ON RANDOM ITERATED FUNCTION SYSTEMS WITH GREYSCALE MAPS

Matthew Demers¹, Herb Kunze² and Davide La Torre³

¹University of Guelph, Department of Mathematics & Statistics, Guelph, Ontario, Canada; ²University of Guelph, Department of Mathematics & Statistics, Guelph, Ontario, Canada; ³University of Milan, Department of Economics, Business and Statistics, Milan, Italy
e-mail: mdemers@uoguelph.ca; hkunze@uoguelph.ca; davide.latorre@unimi.it
(Received January 18, 2012; revised April 27, 2012; accepted May 8, 2012)

ABSTRACT

In the theory of Iterated Function Systems (IFSs) it is known that one can find an IFS with greyscale maps (IFSM) to approximate any target signal or image with arbitrary precision, and a systematic approach for doing so has been described. In this paper, we extend these ideas to the framework of random IFSM operators. We consider the situation where one has many noisy observations of a particular target signal and show that the greyscale map parameters for each individual observation inherit the noise distribution of the observation. We provide illustrative examples.

Keywords: random fixed point equations, random iterated function systems, collage theorem.

INTRODUCTION

In fractal image compression and coding based on Generalized Fractal Transforms (GFT), one seeks to approximate a target image or signal by the fixed point of a contractive fractal transform operator. The usual formulation involves a fixed set of geometric contraction maps along with a corresponding set of greyscale adjustment maps. The inverse problem requires one to find the best greyscale map parameters for a given target image. The solution process is referred to as "collage coding" because of the visual effect of the contraction maps; it applies the collage theorem, a simple consequence of Banach's fixed point theorem. The approximation of the target image or signal can be generated through iteration of the fractal transform (see Hutchinson, 1981; Barnsley et al., 1985; Barnsley and Demko, 1985; Barnsley, 1989; Barnsley and Hurd, 1993; Forte and Vrscay, 1995; 1999; Iacus and La Torre, 2005a;b; Kunze et al., 2008; La Torre et al., 2009; La Torre and Vrscay, 2011, for more details and applications).

In Forte and Vrscay (1995), the authors showed that one can find an iterated function system with greyscale maps (IFSM) to approximate any target signal or image with arbitrary precision, and they provided a suboptimal but systematic approach for doing so. The goal of this paper is to extend the work in Forte and Vrscay (1995) to the framework of random IFSM operators. We consider the situation where one has many noisy observations of a particular target signal and show that the greyscale map parameters for each individual observation inherit the noise distribution of the observation.

Prior to these results, we present several short background sections. First, we discuss the deterministic collage coding framework and quickly summarize the related IFSM setting. Then we present the random collage coding framework. Finally, we formulate the random IFSM operator framework and present some illustrative numerical examples.

FIXED POINT EQUATIONS AND COLLAGE THEOREM

In this section, we present some basic contraction map results that will be used in later sections. Let $(X, d_X)$ denote a complete metric space. A mapping $T : X \to X$ is contractive if there exists a $c \in [0,1)$ such that

$$ d_X(Tx, Ty) \le c\, d_X(x, y) \quad \text{for all } x, y \in X . $$  (1)

We normally refer to the infimum of all $c$ values satisfying Eq. 1 as the contraction factor of $T$.

Theorem 1 (Banach): Let $(X, d_X)$ be a complete metric space. Also let $T : X \to X$ be a contraction mapping with contraction factor $c \in [0,1)$. Then there exists a unique fixed point $\bar{x} \in X$ such that $T\bar{x} = \bar{x}$ and, for any $x \in X$, $d_X(T^n x, \bar{x}) \to 0$ as $n \to \infty$.
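For illustration, the following Python sketch iterates a simple contraction on $\mathbb{R}$ and reports the distance to its fixed point; the particular map $T(x) = 0.5x + 1$, its fixed point $\bar{x} = 2$, and the starting point are arbitrary choices made only to make Theorem 1 concrete (they are not taken from the paper).

```python
# A minimal sketch illustrating Theorem 1 (map, parameters and starting point
# are illustrative assumptions): iterate the contraction T(x) = 0.5*x + 1 on
# the complete metric space (R, |.|), with contraction factor c = 0.5 and
# fixed point xbar = 2, and watch d(T^n x0, xbar) -> 0 as n grows.

def T(x, c=0.5, b=1.0):
    """Affine contraction T(x) = c*x + b with contraction factor c < 1."""
    return c * x + b

xbar = 2.0   # fixed point, since T(2) = 2
x = 10.0     # arbitrary starting point x0
for n in range(1, 11):
    x = T(x)
    print(f"n = {n:2d}   T^n x0 = {x:.6f}   d(T^n x0, xbar) = {abs(x - xbar):.2e}")
```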
Theorem 2 (Continuity theorem for fixed points, Centore and Vrscay, 1994): Let $(X, d_X)$ be a complete metric space and $T_1$, $T_2$ be two contractive mappings with contraction factors $c_1$ and $c_2$ and fixed points $\bar{y}_1$ and $\bar{y}_2$, respectively. Then

$$ d_X(\bar{y}_1, \bar{y}_2) \le \frac{1}{1 - \min\{c_1, c_2\}} \sup_{x \in X} d_X(T_1 x, T_2 x) . $$

In many applications one is interested in the following inverse problem: given a target element $x \in X$ and $\varepsilon > 0$, find a contraction map $T^{(\varepsilon)}$ (perhaps from a suitable family of operators) with fixed point $\bar{x}^{(\varepsilon)}$ such that $d(x, \bar{x}^{(\varepsilon)}) < \varepsilon$. If one is able to solve such an inverse problem to arbitrary precision, i.e., as $\varepsilon \to 0$, then one may identify the target $x$ as the fixed point of a contractive operator $T$ on $X$. In practical applications, however, it is not generally possible to find such solutions to arbitrary accuracy, nor is it even feasible to search for such contraction maps. Instead, one makes use of the following result, which is a simple consequence of Banach's fixed point theorem.

Theorem 3 ("Collage theorem", Barnsley et al., 1985): Let $(X, d)$ be a complete metric space and $T : X \to X$ a contraction mapping with contraction factor $c \in [0,1)$. Then for any $x \in X$,

$$ d(x, \bar{x}) \le \frac{1}{1-c}\, d(x, Tx) , $$  (5)

where $\bar{x}$ is the fixed point of $T$.

FORMAL SOLUTION TO THE INVERSE PROBLEM FOR ITERATED FUNCTION SYSTEMS WITH GREYSCALE MAPS

The method of iterated function systems with greyscale maps (IFSM), as formulated by Forte and Vrscay (1995), can be used to approximate a given element $u$ of $L^2([0,1])$. We consider the case in which $u : [0,1] \to [0,1]$ and the space

$$ X = \{ u : [0,1] \to [0,1],\ u \in L^2[0,1] \} . $$  (6)

The ingredients of an $N$-map IFSM on $X$ are (Forte and Vrscay, 1995; 1999):

1. a set of $N$ contractive mappings $\mathbf{w} = \{w_1, w_2, \ldots, w_N\}$, $w_i : [0,1] \to [0,1]$, satisfying the covering condition $\bigcup_{i=1}^{N} w_i([0,1]) = [0,1]$, and most often affine in form:

$$ w_i(x) = s_i x + a_i , \quad 0 < s_i < 1 , \quad i = 1, 2, \ldots, N ; $$  (7)

2. a set of associated functions - the greyscale maps - $\phi = \{\phi_1, \phi_2, \ldots, \phi_N\}$, $\phi_i : \mathbb{R} \to \mathbb{R}$. Affine maps are usually employed:

$$ \phi_i(t) = \alpha_i t + \beta_i , $$  (8)

with the conditions $\alpha_i, \beta_i \in [0,1]$ and

$$ \sum_{i=1}^{N} (\alpha_i + \beta_i) \le 1 . $$  (9)

Associated with the $N$-map IFSM $(\mathbf{w}, \phi)$ is the fractal transform operator $T$, the action of which on a function $u \in X$ is

$$ (Tu)(x) = {\sum_{i=1}^{N}}{}'\, \phi_i\big( u( w_i^{-1}(x) ) \big) , $$  (11)

where the prime means that the sum operates on all those terms for which $w_i^{-1}(x)$ is defined.

Theorem 5 (Forte and Vrscay, 1995): $T : X \to X$ and for any $u, v \in X$ we have

$$ d_2(Tu, Tv) \le C\, d_2(u, v) , \quad \text{with } C = \sum_{i=1}^{N} s_i^{1/2} \alpha_i , $$

so that $T$ is contractive on $X$ whenever $C < 1$.

Given a target $u \in X$, the inverse problem consists of finding an IFSM whose fixed point approximates $u$. Guided by the Collage Theorem, one fixes the maps $\mathbf{w}$ and minimizes the squared collage distance $\Delta_N^2 = \| u - Tu \|_2^2$ over the greyscale parameters, subject to the constraints above; since $Tu$ is affine in the $\alpha_i$ and $\beta_i$, this is a quadratic optimization problem. One assumes that the family of IFS maps is sufficiently refined, in the sense that for every $\varepsilon > 0$ and every subinterval $B \subseteq [0,1]$ there exists a finite set of indices $i_1 < i_2 < \cdots < i_k$ such that the corresponding images $w_{i_l}([0,1])$ cover $B$ to within $\varepsilon$. Let $\Phi = \{\phi_1, \ldots, \phi_N\}$ be the $N$-vector of affine grey level maps, let $z_N$ be the solution of the previous quadratic optimization problem and $\Delta_{N,\min} = \Delta_N(z_N)$. In Forte and Vrscay (1995), the following result was proved.

Theorem 6: $\Delta_{N,\min} \to 0$ as $N \to \infty$.

Using the Collage Theorem, the inverse problem may therefore be solved to arbitrary accuracy. A practical choice for the contraction maps $\mathbf{w}$ on the base space $[0,1]$ is

$$ w_{i,j}(x) = 2^{-i}(x + j - 1) , \quad i = 1, 2, \ldots , \quad j = 1, 2, \ldots, 2^i . $$
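The following Python sketch illustrates the action and iteration of the operator in Eq. 11 for the simplest dyadic system ($i = 1$, two maps); the signal length and the greyscale parameters are illustrative assumptions rather than values taken from the paper. With the chosen parameters, $C = \sum_i s_i^{1/2} \alpha_i = (0.4 + 0.5)/\sqrt{2} < 1$, so the iteration converges to the IFSM attractor.

```python
import numpy as np

# A minimal sketch of the fractal transform of Eq. 11 (illustrative parameters,
# not values from the paper).  We use the two dyadic maps w_{1,1}(x) = x/2 and
# w_{1,2}(x) = (x+1)/2, so every x in [0,1] lies in the image of exactly one map,
# together with affine greyscale maps phi_i(t) = alpha_i*t + beta_i.

def ifsm_operator(u, alphas, betas):
    """Apply (Tu)(x) = phi_i(u(w_i^{-1}(x))) to a uniformly sampled signal u."""
    n = len(u)
    half = n // 2
    u_compressed = u[::2]                  # u(w_i^{-1}(x)): a compressed copy of u
    Tu = np.empty(n)
    Tu[:half] = alphas[0] * u_compressed + betas[0]   # on [0, 1/2)
    Tu[half:] = alphas[1] * u_compressed + betas[1]   # on [1/2, 1)
    return Tu

alphas, betas = (0.4, 0.5), (0.1, 0.4)     # illustrative greyscale parameters
u = np.zeros(1024)                         # arbitrary starting signal on [0,1]
for k in range(1, 31):
    u_new = ifsm_operator(u, alphas, betas)
    if k % 10 == 0:                        # monitor convergence of the iteration
        print(f"iteration {k:2d}: ||Tu - u||_2 = {np.linalg.norm(u_new - u):.2e}")
    u = u_new
```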
RANDOM FIXED POINT EQUATIONS

Let $(\Omega, \mathcal{F}, P)$ denote a probability space and $(X, d)$ a metric space. A mapping $T : \Omega \times X \to X$ is called a random operator if for any $x \in X$ the function $T(\cdot, x)$ is measurable. A measurable mapping $x : \Omega \to X$ is called a random fixed point of a random operator $T$ if $x$ is a solution of the equation

$$ T(\omega, x(\omega)) = x(\omega) $$  (16)

for a.e. $\omega \in \Omega$. A random operator $T$ is called continuous (Lipschitz, contraction) if for a.e. $\omega \in \Omega$ we have that $T(\omega, \cdot)$ is continuous (Lipschitz, contraction).

There are many papers in the literature that deal with such equations for single-valued and set-valued random operators (see Bharucha-Reid, 1972; Itoh, 1979; Papageorgiou, 1988; Lin, 1995; Liu, 1997; Shahzad, 2001; 2004a;b). Solutions of random fixed point equations and inclusions are usually produced by means of stochastic generalizations of classical fixed point theory.

Let $(\Omega, \mathcal{F}, P)$ be a probability space and let $(X, d_X)$ be a complete metric space. Let $T : \Omega \times X \to X$ be a given operator. We look for the solution of the equation

$$ T(\omega, x(\omega)) = x(\omega) , $$  (17)

under the assumption that

$$ d_X(T(\omega, x), T(\omega, y)) \le c(\omega)\, d_X(x, y) , $$  (18)

where $c(\omega) : \Omega \to \mathbb{R}$. When $T$ satisfies this property, we say that $T$ is a $c(\omega)$-Lipschitz operator. If $c(\omega) \le c < 1$ a.e., we say that $T$ is a $c(\omega)$-contraction. The associated inverse problem reads: given $x : \Omega \to X$ and a family of operators $T_\lambda : \Omega \times X \to X$, find $\lambda$ such that $x$ is the solution of the random fixed point equation

$$ T_\lambda(\omega, x(\omega)) = x(\omega) . $$  (19)

We may now state the following random analogs to Theorems 1-4.

Corollary 1 ("Regularity conditions"): Let $(\Omega, \mathcal{F}, P)$ be a probability space and $(X, d_X)$ be a complete metric space. Let $T : \Omega \times X \to X$ be a given $c(\omega)$-contraction with fixed points $\bar{x}(\omega)$. Then

$$ d_X(\bar{x}(\omega_1), \bar{x}(\omega_2)) \le \frac{1}{1-c} \sup_{x \in X} d_X(T(\omega_1, x), T(\omega_2, x)) . $$

Corollary 2 ("Continuity Theorem"): Let $(\Omega, \mathcal{F}, P)$ be a probability space and let $(X, d_X)$ be a complete metric space. Let $T_1 : \Omega \times X \to X$ and $T_2 : \Omega \times X \to X$ be $c_1(\omega)$- and $c_2(\omega)$-contractions with fixed points $\bar{x}_1(\omega)$ and $\bar{x}_2(\omega)$, respectively. Then

$$ d_X(\bar{x}_1(\omega), \bar{x}_2(\omega)) \le \frac{1}{1 - \min\{c_1, c_2\}} \sup_{x \in X} d_X(T_1(\omega, x), T_2(\omega, x)) . $$

Theorem 7 ("Collage Theorem"): Let $(\Omega, \mathcal{F}, P)$ be a probability space and let $(X, d_X)$ be a complete metric space. Let $T : \Omega \times X \to X$ be a given $c(\omega)$-contraction. Then

$$ \frac{1}{1+c}\, d_X(x(\omega), T(\omega, x(\omega))) \le d_X(x(\omega), \bar{x}(\omega)) \le \frac{1}{1-c}\, d_X(x(\omega), T(\omega, x(\omega))) , $$

where $\bar{x}(\omega)$ is the solution of $T(\omega, \bar{x}(\omega)) = \bar{x}(\omega)$.

The above described approach to random fixed point equations does not guarantee, in general, the measurability of the function $x : \Omega \to X$ with respect to the sigma algebra $\mathcal{F}$. In order to overcome this difficulty, we provide an abstract formulation of the same problem which guarantees the measurability of $\bar{x}$. Recall that $(X, d)$ is said to be a Polish space if it is a separable complete metric space. In random fixed point theory, separability plays an important role. Two important examples of Polish metric spaces which will be useful in the following are $C([a,b])$ and $L^2([a,b])$.

Consider now the space $Y$ of all measurable functions $x : \Omega \to X$. If we define the operator $\tilde{T} : Y \to Y$ as $(\tilde{T}x)(\omega) = T(\omega, x(\omega))$, the solutions of this fixed point equation on $Y$ are the solutions of the random fixed point equation $T(\omega, x(\omega)) = x(\omega)$. Suppose that the metric $d_X$ is bounded, that is, $d_X(x_1, x_2) \le K$ for all $x_1, x_2 \in X$. Then the function $\psi(\omega) = d_X(x_1(\omega), x_2(\omega)) : \Omega \to \mathbb{R}$ is an element of $L^1(\Omega)$ for all $x_1, x_2 \in Y$, so that $Y$ can be endowed with the metric $d_Y(x_1, x_2) = \int_\Omega d_X(x_1(\omega), x_2(\omega))\, d\omega$.

Theorem 8: Let $X$ be a Polish space and let $T : \Omega \times X \to X$ be such that $T(\omega, \cdot)$ is $c(\omega)$-Lipschitz and, for each $x \in X$, $T(\cdot, x)$ is measurable. Let $x : \Omega \to X$ be a measurable mapping; then the mapping $\xi : \Omega \to X$ defined by $\xi(\omega) = T(\omega, x(\omega))$ is measurable. The previous theorem holds if $T(\omega, \cdot)$ is only continuous (see, for instance, Himmelberg, 1975).

Corollary 3: Let $X$ be a Polish space and $T : \Omega \times X \to X$ be a mapping such that for each $\omega \in \Omega$ the function $T(\omega, \cdot)$ is a $c(\omega)$-contraction. Suppose that for each $x \in X$ the function $T(\cdot, x)$ is measurable. Then there exists a unique solution of the equation $\tilde{T}x = x$, that is, $T(\omega, x(\omega)) = x(\omega)$ for a.e. $\omega \in \Omega$.

In this setting, the inverse problem reads: given $x : \Omega \to X$ and a family of operators $\tilde{T}_\lambda : Y \to Y$, find $\lambda$ such that $x$ is the solution of the random fixed point equation

$$ \tilde{T}_\lambda x = x , $$  (21)

that is,

$$ T_\lambda(\omega, x(\omega)) = x(\omega) . $$  (22)

Now, with the collage and continuity theorems, we may state the following.

Corollary 4: Let $X$ be a Polish space and $T : \Omega \times X \to X$ be a mapping such that for each $\omega \in \Omega$ the function $T(\omega, \cdot)$ is a $c(\omega)$-contraction. Suppose that for each $x \in X$ the function $T(\cdot, x)$ is measurable. Then for any $x \in Y$,

$$ d_Y(x, \bar{x}) \le \frac{1}{1-c}\, d_Y(x, \tilde{T}x) , $$

where $\bar{x} \in Y$ is the fixed point of $\tilde{T}$ and $(\tilde{T}x)(\omega) := T(\omega, x(\omega))$.
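As a concrete and deliberately simple illustration of a random fixed point equation and of the bounds in Theorem 7, the following Python sketch uses a scalar $c(\omega)$-contraction whose additive term is random; the map, the distribution of the random term, and the candidate point are assumptions chosen only for illustration.

```python
import numpy as np

# A small sketch (illustrative assumptions throughout) of a random fixed point
# equation T(w, x(w)) = x(w) on the Polish space (R, |.|).  The operator
# T(w, x) = c*x + b(w) is a c(w)-contraction with constant c = 0.5; the
# randomness enters only through b(w) ~ U(0.9, 1.1).  For each sampled w we
# iterate to the random fixed point xbar(w) and check the collage bounds of
# Theorem 7 at a fixed candidate point x.

rng = np.random.default_rng(0)
c = 0.5

def T(b, x):
    return c * x + b                       # a contraction for every realization

x_guess = 2.0                              # a candidate (non-random) solution x
for _ in range(5):                         # a few realizations of omega
    b = rng.uniform(0.9, 1.1)              # random part of the operator
    xbar = x_guess
    for _ in range(60):                    # Banach iteration at this omega
        xbar = T(b, xbar)
    collage_dist = abs(x_guess - T(b, x_guess))
    lower, upper = collage_dist / (1 + c), collage_dist / (1 - c)
    print(f"b = {b:.3f}  xbar(w) = {xbar:.4f}  "
          f"{lower:.4f} <= {abs(x_guess - xbar):.4f} <= {upper:.4f}")
```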
FORMAL SOLUTION TO THE INVERSE PROBLEM FOR RANDOM ITERATED FUNCTION SYSTEMS WITH GREYSCALE MAPS

We now formulate a random IFSM fractal transform that is analogous to the deterministic fractal transform in Eq. 11. Let $(\Omega, \mathcal{F}, P)$ be a probability space and consider the operator $T : \Omega \times X \to X$ defined by

$$ T(\omega, u)(x) = {\sum_{i=1}^{N}}{}'\, \Big( \alpha_i\, u( w_i^{-1}(x) ) + \beta_i(\omega) \Big) = (T^* u)(x) + \beta(\omega) , $$

where $\beta(\omega) = {\sum_i}' \beta_i(\omega)$. That is, we suppose that the greyscale maps take the form $\phi_i(t) = \alpha_i t + \beta_i(\omega)$, with fixed coefficients $\alpha_i$ and random variables $\beta_i(\omega)$; $T^*$ denotes the deterministic part of the transform. Define

$$ Y = \{ u : \Omega \to X,\ u \text{ is measurable} \} $$  (27)

and consider the function $\psi(\omega) := d_X(u_1(\omega), u_2(\omega)) = \| u_1(\omega) - u_2(\omega) \|_2$. From the hypotheses it is clear that $\psi(\omega) \in L^1(\Omega)$. We know that $(Y, d_Y)$ is a complete metric space, where

$$ d_Y(u_1, u_2) = \int_\Omega \| u_1(\omega) - u_2(\omega) \|_2 \, d\omega . $$  (28)

Obviously, the function $\xi(\omega) := (T^* u)(\omega) + \beta(\omega)$ belongs to $Y$, because $T^*$ is Lipschitz on $X$ and $\beta$ is measurable. If we define $\tilde{T} : Y \to Y$ by $(\tilde{T}u)(\omega) = (T^* u)(\omega) + \beta(\omega)$, we have

$$ d_Y(\tilde{T}u_1, \tilde{T}u_2) = \int_\Omega \| (T^* u_1)(\omega) + \beta(\omega) - (T^* u_2)(\omega) - \beta(\omega) \|_2 \, d\omega \le C \int_\Omega \| u_1(\omega) - u_2(\omega) \|_2 \, d\omega = C\, d_Y(u_1, u_2) . $$

So, when $C < 1$, there exists $u \in Y$ such that $\tilde{T}u = u$, that is, $u(\omega) = T(\omega, u(\omega))$ for a.e. $\omega \in \Omega$; written pointwise,

$$ u(\omega, x) = {\sum_{i=1}^{N}}{}'\, \alpha_i\, u(\omega, w_i^{-1}(x)) + \beta(\omega) . $$  (29)

Suppose that, for each $x$, the function $u(\cdot, x) : \Omega \to \mathbb{R}$ is integrable, and let

$$ \bar{u}(x) = \int_\Omega [u(\omega)](x) \, d\omega = \int_\Omega u(\omega, x) \, d\omega . $$  (30)

Thus Eq. 29 becomes

$$ \bar{u}(x) = {\sum_{i=1}^{N}}{}'\, \alpha_i\, \bar{u}( w_i^{-1}(x) ) + \bar{\beta} , $$  (31)

where $\bar{\beta} = \int_\Omega \beta(\omega)\, d\omega$. That is, the expectation of $u(\omega, x)$ is the solution of a deterministic $N$-map IFSM on $X$ (cf. Eq. 11). We observe in Eq. 31 another motivation for not randomizing the parameters $\alpha_i$: the expectation of the product $\alpha_i(\omega)\, u(\omega, \cdot)$ becomes complicated.

Suppose that we have $n$ observations, $u(\omega_1), u(\omega_2), \ldots, u(\omega_n)$, of the variable $u(\omega)$. Choose a set of affine greyscale maps, as in Eq. 8. We collage code each observation, $u(\omega_k)$, $k = 1, \ldots, n$, to find the set of minimal collage parameters $\alpha_{i,k}$ and $\beta_{i,k}$, $k = 1, \ldots, n$, $i = 1, \ldots, N$. As a result of our earlier discussion, if the observations differ only by a uniform additive noise factor, we expect that the distribution of the $\beta_{i,k}$ parameters, $k = 1, \ldots, n$, will inherit that distribution, with mean $\bar{\beta}_i$.

NUMERICAL EXAMPLES

This section provides several numerical examples which illustrate the above formulation and "mean property."

Example: We define the IFS maps $w_{i,j}(x) = 2^{-i}(x + j - 1)$, $i = 1, \ldots, 4$, $j = 1, 2, \ldots, 2^i$, with associated greyscale maps
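The collage-coding experiment described above can be sketched numerically as follows. The Python fragment below is an illustration only: it uses an assumed target signal, uniform additive noise, a single non-overlapping level of the dyadic maps $w_{i,j}$ (so that the collage fit decouples over subintervals), and unconstrained least-squares fits of the affine greyscale parameters; none of these choices are taken from the paper's own example. Histograms or sample statistics of the fitted $\beta_{j,k}$ then give an empirical check of the mean property.

```python
import numpy as np

# A rough sketch of the collage-coding experiment discussed above (our own
# illustrative assumptions: target signal, noise level, and a single
# non-overlapping level of dyadic maps).  Each noisy observation u(omega_k) is
# collage coded with affine greyscale maps phi_j(t) = alpha_j*t + beta_j, and
# we inspect how the fitted beta_{j,k} scatter about their means.

rng = np.random.default_rng(1)
n_obs, n_samples, level = 200, 1024, 2         # 2**level non-overlapping maps
n_maps = 2 ** level
block = n_samples // n_maps

x = np.linspace(0.0, 1.0, n_samples, endpoint=False)
target = 0.3 + 0.2 * np.sin(2 * np.pi * x)     # an arbitrary target signal in [0,1]

betas = np.zeros((n_obs, n_maps))
for k in range(n_obs):
    u = target + rng.uniform(-0.05, 0.05, n_samples)   # noisy observation u(omega_k)
    u_small = u[::n_maps]                  # u(w_{level,j}^{-1}(x)): compressed copy of u
    A = np.column_stack([u_small, np.ones(block)])
    for j in range(n_maps):
        seg = u[j * block:(j + 1) * block]
        # least-squares collage fit of (alpha_j, beta_j) on the j-th subinterval
        # (the constraints of Eqs. 8-9 are ignored in this simplified sketch)
        (alpha_j, beta_j), *_ = np.linalg.lstsq(A, seg, rcond=None)
        betas[k, j] = beta_j

print("sample means of beta_j over the observations:", betas.mean(axis=0).round(3))
print("sample stds  of beta_j over the observations:", betas.std(axis=0).round(3))
```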