Robustness and Visualization of Decision Models

Andrej Bregar
Informatika d.d., Vetrinjska ulica 2, SI-2000 Maribor, Slovenia
E-mail: andrej.bregar@informatika.si

Jozsef Gyorkos and Matjaž B. Jurič
University of Maribor, Faculty of Electrical Engineering and Computer Science, Institute of Informatics, Smetanova ulica 17, SI-2000 Maribor, Slovenia
E-mail: jozsef.gyorkos@uni-mb.si, matjaz.juric@uni-mb.si

Keywords: decision support, multi-criteria decision analysis, robustness metrics, mathematical optimization, principal components analysis, utility theory, PROMETHEE, ELECTRE

Received: June 20, 2008

Robustness analysis and visualization are two of the key concepts of multi-criteria decision support. They enable the decision-maker to improve his understanding of both the model and the problem domain. A class of original robustness metrics based on mathematical optimization is therefore defined in this paper. In addition, several efficient existing techniques that have been successfully used in various ICT projects are presented, including stability intervals/regions and the principal components analysis. All approaches are applied to the multi-attribute utility function, and to the PROMETHEE II and ELECTRE TRI methods. Their benefits are discussed and demonstrated on real-life cases.

Povzetek: Original robustness metrics for multi-criteria decision models, based on mathematical optimization, are introduced, together with efficient approaches to sensitivity analysis and visualization that have been successfully applied in projects in the field of information and communication technologies.

1 Introduction

The decision model is a formal, simplified representation of the problem domain. It transforms input parameters, which are set by the decision-maker, into numerical or qualitative assessments, also called model assumptions (Power, 2002). These assessments should, however, not directly influence the implemented decision; they should rather be further analysed, because they are often derived from data that are subject to uncertainty, imprecision and indetermination (Roy, 1996). These phenomena are the consequence of:
• incomplete domain knowledge or information;
• high domain complexity and high cognitive load of the decision-maker;
• insufficient insight into relations between model parameters;
• nonsystematic subjective assessments of criteria weights and evaluations of alternatives.

It is thus necessary to thoroughly and systematically test the inferred model assumptions. Preference aggregation, which is performed in order to assess alternatives, must represent merely the first phase of the decision-making process, since the aim of decision analysis is not only to deal with the common problematics of selecting, ranking or classifying alternatives (Roy, 1996), but primarily to provide the decision-maker with a deep understanding of the problem domain, and to clearly expose the influence of preferential parameters, and of the relations between them, on the derived results. For this reason, a technique called sensitivity analysis is used.
It enables the decision-maker to judge in a formal and structured manner (Turban and Aronson, 2001):
• the influence of changes in input data (decision and uncontrollable variables) on the proposed solution that is expressed by the values of output variables;
• the influence of uncertainty on output variables;
• the effects of interactions between variables;
• the minimal changes of preferential parameters that are required to obtain (un)desirable results;
• the robustness of both the decision model and the suggested decision in dynamically changing conditions.

Sensitivity/robustness analysis is one of the key concepts in the field of multi-criteria decision aiding (Saltelli et al., 1999). It helps the decision-maker to prepare for the uncertain and potentially extreme future, and to improve his understanding of the problem domain by reflecting back on his judgements, synthesising preferences and observing changes. Yet, the experiences of researchers and practitioners show that the multi-dimensional complexity of the problem domain poses great challenges with regard to sensitivity analysis, as extensive tasks are difficult to communicate (Hodgkin et al., 2005). On the contrary, visual displays are a powerful means of communication for the majority of people. It is therefore recommended to implement and use interactive visual tools, in order to considerably improve the problem solving process.

Several approaches to sensitivity analysis exist that have been defined in conjunction with various decision-making methods (Frey and Patil, 2002; Vincke, 1999b). Because they are designed for specific types of decision models, they do not cover all relevant aspects of problem solving. Especially the following deficiencies should be taken into consideration:
• Existing $L_P$-metric based optimization methods and algorithms address sensitivity analysis only partially. They eliminate some dilemmas, but to systematically verify robustness it is necessary to simultaneously measure:
1. the minimal modification of parameters according to which the best alternative loses its priority over any suboptimal alternative;
2. the smallest modification that suffices for a selected suboptimal alternative to become the best one;
3. the largest deviation that preserves the preferential relation of two alternatives.
• In the case of the outranking methods ELECTRE and PROMETHEE, robustness is measured only with regard to criteria weights, aggregated credibility degrees or inferred net flows. Other preferential parameters, such as thresholds, are not analysed.

The purpose of this paper is therefore (1) to introduce a class of original $L_P$-metric optimization algorithms and programs that can be applied to holistically measure the robustness of decision models in conjunction with both the utility function and the outranking methods, (2) to extend the concept of robustness analysis in the context of the ELECTRE TRI method to pseudo-criterion related thresholds, (3) to formally present fundamental existing sensitivity analysis and visualization techniques that the authors have successfully used within the scope of their project work, and (4) to discuss the benefits of these techniques. It should be noted that the utility function based approaches are adapted solely to determining the influence of criteria weights. This is a common practice, because weight derivation is generally more subjective than the specification of criterion-wise values of alternatives.
The rest of the paper is organized as follows. Section 2 provides a brief description of the three decision methods (utility function, PROMETHEE II and ELECTRE TRI) to which the techniques of robustness measurement are applied. More detailed explanations can be found in the literature (Figueira et al., 2005). Section 3 gives a review of related work. Section 4 formally presents the stability intervals/regions based automatic sensitivity analysis. In Section 5, several new approaches to multi-dimensional robustness analysis are defined, which utilize (non)linear mathematical programming. This section represents the original contribution of the paper. In Section 6, practical examples are provided. They demonstrate the strengths and benefits of the described techniques, and correspond to the results of projects. Section 7 concludes the paper with a resume and directions for further research.

2 Theoretical foundations of decision methods

2.1 Multi-attribute utility function

Since the utility theory was axiomatized by Keeney and Raiffa (1993), it has become the most widespread and probably the most relevant approach to decision analysis. Its foundations lie in the dogma of rational behaviour, so it is based on five axioms that provide a framework for a generic strategy that people should adopt when making reasonable decisions. The central concept of all axioms is the lottery, which is a space of outcomes that occur with certain probabilities. If the preferences of the decision-maker satisfy these axioms, a real-valued function exists, called the utility function, which correlates outcomes with a scale that expresses judgements on the $[0, 1]$ interval.

It is uncomplicated to model the utility function for a single attribute (Zeleny, 1982). However, in practice an alternative is generally chosen by expressing preferences on a set of attributes or criteria $\{x_1, \ldots, x_n\}$. In this case, the alternative $a_i$ is represented with a vector of values $a_i = (x_1(a_i), \ldots, x_n(a_i))$. Its utility is determined by assigning the vector a real value between 0 and 1. It is difficult to directly assess alternatives with the multi-attribute utility function, so this problem is reduced by defining a partial (one-dimensional) utility function for each attribute: $u_j(a_i): x_j(a_i) \to [0, 1]$. Partial utilities are aggregated with a decomposition rule. It can have several forms, of which the most widely used is the weighted additive decomposition:

u(a_i) = \sum_{j=1}^{n} w_j \cdot u_j(a_i).
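To make the decomposition concrete, the following minimal sketch aggregates partial utilities with normalized weights. It assumes the partial utilities $u_j(a_i)$ have already been assessed on $[0, 1]$; all names and numeric values are illustrative, not taken from the paper:

```python
import numpy as np

def additive_utility(partial_utilities, weights):
    """Aggregate partial utilities u_j(a_i) in [0,1] with normalized weights.

    partial_utilities: (m alternatives) x (n criteria) matrix.
    weights: vector of n non-negative weights; normalized to sum to 1.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                  # enforce sum-to-one
    return np.asarray(partial_utilities, dtype=float) @ w

# Three alternatives evaluated on three criteria (illustrative values).
u = np.array([[0.9, 0.4, 0.7],
              [0.6, 0.8, 0.5],
              [0.3, 0.9, 0.8]])
print(additive_utility(u, [0.5, 0.3, 0.2]))          # overall utilities u(a_i)
```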
2.2 PROMETHEE I and II methods

PROMETHEE is a family of methods that are based on the concepts of pseudo-criterion, outranking relation and pairwise comparisons (Brans and Vincke, 1985). For a pair of alternatives $a_i$ and $a_j$, and for each criterion $x_k$, the preference function $P_k(a_i, a_j)$ is defined on the interval $[0, 1]$ according to the criterion-wise values $g_k(a_i)$ and $g_k(a_j)$, and according to the chosen indifference ($q_k$), preference ($p_k$) or Gauss ($s_k$) thresholds. This function expresses the degree to which $a_i$ outranks (outperforms) $a_j$. It can have one of six possible shapes, of which the linear one is the most widely used:

P_k(a_i, a_j) = \begin{cases} 0, & d_k(a_i, a_j) \le q_k, \\ \dfrac{d_k(a_i, a_j) - q_k}{p_k - q_k}, & q_k < d_k(a_i, a_j) < p_k, \\ 1, & d_k(a_i, a_j) \ge p_k, \end{cases}

where $d_k(a_i, a_j) = g_k(a_i) - g_k(a_j)$. The outranking degrees are calculated for both "directions", so that the $P_k(a_i, a_j)$ and $P_k(a_j, a_i)$ values are obtained. Criterion-wise indices are aggregated by taking criteria weights into account:

\pi(a_i, a_j) = \sum_{k=1}^{n} w_k \cdot P_k(a_i, a_j).

In the next step, the positive and negative ranking flows $\phi^+(a_i)$ and $\phi^-(a_i)$ are computed for every alternative $a_i$. They indicate the average degree to which $a_i$ performs better, respectively worse, than all other alternatives:

\phi^+(a_i) = \frac{1}{n-1} \sum_{a_j \in A} \pi(a_i, a_j) \quad \text{and} \quad \phi^-(a_i) = \frac{1}{n-1} \sum_{a_j \in A} \pi(a_j, a_i).

The inferred flows can be interpreted in two ways. The PROMETHEE I method considers them simultaneously. A partial rank-order of alternatives is thereby derived, in which the incomparability relation may exist in addition to the preference and indifference relations. More often, a weak rank-order is obtained with the PROMETHEE II method. For this purpose, alternatives are evaluated with the net flow:

\phi(a_i) = \phi^+(a_i) - \phi^-(a_i) = \frac{1}{n-1} \sum_{a_j \in A} \sum_{k=1}^{n} w_k \cdot (P_k(a_i, a_j) - P_k(a_j, a_i)).

2.3 Dichotomic ELECTRE TRI method

The above described PROMETHEE I and II methods are designed to rank-order alternatives. Yet, the concepts of pseudo-criterion and outranking relation enable sorting as well. Two variants of PROMETHEE dealing with the sorting problematic have been recently introduced (Araz and Ozkarahan, 2007; Doumpos and Zopounidis, 2004), while the most widespread outranking method for sorting is ELECTRE TRI (Mousseau et al., 2000; Roy, 1991). The latter has been slightly modified within the scope of our research work by following the localization principle and preventing the incomparability relation, in order to allow for group consensus seeking and automated multi-agent negotiation (Bregar et al., 2008).

The dichotomic ELECTRE TRI method compares all alternatives with the profile $b$. Acceptable choices belong to the positive category $C^+$, while unsatisfactory ones are members of the negative category $C^-$. Let $s_j(a_i, b)$ express the degree to which the option $a_i$ outperforms the profile $b$ according to the criterion $x_j$. Its calculation is based on the indifference and preference thresholds $q_j$ and $p_j$:

s_j(a_i, b) = \max\left\{ \min\left\{ \frac{g_j(a_i) - g_j(b) - q_j}{p_j - q_j}, 1 \right\}, 0 \right\}.

Analogously, $s_j(b, a_i)$ represents the valued outranking of $a_i$ by $b$. To express the degree of concordance with the assertion "the alternative $a_i$ belongs to the class $C^+$", the indices $s_j(a_i, b)$ and $s_j(b, a_i)$ are aggregated with a fuzzy averaging operator:

c_j(a_i) = \frac{1}{2} \left( s_j(a_i, b) + (1 - s_j(b, a_i)) \right).

For the sake of compensation of small weaknesses, the indices $c_j(a_i)$ are combined so that each is scaled by the weight $w_j$, which represents the voting power of the $j$-th criterion and determines its contribution to the decision:

c(a_i) = \sum_{j=1}^{n} w_j \cdot c_j(a_i).

For each criterion, the discordance index is also defined based on the discordance and veto thresholds $u_j$ and $v_j$. It reflects the partially noncompensatory degree of veto on the assertion "$a_i$ belongs to $C^+$":

d_j(a_i) = \max\left\{ \min\left\{ \frac{g_j(b) - g_j(a_i) - u_j}{v_j - u_j}, 1 \right\}, 0 \right\}.

The overall nondiscordance relation is grounded in two ways:

d'(a_i) = \prod_{j=1}^{n} (1 - d_j(a_i)) \quad \text{or} \quad d''(a_i) = 1 - d(a_i), \text{ where } d(a_i) = \max_{j=1..n} d_j(a_i).

Because of its absolute and noncompensatory nature, the nondiscordance index does not need to be combined with the concordance index. However, the valued outranking relation is usually obtained as a result of the following multiplication:

\sigma(a_i) = c(a_i) \cdot d(a_i), \text{ so that } d(a_i) = d'(a_i) \text{ or } d(a_i) = d''(a_i).

As $\sigma(a_i) = 0.5$ denotes strict equality between the profile and the alternative, an appropriate $\lambda$-cut should be used to determine the "crisp" membership of the alternative:

a_i \in C^+ \iff \sigma(a_i) \ge \lambda, \text{ where } \lambda \in [0.5, 1].
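The sketch below traces the dichotomic computation from the criterion-wise outranking degrees to the $\lambda$-cut, following the definitions above with the compensatory aggregation $d'(a_i)$. Function names and all data are illustrative:

```python
import numpy as np

def electre_tri_sigma(g_a, g_b, w, q, p, u, v, lam=0.6):
    """Dichotomic ELECTRE TRI credibility sigma(a_i) for one alternative.

    g_a, g_b : criterion-wise values of the alternative and the profile b
    w        : weights summing to 1; q, p : indifference/preference thresholds
    u, v     : discordance/veto thresholds; lam : cut level in [0.5, 1]
    """
    s_ab = np.clip((g_a - g_b - q) / (p - q), 0.0, 1.0)   # a_i outranks b
    s_ba = np.clip((g_b - g_a - q) / (p - q), 0.0, 1.0)   # b outranks a_i
    c_j = 0.5 * (s_ab + (1.0 - s_ba))                     # fuzzy averaging
    c = float(w @ c_j)                                    # weighted concordance
    d_j = np.clip((g_b - g_a - u) / (v - u), 0.0, 1.0)    # partial discordance
    d_prime = float(np.prod(1.0 - d_j))                   # aggregation d'
    sigma = c * d_prime                                   # valued outranking
    return sigma, ("C+" if sigma >= lam else "C-")

# Illustrative data: 3 criteria, profile b separating C+ from C-.
g_a = np.array([0.70, 0.55, 0.40]); g_b = np.array([0.50, 0.50, 0.50])
w = np.array([0.4, 0.35, 0.25])
q = np.full(3, 0.05); p = np.full(3, 0.15)
u = np.full(3, 0.20); v = np.full(3, 0.40)
print(electre_tri_sigma(g_a, g_b, w, q, p, u, v))
```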
3 Existing approaches to sensitivity analysis and visualization

3.1 Techniques and studies

Hites et al. (2006) have explored the applicability of multi-criteria decision-making concepts to the robustness framework by observing the similarities and differences between multi-criteria and robustness problems. In their opinion, a conclusion is called robust if it is true for all or almost all scenarios, where a scenario is a plausible set of parameter values used to solve the problem. In a similar manner, Vincke (1999a) has provided the definition of a robust preference aggregation method. He has analyzed the robustness of eleven methods for the construction of an outranking relation.

Several researchers have investigated the $L_P$-metric sensitivity analysis of additive multiple attribute value models. Barron and Schmidt (1988) have introduced a procedure for the computation of weights that make the utility of one alternative exceed the utility of a compared alternative by the amount of $\delta$. They have measured the closeness of derived and original weights by the squared deviation. Wolters and Mareschal (1995) have presented a similar method for determining the modification of a given set of weights, which sums up absolute deviations. In addition to the closeness of weights, Ringuest (1997) has developed a second measure of sensitivity: a decision is considered insensitive if the rank order of weights that led to the original best solution must be altered for the optimal solution to change. A method has been defined which applies both criteria simultaneously by searching for solutions that minimize the $L_1$ and $L_\infty$ distance metrics subject to a set of linear constraints. Jansen et al. (1997) have described the problems that may occur when using standard software for linear programming. Accordingly, they have proposed a framework for performing efficient sensitivity analysis.

Zopounidis and Doumpos (2002) discuss optimality measures for classification and sorting with respect to the assignment of alternatives in the reference set. Two $L_1$-norm distance metrics determine the classification error and the satisfaction of classification rules, respectively. Mousseau et al. (2001) measure the minimal difference $\alpha$ between the credibilities of alternatives and the cutting level that determines to which classes the alternatives should be sorted. The larger the value of $\alpha$, the more stable the assignments. Dias et al. (2002) do not approach the measurement of robustness numerically. Instead, their aim is to identify unrobust alternatives that have a wide range of classes to which they may be sorted, since they are strongly affected by the imprecision of data.

Hodgkin et al. (2005) argue that systematic multi-dimensional sensitivity analysis is not well supported by available facilities. Their review of existing techniques for the display of multi-dimensional data reveals many approaches, which may be grouped into three categories:
1. approaches that try to retain all information and display it in some manner;
2. reduction of the dimensionality by applying multi-variate statistical analysis;
3. displays of sensitivity analysis which focus on the outcomes rather than the input data, such as stability intervals, triangles of the weight space, etc.

Hodgkin et al. describe two software tools for robustness analysis and visual interactive modelling: the triangle plot and the principal components analysis plot. The first reveals three-dimensional stability regions of the weight space, while the latter reduces dimensionality.
Both plots have been evaluated from the perspective of end users. The triangle plot is found to be intuitive and easy to use. It exposes robustness and serves as an analytical device with which users can quickly deduce whether the results are as expected. The principal components analysis plot, on the contrary, is rather a heuristic device that exposes comparisons and directs users to further investigations.

3.2 Variance based methods

It has been established that people have difficulties with interpreting and visualizing information in four or more dimensions. An approach that confronts this problem is the principal components analysis (Jolliffe, 2002), which has already been applied in many fields of science for the purpose of reducing dimensionality and providing a good insight into correlations between variables by preserving a high degree of variance in data. It is often possible to identify a few groups of variables that capture the same key principles, and are hence strongly correlated. Linear combinations of these original variables define a set of principal components forming the unique non-redundant orthogonal basis of a new data space. Each component corresponds to an axis of the new space. It is selected in such a way that its variance is the highest of all possible choices for this axis. The set of principal components has the same expressive power as the set of original variables; however, the sum of variances of only the first two or three principal components generally exceeds 80 percent of the variance in the original data. For this reason, it is sufficient to consider a small subset of principal components in order to preserve the majority of information. Because it offers the simplest and most understandable interpretation and visualization, the projection on a two-dimensional plane, defined by the first and the second component, is usually performed.

The principal components analysis may be applied in combination with nearly all multi-criteria decision-aiding methods. Probably the first method to use it, under the name GAIA, for almost two decades has been PROMETHEE II (Brans and Mareschal, 1994). It takes the criterion-wise net ranking flows as the basis for visualization:

\phi_k(a_i) = \frac{1}{n-1} \sum_{a_j \in A} (P_k(a_i, a_j) - P_k(a_j, a_i)).

Espinasse et al. (1997) have applied GAIA planes in a multi-agent negotiation framework. They have developed several levels of group planes, which represent decision-makers, coalitions, criteria and weights, with the purpose of assisting the mediator during the negotiation process. Radojevic and Petrovic (1997) have used GAIA within the scope of fuzzy multi-criteria ranking. They have thus extended the applicability of PROMETHEE methods to the cases when criteria values are fuzzy variables.

Saltelli (2001) has studied the properties of variance based methods in the context of importance assessment. He has considered two settings. In the first, the objective has been to identify the most important factor that would lead to the greatest reduction of variance. In the second, the required target variance has been obtained by simultaneously fixing the smallest possible number of factors.
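As an illustration of the GAIA idea, the sketch below computes the criterion-wise net flows $\phi_k(a_i)$ for the linear preference function and projects them onto the first two principal components via a plain SVD. It is a minimal reading of the technique, not the GAIA implementation itself; the evaluation table and thresholds are hypothetical:

```python
import numpy as np

def unicriterion_net_flows(G, q, p):
    """phi_k(a_i) = 1/(m-1) * sum_j (P_k(a_i,a_j) - P_k(a_j,a_i)),
    with the linear preference function on thresholds q_k, p_k.
    G: (m alternatives) x (n criteria) evaluation table g_k(a_i)."""
    m, n = G.shape
    Phi = np.zeros((m, n))
    for k in range(n):
        d = G[:, k][:, None] - G[:, k][None, :]          # d_k(a_i, a_j)
        P = np.clip((d - q[k]) / (p[k] - q[k]), 0.0, 1.0)
        Phi[:, k] = (P - P.T).sum(axis=1) / (m - 1)
    return Phi

def gaia_plane(Phi):
    """Project alternatives onto the first two principal components."""
    X = Phi - Phi.mean(axis=0)                           # center the flows
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    explained = (s ** 2) / (s ** 2).sum()
    return X @ Vt[:2].T, explained[:2]                   # 2-D coordinates

G = np.array([[7.0, 4.0, 6.5],                           # illustrative table
              [6.0, 6.0, 5.0],
              [4.5, 7.5, 7.0],
              [5.5, 5.0, 4.0]])
q = np.array([0.5, 0.5, 0.5]); p = np.array([2.0, 2.0, 2.0])
coords, var2 = gaia_plane(unicriterion_net_flows(G, q, p))
print(coords)        # positions of the alternatives on the GAIA plane
print(var2.sum())    # share of variance preserved by the projection
```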
3.3 Integration in decision support systems

In order to make the process of preference assessment interactive, Mustajoki et al. (2005) have developed the WINPRE software, which seeks for three-dimensional stability regions in the weight space, ranges of allowed imprecise weights and partial utility intervals. Another decision support system that visualizes the utilities of alternatives in the context of group decision-making is RINGS (Kim and Choi, 2001). By observing the overlapping of utility ranges for individual decision-makers and the whole group, consensus can be reached. Moreno-Jimenez et al. (2005) have implemented a spreadsheet module for consensus building, which is able to visualize preference structures with radial graphic representation maps. Each structure is mapped to a planar polygon whose vertices are placed at the end of rays cast from a central point.

Bana e Costa et al. (1999) have integrated several decision support systems which implement visualization and sensitivity analysis techniques. EQUITY provides graphical cost-benefit efficiency analysis, MACBETH depicts value functions, while V.I.S.A. visualizes partial utilities of alternatives and computes stability intervals. Siskos et al. (1999) have embedded visual components into the MIIDAS system. The decision-maker can shape the value function in terms of its curvature and turning point, graphically perform trade-offs, observe the ordinal regression curve and view the net graph coming from the cluster analysis. Jimenez et al. (2003) have introduced a system that allows for imprecise assignments of weights and utilities, whereby inputs can be subjected to different sensitivity analyses and visualization aids, including:
• pie charts of certainties and probabilities;
• bar charts of weights and utilities;
• graphical representations of utility functions;
• stability intervals of weights;
• several types of simulation techniques designed to randomly modify weights by preserving their rank order or numerical intervals.

4 Stability intervals and regions

4.1 Stability intervals

The inference of stability intervals represents the most basic form of sensitivity analysis, next to the "what-if" analysis, which is, in connection with interactive graphic tools, used primarily in the phases of criteria structuring and preference elicitation. It is implemented by many decision support systems that help companies and large corporations make important organizational and business decisions (Forman and Selly, 2001). The purpose of this technique is to determine for which intervals of values of a single parameter (for example, a criterion weight) the rank-order of alternatives is preserved. Its main strength is the ability to identify the boundaries of stability intervals automatically, without any manual intervention. It is thus appropriate for robustness checking after the preference aggregation phase completes.

To determine the influence of the criterion $x_i \in X$ on the rank-order of alternatives, its weight continuously increases on the interval from 0 to 1. The weights of all other criteria $x_j \in X \setminus \{x_i\}$ decrease inversely proportioned according to their relative portions $d_j$ that exclude $w_i$:

d_j = \frac{w_j}{s_i}, \text{ where } s_i = \sum_{t=1, t \ne i}^{n} w_t.

Since the weighted additive utility function is applied, the point of indifference between two alternatives can be expressed with a linear equation:

w_i \cdot u_i(a_k) + \sum_{j \ne i} d_j \cdot (1 - w_i) \cdot u_j(a_k) = w_i \cdot u_i(a_l) + \sum_{j \ne i} d_j \cdot (1 - w_i) \cdot u_j(a_l).
The weight $w_i$ is easily derived:

\frac{w_i}{1 - w_i} = \frac{\sum_{j \ne i} d_j \cdot (u_j(a_l) - u_j(a_k))}{u_i(a_k) - u_i(a_l)}.

If the normalization of weights is required, such that their sum equals 1, it becomes clear that the weight of the criterion $x_j$ decreases by $\Delta w_j = d_j \cdot \Delta w_i$ when the weight of the observed criterion $x_i$ is increased by $\Delta w_i$. Thereby, the theoretical foundations for the graphical representation of stability intervals are laid. Complementarily, the analytical computation of all possible weights $w_i$ at which the rank-order changes is also useful. In this case, the utilities of alternatives must be compared for all pairs $a_k$ and $a_l$, such that $k, l = 1, \ldots, m$ and $k \ne l$. This requires $m \cdot (m - 1) / 2$ pairwise comparisons. Analogously, one-dimensional stability intervals can be found for the PROMETHEE II method, which is based on additive aggregation as well:

\frac{w_i}{1 - w_i} = \frac{\sum_{b \in A} \sum_{j \ne i} d_j \cdot (P_j(a_l, b) - P_j(b, a_l) - P_j(a_k, b) + P_j(b, a_k))}{\sum_{b \in A} (P_i(a_k, b) - P_i(b, a_k) - P_i(a_l, b) + P_i(b, a_l))}.

4.2 Two-dimensional stability regions

It is possible to generalize the stability regions analysis to two or more dimensions. This subsection discusses the interaction of two criteria weights, because otherwise the reduction of dimensionality or (non)linear programming must be performed. The latter approach is addressed in the next section. The former is realized by the principal components analysis and is applied by the visual GAIA analysis (Brans and Mareschal, 1994), which projects the multi-dimensional criteria space on a plane, and thereby loses some preferential information.

The two-dimensional sensitivity analysis considers each pair of weights that belong to criteria of the same hierarchical group (let these be the weights $w_i$ and $w_j$). For a pair of alternatives $a_k$ and $a_l$, it is determined for which values of $w_i$ and $w_j$ the indifference relation holds. In general, a single point (meaning that the alternatives are equivalent for unique weights $w_i$ and $w_j$), a straight line (implying indifference for an infinite space of weights), or an empty set (meaning that one alternative is preferred to the other for all values of $w_i$ and $w_j$) is obtained. Lines and points delimit regions within which the rank-order of alternatives remains constant. The stability regions are additionally delimited by the borderlines $w_i = 0$, $w_j = 0$ and $w_i + w_j = 1$. It is clear that the new model has one degree of freedom more than the model of stability intervals:

w_i \cdot u_i(a_k) + w_j \cdot u_j(a_k) + (1 - w_i - w_j) \cdot \bar{u}(a_k) = w_i \cdot u_i(a_l) + w_j \cdot u_j(a_l) + (1 - w_i - w_j) \cdot \bar{u}(a_l),

where $\bar{u}(a_k)$, respectively $\bar{u}(a_l)$, is the constant utility of the $n - 2$ criteria that do not change during the analysis:

\bar{u}(a_k) = \sum_{h \ne i,j} \frac{w_h}{\bar{W}} \cdot u_h(a_k), \text{ where } \bar{W} = \sum_{h \ne i,j} w_h.

The correlation between the weights is now obtained:

w_i = \frac{\bar{u}(a_l) - \bar{u}(a_k) - w_j \cdot (u_j(a_k) - u_j(a_l) - \bar{u}(a_k) + \bar{u}(a_l))}{u_i(a_k) - u_i(a_l) - \bar{u}(a_k) + \bar{u}(a_l)}.

By setting $w_j = 0$ and $w_j = 1 - w_i$, it can be seen when two alternatives $a_k$ and $a_l$ become equivalent. An analogous two-dimensional sensitivity analysis has been implemented for the PROMETHEE II method as a functionality of the PROMCALC decision support system.
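A brute-force counterpart of the analytical derivation in Section 4.1 is easy to state: sweep $w_i$ over $[0, 1]$, rescale the remaining weights by their shares $d_j$, and record the weight values at which the best alternative changes. The sketch below does exactly that for the additive utility model; the scan step and all data are illustrative, and the analytical $m(m-1)/2$ pairwise comparisons would yield the same boundaries exactly:

```python
import numpy as np

def stability_breakpoints(U, w, i, steps=10001):
    """Scan w_i over [0,1]; the remaining weights keep their relative
    shares d_j = w_j / sum_{t != i} w_t.  Returns the w_i values at
    which the best-ranked alternative changes (interval boundaries)."""
    w = np.asarray(w, float); U = np.asarray(U, float)
    others = [j for j in range(len(w)) if j != i]
    d = w[others] / w[others].sum()
    breakpoints, prev_best = [], None
    for wi in np.linspace(0.0, 1.0, steps):
        w_new = np.empty_like(w)
        w_new[i] = wi
        w_new[others] = d * (1.0 - wi)
        best = int(np.argmax(U @ w_new))       # highest utility wins
        if prev_best is not None and best != prev_best:
            breakpoints.append((round(wi, 4), prev_best, best))
        prev_best = best
    return breakpoints

U = np.array([[0.9, 0.2, 0.5],                 # u_j(a_k): alternatives x criteria
              [0.4, 0.8, 0.6],
              [0.5, 0.6, 0.9]])
print(stability_breakpoints(U, w=[0.3, 0.4, 0.3], i=0))
```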
5 Multi-dimensional robustness analysis

Mathematical programming can be applied to judge the influence of arbitrarily many simultaneously changing parameters. The motivation for its use lies in the fact that multi-dimensional information is totally preserved, while in the case of visualisation it gets partially lost because of the projection on a plane. For this reason, several original robustness metrics are proposed. They are implemented with optimization algorithms.

5.1 Optimization approaches for the multi-attribute utility function

The goal of these approaches is to test how robust the rank-order of alternatives is with regard to the weights of all criteria that are structured into a common hierarchical group. Thereby, a comprehensive insight into the model and its robustness must be assured with as few metrics as possible. Four mathematical optimization programs are hence defined.

The first exposes the minimal change of the weight vector that causes the best ranked alternative to lose its priority over any other, originally less optimal solution, which means that the best ranked alternative changes. This measurement is of essential importance, since a rational decision is to choose an alternative with the highest utility/value. The robustness of such a choice is obtained with the following program:

\Delta_w = \min \frac{\left( \sum_{j=1}^{n} |w_j - \tilde{w}_j|^P \right)^{1/P}}{\Delta_w^{\max}}

subject to

u(a_k) = \sum_{j=1}^{n} w_j \cdot u_j(a_k) \le \max_{l \ne k} \left\{ u(a_l) = \sum_{j=1}^{n} w_j \cdot u_j(a_l) \right\},
\sum_{j=1}^{n} w_j = 1,
0 \le w_j \le 1, \forall j = 1, \ldots, n,

where $\tilde{w}_j$ are the current and $w_j$ the newly derived weights, and where it holds:

\tilde{u}(a_k) = \sum_{j=1}^{n} \tilde{w}_j \cdot u_j(a_k) = \max_{l=1..m} \left\{ \tilde{u}(a_l) = \sum_{j=1}^{n} \tilde{w}_j \cdot u_j(a_l) \right\}.

The parameter $P$, $1 \le P \le \infty$, determines which one of the $L_P$ distance metrics is used. Usually, the Manhattan norm ($L_1$), which returns the rectangular distance between two vectors, or the Euclidean norm ($L_2$), which takes the hypotenuse of a right triangle as the distance, is used because of the simplest interpretation. The distance has to be normalized by division with the largest possible change of the weight vector $\Delta_w^{\max}$. For the case when all criteria weights are allowed to take any value from the $[0, 1]$ interval ($\forall j: dw_j = uw_j - lw_j = 1$), the vector changes maximally when exactly two of its components move from one extreme to the other:

w_i = 1, \forall k \ne i: w_k = 0 \implies w_j = 1, i \ne j, \forall k \ne j: w_k = 0.

In this special situation, $\Delta_w^{\max}$ equals $\sqrt[P]{2}$. However, for arbitrary differences $dw_j$, such that $\neg \forall j: dw_j = 1$ holds, the following mathematical program is solved:

\Delta_w^{\max} = \max \left( \sum_{j=1}^{n} |w_j^S - w_j^E|^P \right)^{1/P}

by deriving $w_j^S, w_j^E, \forall j = 1, \ldots, n$, subject to

\sum_{j=1}^{n} w_j^S = 1, \quad \sum_{j=1}^{n} w_j^E = 1,
lw_j \le w_j^S \le uw_j, \quad lw_j \le w_j^E \le uw_j, \forall j = 1, \ldots, n.

The superscripts $S$ and $E$ denote the starting, respectively ending, weights, and also the initial, respectively final, utilities in the next two programs. To find the largest allowed deviation of the weight vector, such that the preferential relation is preserved for a pair of selected alternatives $a_1$ and $a_2$, the following optimization problem must be dealt with:

maximize \Delta(w^S, w^E)

subject to

u^S(a_1) = \sum_{j=1}^{n} w_j^S \cdot u_j(a_1) \ge u^S(a_2) = \sum_{j=1}^{n} w_j^S \cdot u_j(a_2),
u^E(a_1) = \sum_{j=1}^{n} w_j^E \cdot u_j(a_1) \ge u^E(a_2) = \sum_{j=1}^{n} w_j^E \cdot u_j(a_2),
\sum_{j=1}^{n} w_j^S = \sum_{j=1}^{n} w_j^E = 1,
w_j^S \in [0, 1], w_j^E \in [0, 1], \forall j = 1, \ldots, n.

The last addressed problem is to find the smallest change of the weight vector for which an initially suboptimal alternative becomes the best ranked one. As it is similar to the previous optimization problem, the mathematical program is only slightly modified:

minimize \Delta(w^S, w^E)

subject to

u^S(a_1) < u^S(a_i), \exists i = 2, \ldots, m,
u^E(a_1) \ge u^E(a_i), \forall i = 2, \ldots, m,
u^S(a_i) = \sum_{j=1}^{n} w_j^S \cdot u_j(a_i), \forall i = 1, \ldots, m,
u^E(a_i) = \sum_{j=1}^{n} w_j^E \cdot u_j(a_i), \forall i = 1, \ldots, m,
\sum_{j=1}^{n} w_j^S = \sum_{j=1}^{n} w_j^E = 1,
w_j^S \in [0, 1], w_j^E \in [0, 1], \forall j = 1, \ldots, n.
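For $P = 1$ the first program linearizes into a sequence of small linear programs, one per challenger alternative: each $|w_j - \tilde{w}_j|$ is bounded by an auxiliary variable $t_j$, and the constraint $u(a_k) \le u(a_l)$ is a single linear row. The sketch below is one possible reading under these assumptions, not the authors' original implementation; it assumes scipy is available, and the small `eps` emulates the loss of strict priority:

```python
import numpy as np
from scipy.optimize import linprog

def min_weight_change_l1(U, w0, eps=0.0):
    """Smallest L1 change of the weights after which the incumbent best
    alternative a_k is matched/beaten by some challenger a_l.  One LP per
    challenger, variables x = [w (n), t (n)] with |w - w0| <= t; result is
    normalized by 2, the largest L1 change of a normalized weight vector."""
    U, w0 = np.asarray(U, float), np.asarray(w0, float)
    m, n = U.shape
    k = int(np.argmax(U @ w0))                       # incumbent best
    best = np.inf
    for l in range(m):
        if l == k:
            continue
        c = np.concatenate([np.zeros(n), np.ones(n)])          # min sum(t)
        A_ub = np.block([[np.eye(n), -np.eye(n)],              # w - t <= w0
                         [-np.eye(n), -np.eye(n)],             # -w - t <= -w0
                         [(U[k] - U[l])[None, :], np.zeros((1, n))]])
        b_ub = np.concatenate([w0, -w0, [-eps]])               # u(a_k) <= u(a_l)
        A_eq = np.concatenate([np.ones(n), np.zeros(n)])[None, :]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, 1)] * n + [(0, None)] * n)
        if res.success:
            best = min(best, res.fun / 2.0)
    return best

U = np.array([[0.9, 0.2, 0.5],
              [0.4, 0.8, 0.6],
              [0.5, 0.6, 0.9]])
print(min_weight_change_l1(U, [0.6, 0.2, 0.2]))      # normalized Delta_w
```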
It is presupposed that the alternative selected to become optimal for the final inferred distribution of weights is denoted with $a_1$, and that there exists at least one initially superior alternative.

5.2 Optimization approaches for the ELECTRE TRI method

Three types of distance metrics are defined. They reflect the minimal deviations of the weight, veto and preference vectors that cause the reassignment of an alternative to the other category. When, considering the alternative $a_i$, any of these measures is low, the membership of $a_i$ is not sufficiently robust, because only a slight modification of preferences may result in a different decision.

The simplest task is to find the smallest change of the weight vector so that the reassignment of $a_i$ to the other class occurs: $a_i \in C^+ \to a_i \in C^-$ or $a_i \in C^- \to a_i \in C^+$. The problem is solved with a linear optimization program, for which all used symbols have already been defined:

\Delta_w(a_i) = \min \frac{\left( \sum_{j=1}^{n} |w_j - \tilde{w}_j|^P \right)^{1/P}}{\Delta_w^{\max}}

by deriving $w_j, \forall j = 1, \ldots, n$, subject to

\sigma(a_i) = d(a_i) \cdot \left( \sum_{j=1}^{n} w_j \cdot c_j(a_i) \right) = \lambda,
\sum_{j=1}^{n} w_j = 1,
lw_j \le w_j \le uw_j, \forall j = 1, \ldots, n.

A harder problem is to measure the robustness of the veto and discordance thresholds $v_j$ and $u_j$. An advanced metric is needed that allows for the aggregation of discordance indices, and indicates the minimal threshold deviations that would cause the observed alternative to be reassigned:

\Delta_v(a_i) = \min \left( \frac{\sum_{j=1}^{n} \delta_j^P}{\sum_{j=1}^{n} (2 g_j(b) - p_j - D_j^-)^P} \right)^{1/P}

by deriving $u_j$ and $v_j, \forall j = 1, \ldots, n$, subject to

\sigma(a_i) = c(a_i) \cdot \prod_{j=1}^{n} (1 - d_j(a_i)) = \lambda,
d_j(a_i) = \max\left\{ \min\left\{ \frac{g_j(b) - g_j(a_i) - u_j}{v_j - u_j}, 1 \right\}, 0 \right\},
\delta_j = |u_j - \tilde{u}_j| + |v_j - \tilde{v}_j| + |(v_j - u_j) - (\tilde{v}_j - \tilde{u}_j)|, \forall j = 1, \ldots, n,
p_j \le u_j \le v_j \le D_j^+, \forall j = 1, \ldots, n.

The program minimizes the distances between the previous and new values of the discordance and veto thresholds. In addition, it pays regard to the distances between different thresholds ($|v_j - u_j|$), to prevent anomalies that can occur if the thresholds converge towards the same value. It clearly demonstrates the problematic of finding the smallest change of the $u_j$ and $v_j$ thresholds that causes reclassification. Yet, it has to deal with piecewise linear functions with unknown segments. For this reason, it is substituted with a different optimization program. For each value $g_j(a_i)$, an appropriate partial discordance degree is found so that the product of these degrees equals the required overall discordance $\tilde{d}(a_i)$, calculated by dividing the fixed cut level $\lambda$ with the fixed concordance index $c(a_i)$. Then, the criterion-wise coefficient $k_j$ of a linear function is derived according to $g_j(a_i)$ ($x$-axis) and $\tilde{d}_j(a_i)$ ($y$-axis), for each index $j = 1, \ldots, n$. The induced function determines the $u_j$ and $v_j$ thresholds (at $y = 0$ and $y = 1$), and minimizes the distance metric:

\Delta_v(a_i) = \min \left( \frac{\sum_{j \in F} \delta_j^P}{\sum_{j \in E} (2 g_j(b) - p_j - D_j^-)^P} \right)^{1/P}

by deriving $\tilde{d}_j(a_i)$ and $k_j, \forall j \in F$, subject to

E = \{1, \ldots, n\}, F \subseteq E,
\prod_{j \in F} (1 - \tilde{d}_j(a_i)) \cdot \prod_{j \in E \setminus F} (1 - d_j(a_i)) = \tilde{d}(a_i),
0 \le \tilde{d}_j(a_i) \le 1, \forall j \in F,
\delta_j = \delta_j^u + \delta_j^v + \delta_j^{uv}, \forall j \in F,
\delta_j^u = \left| u_j - g_j(a_i) + \frac{\tilde{d}_j(a_i)}{k_j} \right|, \forall j \in F,
\delta_j^v = \left| v_j - g_j(a_i) - \frac{1 - \tilde{d}_j(a_i)}{k_j} \right|, \forall j \in F,
\delta_j^{uv} = \left| v_j - u_j - \frac{1}{k_j} \right|, \forall j \in F,
\frac{\tilde{d}_j(a_i)}{D_j^+ - D_j^- - g_j(a_i)} \le k_j \le \frac{\tilde{d}_j(a_i)}{g_j(a_i) - p_j}, \forall j \in F.
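Because $c_j(a_i)$ and $d(a_i)$ stay fixed, the constraint $\sigma(a_i) = \lambda$ in the first program above is linear in the weights, so the minimal $L_1$ perturbation is again a linear program. The sketch below assumes scipy is available and uses purely illustrative data:

```python
import numpy as np
from scipy.optimize import linprog

def electre_weight_robustness(c_j, d_bar, w0, lam=0.6, lo=0.0, hi=1.0):
    """Minimal L1 perturbation of the weights that drives
    sigma(a_i) = d(a_i) * sum_j w_j c_j(a_i) exactly to the cut level
    lambda, i.e. to the boundary between C+ and C-.  Variables are
    x = [w (n), t (n)] with |w - w0| <= t."""
    c_j, w0 = np.asarray(c_j, float), np.asarray(w0, float)
    n = len(w0)
    c = np.concatenate([np.zeros(n), np.ones(n)])        # minimize sum |dw|
    A_ub = np.block([[np.eye(n), -np.eye(n)],            # w - t <= w0
                     [-np.eye(n), -np.eye(n)]])          # -w - t <= -w0
    b_ub = np.concatenate([w0, -w0])
    A_eq = np.array([np.concatenate([np.ones(n), np.zeros(n)]),
                     np.concatenate([d_bar * c_j, np.zeros(n)])])
    b_eq = np.array([1.0, lam])                          # sum w = 1, sigma = lam
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(lo, hi)] * n + [(0, None)] * n)
    return res.fun / 2.0 if res.success else None        # normalized by 2

# Illustrative: concordances of a_i, fixed nondiscordance, current weights.
print(electre_weight_robustness(c_j=[0.9, 0.4, 0.7], d_bar=0.95,
                                w0=[0.4, 0.35, 0.25], lam=0.6))
```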
Figure 1 gives a graphical interpretation of how the new $u_j$ and $v_j$ thresholds are inferred by inducing the $k_j$ coefficient. The thresholds may be modified either with a parallel shift of the function or by changing its slope with an increase/decrease of the $k_j$ coefficient. Consequently, either their absolute difference or the initial value of $u_j$ is preserved. A third possibility also exists: by combining the shift and the angle adjustment, all differences $\Delta u$, $\Delta v$ and $\Delta uv$ become positive. In Figure 1, $k_0$ and $k_1$ depict the initial, respectively the extreme possible, induced slope of the linear function. Similarly, $y_0$ denotes the initial partial discordance degree and $y_1$ represents the required adjusted degree. Finally, $x_0$ corresponds to the criterion-wise value of the alternative, $g_j(a_i)$. If $a_i$ is a member of the positive category $C^+$, the discordance degree must increase in order to cause the reassignment, which is a prerequisite to properly measure robustness. Then $y_1 > y_0$; otherwise $y_1 < y_0$.
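Under this reading of Figure 1 (a line with slope $k_j$ through $(x_0, y_1)$ crossing $y = 0$ at the new $u_j$ and $y = 1$ at the new $v_j$), the induced thresholds and the deviation $\delta_j$ take a very simple form. The sketch below is a hypothetical illustration of that geometry only, not part of the optimization program itself, and all numeric values are invented:

```python
def induced_thresholds(x0, y1, k):
    """Given the criterion value x0 = g_j(a_i), the required partial
    discordance y1 and an induced slope k > 0, the line y = k * (x - u')
    crosses y = 0 at the new discordance threshold u' and y = 1 at the
    new veto threshold v'."""
    u_new = x0 - y1 / k
    v_new = x0 + (1.0 - y1) / k
    return u_new, v_new

def threshold_deviation(u, v, x0, y1, k):
    """delta_j = |u - u'| + |v - v'| + |(v - u) - 1/k| for one criterion."""
    u_new, v_new = induced_thresholds(x0, y1, k)
    return abs(u - u_new) + abs(v - v_new) + abs((v - u) - 1.0 / k)

# Example: raise the partial discordance of a C+ member from y0 to y1.
u, v = 0.10, 0.60          # current discordance and veto thresholds
x0, y0, y1 = 0.40, 0.50, 0.70
k = 4.0                    # one admissible induced slope
print(induced_thresholds(x0, y1, k), threshold_deviation(u, v, x0, y1, k))
```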