DESIGN, IMPLEMENTATION AND TESTING OF A PRIMAL-DUAL INTERIOR POINT METHOD FOR SOLVING LINEAR PROGRAMS

INFORMATICA 4/91

Janez Barle and Janez Grad
Univerza v Ljubljani, Ekonomska fakulteta
Kardeljeva ploščad 17, 61109 Ljubljana

Keywords: linear programming, primal-dual interior point algorithm, sparse Cholesky factorization

ABSTRACT: This paper describes our implementation of a primal-dual interior point algorithm for linear programming. The topics discussed include economical data structures, efficient methods for some sparse matrix operations, sparse Cholesky factorization, methods for handling dense columns and comparisons with simplex based methods. Extensive numerical results demonstrate the efficiency of the resulting algorithm, as well as some problems which remain to be solved. The role of interior point based solvers in the process of solving large-scale mathematical programming models is also discussed.

Introduction

The linear programming (LP) problem may be stated in the form:

   minimize (maximize) cᵀx                                (1)
   subject to: Ax = b, l ≤ x ≤ u

where A is a rectangular matrix, c, x, l, u and b are column vectors and the symbol ᵀ denotes the transpose of a vector or matrix. Some of the bounds in l and u may be infinite. The important features of a typical LP constraint matrix A are its quite large dimension, its sparsity and its specific structure, i.e. the patterns in which its nonzero elements appear.

A classical way of solving the above problem is the simplex algorithm, developed by G.B. Dantzig in the late 1940s. Contemporary variants of this algorithm, which are included in many commercial or university developed software packages, are tailored for solving quite large problems in a fast and reliable way. However, this established algorithm has recently got a serious competitor in the so-called interior point methods. These methods became widely known after Karmarkar's publication (Karmarkar, 1984) of an algorithm that was claimed to be much faster for practical problems than the simplex method. Although these initial promises appeared to be too optimistic, Karmarkar's algorithm and other interior point methods are now regarded as competitive methods for solving LP problems. This is particularly true when specific forms of super size problems are to be solved on supercomputers. Problems of this kind, which are often encountered in communication, transportation and military operations, are very sparse and usually exhibit specific and generally well-behaved block structures that can be effectively exploited. Efforts to develop software systems for solving super size LP problems with Karmarkar's algorithm proved to be very fruitful. One of the outstanding steps in this direction is AT&T's KORBX system.
The system consists of both hardware, which uses parallel processing, and software which exploits the resources of this hardware (Carolan et al., 1990). However, there is also a need to explore the ability of interior point methods to solve LP problems on more widely available serial computers. In this paper we present our work in this direction, which was performed on MS-DOS personal computers and VAX/VMS minicomputers.

Algorithms

Nowadays a plethora of research papers is being published in which different interior point methods are proposed. We have employed a variant of the primal-dual interior point method which is supposed to be among the most efficient (Lustig et al., 1989). In order to make the differences between such methods and simplex based algorithms clear, we first give a brief explanation of the revised simplex algorithm. The steps of this algorithm are roughly described within the following box, where B denotes the basis matrix and c_B the cost vector of the basic variables. It is therefore assumed that there is a set of m basic variables, which is usually changed after each iteration in such a way that one nonbasic variable enters the basis and one basic variable leaves it. The informal description which follows is related to the second phase of the primal revised simplex algorithm, where a basic feasible solution is already known.

   Revised simplex method
   R1: Produce a pricing vector: π = c_Bᵀ B⁻¹ (BTRAN).
   R2: Select the entering variable x_q (with column a_q) according to a given
       pricing strategy. If no entering variable is found, terminate (solution
       is optimal).
   R3: Update the entering column: v = B⁻¹ a_q (FTRAN).
   R4: Determine the leaving variable. If none is found, terminate (problem is
       unbounded).
   R5: Update the basis matrix representation; refactorize if necessary.
       Go to R1.

It must be noted, however, that there is not yet general agreement about which algorithms are best in detail and how they should be implemented in the most efficient way. In general, the number crunching operations are concentrated within steps R1 and R3, where two systems of linear equations have to be solved (these operations are often referred to as BTRAN and FTRAN). It is very important that after a basis change the representation of the basis matrix can be updated without performing a full factorization, which has to be done only periodically. The other steps, particularly R2 and R4, deal mainly with logical decisions and "book-keeping" problems. It is also obvious that rather sophisticated data structures must be employed in order to exploit sparsity (Duff et al., 1989).

Interior point methods differ considerably from the simplex method. The primal-dual interior point method, which we have decided to implement, requires the LP problem to be formulated in the following form:

   minimize cᵀx                                           (2)
   subject to: Ax = b, x + s = u, x ≥ 0, s ≥ 0

with the associated dual

   maximize bᵀy - uᵀw                                     (3)
   subject to: Aᵀy - w + z = c, w ≥ 0, z ≥ 0

Fortunately, formulation (2) can be derived in a straightforward way from formulation (1).
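To see this (a short sketch of the standard transformation, added here for completeness), consider a variable with finite bounds l ≤ x ≤ u in (1). The substitution x' = x - l gives

   minimize cᵀx' + cᵀl
   subject to: Ax' = b - Al, 0 ≤ x' ≤ u - l

and introducing a slack vector s for the upper bounds turns 0 ≤ x' ≤ u - l into x' + s = u - l, x' ≥ 0, s ≥ 0, which is exactly the form required by (2). A maximization problem is treated by negating c, while for an infinite upper bound the corresponding equation in x + s = u is simply omitted.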
An outline of the algorithm is sketched within the box below, where X, S, W and Z are diagonal matrices with diagonal elements equal to the components of the corresponding vectors x, s, w and z. The quantity μ is the barrier parameter; its initial value is a user supplied constant, usually computed from the problem data as μ⁰ = τ·ξ(n)·max(max_i |c_i|, max_j |b_j|), where ξ(n) depends on the problem dimension and τ is a scalar multiplier which is used to allow variations of the initial μ. The function φ(n) used in step K1 is also dimension dependent; a usual choice (Lustig et al., 1989) is φ(n) = n² for n ≤ 5000 and φ(n) = n√n for larger problems. Furthermore, d⁰ = Aᵀy⁰ + z⁰ - c, where y⁰ and z⁰ are the initial y and z, and e = (1, 1, ..., 1)ᵀ. The factors ξ_p and ξ_d attached to the primal and dual residuals are explained in remark a) below. The step lengths α_p and α_d in the primal and dual spaces respectively must be chosen in a way which ensures nonnegativity of the variables x, s, z and w, for example:

   α_p = α₀ · min { min_j (x_j/(-δx_j) : δx_j < 0), min_j (s_j/δx_j : δx_j > 0) }
   α_d = α₀ · min { min_j (z_j/(-δz_j) : δz_j < 0), min_j (w_j/(-δw_j) : δw_j < 0) }

where 0 < α₀ < 1 (a FORTRAN sketch of this ratio test is given at the end of this section).

   Primal-dual interior point method
   K1: Compute μ = (cᵀx + uᵀw - bᵀy)/φ(n).
   K2: Compute Θ = (S⁻¹W + X⁻¹Z)⁻¹.
   K3: Compute the positive definite matrix AΘAᵀ.
   K4: Perform the Cholesky factorization of AΘAᵀ.
   K5: Compute σ(μ) = μ(S⁻¹ - X⁻¹)e - (W - Z)e.
   K6: Compute
       δy = -(AΘAᵀ)⁻¹ (AΘ(σ(μ) + ξ_d d⁰) + ξ_p(Ax⁰ - b))
       δx = Θ(Aᵀδy + σ(μ) - ξ_d d⁰)
       δs = -δx
       δz = -μX⁻¹e + Ze - X⁻¹Z δx
       δw = -μS⁻¹e + We - S⁻¹W δx
   K7: Update
       y_new = y_old - α_d δy,  x_new = x_old - α_p δx,  s_new = s_old - α_p δs,
       z_new = z_old - α_d δz,  w_new = w_old - α_d δw.
   K8: If the relative duality gap satisfies the relation

          (cᵀx + uᵀw - bᵀy) / (1 + |uᵀw - bᵀy|) < ε

       where ε is a user supplied constant, terminate (solution is optimal).
       Otherwise go to K1.

It is also assumed that the initial interior solution is supplied by the user. For example, it is possible to choose x⁰ = z⁰ = min{e, u/2} and y⁰ = 0 (Choi et al., 1990). In general, interior point methods are not very sensitive to the choice of the initial solution.

The above description is based on two papers (Lustig et al., 1989; Choi et al., 1990) in which the algorithmic aspects of the modularized FORTRAN code OB1 (Optimization with Barriers 1) were described. Our intention was to develop our own code based on the mentioned papers and on standard methods for computing a sparse Cholesky factorization (George, Liu, 1981). However, some of the implementation details are different, for example:

a) In our implementation the column Ax - b and the row d are computed at each iteration, rather than only for the initial solution. ξ_p and ξ_d are defined as the ratios between the current and the initial ∞-norms of Ax - b and d. Furthermore, ξ_p and ξ_d retain some small value even in the case when their computed value is zero. Such an approach enables us to save some storage space and also, according to our experience, to improve the accuracy of the computed solution.

b) Sometimes it is impossible for the relative duality gap to reach the prescribed ε in step K8. In such cases we terminate the algorithm when the relative difference between two subsequent objective values becomes smaller than a prescribed constant, which is usually set to 0.1·ε.

On the whole, the published description of the algorithm is good enough to enable everyone to create, possibly after some experimental investigation, a workable implementation of a primal-dual interior point method. Evidently the algorithm consists mostly of floating point computations, and consequently FORTRAN is an obvious choice of implementation language. Some features of the interior point methods that make them so different from the simplex method are obvious:

1) There is no partitioning into basic and nonbasic variables. This means that, in principle, all variables and constraints are handled in an equal way during the solution process.

2) Each iteration requires a computationally expensive factorization of a positive definite matrix, or the solution of the corresponding least squares problem.

3) The solution vector x is always an interior point of the solution polytope.

Feature 1) has far-reaching consequences. It can be a means of avoiding the potential combinatorial problems arising in the movement from one basis to another, which is typical for the simplex method. On the other hand, such an approach may degrade computational speed and stability.
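To make the ratio test announced above concrete, the following is a minimal FORTRAN sketch (our own illustration, not taken from OB1; the function name STEPP, the argument list and the additional cap of the step at 1 are assumptions). It computes the primal step length α_p from x, s and δx according to the formula given before the box; α_d is obtained analogously from z, w, δz and δw.

C     Primal step length: the smallest of the ratios prescribed
C     by the formula above, damped by ALPHA0 (0 < ALPHA0 < 1).
      DOUBLE PRECISION FUNCTION STEPP (N, X, S, DX, ALPHA0)
      INTEGER N, J
      DOUBLE PRECISION X(N), S(N), DX(N), ALPHA0, AMIN, RATIO
C     Start from a full step; this also covers the case where
C     no component of DX restricts the step.
      AMIN = 1.0D0
      DO 100 J = 1, N
         IF (DX(J) .LT. 0.0D0) THEN
C           ratio x(j)/(-dx(j)) for components with dx(j) < 0
            RATIO = -X(J)/DX(J)
            IF (RATIO .LT. AMIN) AMIN = RATIO
         ELSE IF (DX(J) .GT. 0.0D0) THEN
C           ratio s(j)/dx(j) for components with dx(j) > 0
            RATIO = S(J)/DX(J)
            IF (RATIO .LT. AMIN) AMIN = RATIO
         ENDIF
  100 CONTINUE
      STEPP = ALPHA0*AMIN
      RETURN
      END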
The main computational problem of the interior point methods is the inversion of the matrix AΘAᵀ, or the solution of the corresponding linear least squares problem. This is usually done by computing a sparse Cholesky factorization of AΘAᵀ. In order to understand the methods for doing this, one must be acquainted with the methods for storing sparse matrices. In the next section the methods for storing sparse matrices which were applied in our implementation of the primal-dual interior point method are briefly described.

Data structures and implementation issues

Exploitation of sparsity is based on the fact that only the nonzero elements of a sparse matrix (or vector) must be stored, together with information about their position within the matrix (vector). In the case of the LP input data (A, b, c, u) within the framework of interior point methods, we have employed the following data structures:

1) The right-hand side vector b is stored as a dense vector.

2) The constraint matrix A is stored using three vectors (XA, IA, CP), where

   XA = vector of the nonzero values A_ij, sorted by columns and (secondarily) by row indices within a particular column, both in increasing order.
   IA = vector of the row indices of the nonzero elements, sorted in the same manner as XA.
   CP = vector of column pointers which consists of the locations where the representation of the columns begins in XA and IA. For example, the elements of column I are all in locations CP(I) to CP(I+1)-1.

3) The nonzero elements of c are stored (formally) as the (n+1)-st column of A. Therefore they are stored between locations CP(N+1) and CP(N+2)-1 in XA (values) and IA (indices).

4) The noninfinite elements of u are stored (formally) as the (n+2)-nd column of A. Therefore they are stored between locations CP(N+2) and CP(N+3)-1 in XA (values) and IA (indices).

Obviously, it is also necessary to store the matrix AΘAᵀ and its triangular factor L, if a Cholesky factorization is used within the solution process. These matrices can be stored using the same storage locations. The amount of this storage is determined by the fill-in, which generally cannot be avoided during the Cholesky factorization of AΘAᵀ. It is therefore advisable to try to minimize the fill-in by an appropriate reordering of the rows and columns of AΘAᵀ. Ordering algorithms are essentially graph techniques for obtaining an appropriate numbering of the graph nodes. In our case the nonzero structure of AΘAᵀ represents an undirected graph G(X,E) with m nodes. The adjacency list of a node x ∈ X is a list containing all nodes adjacent to x, which is represented by the indices of the nondiagonal nonzero elements of the corresponding column of AΘAᵀ. The implementation of the described structure is done by storing the adjacency lists sequentially in an integer array ADJNCY, along with an index vector XADJ of length M+1 containing pointers to the beginning of the lists in ADJNCY. The extra entry XADJ(M+1) points to the next available location in ADJNCY (George, Liu, 1981). These arrays are the input data for the ordering algorithms, which can generally be divided into two groups:

a) reorderings which try to minimize the number of nonzero elements (and therefore the fill-in) in the triangular factor L. Although this is known to be an NP-complete problem (Duff et al., 1989), several reasonably good heuristics exist. One of them is the minimum degree algorithm. The name of this algorithm is derived from its graph theoretic interpretation: in the graph associated with a symmetric sparse matrix, this strategy corresponds to choosing for the next elimination the node which has the fewest edges connected to it.
b) reorderings which try to permute AΘAᵀ and the triangular factor L into some particular desirable form, for example the so-called envelope or profile form. The best known algorithm of this type is the Reverse Cuthill-McKee algorithm. The objective of such algorithms is to reorder the rows and columns of the matrix so that the nonzeroes of the reordered matrix are clustered near the main diagonal, since this property is retained in the corresponding Cholesky factor L. Such a cluster is called the profile or envelope, and it is defined to contain also all zero elements between the diagonal and the last nonzero element in the row or column. The problem of minimizing the envelope size of a matrix is proven to be NP-complete (Billionnet, Breteau, 1989), and consequently the Reverse Cuthill-McKee algorithm is only one among several heuristic procedures for this task.

We have implemented both the minimum degree and the Reverse Cuthill-McKee algorithm within our LP package. In order to give some insight into these methods, we shall show how they perform on the smallest example from the NETLIB library (Gay, 1985), which is known under the name AFIRO. This is a problem whose constraint matrix has 27 rows and 51 columns, containing altogether 102 nonzero elements. The structure of its matrix AΘAᵀ is shown in Figure 1, where only the upper triangular part is reproduced and the diagonal and nondiagonal nonzero elements are marked with the symbols D and X respectively.

Figure 1. AFIRO - structure of the upper triangular part of AΘAᵀ (D = diagonal nonzero, X = off-diagonal nonzero)

It is obvious that, at least in this case, the matrix AΘAᵀ is not as sparse as the matrix A itself. Moreover, the number of its nonzeroes may substantially increase during the subsequent Cholesky factorization. Figure 2 displays how this situation is controlled by applying the minimum degree algorithm. The produced ordering (permutation of rows and columns) is:

   (4, 14, 26, 27, 13, 22, 12, 3, 24, 2, 1, 11, 10, 17, 18, 19, 20, 23, 16, 15, 9, 8, 25, 21, 6, 7, 5)

Figure 2. AFIRO - structure of the Cholesky factor after minimum degree ordering

The above structure is determined not by the actual numerical factorization but by a simulation of it, the so-called symbolic factorization. The advantage of this approach is that the data structures are set up once for all, as the structure of the matrix does not change from one iteration to another. Obviously, at each iteration it is still necessary to perform the numerical factorization, since AΘAᵀ changes along with the diagonal matrix Θ.
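The resulting division of labour between the one-time symbolic phase and the per-iteration numeric phase can be summarized by the following schematic FORTRAN fragment (our own sketch; the subroutine names ORDER, SYMFCT, FORMAB, NUMFCT and SOLVE and their argument lists are hypothetical placeholders, not actual routines of our package or of (George, Liu, 1981); the arrays XLNZ, NZSUB and XNZSUB produced by the symbolic factorization are described below):

C     Once, before the first iteration: ordering and symbolic
C     factorization set up the static data structures.
      CALL ORDER (M, XADJ, ADJNCY, PERM, INVP)
      CALL SYMFCT (M, XADJ, ADJNCY, PERM, INVP,
     +             XLNZ, NZSUB, XNZSUB)
C     At every iteration: only the numerical values change,
C     the nonzero structure is reused.
      DO 10 ITER = 1, MAXIT
C        form A*THETA*A(T) in (DIAG, LNZ) - see the code below
         CALL FORMAB (N, CP, IA, XA, THETA,
     +                XLNZ, NZSUB, XNZSUB, DIAG, LNZ)
C        numerical Cholesky factorization and solution for dy
         CALL NUMFCT (M, XLNZ, NZSUB, XNZSUB, DIAG, LNZ)
         CALL SOLVE (M, XLNZ, NZSUB, XNZSUB, DIAG, LNZ, RHS)
C        ... remaining steps (K5-K8) of the algorithm ...
   10 CONTINUE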
The data structures returned by the symbolic factorization can be represented in a sparse storage scheme known as the compressed storage scheme of Sherman, cited in (George, Liu, 1981). The scheme has a storage array LNZ which contains all nonzero entries of the nondiagonal part of the Cholesky factor L column-wise (or, which are the same numbers, of Lᵀ row-wise), an INTEGER vector NZSUB which holds the row subscripts of the nonzeroes, and an index vector XLNZ whose entries are pointers to the beginning of the nonzeroes of each column in LNZ. The diagonal elements are stored separately in a vector DIAG. In addition, an index vector XNZSUB is used to hold pointers to the start of the row subscripts in NZSUB for each column. This is the consequence of the key idea of Sherman's compressed scheme: some of the indices in NZSUB can be used for representing the nonzero patterns of two or even more columns. For example, this is applicable when columns 17, 18, 19 and 20 from Figure 2 are considered.

The described data structure is filled at each iteration after the computation of AΘAᵀ is performed. The subsequent Cholesky factorization is performed using the same data structure. An interesting question is what is the best way to compute AΘAᵀ. At first sight a row-oriented data structure for the matrix A seems to be more practical and efficient when the computation of AΘAᵀ is considered. However, theoretical arguments (Duff et al., 1989) and practical testing convinced us that column-wise storage of A is much more efficient. The point is in avoiding operations with zero components of A. Perhaps this can best be explained with our FORTRAN code for computing AΘAᵀ, which follows.

C     Compute A*THETA*A(T) into (DIAG, LNZ). At this moment LNZ
C     and the working vector Z must be initialised to zero values.
      DO 500 ICOL=1,N
         ISTART = CP(ICOL)
         ISTOP  = CP(ICOL+1) - 1
C        Scatter column ICOL, scaled by THETA(ICOL), into Z and
C        accumulate its contribution to the diagonal.
         DO 100 I=ISTART,ISTOP
            J = IA(I)
            Z(J) = XA(I)*THETA(ICOL)
            DIAG(J) = DIAG(J) + Z(J)*XA(I)
  100    CONTINUE
C        Accumulate the off-diagonal contributions, merging the
C        column pattern with the subscript pattern in NZSUB.
         DO 200 I=ISTART,ISTOP-1
            IROW = IA(I)
            Z0 = Z(IROW)
            L = XLNZ(IROW)
            KSUB = XNZSUB(IROW)
            JBEGIN = I + 1
            DO 150 J=JBEGIN,ISTOP
  125          CONTINUE
               IF (NZSUB(KSUB) .EQ. IA(J)) THEN
                  LNZ(L) = LNZ(L) + Z0*XA(J)
               ELSE
                  L = L + 1
                  KSUB = KSUB + 1
                  GOTO 125
               ENDIF
  150       CONTINUE
  200    CONTINUE
C        Reset the working vector for the next column.
         DO 300 I=ISTART,ISTOP
            Z(IA(I)) = 0.0
  300    CONTINUE
  500 CONTINUE

Another ordering algorithm which we have implemented is the Reverse Cuthill-McKee algorithm. Its performance on our test problem is presented in Figure 3. The produced ordering is:

   (27, 15, 25, 23, 6, 20, 19, 18, 17, 10, 9, 8, 7, 16, 5, 21, 14, 4, 22, 13, 11, 24, 3, 1, 12, 2, 26)

Figure 3. AFIRO - envelope structure after the Reverse Cuthill-McKee algorithm

The storage scheme for envelope methods has a main storage array ENV which contains all entries (zeroes as well as nonzeroes) between the first nonzero entry in a row of L (or column of Lᵀ) and the diagonal, an index vector XENV whose entries are pointers to the beginning of the nonzeroes in each row of L, and a vector DIAG where the diagonal entries are stored.
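To illustrate how the envelope structure is used in the numerical phase, the following is a minimal sketch (our own illustration; the routine name ENVSLV and its interface are assumptions, not part of our package) of the forward substitution Ly = b with the (ENV, XENV, DIAG) scheme. It touches only the entries inside the envelope and overwrites RHS with the solution.

C     Solve L*y = b, where the subdiagonal part of row I of L
C     occupies ENV(XENV(I)) ... ENV(XENV(I+1)-1) and the
C     diagonal element is held in DIAG(I).
      SUBROUTINE ENVSLV (M, XENV, ENV, DIAG, RHS)
      INTEGER M, XENV(*), I, J, K, IBAND
      DOUBLE PRECISION ENV(*), DIAG(*), RHS(*), S
      DO 200 I = 1, M
         S = RHS(I)
         IBAND = XENV(I+1) - XENV(I)
C        First column with a stored entry in row I
         J = I - IBAND
         DO 100 K = XENV(I), XENV(I+1) - 1
            S = S - ENV(K)*RHS(J)
            J = J + 1
  100    CONTINUE
         RHS(I) = S/DIAG(I)
  200 CONTINUE
      RETURN
      END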
Our practical experience with the Reverse Cuthill-McKee algorithm was quite disappointing. Its corresponding storage scheme is usually not as economical as the one produced by the minimum degree algorithm, and the results concerning the speed of the numerical factorization were even less competitive. However, it would be interesting to try other profile methods; some of them produce a more economical storage scheme than the Reverse Cuthill-McKee algorithm (Billionnet, Breteau, 1989).

There are also some other efficient ordering algorithms, for example the nested dissection ordering algorithm (George, Liu, 1981). Their quantitative and qualitative comparison may be an interesting topic for further research.

Handling of dense columns

The problem is what to do if one or more columns of A are dense vectors. In such cases the computation of AΘAᵀ leads to a very dense matrix and should therefore be avoided, if possible. This situation can be overcome by first dividing the columns of A into sparse and dense submatrices, A = [A_s | A_d], and consequently Θ = diag(Θ_s, Θ_d), so that

   AΘAᵀ = A_sΘ_sA_sᵀ + A_dΘ_dA_dᵀ

and the dense part A_dΘ_dA_dᵀ can be treated separately from the sparse part (Gondzio, 1991). Another possibility is column splitting: a dense column is cut into several shorter columns, which are tied together by additional equality constraints whose coefficients 1 and -1 link the pieces, as illustrated in Figure 4. The first formulation would lead to a completely dense AΘAᵀ, while this is not so for the second one.

Figure 4. Column splitting (a dense column is replaced by shorter linked pieces; the linking row contains the coefficients 1 and -1)

Computational results

In this section we report the computational results of running our implementation of a primal-dual interior point method on a set of LP test problems available through NETLIB (Gay, 1985). NETLIB is a system designed to provide efficient distribution of public domain software to the scientific community through different computer networks. We considered in our tests all 53 problems which are currently available to us. However, 3 of them (CZPROB, 80BAU3 and PILOTS) we were not able to read from the input file, due to insufficient generality of our subroutine for reading LP data written in MPS format. Therefore we have used only 50 problems in our tests. Some quantitative data related to their size and storage consumption (when the minimum degree algorithm is used) are presented in the following table.

Problem    No:    m:     n:  nz(A):  nz(AAT):  nz(L):  nz(ind):
25FV47     45    820   1876   10705    11894    34590     8925
ADLITTLE    2     56    138     424      384      355      171
AFIRO       1     27     51     102       90       80       46
BANDM      19    305    472    2494     3724     4355     1913
BEACONFD   24    173    295    3408     2842     2727     1783
BORE3D      9    233    333    1446     2424     2860     1331
BRANDY     14    193    303    2202     2734     3236     1306
CAPRI      11    271    480    1933     3112     5569     2628
E226       20    223    472    2768     2823     3416     1390
ETAMACRO   16    400    734    2188     2771    11186     4016
FFFFF800   34    524   1028    6401    10615    18520     6601
GANGES     35   1309   1706    6937     8965    29359     7080
GFRDPNC    23    616   1160    2445     1451     1537     1270
GROW15     32    300    645    5620     3430     5790     5372
GROW22     38    440    946    8252     5040     8590     8039
GROW7      18    140    301    2612     1590     2590     2324
ISRAEL     15    174    316    2443    11227    11259     1373
NESM       47    662   2930   13260     4743    21283     8604
PILOT4     29    410   1181    7242     6743    14273     6781
PILOTJA    49    924   2044   13339    14174    50793    12560
PILOTWE    40    722   2930    9537     5547    15422     6109
RECIPE      6     87    178     652      582      584      153
SC205       3    205    317     665      656      986      701
SCAGR25    12    471    671    1725     2393     2510     1521
SCAGR7      4    129    185     465      629      636      417
SCFXM1     17    330    600    2732     3233     4400     1797
SCFXM2     30    660   1200    5469     6486     8977     3673
SCFXM3     36    990   1800    8206     9739    13631     5556
SCORPION   10    388    466    1534     2101     2086      937
SCRS8      26    490   1275    3288     2198     6117     2791
SCSD1      22     77    760    2388     1133     1315      464
SCSD6      31    147   1350    4316     2099     2398      960
SCSD8      46    397   2750    8584     4280     5482     1682
SCTAP1     13    300    660    1872     1686     2261     1430
SCTAP2     37   1090   2500    7334     6595    13729     6719
SCTAP3     43   1480   3340    9734     8866    17156     8636
SEBA       27    515   1036    4360    51915    53695     6176
SHARE1B     8    117    253    1179     1001     1345      526
SHARE2B     5     96    162     777      871      925      350
SHELL      28    536   1527    3058     1991     3556     2099
SHIP04L    39    360   2166    6380     4588     4428     1608
SHIP04S    33    360   1506    4400     3272     3252     1198
SHIP08L    50    712   4363   12882     9224     8948     3224
SHIP08S    42    712   2467    7194     5440     5464     2160
SHIP12L    51   1042   5533   16276    11715    11193     4742
SHIP12S    44   1042   2869    8284     6387     6289     2063
SIERRA     41   1222   2715    7951     6118    11665     5742
STAIR      25    356    538    3831     6653    15281     6263
STANDATA   21    359   1258    3173     1758     2726     1505
VTPBASE     7    198    329     945     1773     2217      974

Table 1. Quantitative data about the LP problems
The numbers in the second column give the sequence of the problems when they are ordered by the number of nonzeroes. The last two columns show the numbers of used entries in LNZ and NZSUB respectively. On the one hand these data show that the usage of the compressed storage scheme is very efficient, but on the other hand they indicate that the density of the triangular Cholesky factor can be a serious problem. In general, the storage consumption of our implementation can be estimated using the following approximate formula:

   real numbers (REAL*8): 6n + 4m + nz(A+c+u) + nz(L)
   integers.............:  n + 4m + nz(A+c+u) + nz(ind)

where nz(A+c+u) is the number of nontrivial elements in A, c and u together. It must be noted that interior point methods generally consume more storage than simplex based methods. Nevertheless, use of the above formula shows that about 40 out of the 50 problems can be solved on a standard PC with no more than 640 KB of memory.

Our computational testing was performed on a VAX 8550 computer. The operating system was VMS, version 5.2, and the VMS FORTRAN compiler, version 5.2, was used with the default options. The algorithm was run with the default parameters α₀ = 0.9995, τ = 0.1 and ε = 10⁻⁸. The computed optimal objective value and the number of iterations were compared with those obtained using OB1 (Lustig et al., 1989).

Problem    Iter. (Lustig)  Computed opt. value  Relative d. gap  Opt. value (Lustig et al.)
25FV47       48 (48)    5501.8459      0.321e-06    5501.8459
ADLITTLE     21 (17)    225494.96      0.447e-07    225494.96
AFIRO        14 (13)   -464.75314      0.249e-08   -464.75314
BANDM        31 (28)   -158.62802      0.579e-07   -158.62802
BEACONFD     25 (21)    33592.486      0.427e-07    33592.486
BORE3D       28 (25)    1373.0804      0.184e-08    1373.0804
BRANDY       34 (27)    1518.5100      0.412e-07    1518.5099
CAPRI        40 (37)    2690.0165      0.443e-05    2690.0127
E226         34 (31)   -18.751929      0.371e-07   -18.751929
ETAMACRO     51 (52)   -755.71523      0.132e-07   -755.71523
FFFFF800     66 (59)    555679.61      0.869e-06    555679.56
GANGES       41 (33)   -109585.74      0.670e-08   -109585.74
GFRDPNC      30 (26)    6902236.0      0.896e-07    6902236.0
GROW15       34 (25)   -1.0687094e+8   0.105e-07   -1.0687094e+8
GROW22       32 (30)   -1.6083431e+8   0.347e-06   -1.6083434e+8
GROW7        30 (22)   -47787812.      0.519e-08   -47787812.
ISRAEL       47 (47)   -896644.82      0.593e-07   -896644.82
NESM         70 (66)    14076037.      0.218e-04    14076036.
PILOT4       58 (56)   -2581.1378      0.779e-06   -2581.1405
PILOTJA      67 (67)   -6113.1349      0.155e-06   -6113.1365
PILOTWE      74 (71)   -2.7201067e+6   0.728e-06   -2.7201075e+6
RECIPE       18 (16)   -266.61600      0.867e-08   -266.61600
SC205        22 (16)   -52.202061      0.785e-07   -52.202061
SCAGR25      28 (24)   -14753433.      0.132e-06   -14753433.
SCAGR7       22 (22)   -2331389.8      0.316e-07   -2331389.8
SCFXM1       38 (31)    18416.759      0.192e-06    18416.759
SCFXM2       38 (37)    36660.263      0.149e-06    36660.262
SCFXM3       41 (39)    54901.255      0.194e-06    54901.255
SCORPION     21 (18)    1878.1250      0.141e-06    1878.1248
SCRS8        51 (50)    904.29696      0.453e-07    904.29695
SCSD1        14 (12)    8.6666667      0.320e-07    8.6666667
SCSD6        15 (15)    50.500000      0.638e-08    50.500000
SCSD8        14 (15)    905.00001      0.998e-08    905.00000
SCTAP1       22 (22)    1412.2500      0.643e-10    1412.2500
SCTAP2       23 (23)    1724.8071      0.809e-08    1724.8071
SCTAP3       27 (26)    1424.0000      0.150e-08    1424.0000
SEBA         32 (29)    15711.600      0.470e-07    15711.600
SHARE1B      42 (40)   -76589.318      0.254e-07   -76589.319
SHARE2B      20 (17)   -415.73224      0.778e-07   -415.73224
SHELL        39 (37)    1.2088253e+9   0.116e-06    1.2088253e+9
SHIP04L      26 (22)    1793324.5      0.161e-06    1793324.5
SHIP04S      22 (21)    1798714.7      0.124e-06    1798714.7
SHIP08L      26 (24)    1909055.2      0.111e-06    1909055.2
SHIP08S      25 (23)    1920098.2      0.856e-07    1920098.2
SHIP12L      29 (27)    1470187.9      0.139e-06    1470187.9
SHIP12S      29 (27)    1489236.1      0.161e-06    1489236.1
SIERRA       30 (26)    15394362.      0.140e-06    15394362.
STAIR        27 (25)   -251.26695      0.187e-06   -251.26695
STANDATA     34 (28)    1257.6995      0.295e-07    1257.6995
VTPBASE      28 (24)    129831.46      0.414e-08    129831.46

Table 2. Computational results

Our testing was therefore successful in the sense that we have solved all 50 problems with reasonable accuracy. However, we experienced numerical troubles (such as floating-point overflow or underflow) on some problems:

1) To solve problem SCFXM1 we had to use α₀ = 0.95.

2) To solve problems PILOT4, PILOTWE and PILOTJA, which are known to be very ill-conditioned, we had to use α₀ = 0.95 again, and also to change the initial τ to the value τ = 0.001.

Although we eventually solved the above problems after some experimentation with the algorithm parameters, a sounder solution would be to use some kind of scaling. This means that instead of AΘAᵀ, in some cases it is better to work with the matrix RAΘAᵀR, where R is a suitably chosen diagonal matrix.

On the whole, we were not able to obtain the same quality of solution as was reported for OB1, neither in terms of the number of iterations, nor in terms of the obtained relative duality gap. This is easily explained by the fact that many fine details and important options which are present in OB1 are not yet implemented in our code.

We have also performed some other types of computational testing, just to check the validity of some common assumptions about interior point methods. For example:

- A good simplex code usually outperforms interior point methods on small problems.

- Some problems which are supposed to be difficult for simplex based methods can be easily (and much faster) solved if an interior point method is used. For example, this is the case with problem 25FV47.

- The computational work within a step of an interior point method is dominated by the sparse Cholesky factorization of AΘAᵀ. Its share on bigger problems is 60% to 90% of the overall time.

System design and implementation

Our implementation of the interior point method was done in the form of two highly portable FORTRAN modules, LPENV and LPMDG. The former uses the Reverse Cuthill-McKee algorithm, the latter the minimum degree algorithm. They are still in the phase of testing and development, for which we are mainly using a VAX/VMS computer system, although the same code also runs on a PC. Our ultimate goal is to create a reliable, portable and user-friendly LP software package based on the interior point algorithms, which is to be called LPINT. In our opinion such a system must contain a portable full screen user interface and also subroutines for the graphical display of different LP matrices (Alvarado, 1990). It is also very important to allow the user to include his/her own subroutines in the package. Data exchange between a user program and LPINT may be done through some type of communication region (CR). The overall LPINT system architecture is illustrated in Figure 5.

Figure 5. LPINT system architecture: an interactive interface (windows, pop-up and pull-down menus), a user program and the optimizer (LPENV or LPMDG) communicating through the CR, a graphical subroutine library, and external files (configuration profile, basis and solution files, iteration log, MPS data, values data, problem file, messages, reports)

Conclusions

Generally we can say that the interior point methods are getting more and more reliable, and more sophisticated as well.
Moreover, interior point methods have greatly influenced algorithmic and experimental work in the field of linear programming. However, it is not likely that interior point methods will completely replace the simplex method in the future. In our opinion the reasons for this lie mainly in the simplex method's ability to produce an optimal basic solution. The knowledge of the basic status of the variables is very important, especially when postoptimal analysis or the solution of mixed integer programs is considered. On the other hand, when one wishes to solve a big LP problem for the first time, it is advisable to start with an interior point based package. It is a fast, reliable and robust way to obtain the first information about the LP model. A primal-dual interior point method can be implemented in a rather easy and straightforward way. This fact, as well as the method's efficiency and mathematical elegance, can be a big step toward a deeper understanding of interior point methods in general.

References

1. Adler, I., Karmarkar, N., Resende, M. and Veiga, G. (1989), "An Implementation of Karmarkar's Algorithm for Linear Programming", Mathematical Programming, Vol. 44, pp. 297-335.

2. Alvarado, F.L. (1990), "Manipulation and Visualization of Sparse Matrices", ORSA Journal on Computing, Vol. 2, pp. 187-207.

3. Andersen, J., Levkovitz, R., Mitra, G. and Tamiz, M. (1990), "Adopting Interior Point Algorithm for the Solution of LPs on Serial, Coarse Grain Parallel Computers". Presented at the International Symposium on Interior Point Methods for Linear Programming: Theory and Practice, January 18-19, 1990, Europe Hotel, Scheveningen, The Netherlands.

4. Billionnet, A. and Breteau, J.F. (1989), "A Comparison of Three Algorithms for Reducing the Profile of a Sparse Matrix", Recherche Opérationnelle, Vol. 23, pp. 289-302.

5. Carolan, W., Hill, J., Kennington, J., Niemi, S. and Wichmann, S. (1990), "An Empirical Evaluation of the KORBX Algorithms for Military Airlift Applications", Operations Research, Vol. 38, pp. 240-248.

6. Choi, C., Monma, C.L. and Shanno, D.F. (1990), "Further Development of a Primal-Dual Interior Point Method", ORSA Journal on Computing, Vol. 2, pp. 304-311.

7. Duff, I.S., Erisman, A.M. and Reid, J.K. (1989), Direct Methods for Sparse Matrices, Clarendon Press, Oxford.

8. Gay, D.M. (1985), "Electronic Mail Distribution of Linear Programming Test Problems", Mathematical Programming Society COAL Newsletter, Vol. 13, pp. 10-12.

9. George, J.A. and Liu, J.W. (1981), Computer Solution of Large Sparse Positive Definite Systems, Prentice-Hall, Englewood Cliffs.

10. Gondzio, J. (1991), "A Method for Handling Dense Columns of Linear Programs in the Interior Point Algorithm". Presented at the International Symposium "Applied Mathematical Programming and Modelling", London.

11. Karmarkar, N.K. (1984), "A New Polynomial-Time Algorithm for Linear Programming", Combinatorica, Vol. 4, pp. 373-395.

12. Lustig, I.J., Marsten, R.E. and Shanno, D.F. (1989), "Computational Experience with a Primal-Dual Interior Point Method for Linear Programming", Technical Report SOR 89-17, School of Industrial Engineering and Operations Research, Georgia Institute of Technology, Atlanta.