An Algorithm for Computing the Optimal Cycle Time of a Printed Circuit Board Assembly Line

Dušan M. Kodek and Marjan Krisper
University of Ljubljana, Faculty of Computer and Information Science
Tržaška 25, 1000 Ljubljana, Slovenia
E-mail: duke@fri.uni-lj.si

Keywords: combinatorial optimization, integer programming, minimax approximation

Received: December 24, 2002

We consider the problem of optimal allocation of components to a printed circuit board (PCB) assembly line which has several nonidentical placement machines in series. The objective is to achieve the highest production throughput by minimizing the cycle time of the assembly line. This problem can be formulated as a minimax approximation integer programming model that belongs to the family of scheduling problems. The difficulty lies in the fact that this model is proven to be NP-complete. All known algorithms that solve NP-complete problems are exponential and work only if the number of variables is reasonably small. This particular problem, however, has properties that allow the development of a very efficient branch-and-bound based optimal algorithm that works for problems with a practically useful number of variables.

1 Introduction

The problem of optimal allocation of components to placement machines in a printed circuit board (PCB) assembly line is NP-complete and is often considered too difficult to solve in practice. This opinion is supported by the experience with general integer programming programs, which are typically very slow and do not produce solutions in a reasonable time. It is therefore not surprising to see many attempts to replace the optimal solution with a near-optimal one. The reasoning goes as follows: a near-optimal solution is often good enough and is usually obtained in a significantly shorter time than the optimal solution. Although this is true in many cases, it does not always hold.
The difficulty with the near-optimal methods is that they, as a rule, do not give an estimate of closeness to the optimal solution. This means that a significantly better optimal solution, about which the user knows nothing, may exist. Given a choice, the user would probably always choose the optimal solution, provided that it can be obtained in a reasonable time.

This paper challenges the opinion that the optimal solution is too difficult to compute. An algorithm that takes advantage of the special properties of the minimax approximation optimal allocation problem is developed. This optimal algorithm is much faster than the general integer programming approach mentioned above. In most practical cases the algorithm produces the optimal solution in a time that is similar to the time needed for near-optimal methods. Because of its exponential nature, it will of course fail when the number of variables is large. But it should be noted that the algorithm practically always produces one or more suboptimal solutions which can be used in such cases. These suboptimal solutions are comparable to those obtained by near-optimal methods such as local search, genetic algorithms, or knowledge based systems. In other words, the user can only gain if the optimal algorithm is used.

Let us start with an investigation of the PCB assembly line problem. The cycle time T of a PCB assembly line is defined as the maximum time allowed for each machine (or station) in the assembly line to complete its assembly tasks on the board. This time becomes important when the quantity of PCBs is large: a minor reduction in the cycle time can result in significant cost and time savings. Moreover, the PCB line in this problem has several non-identical placement machines. As a board contains hundreds of surface mounted components in different shapes, sizes, and patterns, different placement machines in the line are installed to cope with different components.
The line efficiency depends on the combination of the machine types. Because the placement machines are costly, optimization of the assembly process can significantly increase the competitiveness of the production.

Many factors affect the efficiency of the PCB assembly, namely customer orders [1], component allocation [2], PCB grouping [3], component sequence [4], and feeder arrangement [5]. Different algorithms have been developed to optimize different factors in PCB assembly ([6], [7]). The genetic algorithm technique is one of the heuristic methods that has recently been used to find a near-optimal solution [8].

Table 1: An example of a PCB assembly line with 3 different placement machines and 7 different component types per board. The placement times tij for different components and machines and the setup times si are in seconds.

  Machine Mi |  Placement times tij for component type j  | Setup time si
             |    1     2     3     4     5     6     7   |
      1      |   0.3   0.7   0.7   0.5    ∞     ∞     ∞   |    11.0
      2      |   0.7   1.2   1.5   1.6   1.5   1.5   2.1  |    14.7
      3      |   2.3   3.8   3.5   3.5   2.7   3.3   4.3  |    14.7
  Number of type j components per board:  324  37  12  5  7  5  4

2 Formulation of the problem

When a board is assembled on a production line, the board's components are grouped and allocated to appropriate placement machines in order to achieve a high output of the line. The next machine can begin its tasks only after the previous machine has completed the placement of all components that were allocated to it. After the board passes through all the machines, the component placement process is completed. It is clear that the slowest task dictates the performance of the assembly line.

There are two important differences between the traditional assembly line problem and this PCB assembly line problem. First, unlike the traditional assembly line, the precedence of operations in the PCB assembly is not important and can be ignored. The second difference concerns the assembly times for the same component on different machines. Due to various types and configurations of the placement machines, different machines have different times for placement of the same kind of component. The components are usually of a surface mounted type, although this is not important here.

An example from Table 1 is used to make the problem easier to understand. This example is the same as the one used in [8] and will allow the comparison of our optimal algorithm to the near-optimal one. A PCB assembly line with three different placement machines M1, M2, M3 and a board with seven types of components is used in the example. The placement times tij for different components and machines are given in Table 1. If a machine cannot handle a particular type of component, its placement time is assigned to be infinite (∞). The infinity is used here for simplicity of notation only; it is replaced by a large positive number for computation. In addition to the time that is needed to place a component there is also a setup time si for each of the machines Mi. The machine needs this time every time a new board arrives, for its positioning and placement preparation. Finally, the total number cj of components of type j per board is also given.

Obviously, there are many possible ways of allocating the components j to the placement machines Mi. Each of them leads to its own cycle time T. The question is how to allocate the components in such a way that the assembly line has the best performance. The PCB assembly line cycle time T is formally defined as the maximum time needed by one of the machines Mi, i = 1, 2, . . . , m, to complete the placement of the components allocated to it. Clearly, the time interval between two finished boards coming out of the assembly line is equal to T, which means that the number of boards produced in a given time span is proportional to 1/T. This number can be increased by allocating the components to the machines in such a way that T is reduced.

A mathematical model that describes this situation can now be given. Suppose that there are m non-identical placement machines Mi in a PCB assembly line and that a board with n types of components is to be assembled on this line. It takes tij units of time to place a component of type j on machine Mi. In addition, each machine Mi has a setup time si. There are exactly cj components of type j per board. The component allocation problem can be formulated as

  Topt = min_{xij} max_{i=1,2,...,m} ( si + Σ_{j=1}^{n} tij xij ) ,   (1)

subject to

  Σ_{i=1}^{m} xij = cj ,  j = 1, 2, . . . , n ,   (2)

  xij ≥ 0 and integer .   (3)

The solution of this problem is the optimal cycle time Topt and the optimal allocation variables x(opt)ij. The variable xij gives the number of components of type j that are allocated to machine Mi. Constraints (2) ensure that all of the components will be allocated. The components are indivisible and (3) ensures that the xij are nonnegative integers. Note that tij and si are by definition positive. (The times tij and si can be arbitrary positive real numbers. It is easy to reformulate the problem and change tij and si into arbitrary positive integers; this does not change the complexity of the problem.)

3 Complexity of the problem

The problem (1)–(3) is a combination of assignment and flowshop scheduling problems [9]. It is NP-complete for n ≥ 2. Proving the NP-completeness is not too difficult. First, it is trivial to show that the problem is in NP. Second, it is possible to show that the well known PARTITION problem can be polynomially transformed into (1)–(3) [10]. Since PARTITION is NP-complete, so is our problem.

A typical approach to solving this problem is to treat it as a general mixed integer linear programming problem. The minimax problem (1)–(3) is reformulated as

  min Topt ,

  Topt - si - Σ_{j=1}^{n} tij xij ≥ 0 ,  i = 1, 2, . . . , m ,

  Σ_{i=1}^{m} xij = cj ,  j = 1, 2, . . . , n ,   (4)

  xij ≥ 0 and integer .

All algorithms that are capable of solving this problem optimally work by starting with the noninteger problem where the variables xij can be any positive real number. Additional constraints are then gradually introduced into the problem and these constraints eventually force the variables xij to integer values. Many instances of suitably reformulated subproblems of the form (4) must be solved before the optimal solution is found.

An advantage of the formulation (4) is that general mixed integer programming programs can be used to solve it. Unfortunately, this advantage comes at the expense of the computation time. The general programs use the simplex algorithm to solve the subproblems. The simplex algorithm is very general and slow since it does not use any of the special properties of the minimax problem. All these properties are lost if the original problem (1)–(3) is converted into the general problem (4).

The fact that the general approach (4) is so slow has led to the development of suboptimal heuristic algorithms that search for a near-optimal solution. These algorithms are faster and often good enough. The difficulty is that a significantly better optimal solution may exist which these algorithms do not find. It is the purpose of this paper to develop an optimal algorithm that does not use the generalized formulation (4). The algorithm takes advantage of the special properties of the minimax problem (1)–(3). It avoids using the simplex algorithm completely, which leads to a much faster solution.

4 The lower bound theorem

The basic idea of our algorithm is to use a lower bound for Topt as a tool that leads to the solution. This lower bound must be computed for each of the subproblems that appear within the branch-and-bound process. It must take into account the fact that some of the subproblem's variables xij are known integers. To derive it, let us assume that the subproblem's variables xij, j = 1, 2, . . . , k-1, are known integers for all i. In addition, some, but not all, of the variables xik may also be known integers. Let Ik be the set of indices i that correspond to the known integers xik. The subproblem's variables can be formally described as

        { x'ij ,  j = 1, . . . , k-1 ,  i = 1, . . . , m
  xij = { x'ik ,  j = k ,  i ∈ Ik                          (5)
        { xik ,   j = k ,  i ∉ Ik
        { xij ,   j = k+1, . . . , n ,  i = 1, . . . , m ,

where k can be any of the indices 1, 2, . . . , n. The notation x'ij is used to describe the variables that are already known integers. The remaining variables xij are not yet known. The number of indices in the set Ik lies in the range 0 to m-2. If there were m-1 known integers x'ik, the constraint (2) would give the remaining variable, which contradicts the assumption that not all of the variables xik are known. The index k changes to k+1 when all xik are known integers.

Definition (5) assumes that a certain rule is used to introduce the constraints which force the variables xij to integer values. This rule is simple: for every index k it is necessary to constrain xik to known integers x'ik for all i, i = 1, 2, . . . , m, before k can change. The rule follows from the structure of the constraints (2) and is needed to derive the lower bound theorem. There is no problem with this rule because the branch-and-bound method, on which our algorithm is based, allows complete freedom in choosing the variable xij that is to be constrained next. The indices k can be selected in any order. A simple ascending order k = 1, 2, . . . , n is used in (5). This also applies to the case when the problem is first reordered along the indices j in a way that gives the fastest rate of lower bound increase. Such a reordering is used in our algorithm.

To simplify the notation, let us first use the known integers x'ij and redefine si into s'i as

  s'i = si + Σ_{j=1}^{k} tij x'ij ,    i ∈ Ik ,
                                                           (6)
  s'i = si + Σ_{j=1}^{k-1} tij x'ij ,  i ∉ Ik .

Similarly, the known integers x'ik (if any) are used to redefine ck into c'k as

  c'k = ck - Σ_{i∈Ik} x'ik .   (7)

The lower bound on Topt over all possible not yet known variables xij is the most important part of our algorithm. It is developed along the lines used in a related integer polynomial minimax approximation problem that appears in digital filter design [11], [12] and is given in the following theorem.

Theorem 1. Let Topt be the minimum cycle time corresponding to the optimal solution of the problem (1)–(3) in which some of the variables are known integers defined by (5). Then Topt is bounded by

  Topt ≥ max_{j=k+1,...,n} [ cj + Σ_{i=1}^{m} s'i/tij + pj + qj ] / [ Σ_{i=1}^{m} 1/tij ] ,   (8)

where

  pj = Σ_{r=k+1, r≠j}^{n} cr min_{i=1,2,...,m} (tir/tij) ,
                                                           (9)
  qj = c'k min_{i∉Ik} (tik/tij) ,  j = k+1, . . . , n .

Proof: Let h be a number that satisfies

  h ≥ si + Σ_{j=1}^{k} tij x'ij + Σ_{j=k+1}^{n} tij xij ,  i ∈ Ik ,
                                                           (10)
  h ≥ si + Σ_{j=1}^{k-1} tij x'ij + Σ_{j=k}^{n} tij xij ,  i ∉ Ik .

Note that h is a lower bound for Topt if we can prove that (10) holds over all possible not yet known values xij. Using (6), eq. (10) is simplified to

  h ≥ s'i + Σ_{j=k+1}^{n} tij xij ,  i ∈ Ik ,
                                                           (11)
  h ≥ s'i + Σ_{j=k}^{n} tij xij ,  i ∉ Ik .

It follows from (11) that the variables xij can be expressed as

  xij ≤ h/tij - s'i/tij - Σ_{r=k+1, r≠j}^{n} (tir/tij) xir ,  i ∈ Ik ,  j = k+1, . . . , n ,
                                                           (12)
  xij ≤ h/tij - s'i/tij - Σ_{r=k, r≠j}^{n} (tir/tij) xir ,  i ∉ Ik ,  j = k, . . . , n .

Adding all xij over the index i and using (2) and (7) gives

  cj ≤ Σ_{i=1}^{m} h/tij - Σ_{i=1}^{m} s'i/tij - Σ_{r=k+1, r≠j}^{n} Σ_{i=1}^{m} (tir/tij) xir - Σ_{i∉Ik} (tik/tij) xik ,  j = k+1, . . . , n ,   (13)

and the lower bound for h can now be written as

  h ≥ [ cj + Σ_{i=1}^{m} s'i/tij + Σ_{r=k+1, r≠j}^{n} Σ_{i=1}^{m} (tir/tij) xir + Σ_{i∉Ik} (tik/tij) xik ] / [ Σ_{i=1}^{m} 1/tij ] ,  j = k+1, . . . , n .   (14)

All the terms in (14) are positive. This means that h is a lower bound over all variables if the lowest possible values of the terms containing the variables xir and xik are used. The variables xir are subject to

  Σ_{i=1}^{m} xir = cr ,  r = k+1, . . . , n .   (15)

It is quite easy to see that the sum containing xir is bounded by

  Σ_{r=k+1, r≠j}^{n} Σ_{i=1}^{m} (tir/tij) xir ≥ Σ_{r=k+1, r≠j}^{n} cr min_{i=1,2,...,m} (tir/tij) = pj ,  j = k+1, . . . , n ,   (16)

since it is obvious that the minimum is obtained if xir is given the value cr for the index i that corresponds to the lowest of the factors tir/tij, while all other xir are set to zero. Similarly, the variables xik are subject to

  Σ_{i∉Ik} xik = c'k ,   (17)

and the sum containing xik is bounded by

  Σ_{i∉Ik} (tik/tij) xik ≥ c'k min_{i∉Ik} (tik/tij) = qj ,  j = k+1, . . . , n .   (18)

Equations (16) and (18) are used in the definitions (9) and this completes the proof. □

Note that Theorem 1 does not include the lower bound for the case k = n. The following trivial lower bound, which holds for all k, can be used in this case

  Topt ≥ max_{i∉Ik} ( s'i + tik xik ) ,  k = 1, . . . , n .   (19)

Note also that the index j = k was not used in the derivation of Theorem 1. The equivalent of (13) for j = k is

  c'k ≤ Σ_{i∉Ik} h/tik - Σ_{i∉Ik} s'i/tik - Σ_{r=k+1}^{n} Σ_{i∉Ik} (tir/tik) xir .   (20)

When Ik is not empty, all xir in the sum over i ∉ Ik can be zero and still satisfy (15). The lowest possible sum containing xir is obviously zero in this case. This gives an additional lower bound

  Topt ≥ [ c'k + Σ_{i∉Ik} s'i/tik ] / [ Σ_{i∉Ik} 1/tik ] .   (21)

This lower bound is almost always much lower than the one given by (8). It can be included in the algorithm to bring a small decrease in computing time, which is on the order of 1%.

By choosing k = 0 one can use (8)–(9) to compute the lower bound over all possible values of the variables xij. Applying this to the example from Table 1 gives Topt ≥ 96.084. But there is more: the theorem plays a central role in our algorithm because it eliminates the need to use the simplex algorithm for solving the subproblems within the branch-and-bound process.
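As a numeric cross-check, the k = 0 bound (8)–(9) on the Table 1 data can be sketched in a few lines of Python (a sketch only; as described in section 2, ∞ is replaced by a large positive number):

```python
INF = 1e6  # stands in for the infinite placement times of machine 1

t = [[0.3, 0.7, 0.7, 0.5, INF, INF, INF],   # placement times t_ij from Table 1
     [0.7, 1.2, 1.5, 1.6, 1.5, 1.5, 2.1],
     [2.3, 3.8, 3.5, 3.5, 2.7, 3.3, 4.3]]
s = [11.0, 14.7, 14.7]                      # setup times s_i
c = [324, 37, 12, 5, 7, 5, 4]               # components per board c_j

def lower_bound_k0(t, s, c):
    """Evaluate (8) with k = 0: all variables still free, so q_j = 0."""
    m, n = len(t), len(c)
    best = 0.0
    for j in range(n):
        # p_j from (9): each other component type r goes to its cheapest ratio
        p = sum(c[r] * min(t[i][r] / t[i][j] for i in range(m))
                for r in range(n) if r != j)
        num = c[j] + sum(s[i] / t[i][j] for i in range(m)) + p
        den = sum(1.0 / t[i][j] for i in range(m))
        best = max(best, num / den)
    return best

print(round(lower_bound_k0(t, s, c), 3))  # → 96.084
```

The maximum in (8) is attained at j = 3, reproducing the bound Topt ≥ 96.084 quoted above.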
5 Application of the lower bound theorem

The usefulness of Theorem 1 is based on the following observation: the problem of finding the all-integer solution that gives the lowest cycle time Topt can be replaced by the problem of finding the all-integer solution that has the lowest lower bound for Topt. Both approaches obviously lead to the same solution, since Topt equals its lower bound when all variables xij are integers.

This observation, however, is not enough. A new constraint must be introduced on one of the variables xik, i ∉ Ik, at each branch-and-bound iteration. This constraint cannot be made on the basis of Theorem 1 alone and requires additional elaboration. To see how the lower bound depends on xik, let us define the parameters TL(j, k) as

  TL(j, k) = [ cj + Σ_{i=1}^{m} s'i/tij + pj + Σ_{i∉Ik} (tik/tij) xik ] / [ Σ_{i=1}^{m} 1/tij ] ,   (22)

where j = k+1, . . . , n, and k = 1, . . . , n-1. The TL(j, k) are simply (14) rewritten in a slightly different way. The Theorem 1 lower bound (8), in which the variables xik are left free, is now equal to

  Topt ≥ max_{j=k+1,...,n} TL(j, k) .   (23)

This lower bound does not include the case k = n. This is easily corrected if (19) is included. To simplify notation we first define the parameters TI(i, k) as

  TI(i, k) = s'i + tik xik ,  k = 1, . . . , n ,   (24)

and define the new lower bound Topt ≥ TLB(k),

  TLB(k) = max ( max_{i∉Ik} TI(i, k) ,  max_{j=k+1,...,n} TL(j, k) ) .   (25)

The TLB(k) are defined for k = 1, . . . , n (where TL(j, n) = 0). They include TI(i, k) for all k even if it is strictly needed only for k = n. There is a good reason for that, because the TI lower bound sometimes exceeds the TL lower bound. This can occur when the values of tij differ by several orders of magnitude, as is the case in the example from Table 1 where a large positive tij is used instead of ∞. Although the algorithm works if TI is used for k = n only, experiments show that it is usually faster if it is used for all k.

[Informatica 27 (2003) 105–114]
The lower bound TLB(k) of (25) is the basis of our algorithm. It is a linear function of the variables xik, i ∉ Ik, and, as mentioned before, a new constraint must be introduced on one of them at each branch-and-bound iteration. Let ic, ic ∉ Ik, be the index of the variable xick that is selected for constraining. Selection of the index ic is simple: any of the indices i, i ∉ Ik, can be used as ic.

It is more difficult to find the value x*ick that will be used in the branch-and-bound iteration to constrain the selected variable to the integers x'ick which are the nearest lower and upper neighbours of x*ick. The x*ick must be a number that gives the lowest possible lower bound TLB(k) over all possible values of the not yet known variables xik, i ∉ Ik, and xij, i = 1, . . . , m, j = k+1, . . . , n. In other words, x*ick must be at the global minimum of TLB(k).

It is important to understand why x*ick must be at the global minimum of TLB(k). It is because our algorithm uses the property that TLB(k) is a linear function of the variables xik and is therefore also convex. The convexity is crucial for the success of our algorithm since it ensures that every local optimum is also global. The algorithm uses this property by stopping the search along a variable in the branch-and-bound process when TLB(k) exceeds the current best solution Tu. This, however, can be used only if x*ick is such that TLB(k) does not decrease when an arbitrary integer is added to x*ick. The x*ick at the global minimum certainly satisfies this condition.

A great advantage of using the lower bound comes from the fact that TLB(k) in (25) depends only on the variables xik, i ∉ Ik, and is independent of the remaining variables xij, i = 1, . . . , m, j = k+1, . . . , n. This means that the number of variables is significantly reduced in comparison with the general approach (4).
Solution of the minimax problem

  min_{xik, i∉Ik} max ( max_{i∉Ik} TI(i, k) ,  max_{j=k+1,...,n} TL(j, k) ) ,   (26)

  Σ_{i∉Ik} xik = c'k ,  xik ≥ 0 ,   (27)

gives the nonnegative numbers x*ik that give the global minimum of TLB(k) for a given k.

A complication arises when k changes to k+1, because the solution of (26)–(27) for k+1 depends not only on x*ik+1 but also on x*ik (through s'i). The problem is that the x*ik are not at the global minimum of TLB(k+1). It is possible that the minimum of (26) for k+1 decreases if different x*ik are used. An error can occur if this is ignored, because the algorithm stops the search along a variable if the minimum is > Tu when in fact a lower value for TLB(k+1) exists. It is obvious that this error cannot occur if the minimum TLB(k+1) ≤ TLB(k).

The following corrective procedure is used in the algorithm when the minimum TLB(k+1) > minimum TLB(k). It consists of adding +1 and/or -1 to the x*ick that was used as the last constraint. Using the new x*ick we simply recompute TLB(k) and solve (26)–(27) again for k+1. If max(TLB(k), TLB(k+1)) decreases, we continue in that direction until it stops decreasing or until (27) is violated (TLB(k) increases when the original x*ick changes). The corrected x*ick is a solution of

  min_{x*ick, xik+1} max ( TLB(k), TLB(k+1) ) .   (28)

It is used to replace the original value, and this eliminates the possibility of error. Note that it is not necessary to correct the remaining variables x'ik even if they were not derived from the global minimum of TLB(k+1). This is because the branch-and-bound process ensures that all values of x'ik will be tried as long as their TLB(k) is lower than Tu. Additional details about the implementation of (28) are given in step 6 of the algorithm in section 7.
The minimax problem (26)–(27) must be solved many times within the branch-and-bound process and it is extremely important to have an efficient method for its solution. Most of the computing time in our algorithm is spent on solving this problem. The method that is used to solve it is worth a detailed description.

6 Solving the discrete linear minimax problem

The number of variables xik in (26)–(27) is equal to the number of indices i, i ∉ Ik. Let m', 1 ≤ m' ≤ m, be this number and let R(i), i = 1, . . . , m', be the indices not in Ik. Equation (26) contains m' terms TI and n-k terms TL. The total number of terms n' is equal to

  n' = n + m' - k ,  m' ≤ n' ≤ n + m .   (29)

It helps to rewrite (26) using a new index v

  min_{xR(i)k} max ( max_{v=1,...,m'} TI(R(v), k) ,  max_{v=m'+1,...,n'} TL(v̄, k) ) ,   (30)

where v̄ = v + k - m'. Because of the sum constraint in (27) there are only m'-1 independent variables; the m'-th variable can be expressed as

  xR(m')k = c'k - Σ_{i=1}^{m'-1} xR(i)k .   (31)

The minimax problem (26)–(27) can now be reformulated into a more general form

  min_{xR(i)k} max_{v=1,...,n'} ( fv + Σ_{i=1}^{m'-1} λvi xR(i)k ) ,   (32)

  Σ_{i=1}^{m'-1} xR(i)k ≤ c'k ,  xR(i)k ≥ 0 .   (33)

The definitions of the terms fv and λvi are somewhat tedious, though they follow directly from (22) and (24):

  fv = s'R(v) ,  v = 1, . . . , m'-1 ,
  fv = s'R(m') + tR(m')k c'k ,  v = m' ,                                        (34)
  fv = [ cv̄ + Σ_{r=1}^{m} s'r/trv̄ + pv̄ + (tR(m')k/tR(m')v̄) c'k ] / [ Σ_{r=1}^{m} 1/trv̄ ] ,  v > m' ,

  λvi = tR(i)k if i = v, 0 if i ≠ v ,  v = 1, . . . , m'-1 ,
  λvi = -tR(m')k ,  i = 1, . . . , m'-1 ,  v = m' ,                             (35)
  λvi = [ tR(i)k/tR(i)v̄ - tR(m')k/tR(m')v̄ ] / [ Σ_{r=1}^{m} 1/trv̄ ] ,  i = 1, . . . , m'-1 ,  v > m' ,

for v = 1, . . . , n' and i = 1, . . . , m'-1.

The process of solving (32)–(33) is simplified greatly by the theorem that gives the necessary and sufficient conditions for the variables x*R(i)k, i = 1, . . . , m'-1, that minimize (32). The general version of the theorem is given in [15]. It is repeated here in the form that applies to our problem.

Theorem 2. The variables x*R(i)k, i = 1, . . . , m'-1, are the optimal solution of the minimax problem (32)–(33) if and only if the following holds

  min_{zi} max_{v∈Vmax(x*)} Σ_{i=1}^{m'-1} λvi (zi - x*R(i)k) = 0 ,   (36)

over all numbers zi, i = 1, . . . , m'-1, that satisfy

  Σ_{i=1}^{m'-1} zi ≤ c'k ,  zi ≥ 0 .   (37)

The set Vmax(x*) contains those of the indices v, v = 1, . . . , n', at which the maximum is attained. That is,

  max_{v=1,...,n'} ( fv + Σ_{i=1}^{m'-1} λvi x*R(i)k ) = fv + Σ_{i=1}^{m'-1} λvi x*R(i)k ,  v ∈ Vmax(x*) .   (38)

Only the indices v, v ∈ Vmax(x*), that give the extremal values of the function (38) are used in Theorem 2. The theorem says that the x*R(i)k are the optimal solution if there are no numbers zi for which (36) is lower than zero. To show how this can be used to solve (32)–(33), let us assume that we have a set of numbers x*R(i)k and would like to check if they are optimal. Depending on Vmax(x*) and λvi there are two mutually exclusive cases:

1. The set Vmax(x*) contains at least two indices v1 and v2 for which the following holds

  λv1i λv2i ≤ 0 ,  i = 1, . . . , m'-1 .   (39)

It is easy to see that numbers zi that make (36) lower than zero cannot exist. This is because of the opposite signs of λv1i and λv2i for all i. Any set of numbers zi that is different from x*R(i)k makes (36) greater than zero for at least v = v1 or v = v2. Thus, according to Theorem 2, the x*R(i)k are optimal.

2. The set Vmax(x*) does not contain two indices v1 and v2 for which (39) holds (this is always true if Vmax(x*) contains only one index v). This means that there exists a set of indices Ip, containing at least one index i, for which

  λv1i λv2i > 0 ,  i ∈ Ip ,  v1, v2 ∈ Vmax(x*) ,   (40)

holds for any pair of indices v from Vmax(x*). In other words, for each i ∈ Ip the λvi are nonzero and have the same sign for all v ∈ Vmax(x*). Let us assume that there are numbers zi, i ∈ Ip, that satisfy (37) and give

  Σ_{i∈Ip} λvi zi < Σ_{i∈Ip} λvi x*R(i)k ,  v ∈ Vmax(x*) .   (41)

These numbers, together with zi = x*R(i)k for i ∉ Ip, obviously make (36) lower than zero. The numbers x*R(i)k are therefore not optimal if such zi exist. They exist almost always; the only exception occurs if

  Σ_{i∈Ip} λvi x*R(i)k = min_{zi} Σ_{i∈Ip} λvi zi ,   (42)

for some v, v ∈ Vmax(x*). It is clear that (41) cannot be satisfied in this case because the x*R(i)k sum is already the lowest possible. The lowest possible sum in (42) is easy to compute by using zi = 0 for λvi > 0 and zi = c'k for the most negative of the λvi < 0. This means that it is also easy to check whether the x*R(i)k are optimal.

Using (39)–(42) it becomes straightforward to solve (32)–(33). A starting solution x*R(i)k is selected and checked as described above. If it is found optimal, we have a solution. If not, one of the variables x*R(i1)k, i1 ∈ Ip, is tried; if it can change towards zero (if λvi1 > 0) or towards c'k (if λvi1 < 0) without violating (33), it leads to an improved solution. It is ignored otherwise and a new variable is tried. The set Ip always contains at least one index i that leads to an improved solution.

The new value of x*R(i1)k is computed by trying all v1, v1 ∉ Vmax(x*), and solving

  f'v1 + λv1i1 x*R(i1)k = f'v + λvi1 x*R(i1)k ,  v ∈ Vmax(x*) ,   (43)

where the f'v are defined as

  f'v = fv + Σ_{i=1, i≠i1}^{m'-1} λvi x*R(i)k ,  v = 1, . . . , n' .   (44)

Each of the equations (43) gives a possible new value for x*R(i1)k. The one that is least different from the current value must be used, because the set Vmax(x*) changes at that value. The new x*R(i1)k must of course also satisfy (33). Replacing x*R(i1)k with the new value gives a new solution x*R(i)k, i = 1, . . . , m'-1, for which the whole process is repeated until the optimal solution is found.

Selecting a good starting solution is important because it reduces the number of iterations.
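The two cases of the optimality test can be illustrated on a toy one-variable instance (the numbers below are hypothetical, not the fv and λvi of (34)–(35)): minimize max(2 - x, 1 + x) over 0 ≤ x ≤ c'k. At x* = 0.5 both terms are maximal, Vmax = {1, 2}, and λ11 λ21 = -1 ≤ 0, so case 1 applies and x* is optimal; at x = 1.0 only one term is maximal, so optimality is not established.

```python
# Toy check of the Theorem 2 case-1 test: some pair v1, v2 in Vmax must
# have lam[v1][i] * lam[v2][i] <= 0 for all i. Hypothetical f and lam.
f = [2.0, 1.0]           # f_v for the two terms 2 - x and 1 + x
lam = [[-1.0], [1.0]]    # lambda_vi, a single variable (m' - 1 = 1)

def case1_optimal(x, f, lam, tol=1e-9):
    vals = [fv + sum(l * xi for l, xi in zip(lv, x)) for fv, lv in zip(f, lam)]
    vmax = [v for v, val in enumerate(vals) if val > max(vals) - tol]
    return any(all(lam[v1][i] * lam[v2][i] <= 0 for i in range(len(x)))
               for a, v1 in enumerate(vmax) for v2 in vmax[a + 1:])

print(case1_optimal([0.5], f, lam))  # → True  (both terms equal 1.5)
print(case1_optimal([1.0], f, lam))  # → False (only v = 2 is maximal)
```

When the test returns False, case 2 applies and one of the variables in Ip is moved as described by (43)–(44).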
Our algorithm uses a starting solution that is found by choosing x*R(i)k = c'k (the remaining x*R(i)k are zero) for i = 1, . . . , m', and computing the lower bound TLB(k) for each of these choices. The choice that gives the lowest TLB(k) is the starting solution. This starting solution is often optimal; when it is not, it usually takes only one or two iterations to find the optimum. Note that the search for the optimal x*R(i)k is not necessary if the starting TLB(k) is lower than the lower bound (21). In such cases the algorithm simply uses the starting solution.

Having the optimal variables x*R(i)k, i = 1, . . . , m'-1, it remains to select the one that will be used as the new constraint. This is done by computing the products

  tR(i)k x*R(i)k ,  i = 1, . . . , m' ,   (45)

where (31) is used to compute the remaining variable x*R(m')k. The index R(i) that gives the highest product is selected as ic. The reason for this choice is obvious: the highest of the products (45) is most likely to give the largest increase of the lower bound TLB(k).

7 The algorithm

The algorithm is based on the well known branch-and-bound method which is described in detail in many textbooks (see, for example, [13] or [14]). We assume that the reader is familiar with this method and continue with the description of the algorithm.

An important part of the branch-and-bound method is the branch-and-bound tree. Each node in the tree represents a subproblem that has some of the variables constrained to integers. The information that is stored at each node must contain the following: the node's lower bound TLB(k), the index k, the size of the set Ik (it is equal to m - m'), the indices i in Ik, the integer variables x'ij, j = 1, . . . , k, and the noninteger variable x*ick that will be used as the next constraint (together with the index ic). The efficient organization of the tree is important. It does not, however, influence the results of the algorithm and will not be discussed here.
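The node record described above might be sketched as follows (a sketch only; the field names, and the sample values in the last line, are illustrative assumptions, not the paper's implementation):

```python
# Illustrative sketch of one branch-and-bound tree node (field names assumed).
from dataclasses import dataclass

@dataclass
class Node:
    t_lb: float               # node's lower bound TLB(k)
    k: int                    # current component-type index
    ik: frozenset             # set Ik of machine indices with known x_ik (size m - m')
    x_int: dict               # known integer variables x'_ij, keyed by (i, j)
    x_next: float             # noninteger x*_ic,k to be constrained next
    ic: int                   # machine index chosen for the next constraint
    last_fixed: tuple = None  # which x' was created last, and whether by (47) or (48)

# Hypothetical root node for the Table 1 example (x_next and ic are made up).
root = Node(t_lb=96.084, k=1, ik=frozenset(), x_int={}, x_next=274.3, ic=0)
print(root.k, root.t_lb)  # → 1 96.084
```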
The algorithm is described in the following steps:

1. Set k = 0 and use (8)–(9) to compute

       T_L(j, 0) = [ Σ_{i=1}^{m} (s_i + t_ij) + p_j + q_j c_j ] / Σ_{i=1}^{m} t_ij ,      (46)

for j = 1, 2, ..., n. Sort the lower bounds T_L(j, 0) in ascending order. The problem parameters t_ij and c_j are reordered accordingly. It is assumed from here on that j = 1 corresponds to the lowest T_L(j, 0), j = 2 to the next higher T_L(j, 0), and so on. The reason for this reordering is simple: we wish to eliminate first the indices j that give the lowest contribution to the total lower bound T_LB(k) and at the same time keep the indices that give the highest contribution. Several other strategies for selecting the indices j were tested; none performed better over a large class of problems.

2. Set the current best solution T_u to ∞ (a large positive number). The corresponding variables x(u)_ij can be set to anything; they will be replaced by one of the solutions quickly. The index u indicates that T_u is an upper bound on T_opt. The alternative is to use some heuristic construction and compute a near-optimal starting solution T_u. We found that this is not really necessary because the algorithm quickly produces good near-optimal solutions.

3. Create the root node. This is done by making k = 1, m′ = m (this makes the set I_k empty), and solving the problem (32)–(33) as described by (36)–(45). The resulting information is stored in the branch-and-bound tree. Initialize the branching counter N to zero.

4. Choose the branching node by searching through the nodes of the branch-and-bound tree. Go to step 8 if no nodes with T_LB(k) < T_u are found or if the tree is empty. Add 1 to the branching counter N and choose the branching node according to the following rule: if N is odd, choose the node with the lowest T_LB(k); otherwise choose only among the nodes that contain the largest number of integer variables x′_ij and select the one that has the lowest T_LB(k). This branching strategy is a combination of the lowest-lower-bound and depth-first strategies and is used to get many of the near-optimal solutions as fast as possible. This is especially important for large problems with several hundred variables x_ij.

5. Two subproblems are created from the branching node by fixing the node's variable x*_{i_c k} to the integers

       x′_{i_c k} = ⌊x*_{i_c k}⌋ ,                      (47)
       x′_{i_c k} = ⌊x*_{i_c k}⌋ + 1 ,                  (48)

where ⌊x*_{i_c k}⌋ denotes the nearest lower integer to x*_{i_c k}. The integers x′_{i_c k} must of course conform to (27). If x′_{i_c k} in (48) does not, discard this subproblem (subproblem (47) is never discarded because x*_{i_c k} satisfies (33)). The number of noninteger variables x_ik is reduced by 1:

       m′ ← m′ - 1 .                                    (49)

If m′ ≥ 2, go to step 6. Otherwise there is only one noninteger variable x_ik left. Its integer value is already determined because (27) gives

       x′_{i_c k} + x_ik = c_k ,                        (50)

and x_ik is easily computed. All variables x_ik are now known integers x′_ik, i = 1, 2, ..., m. Because of this, the index k is incremented as described by the definition (5):

       k ← k + 1 .                                      (51)

The new set I_k is made empty (m′ = m). If k ≤ n, go to step 6. Otherwise we have a case where all of the subproblem's variables x_ij are integer. This is a complete integer solution and its cycle time T is simply computed as

       T = max_{i=1,2,...,m} ( s_i + Σ_{j=1}^{n} t_ij x′_ij ) .      (52)

If T < T_u, we have a new best solution; the current T_u is set to T and the current best solution x(u)_ij is replaced by x′_ij. The branch-and-bound tree is searched and all nodes with T_LB(k) ≥ T_u are removed from the tree. Go to step 7.

6. Each of the non-discarded subproblems from step 5 is solved. The already known integers are taken into account by computing s_i and c_k using (6) and (7). Equations (34) and (35) are used next to compute f_v and δ_vi, and the problem (32)–(33) is solved as described by (36)–(45). The results are T_LB(k) and x*_{i_c k}. If T_LB(k) ≥ T_u, ignore this subproblem since it obviously cannot lead to a solution that is better than the current best T_u. Otherwise, if m′ = 2 and k < n, apply the corrective procedure (28) and replace x*_{i_c k} and T_LB(k) with the new values. The newly computed T_LB(k) will in most cases be greater than that of the branching node. This growth is not monotone, and it is possible that the new T_LB(k) is lower. Since the lower bound cannot decrease, we use the branching node's T_LB(k) as the subproblem's T_LB(k) in such cases. The subproblem information containing x*_{i_c k} and T_LB(k) is stored as a new node in the branch-and-bound tree.

7. The subproblem in the branching node from step 4 is modified (the root node is an exception: it is simply removed from the branch-and-bound tree and we go to step 4). The branching subproblem is modified by changing the integer variable x′_lk that was created last. The modification is

       x′_lk ← x′_lk - 1   if x′_lk was created by (47),
       x′_lk ← x′_lk + 1   if x′_lk was created by (48).        (53)

This of course means that each node in the branch-and-bound tree must also contain information about the integer variable that was created last and about the way it was created (either by (47) or by (48)). The branching node is removed from the tree if the new x′_lk < 0 or if x′_lk > c_k, and we go to step 4. Otherwise the modified subproblem is solved exactly as in step 6. Note that k and m′ remain unchanged and that this subproblem can never be a complete integer solution.

                            Allocation x_ij of components
  Machine M_i      j=1   j=2   j=3   j=4   j=5   j=6   j=7    Assembly time on M_i
       1           274     0     2     5     0     0     0            97.1
       2            50    37     2     0     0     0     0            97.1
       3             0     0     8     0     7     5     4            95.3
  Number of type j
  components per board   324    37    12     5     7     5     4

Table 2: One of the 10 equivalent optimal solutions of the example cycle time problem given in Table 1. The solution was obtained with the algorithm described in this paper.
If T_LB(k) < T_u, the modified subproblem is stored back into the tree; otherwise it is removed from the tree. Go to step 4.

8. The current best solution is the optimal solution. The optimal cycle time T_opt is equal to T_u and the optimal variables x(opt)_ij are equal to x(u)_ij. Stop.

8 Experimental results and conclusions

The algorithm was implemented in a program and tested on many different cases. It is typical for the problem (1)–(3) that there are often many equivalent optimal solutions. One of the 10 optimal solutions of the example given in Table 1 is presented in Table 2. It took less than 0.01 seconds of computer time (on a 2.4 GHz Pentium 4) to find all 10 optimal solutions.

The computing time depends not only on the number of variables x_ij but also on the problem parameters t_ij and especially s_i. Lower values of s_i obviously make the search space smaller and reduce the computation time. Experiments have shown that for problem parameters similar to those in Table 1, all optimal solutions are typically found within a minute of computing time if the number of variables x_ij is 60 or fewer. For example, the 3-machine case from Table 1 in which the number of different component types per board is increased to 20 takes less than a second to find all optimal solutions. This time increases to almost 2 hours if an additional machine is added (giving a problem with 4 × 20 = 80 variables x_ij). It should be noted, however, that for this example a suboptimal solution that is within 0.1% of the optimum was found after less than 0.1 second. This behaviour is typical for branch-and-bound based algorithms, where a great amount of time is often needed to prove the optimality of a solution that was found early in the process.

The algorithm was also tested on problems with a much larger number of variables x_ij. Cases with up to 10 machines and up to 100 different component types per board (giving up to 1000 variables x_ij) were tried.
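Two quantities used throughout these experiments are easy to compute: the cycle time of a complete integer solution, as defined by (52), and the relative distance of a suboptimal solution from a lower bound (the source of the 0.1% figure above). A minimal Python sketch, with all numerical data hypothetical:

```python
def cycle_time(s, t, x):
    """Cycle time of a complete integer allocation, following (52):
    T = max_i ( s_i + sum_j t_ij * x_ij )."""
    return max(
        s_i + sum(t_ij * x_ij for t_ij, x_ij in zip(t_row, x_row))
        for s_i, t_row, x_row in zip(s, t, x)
    )

def optimality_gap(T_u, T_lb):
    """Relative gap between an upper bound T_u (best solution found so far)
    and a lower bound T_lb on the optimal cycle time."""
    return (T_u - T_lb) / T_u

# Hypothetical 2-machine, 2-component-type instance (not the data of Table 1):
s = [1.0, 2.0]                    # setup times s_i
t = [[0.5, 0.2], [0.4, 0.3]]      # placement times t_ij
x = [[2, 0], [0, 3]]              # integer allocation x_ij
T = cycle_time(s, t, x)           # max(1.0 + 1.0, 2.0 + 0.9) = 2.9
```

A gap below a chosen tolerance (e.g. 0.001 for 0.1%) is how one certifies that a suboptimal solution found early in the search is already close enough to the optimum.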
Because of the exponential nature of the algorithm, the optimal solution is not found and/or proved optimal in a reasonable computing time for problems this large. But the algorithm is useful even in such cases: the branching strategy ensures that many good near-optimal solutions are obtained. In addition, the algorithm gives a global lower bound on the optimal solution, which allows the user to determine how close to the best possible solution a near-optimal solution is. The global lower bound on the optimal solution is the lowest of the T_LB(k) in the branch-and-bound tree and is obtained in step 4 of the algorithm. It can be used to decide whether a near-optimal solution is sufficiently close to the optimum and whether a longer computation is worth attempting.

Acknowledgment

The authors would like to thank Prof. B. Vilfan for providing the formal proof of NP-completeness for the problem (1)–(3).

References

[1] P. Ji, Y.S. Wong, H.T. Loh, L.C. Lee, "SMT production scheduling: A generalized transportation approach," International Journal of Production Research, vol. 32 (10), pp. 2323–2333, 1994.

[2] J.C. Ammons, M. Carlyle, L. Cranmer, G. Depuy, K. Ellis, L.F. McGinnis, C.A. Tovey, H. Xu, "Component allocation to balance workload in printed circuit card assembly system," IIE Transactions, vol. 29 (4), pp. 265–275, 1997.

[3] A. Shtub, O.Z. Maimon, "Role of similarity measures in PCB grouping procedure," International Journal of Production Research, vol. 30 (5), pp. 973–983, 1992.

[4] J. Sohn, S. Park, "Efficient operation of a surface mounting machine with a multihead turret," International Journal of Production Research, vol. 34 (4), pp. 1131–1143, 1996.

[5] Z. Ji, M.C. Leu, H. Wong, "Application of linear assignment model for planning of robotic printed circuit board assembly," ASME Manufacturing Processes and Material Challenges in Microelectronics Packaging, vol. ADM-v131/EEP-v1, pp. 35–41, 1991.

[6] M. Sadiq, T.L. Landers, G.
Taylor, "A heuristic algorithm for minimizing total production time for a sequence of jobs on a surface mount placement machine," International Journal of Production Research, vol. 31 (6), pp. 1327–1341, 1993.

[7] Y.D. Kim, H.G. Lim, M.W. Park, "Search heuristics for a flowshop scheduling problem in a printed circuit board assembly process," European Journal of Operational Research, vol. 91 (1), pp. 124–143, 1996.

[8] P. Ji, M.T. Sze, W.B. Lee, "A genetic algorithm of determining cycle time for printed circuit board assembly lines," European Journal of Operational Research, vol. 128 (3), pp. 175–184, 2001.

[9] P. Brucker, "Scheduling algorithms," Second Ed., Springer, pp. 274–307, 1998.

[10] B. Vilfan, "NP-completeness of a certain scheduling problem," (in Slovenian) Internal report, University of Ljubljana, Faculty of Computer and Information Science, June 2002.

[11] D.M. Kodek, "A theoretical limit for finite wordlength FIR digital filters," Proc. of the 1998 CISS Conference, vol. II, pp. 836–841, Princeton, March 20–22, 1998.

[12] D.M. Kodek, "An approximation error lower bound for integer polynomial minimax approximation," Electrotechnical Review, vol. 69 (5), pp. 266–272, 2002.

[13] C.H. Papadimitriou and K. Steiglitz, "Combinatorial optimization," Prentice-Hall, pp. 433–453, 1982.

[14] E. Horowitz, S. Sahni, "Fundamentals of computer algorithms," Computer Science Press, pp. 370–421, 1978.

[15] V.F. Demyanov, V.N. Malozemov, "Introduction to minimax," Dover, pp. 113–115, 1990.