JOURNAL OF MECHANICAL ENGINEERING - STROJNIŠKI VESTNIK
no. 6, year 2008, volume 54

Cover image courtesy of: Section of Computer-Aided Design Engineering, TU Delft

Strojniški vestnik - Journal of Mechanical Engineering
ISSN 0039-2480

Editorial Office
University of Ljubljana, Faculty of Mechanical Engineering
Journal of Mechanical Engineering
Aškerčeva 6, SI-1000 Ljubljana, Slovenia
Phone: +386 (0)1 4771 137, Fax: +386 (0)1 2518 567
E-mail: info@sv-jme.eu
http://www.sv-jme.eu

Founders and Publishers
University of Ljubljana - Faculty of Mechanical Engineering
University of Maribor - Faculty of Mechanical Engineering
Association of Mechanical Engineers of Slovenia
Chamber of Commerce and Industry of Slovenia - Metal Processing Association

Editor
Andro Alujevič, University of Maribor, Faculty of Mechanical Engineering, Smetanova 17, SI-2000 Maribor
Phone: +386 (0)2 220 7790, E-mail: andro.alujevic@uni-mb.si

Deputy Editor
Vincenc Butala, University of Ljubljana, Faculty of Mechanical Engineering, Aškerčeva 6, SI-1000 Ljubljana
Phone: +386 (0)1 4771 421, E-mail: vincenc.butala@fs.uni-lj.si

Publishing Council
Jože Duhovnik (chairman), Niko Samec (vice chairman), Ivan Bajsić, Jože Balič, Iztok Golobič, Mitjan Kalin, Aleš Mihelič, Janja Petkovšek, Zoran Ren, Stanko Stepišnik

International Advisory Board
Imre Felde, Bay Zoltán Institute for Materials Science and Technology; Bernard Franković, Faculty of Engineering Rijeka; Imre Horváth, Delft University of Technology; Julius Kaplunov, Brunel University, West London; Milan Kljajin, J. J. Strossmayer University of Osijek; Thomas Lübben, University of Bremen; Miroslav Plančak, University of Novi Sad; Bernd Sauer, University of Kaiserslautern; George E. Totten, Portland State University; Nikos C. Tsourveloudis, Technical University of Crete; Toma Udiljak, University of Zagreb; Arkady Voloshin, Lehigh University, Bethlehem

Editorial Board
Anton Bergant, Franci Čuš, Matija Fajdiga, Jože Flašker, Janez Grum, Janez Kopač, Franc Kosel, Janez Možina, Brane Širok, Leopold Škerget

Cover plate: Interactive Augmented Prototyping combines digital and physical modelling means to support design processes. The cover plate shows an I/O Pad implementation weighing 1.5 kg, based on an LED-based projector and an Ultra-Mobile PC. The physical object is augmented with 3D texture maps; optical markers were used to track its location. Courtesy of the Section of Computer-Aided Design Engineering, Faculty of Industrial Design Engineering, Delft University of Technology, the Netherlands.

Print
Littera Picta, Medvode; printed in 600 copies

Yearly subscription
companies: 100,00 EUR; individuals: 25,00 EUR; students: 10,00 EUR; abroad: 100,00 EUR; single issue: 5,00 EUR

© 2008 Journal of Mechanical Engineering. All rights reserved.

Strojniški vestnik - Journal of Mechanical Engineering is also available on http://www.sv-jme.eu, where you can also access papers' supplements, such as simulations. SV-JME is indexed in SCI-Expanded, Compendex and Inspec. The journal is subsidized by the Slovenian Research Agency.

Contents
Strojniški vestnik - Journal of Mechanical Engineering, volume 54, (2008), number 6
Ljubljana, June 2008
ISSN 0039-2480
Published monthly

Editorial
Horvath, I., Rusak, Z., Duhovnik, J.: Enhancing Competitiveness of Engineering Processes and Solutions 382

Papers
Gielingh, W.: Cognitive Product Development: A Method for Continuous Improvement of Products and Processes 385
Deciu, E.-R., Ostrosi, E., Ferney, M., Gheorghe, M.: Product Family Modelling in Conceptual Design Based on Parallel Configuration Grammars 398
Guédas, B., Dépincé, P.: Coupling Functions Treatment in a Bi-Level Optimization Process 413
Gerritsen, B. H.
M.: How to Adapt Information Technology Innovations to Industrial Design and Manufacturing to Benefit Maximally from Them 426
Gupta, R. K., Gurumoorthy, B.: A Feature-Based Framework for Semantic Interoperability of Product Models 446
Verlinden, J., Horvath, I.: Enabling Interactive Augmented Prototyping by a Portable Hardware and a Plug-In-Based Software Architecture 458
Yoshino, D., Inoue, K., Narita, Y.: Understanding the Mechanical Properties of Self-Expandable Stents: A Key to Successful Product Development 471
Stetter, R., Paczynski, A., Zajac, M.: Methodical Development of Innovative Robot Drives 486
Instructions for Authors 499

Enhancing Competitiveness of Engineering Processes and Solutions

We are pleased to introduce this thematic issue of the Journal of Mechanical Engineering on enhancing the competitiveness of engineering processes and solutions to our readership. The included papers deal with various methodological, technological and practical issues of improving engineering processes towards better products. The papers have been selected from the most relevant papers presented at the Seventh International Symposium on Tools and Methods of Competitive Engineering (TMCE 2008). This symposium was held in Izmir, Turkey, from 21 to 25 April 2008, with the participation of more than 220 speakers and delegates. The papers selected for this thematic issue have been reviewed by members of the Editorial Board of the Journal and reworked by the authors. Thematically, they have been arranged in three groups. The first three papers present theoretically underpinned methodologies for the conceptualization of new products. The second three papers address various issues of using advanced information technologies in product development. Finally, the last two papers present examples of the industrial application of knowledge-intensive product design methodologies.
It is by now common sense that companies have to compete for market share, financial advantage and technological position. Competitiveness is not only about rivalry, but also about the careful planning of an individual strategy for survival and success. In terms of the organization, the word 'competitive' denotes a set of general technological and business characteristics, such as:
• commitment to strategic product innovation,
• making the most of state-of-the-art knowledge and know-how,
• access to and exploitation of front-end technologies,
• readiness for action and learning,
• being agile and responsive, and
• being economic and anticipatory.
In order to develop a competitive strategic plan, companies need to take into account all stakeholders, that is, the organization, processes, people and products. Achieving a higher level of competitiveness is the main concern of the paper written by Wim Gielingh, which belongs to the first group. Entitled Cognitive product development: A method for continuous improvement of products and processes, it considers design to be the most critical element of the product development process and approaches it as a scientific learning process. The proposed Cognitive Product Development (CPD) framework is based on the human cognitive cycle known from cognitive psychology. CPD combines existing and verified knowledge of multiple disciplines, which is collected throughout the lifecycles of existing products. It is claimed that this accelerates product development processes and results in higher-quality and more reliable product designs, without affecting the creative freedom of the designer. The second paper, Product family modelling in conceptual design based on parallel configuration grammars, is contributed by Eugeniu-Radu Deciu, Egon Ostrosi, Michel Ferney and Marian Gheorghe. The authors propose the use of configuration grammars for product family modelling in conceptual design.
Configuration grammars can be used to describe both functional and structural design spaces, as well as the relationships between the elements of these spaces. The two proposed configuration grammars work simultaneously on different abstraction levels of the product family. Therefore, the evolution of the product model, from the stage of functional structure to the stage of physical structure, can be represented in an adapted way. The configuration-grammar-based design approach constitutes an effective tool for the representation and modelling of configurable product families. The proposed design approach has been validated by applying it to the design of an external gear pump family. The third paper in this group, co-authored by Benoît Guédas and Philippe Dépincé, is presented under the title Coupling functions treatment in a bi-level optimization process. The main contribution of this paper is the Collaborative Optimization Strategy for Multi-Objective Systems (COSMOS) methodology, which has been developed to address simultaneously multi-objective and multidisciplinary optimization problems. In a nutshell, this methodology applies a simple method for the approximation of the coupled variables of the various disciplines concerned. The authors conducted experiments to verify their hypothesis that the quality of the approximation increases as the algorithm converges to optimal solutions. They found that this assumption holds when the objective function is convex, but that the method may not, in the general case, be capable of determining the correct approximation value. As mentioned above, the papers belonging to the second thematic group address various issues of using advanced information technologies in product development. The fourth paper, entitled How to adapt information technology innovations to industrial design and manufacturing to benefit maximally from them, by Bart H. M.
Gerritsen, on the one hand gives an excellent survey of the current state of the art and, on the other hand, identifies the major influences of applying advanced information technologies (IT) in product development. Interestingly, the author's hypothesis is that IT leads, while design and manufacturing follow. He provides considerable evidence for this claim in the paper. He also argues about which complementary technologies the industry should develop in order to be able to respond appropriately to IT trends and to benefit optimally from them. Furthermore, he emphasizes that the deployment of (genuine) interoperability is critical across the whole palette of IT developments, and encourages joint academic-industrial research towards collective innovation. A specific issue of robust interoperability is addressed in the fifth paper, by Ravi Kumar Gupta and Balan Gurumoorthy, entitled A feature-based framework for semantic interoperability of product models. The authors propose a solution to the problem of exchanging product semantics along with other product information, such as shape, which can convey design intent, inter-relationships between entities in the shape, as well as other data important for downstream applications, and can facilitate reasoning with the shape model at higher levels of abstraction. The key contribution is a one-to-many framework for the exchange of the product information model and product semantics, which is based on the Domain Independent Form Feature (DIFF) model. This provides a comprehensive representation of features in the shape model, along with an ontology that captures the vocabulary. Unfortunately, the concept of features has some inherent limitations; therefore, many additional problems and issues could be identified for future research. The next paper in this group, co-authored by Jouke Verlinden and Imre Horvath, approaches the issue of enhancing product development processes from a completely different viewpoint and with a different objective.
In this paper, entitled Enabling interactive augmented prototyping by a portable hardware and plug-in-based software architecture, the authors summarize their field research, which showed that, although there is a need in industry for advanced augmented prototyping technologies, their practical use is hindered by the lack of off-the-shelf solutions. The authors hypothesized that a projector-based portable interactive augmented prototyping (IAP) hardware platform, called the I/O Pad, together with an easy-to-use software package, could solve this problem for industry. The I/O Pad is a self-sufficient, battery-operated device that enables virtual imaging in association with physical models and supports the collaboration of designers and other stakeholders through wired or wireless network connections. The authors also constructed a software platform that supports dynamic virtual imaging in a variety of modelling and simulation applications. Issues for further study include application-tailored imaging, model sharing and the enhancement of projector characteristics. The following application paper, by Daisuke Yoshino, Katsumi Inoue and Yukihito Narita, is titled Understanding the mechanical properties of self-expandable stents: A key to successful product development. A stent is a medical device with a mesh-shaped tubular structure, used to expand constricted blood vessels. The authors propose both a computational approach and a design methodology for a given class of stents. The mechanical properties of self-expandable stents are evaluated using a non-linear finite element method. In more detail, the authors employed the updated Lagrange method for the formulation of a nonlinear equation, adopted the Newton-Raphson method for solution-seeking, and estimated the expanded shape of a stent by a finite element method-based elasto-plasticity analysis. The authors also considered factors such as flexural rigidity and shear deformation.
They used the aggregated knowledge to optimize the shape of a self-expanding stent and to adapt the stent to a patient's condition. The last application paper, which reports on Methodical development of innovative robot drives, has been contributed by Ralf Stetter, Andreas Paczynski and Michal Zajac. This paper emphasizes the interdisciplinary character of mechatronic product development. The authors' hypothesis is that the procedures and tools of a single disciplinary domain cannot be adopted for steering and controlling the whole design process. They adapted the VDI 2206 Guideline as a development methodology for mechatronic systems. The methodology has been applied both on a macro-level and on a micro-level. A design case study is presented in which the advantages of this approach have been explored and validated. The conclusion is that cross-domain functionality can only be achieved if the interfaces between the disciplines are clearly defined and thoroughly tracked. The adaptation of the concepts to other design application fields is indicated as a topic for future research.

Prof. Dr. Imre Horváth
Dr. Zoltán Rusák
Prof. Dr. Jože Duhovnik
Guest editors

Paper received: 28.2.2008
Paper accepted: 15.5.2008

Cognitive Product Development: A Method for Continuous Improvement of Products and Processes
Wim Gielingh
Delft University of Technology, Faculty of Civil Engineering, The Netherlands

In current engineering practice, designers usually start a new project with a blank sheet of paper or an empty modelling space. As a single designer does not have all the knowledge about all aspects of the product, the design has to be verified by other experts. The design may have to be changed, further detailed, verified and approved again, and so on, until it is ready. But only the final product, once it exists, will prove the correctness of the design.
Given the complexity of modern industrial products, the intermediate verification and change processes require substantial time. This has a major negative impact on the development time and costs of the product. Cognitive Product Development (CPD), as proposed here, approaches design as a scientific learning process. It is based on a well-known and successfully applied theory of cognitive psychology. Instead of relying solely on the experiences of a single individual, CPD makes the combined knowledge of multiple disciplines, acquired throughout the life of existing products, available through generic design objects. CPD thus approaches design as the configuration of existing and verified knowledge. It is expected that this accelerates the product development process and results in designs of higher quality and reliability without affecting the creative freedom of the designer.
© 2008 Journal of Mechanical Engineering. All rights reserved.
Keywords: product development, cognitive engineering, continuous improvement, parametric modelling

1 INTRODUCTION

1.1 The High Costs and Risks of Product Development

The development of complex systems such as buildings, plants, infrastructures, off-shore structures and aircraft carries a high risk of budget and time overruns. In the construction sector, for example, budget overruns of between 10% and 30% happen frequently, and overruns of 80% or more are not exceptional [9]. The aerospace, automotive and railway industries are also often plagued by serious budget and time overruns. These costs and risks make many enterprises reluctant to introduce new products or to invest in new projects. Although many different factors may contribute to these overruns, two factors appear to be essential in most cases: (1) problems caused by high complexity, and (2) the unpredictability of the consequences of
'new' knowledge. The complexity of a system can be defined as the total number of interactions or interdependencies between the components of a system [13]. Complexity depends on the number of components, but tends to grow more than proportionally with this number. The number of interaction or dependency types may also increase complexity. Examples of interaction types are mechanical interaction (such as mechanical fixation), electrical, chemical and control interaction. If a system has n components and i interaction types, it may have at most i·n·(n−1) interactions between its components. Complexity can thus be reduced by reducing the number of components or by reducing the number of interactions or dependencies. The latter can be accomplished by modularizing a design such that each module behaves as a 'black box' with minimal interaction with its environment. On the other hand, the trend towards 'mass customization', i.e. the offering of client-specific solutions based on a generic design, increases product complexity, because designers have to keep all variant solutions and their consequences in mind.

*Corr. Author's Address: Delft University of Technology, Faculty of Civil Engineering, Section of Building Processes, Stevinweg 1, 2628 CN Delft, The Netherlands, wgielingh@tiscali.nl

A new product is the result of 'new knowledge'. The consequences of this knowledge are often unpredictable. That is why a design has to be verified thoroughly through modelling, analysis, simulation and testing before the actual product is realised. For the automotive sector, it is estimated that design changes make up 75% of total product development time and costs [1]. Product development may thus be improved in terms of costs, risks and quality if the problems caused by high complexity and the consequences of 'new knowledge' can be reduced.
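The interaction bound discussed above can be sketched in a few lines of code. This is an illustrative helper only (the function name is not from the paper): with n components and i interaction types, each ordered pair of distinct components may interact once per type.

```python
def max_interactions(n: int, i: int) -> int:
    """Upper bound on interactions in a system with n components and
    i interaction types: each ordered pair of distinct components can
    interact once per type, giving i * n * (n - 1)."""
    return i * n * (n - 1)

# The bound grows roughly quadratically with the number of components,
# i.e. more than proportionally: doubling n roughly quadruples it.
print(max_interactions(10, 1))  # 90
print(max_interactions(20, 1))  # 380
print(max_interactions(10, 3))  # 270
```

This also makes the two complexity-reduction levers visible: shrinking n (fewer components, e.g. through modularization into black boxes) reduces the bound quadratically, while shrinking i (fewer interaction types) reduces it linearly.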
This subject will be addressed by combining principles for systems modelling with a theory of cognitive psychology that is generalized and extended for design, engineering and production.

1.2 Origin of This Theory

The methodology described in this paper finds its origin in a theory for product modelling developed by the author in the 1980s. After several implementations, applications and further refinements, it evolved into a theory and methodology for the acquisition, organization and use of product and process knowledge. Milestone publications were: (a) the General AEC Reference Model [3], which became part of the Initial Draft of ISO 10303 STEP; (b) the IMPPACT Reference Model [4], which was implemented for the integrated design and manufacturing of ship propellers at LIPS and the design and manufacturing of aircraft components at HAI; (c) the PISA Reference Model [5], which was implemented to support the early design process of cars at BMW; and (d) the Theory of Cognitive Engineering [6], which was partially applied in a large design-build-maintain project in the oil and gas sector. This paper presents a comprehensive summary of the last, most recent, theory.

1.3 Structure of This Paper

This paper starts with an introduction to the Theory of Cognition, which is founded on a well-known theory of cognitive psychology (chapter 2). Next, this theory is generalized and extended for design, production and product lifecycle support (chapter 3). Chapter 4 discusses in brief a Theory of Systems; it addresses in particular the principles of the modularization of systems. Subsequently, chapter 5 combines the theories unfolded in chapters 3 and 4 into a theory for the cognitive design, production and support of complex systems. Chapter 6 describes one of the cases in which these principles were applied: a large design-build-maintain project for the oil and gas sector. Chapter 7, finally, draws conclusions.
2 KNOWLEDGE CREATION AND CONTINUOUS IMPROVEMENT

2.1 Continuous Improvement

Continuous Improvement (CI) is a widely used concept with a variety of meanings. For many it is synonymous with innovation; for others it is a cornerstone of total quality management. CI is a basic element of Lean Manufacturing, where it is known as the kaizen principle [16]. In this context, CI aims primarily at quality management. It is implemented as an organizational solution, by making teams of people responsible for improving their own part of the production process. Bessant and Caffyn [2] define CI as 'an organization-wide process of focused and sustained incremental innovation'. Lindberg and Berger [8] propose a model identifying five types of CI organization, based on two dimensions: the first addresses whether CI is part of ordinary tasks or not; the second makes a distinction between group tasks and individual tasks. In practically all studies, the focus is on the human aspect of Continuous Improvement.

2.2 A Theory of the Learning Organization

Theories about learning organizations are also human-centred. Probably the best known is the SECI model of Nonaka et al. [11] and [12]. According to this theory, design knowledge within an organization is developed in a spiralling process that crosses two arrays of apparently opposite values, such as chaos and order, micro and macro, part and whole, implicit and explicit, body and mind, deduction and induction, emotion and logic. According to these authors, the key to successful knowledge management is to manage dialectic thinking in order to resolve apparent conflicts in design objectives. The spiral develops in an interaction between implicit (i.e. residing in the heads of human individuals) and explicit (i.e. recorded, transferable and shareable) knowledge.
Four stages of knowledge transformation are recognized: Socialization, Externalization, Combination and Internalization, abbreviated as SECI. Socialization is a process in which individual implicit knowledge is transformed into shared implicit knowledge, for example through (informal) meetings, discussions and other forms of human interaction. Externalization is the expression and/or recording of knowledge with the objective of becoming a shared resource for the organization. Combination is a process aiming at the unification, integration and/or generalization of individual pieces of shared knowledge, which results in knowledge of higher value for the organization. Internalization, finally, is the process in which individual members of the organization pick up shared explicit knowledge, for example through reading, learning, training and experiencing. The process of sharing depends heavily on the existence of a platform of common experiences, values and concepts. Such a platform provides the context that is necessary for the understanding of words and gestures. It forms the 'commonality' of a shared domain. This platform is called 'ba' by Shimizu [14]: 'ba' is the Japanese word for 'place' or 'location'. 'Ba' can be a physical or a virtual place, and represents not only a location in space but also a location in time.

Fig. 1. The SECI process as proposed by Nonaka et al.

Nonaka's theory further comprises the notion of Knowledge Assets (KAs): the different forms in which knowledge is encapsulated. Examples of KAs are experiences (implicit knowledge developed through practice), routines (business practices that may exist in implicit or explicit form), concepts (common ideas) and systems (explicit knowledge, recorded in the form of documents, databases and models).
2.3 An Assessment of the SECI Model

Nonaka's theory focuses largely on (a) the interaction of human individuals with other individuals through socialization, and (b) the sharing of knowledge in explicit (i.e. documented) form. It does not incorporate feedback from real-world experiences. It also does not address the possible role of more advanced forms of knowledge representation, such as product models. Product models that are used for simulation, such as structural analysis, virtual reality and the digital mock-up, may play an essential role in the learning cycle of a design and engineering team. Product modelling requires a paradigm shift in industrial production, because its nature is so different from document-based working. A product model is a near-to-reality 'image' of a product that may exist on different levels of concreteness and completeness. Product models result in (virtual) experiences for the human beings who work with them. In contrast, documents must be read in order to provide knowledge for the reader, and are hence only accessible to those who master the language in which they are written. Consequently, there is a need to make KM theory consistent with modern scientific principles, including feedback from virtual or real experiences.

2.4 Theory of Cognition

The missing link between KM theory and feedback from reality is provided by a modern theory of cognitive psychology [10]. Neisser defines cognition as 'the acquisition, organization and use of knowledge'. As his theory originated in the context of psychology, it is focused on the human individual. This part of the theory will be discussed first; in chapter 3 it will be generalized and adapted to learning organizations for design and engineering. Experiences in the immediate and remote past influence human cognition. Experiences are memorized and organized by the human brain.
Similar experiences confirm and reinforce each other, and result in abstract structures in the human mind that are called schemata. For example, a person who has seen dozens of dogs will develop an abstract idea that combines the observed features that all dogs have in common. This abstract idea becomes a concept [15]. Concepts may become independent sources of knowledge. By associating concepts with symbols such as words, knowledge can be communicated to other people. Concepts and conceptual structures have a biological origin: they enable a human being to anticipate and act more effectively in new situations. A child that has touched a hot stove once or twice will associate the two concepts 'stove' and 'hot', and will be more cautious in future encounters with stoves. Similarly, once we have eaten many apples, we associate the shape and colour of apples with their taste: we know that small green apples can be hard and sour, and that yellow or red ones are mostly sweet and soft. The human senses provide an enormous amount and a continuous stream of sensory stimuli. Only some of these are really important. The extraction of useful stimuli from the irrelevant ones is called perception [10]. As the life experiences of individual people differ, the understanding and interpretation of sensory stimuli, in the form of new experiences, will also differ. Hence, two people may act and react differently when confronted with one and the same situation. For example, a person who has once been attacked by an aggressive dog will have a different concept, and may act differently, than a person who never had such an experience. From the above it can be concluded that 'old knowledge' plays an essential role in human perception, and thereby affects the creation of 'new knowledge'. Existing knowledge determines how new information is interpreted and valued. The entire process of sensing and interpreting sensory stimuli by a human being will be called impression.
Learning is not just a passive process based on the observation of physical reality. Much can be learned through exploration and experimentation. Babies learn by touching things, by putting them in their mouths, and by throwing them away. Children learn by playing, which is a combination of action and observation. Action partially affects and changes the physical reality that surrounds us. The whole of the activities performed by human beings that affect physical reality will be called expression. The learning process of human individuals can thus be depicted by a circle that is intersected by two orthogonal axes (Fig. 2). The vertical axis represents physical reality (top) versus knowledge about reality (bottom). The horizontal axis represents impression, which includes processes such as sensing, observing, interpreting and perceiving (left), and expression, which includes various forms of acting (right). The cognitive process depicted in Figure 2 applies to individual people, but it can also be applied to organizations, such as industrial enterprises. For an enterprise, physical reality may be a market. Market analysis, or the interpretation of client needs, is a form of impression. Serving this market or these clients with products and/or services is a form of expression. Once these products and/or services are consumed or applied, physical reality has changed. These changes may form input for product or process innovation. The cognitive cycle may also be applied to electronic knowledge processing systems. Knowledge about existing reality may be obtained via input devices, such as sensors or measuring equipment, or through human observations that are documented. And existing reality can be changed through output devices such as CNC machinery, process control systems or documented instructions.
The cognitive cycle thus forms the basis for a theory about learning enterprises and learning information systems that, in contrast to Nonaka's theory, involves reality.

Fig. 2. The Cognitive Cycle

3 RETHINKING DESIGN AND PRODUCTION AS A SPECIAL FORM OF COGNITION

Concepts that result from experiences with real phenomena can be analyzed and decomposed into conceptual primitives. These primitives, or features, can be manipulated in the human mind to form new concepts. A simple but illustrative example is the mermaid, which originated in the mind of Hans Christian Andersen. This imaginary creature is partially a girl and partially a fish. The mermaid is an example of an imaginary concept: although mermaids do not exist and cannot be observed, the concept is 'assembled' from features that can be sensed. As a concept, it can also be visualized in the form of a naturalistic expression, such as a painting or a statue. A new product also results from imagination. Designs of new products are assemblies of features that are extracted from existing reality. These features comprise knowledge about shapes, materials, techniques and technologies, and are manipulated in the mind: they can be resized, reshaped or re-arranged. If the cognitive cycle is applied to industrial enterprises, then "physical reality" includes physical products, clients and markets, "impression" includes analysis, "knowledge" includes technological and production knowledge as well as design, and "expression" includes production and servicing. The cognitive cycle for product creation is shown in Figure 3.

Fig. 3.
The cognitive cycle for product creation. The design of a new product usually has to be verified before it can be produced. Such verification can be done by making physical prototypes, or by asking experts to analyze and approve the design. With the advent of CA-technologies, it is now also possible to verify a design in virtual reality through product models and simulation technologies (Fig. 4). As a model is an abstraction of reality, it is used to check some - but not all - properties of the anticipated product. In early design phases only a few properties are checked, and as the design progresses into detailed design, more properties are added. Hence, the cognitive cycle is traversed not once but many times for the development of a product. Each successive traversal of the cognitive loop adds more detail to the product specification, up to a point where sufficient knowledge is acquired so that a safe, error-free production process can be expected and the resulting product is likely to meet client and market expectations. This iterative process can be depicted by expanding the cognitive cycle into a spiral (Fig. 5). In this figure, the design process starts with an initial idea (Product Concept L0), which is modelled and simulated in virtual reality through Model L0, and which is subsequently analyzed. Based on the outcome of this analysis, a modified and/or more detailed specification is made (Product Concept Ln), modelled and simulated, and so on. This process ends once the final specification is ready (Product Concept Lp), after which production can start. (Fig. 4. To improve speed and quality of a design, physical reality can be simulated through virtual reality as part of the cognitive product development cycle. Fig. 5. A top-down design process can be depicted in the Cognitive Cycle by a spiral.)
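The iterative traversal of the cognitive loop described above can be sketched as a plain refinement loop. All function names here are hypothetical placeholders for illustration; the paper prescribes no specific API:

```python
# Sketch of the design spiral of Figure 5 (hypothetical helper names;
# the paper prescribes no specific software interface).

def develop(concept, model, analyze, refine, is_final):
    """Traverse the cognitive loop until the specification is final."""
    while not is_final(concept):            # Product Concept L0 ... Ln
        virtual_model = model(concept)      # simulate in virtual reality
        findings = analyze(virtual_model)   # impression
        concept = refine(concept, findings) # more detailed specification
    return concept                          # Product Concept Lp -> production
```

Each pass through the loop corresponds to one traversal of the cognitive cycle, adding detail until production can safely start.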
Production is the final stage of expression, resulting in the intended physical product. Figure 5 shows that the cognitive loop does not stop there. The physical product, once in use, can be considered as a prototype for a future product. Knowledge acquired from the product in actual use can be very helpful for the creation of new products, for example for analysis and simulation purposes. A product concept is not limited to a static description of the product. It may also apply to processes, such as production, maintenance and operation processes. The whole of models, analysis results, performance data and other information forms an interrelated structure of product and process knowledge that can be used as a basis for new designs. This structure will be briefly described in the next chapter. 4 THEORY OF SYSTEMS Modern products can be seen as complex systems, consisting of objects, where each object interacts with one or more other objects to form a functional whole. Hence, because of their interaction, the whole is more than the sum of the objects that form its parts. An object can be a system by itself, in which case it is called a sub-system. Complex systems may have multiple levels of sub- and sub-sub-systems. And as the performance of a product is determined by its behaviour in its environment, the product itself can also be considered as a sub-system. Systems are often modelled and depicted graphically by means of an inverted tree structure. The top (or root) of this inverted tree depicts the whole; the branches depict the parts; see also Figure 6. The terms 'whole', 'system', 'sub-system' and 'part' have no absolute meaning. Parts may be seen as 'wholes' or 'systems' in their own right. Any object of which a model is made is part of a larger whole: buildings are part of cities, while cities are part of regions or nations, and so on.
On the other hand, even the smallest object that is modelled consists of things that are smaller. Hence, no absolute dividing line can be drawn between the model of a system and the context in which it is placed. For this reason, the presented theory does not use the terms 'system' or 'part'. These terms are only used for explanatory reasons, and are therefore placed between parentheses in Figure 6 (Fig. 6. System composition can be modelled in the form of an upside-down tree). The present theory is about knowledge of systems, and therefore considers knowledge as a system by itself. The circles in Figure 6 refer to Units of Knowledge (UoKs). These Units may comprise any kind of knowledge about any subject. For reasons of comprehensibility, Units of Knowledge will also be referred to as 'knowledge objects', or simply 'objects'. A knowledge object may itself refer to a physical object, a (physical) process, a feature of a physical object, or other phenomena that are of interest. Modern enterprises operate today in collaborative networks. A vehicle, for example, is not designed and built by a single company, but by many companies that together form a supply chain. As a consequence, a part or subsystem within a vehicle may have two intellectual owners: the OEM and the supplier. The OEM defines requirements and boundary conditions for the part or subsystem, while the supplier proposes a solution for it. The supplier may, in turn, have its own supply chain. This idea is depicted in Figure 7 by splitting each circle into two halves. The upper half represents requirements and boundary conditions, the lower half the proposed solution. This idea was first proposed as part of the General AEC Reference Model [3], where the upper half was called 'Functional Unit' and the lower half 'Technical Solution'. The importance of this concept is that knowledge about objects in a system can now be modularized (Fig. 7. Modular system decomposition).
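The decomposition of Figures 6 and 7 can be sketched as a recursive data structure in which every knowledge object carries the two halves of the circle: a Functional Unit and an optional Technical Solution. This is a minimal illustrative sketch; the class and field names are invented here, not part of the paper's formalism:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class UnitOfKnowledge:
    """One circle of Figure 7: the upper half holds requirements and
    boundary conditions (Functional Unit, e.g. stated by the OEM), the
    lower half the proposed Technical Solution (e.g. by a supplier)."""
    name: str
    functional_unit: str
    technical_solution: Optional[str] = None  # may still be open
    parts: List["UnitOfKnowledge"] = field(default_factory=list)

    def depth(self) -> int:
        """Number of system levels below and including this object."""
        return 1 + max((p.depth() for p in self.parts), default=0)
```

Because the structure is recursive, a 'part' is itself a full Unit of Knowledge, reflecting the relative meaning of 'system' and 'part' in the text.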
Modules of higher aggregation subsume modules of lower aggregation. The split between Functional Units and Technical Solutions does not have to be restricted to knowledge transactions between different companies. A single company can also benefit from the modularization of a system model. The Functional Units in a system model may form network relations with other Functional Units within the same module, or with other modules on the same level of aggregation. Each module can be described at two distinctive levels: (1) a generic, parametric description, and (2) a specific description, where all parameter values are defined or where objects are described in explicit non-parametric form. The specific description is split into three sub-levels: (2a) Lot (i.e. one or more identical objects), (2b) Individual (a single individual object) and (2c) Occurrence (a moment in the life of an individual). The modules should not be made too large, so that the number of interactions or dependencies inside a module remains limited. This complexity reduction makes it possible to describe all modules with parametric technology. The resulting hierarchy of modules is capable of representing any system, regardless of its overall complexity, using parametric technology. A top-down oriented design process usually stops at a point where pre-existing solutions exist that fulfill the requirements of the corresponding Functional Units. More details about the theory of lifecycle modeling are given in [6] and [7]. 5 PRODUCT CREATION AS A COGNITIVE PROCESS The system model described in chapter 4 fits into the cognitive product development process described in chapter 3. While a top-down design process progresses, it uncovers several layers of detail, each layer corresponding to a level of the composition hierarchy of the system. This continues until pre-existing solutions are found that meet the requirements of the corresponding Functional Units.
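The two description levels of a module can be illustrated with a small sketch: a generic, parametric template that yields a specific description only once every parameter is bound. The class and method names are hypothetical, chosen only to mirror the terminology of the text:

```python
class GenericModule:
    """Level (1): a generic, parametric description of a module."""

    def __init__(self, template, parameters):
        self.template = template
        self.parameters = set(parameters)  # parameters still open

    def specialize(self, **values):
        """Produce a level-(2) specific description: every parameter
        of the generic template must receive a value."""
        missing = self.parameters - values.keys()
        if missing:
            raise ValueError(f"unbound parameters: {sorted(missing)}")
        return {"template": self.template, **values}
```

A Lot, Individual or Occurrence record would then be a further specialization of such a specific description.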
These final (pre-existing) solutions are depicted by the half circles marked F at the bottom of the figure. The system modules that are depicted by rectangles with rounded ends in Figure 7 may also be based on pre-existing solutions. These solutions may be reused in parametric form, so that parameter values still have to be defined, or in explicit, non-parametric form. In either case it will be possible to replace older solutions that were chosen in previous designs by new solutions. This principle is shown in Figure 9. It shows on the left the configuration hierarchy of the design of an initial product, and on the right that of a revised design, possibly a next version of this product. The solutions coloured white are reused without modification. The solutions coloured black are new. The solutions coloured grey are reused but with some modification. The latter group may make use of the same generic (parametric) template, but with different parameter values. (Fig. 8. Modular system decomposition as an integral part of the cognitive design process. Fig. 9. Reused solutions that are not changed are coloured white, solutions that are changed but based on the same generic template are coloured grey, and fully new solutions are coloured black.) This idea can be loosely compared with the configuration of a personal computer. At the top level of a computer model, a computer has a processor, primary memory and secondary memory. The architecture of a computer is such that for each functional unit different solutions can be installed: a computer may use different processors, different primary memories and different secondary memories. To replace one by another is often as simple as pulling out one card or device and plugging in another.
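The colour scheme of Figure 9 amounts to a comparison of two configurations. A minimal sketch, assuming (purely for illustration) that each configuration maps a functional unit to a (template, parameter values) pair:

```python
def classify_solutions(previous, revised):
    """Colour each solution of a revised design as in Figure 9.
    A configuration maps a functional unit to (template, parameters)."""
    colours = {}
    for unit, (template, params) in revised.items():
        if unit not in previous:
            colours[unit] = "black"                 # fully new solution
        elif previous[unit] == (template, params):
            colours[unit] = "white"                 # reused unchanged
        elif previous[unit][0] == template:
            colours[unit] = "grey"                  # same template, new values
        else:
            colours[unit] = "black"                 # replaced by a new solution
    return colours
```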
Although two primary memory cards may have different capacities and be produced by different vendors, the chips or other electronic components may be obtained from the same sub-supplier. Hence, re-use of solutions may occur on any level. Using this principle, design becomes basically a process of configuring solutions. If each module - i.e. each solution - is traced from design to subsequent lifecycle phases, lifecycle performance knowledge becomes available for each object. (Fig. 10. The three types of hierarchy, filled with imaginary or actual data, form the basis for cognitive product development. It starts at the lower left corner (a) with the specification of a product using the generic design objects. During and after the realization of the product, real world data are collected, analyzed, combined and generalized, and made available for use and re-use in future projects.) This knowledge will be acquired for individuals at different occurrences, but can be generalized and, after statistical processing, be made available to generic (parametric) descriptions. This means that for all objects marked white or grey in Figure 9, product lifecycle knowledge is available for design. This knowledge can be used for gradual improvement of a design. The result is that the generic parametric design objects in design systems provide access to a huge amount of lifecycle data associated with previous applications and implementations. The more this system is used, the more experiences become available for design.
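The statistical generalization of occurrence-level experience back into the generic descriptions can be sketched as a simple aggregation. The record keys are invented here for illustration; any real system would carry far richer lifecycle records:

```python
from collections import defaultdict
from statistics import mean

def generalize(occurrence_records):
    """Aggregate measurements collected per individual occurrence into
    statistics attached to the generic template each object derives from."""
    by_template = defaultdict(list)
    for record in occurrence_records:
        by_template[record["template"]].append(record["value"])
    return {t: {"n": len(v), "mean": mean(v)}
            for t, v in by_template.items()}
```

The resulting per-template statistics are what a designer would see when selecting a generic object in a new project.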
Design then becomes a cognitive process, where generic, parametric templates built on top of the knowledge acquired by other experts in the product lifecycle play the role of cognitive schemes. Figure 10 shows horizontally the three types of hierarchy (i.e. the conceptual hierarchy of implicit parametric objects, the specification hierarchy of explicit objects, and the installation hierarchy of individual objects) and vertically actual versus imaginary data. The cognitive cycle starts with the specification of a new product (10a, lower left corner). This specification can be either in explicit or implicit form. In the latter case, use is made of a generic parametric model. The result is an Imaginary Specification Hierarchy. The individual components are derived from the specification, resulting in an Imaginary Installation Hierarchy (b). After realization of the product, real world experiences are collected, first on an individual level (c) and then combined via statistical analysis (d). This knowledge can then be generalized and added to a generic (parametric) product model. From then on it will be available at the start-up of new design projects. After several traversals the solution base becomes richer and offers increasing levels of historic life-cycle knowledge to the designer. 6 APPLICATIONS 6.1 Early Applications Based on Parametric Technology Most principles described in this paper have been applied, implemented and improved in projects of different kinds and for different industrial sectors, such as mechanical products, ship-building, electronics, automotive and construction. The first detailed software implementation was for a manufacturer of interior walls in 1982, and is described in some detail in [7]. A second case was the implementation of a feature-based, fully integrated CAD/CAM solution for a manufacturer of ship propellers. It is described in [4]. Both cases led to significant efficiency gains in the overall production process.
But they lacked the knowledge feedback to support the cognitive process described in this paper. 6.2 Application in a Large Project for the Oil and Gas Sector The case that will be described in more detail here did address the latter subject. For practical reasons it was not based on parametric technology but on a data warehouse supported by a PDM system. This case concerns the realization of 29 almost identical plants in a serial construction process. Below the surface of Groningen, a province in the northern part of the Netherlands, lies one of the largest reservoirs of natural gas in the world. This reservoir has been exploited since 1958 and is now more than half empty. As the natural gas pressure has dropped, there was recently a need to install compression units. Also, as the installations were nearing the end of their lifetime and had high operational costs, there was a need to renovate them. The company that exploits this gas reservoir - NAM, a joint venture of Shell and Exxon - decided to contract this huge effort as an integrated Design-Build-Maintain project. The project started in 1996 and has a duration of at least 25 years. The natural gas is exploited via hundreds of pipelines that reach the surface of the Earth at 29 locations, called clusters, distributed over a large area of land (Fig. 11). Each cluster is equipped with a small plant for the drying and cleaning of the gas, and for the separation and processing of pollutants. An aerial photo of one of these clusters is shown in Figure 12. The 29 plants cannot all be constructed at once. In the most favorable scheme, between 2 and 3 plants per year would be constructed. Consequently, construction of the last plant would start 12 to 15 years after the first one. (Fig. 11. Northern part of The Netherlands with the towns Groningen and Delfzijl. The light grey area is the Groningen gas-field. Locations of gas production units (so-called clusters) are marked with dots.)
In these years, construction, operation and maintenance personnel gain a lot of experience that can be used to improve the quality of the design, the quality of processes, and the reduction of overall lifecycle costs. Furthermore, technology and science will continue to develop. Equipment such as pumps and valves, measuring devices and control systems will be improved. It therefore makes sense to incorporate the potential of new knowledge into a design. But, on the other hand, it may be a disadvantage for operation and maintenance if all plants become different. Hence, it was decided to strive for an optimum between functional uniformity and technical differentiation. This problem was solved by organizing the design as a modular system, in line with the principles described in this paper. Wherever possible, the same solutions would be chosen for each plant, resulting in uniformity. This also made it possible to track lifecycle experiences, enabling building and maintenance processes to be further optimized. The goal was to build the last plant for 70% of the costs of the first. New knowledge that could improve the design would be incorporated in new modules that replace older modules. Application of modularization principles was essential here: it had to be avoided that a simple design change would propagate too far into other places of a design. An important tool for the realization of this concept was the development of the knowledge feedback system, see Figure 14. (Fig. 12. Aerial photo of one gas production cluster near a canal. The wells surface at the light-grey rectangular area. The gas is treated in a plant before it is supplied to the international gas distribution network.) Knowledge created in each process would be used by that process for continuous improvement. But this knowledge also had to be made available to the design and planning disciplines, so that design and planning could be further optimized. The latter is also called front loading.
Three types of data and knowledge sources are identified; see also Figure 13. The first is automated data collection from sensors and other equipment in the plant. The second is non-automated data collection, such as from inspection reports. Inspection reports are entered directly as data into a computer, such as a laptop or a hand-held device. These two sources of data are still raw and need to be processed before they become useful. Figure 13 shows that this is a two-stage process. Raw data may be analyzed and diagnosed for operational usage. Not all of that data is useful for other purposes. The filtered data are stored for long-term data analysis, such as for tactical purposes (maintenance planning and scheduling) and strategic purposes (continuous improvement). Tactical analysis can be supported by a knowledge system using rule-based inference, while strategic analysis can be supported by data mining technology. The third source is explicit knowledge recording, such as in the form of ideas and suggestions for improvement. The various kinds of data and knowledge associated with design modules are stored in a common data warehouse and managed with the support of a PDM system. (Fig. 13. The GLT project uses three principal sources of data collection (top) that are analyzed and processed for operational, tactical and strategic usage.)
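The two-stage processing of raw data in Figure 13 can be sketched as follows. The simple threshold tests stand in, purely for illustration, for the rule-based inference the project actually used:

```python
def process(raw_readings, alarm_limit, noise_floor):
    """Stage 1: operational analysis and diagnostics on the raw stream.
    Stage 2: filtering, so only meaningful readings enter the long-term
    collection used for tactical and strategic analysis."""
    operational_alerts = [r for r in raw_readings if r > alarm_limit]
    long_term_store = [r for r in raw_readings if r > noise_floor]
    return operational_alerts, long_term_store
```

The long-term store is the part that would feed maintenance planning and, via data mining, continuous improvement.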
Apart from the PDM system, a large number of other computer applications are used in this project, including a variety of CA-applications, ERP, maintenance management and operations management systems. The logical knowledge structure as described in this paper affects most - if not all - applications in use. However, practical limitations made it unviable to change all applications according to the principles outlined in this paper. The reason was that software changes had to be done in a fully operational environment, which would disrupt the ongoing work too much. Therefore the principles were applied as a working practice within the organization, supported by the PDM/WFM software. Despite this restriction, the principles appear to be highly beneficial for the structuring and organization of knowledge, and they supported continuous improvement of the design, construction and servicing processes. Benefits result from cost reductions and higher end-user value. Lifecycle costs are estimated to be reduced by between 25 and 30%, while the system also contributes to other performance factors such as higher availability, better reliability and increased safety [6]. (Fig. 14. The creation of a learning and innovative organization by attaching knowledge created in all phases of the product lifecycle to design objects. This results in discipline-specific knowledge pools and an integrated knowledge pool that is available for designers in new projects.) 7 CONCLUSIONS AND RECOMMENDATIONS The theory and methodology described here aims at the reuse and improvement of design and engineering solutions. It supports Continuous Improvement (CI) as a task that is fully integrated with regular business processes.
Generic design objects that are available in the design and planning systems give designers and engineers access to actual performance data of these objects in earlier projects. This enables them to learn from the past, even if the designers were not personally involved in those projects. The improvement process relies not only on the creation of ideas by the people involved in each business process, but also on the analysis of data that are collected automatically via sensors or via inspection reports, and on knowledge records. Early implementations were based on parametric technology and led to substantial efficiency gains in business processes. Parametric technology offers the possibility to re-use design knowledge even in the context of entirely new specifications. More details about these applications can be found in [7], [4] and [6]. A more recent application based on a PDM-based data warehouse closed the cognitive cycle and supports a process of continuous improvement in a large construction project for the oil and gas sector. Parts of the theory have been published and/or used for standards for the exchange and sharing of product data, but are in these contexts not presented as a solution for cognitive processes. It could be of interest for users and application vendors to explore this aspect of the presented theory further. Generic software that supports the theory and methodology via parametric technology did exist in the past but was not maintained. Future applications would benefit from redevelopment of this software. 8 REFERENCES [1] AIAG. Engineering change management - business case. AIAG Collaborative Engineering Steering Committee, 2005. [2] Bessant, J., Caffyn, S. High-involvement innovation through continuous improvement. Int. J. Technology Management, vol. 14 (1997), no. 1, p. 7-28. [3] Gielingh, W.F. General AEC reference model (version 4); Document N329 of ISO TC184/SC4/WG1, Oct. 1988. Published as TNO Report BI-88-150.
[4] Gielingh, W., Suhm, A. IMPPACT reference model for integrated product and process modelling. Springer-Verlag, ISBN 3-540-56150-1 / 0-387-56150-1, 1993. [5] Gielingh, W., Los, R., Luijten, B., van Putten, J., Velten, V. The PISA project, a survey on STEP. Aachen: Shaker Verlag, ISBN 3-8265-1118-2, 1996. [6] Gielingh, W.F. Improving the performance of construction by the acquisition, organization and use of knowledge. Delft, p. 372, ISBN 90-810001-1-X, 2005. [7] Gielingh, W.F. A theory and methodology for the modelling of complex systems. Submitted for publication in the Journal for IT in Construction, 2008. [8] Lindberg, P., Berger, A. Continuous improvement: design, organization and management. Int. J. Technology Management, vol. 14 (1997), no. 1, p. 86-101. [9] Morris, P.W.G., Hough, G.H. The anatomy of major projects. John Wiley & Sons, ISBN 0-471-91551-3, 1987. [10] Neisser, U. Cognition and reality - principles and implications of cognitive psychology. New York, ISBN 0-7167-0477-3, 1976. [11] Nonaka, I., Byosiere, P., Borucki, C.C., Konno, N. Organizational knowledge creation theory. International Business Review, 3 (1994). [12] Nonaka, I., Toyama, R., Konno, N. SECI, Ba and leadership: a unified model of dynamic knowledge creation. Long Range Planning, vol. 33 (2000), no. 1. [13] Rocha, L.M. Complex systems modeling: using metaphors from nature in simulation and scientific models. BITS: Computer and Communications News, Los Alamos National Laboratory, November 1999. [14] Shimizu, H. Ba principle: new logic for the real-time emergence of information. Holonics, 5(1), 1995. [15] Smith, E.E. Concepts and thought. The psychology of human thought. Cambridge: Cambridge University Press, ISBN 0-521-32229-4, 1988. [16] Womack, J., Jones, D., Roos, D. The machine that changed the world. New York, 1991.
Paper received: 28.2.2008 Paper accepted: 15.5.2008 Product Family Modelling in Conceptual Design Based on Parallel Configuration Grammars Eugeniu-Radu Deciu1,2* - Egon Ostrosi1 - Michel Ferney1 - Marian Gheorghe2 1Université de Technologie de Belfort-Montbéliard, France 2University "Politehnica" of Bucharest, Romania In this paper, an approach to product family modelling in conceptual design based on configuration grammars is proposed and developed. Product modelling is essential during conceptual design, both in the functional and the structural design spaces. However, there is no adapted formal representation to support the modelling of configurable products. Grammars can be considered powerful formal tools to represent the relationships inside configurable products. Therefore, we propose a configuration grammar design approach which is based on two types of grammars: a functional grammar for configuration and a structural grammar for configuration. The two configuration grammars work simultaneously on different abstraction levels of the product family. Therefore, the evolution of the product model, from the stage of functional structure to the stage of physical structure, can be represented in an adapted way. The configuration grammars design approach constitutes an effective tool for the representation and modelling of a configurable product family. The proposed design approach is validated by its application to a design case of industrial products - the design of an external gear pump family. The results of the application validate the pertinence of our approach. © 2008 Journal of Mechanical Engineering. All rights reserved. Keywords: product family design, configuration design, product configuration, modelling, CAD 1 INTRODUCTION Design of a configurable product family, or design for configuration, has emerged as an efficient means to deal with the new challenges of a constantly dynamic and volatile market [1].
Design for configuration is the process which generates a set of product configurations based on a configuration model, and is characterized by a configuration task [1] to [5]. The configuration task then consists in finding the configuration of a product by defining the relations between its components in order to satisfy a set of specifications and a set of constraints imposed on the product [1] to [5]. An essential characteristic of the conceptual design of a configurable product family is product modelling [1], [5] and [6]. The effective modelling of a configurable product family must be capable of representing the complex relationships between the components of a product on the one hand, and between the members of the family on the other hand [7] and [8]. This modelling must also be capable of representing the product structures both in the functional space and in the structural space of design. Furthermore, the modelling must deal with the problem of generation and derivation of the different products, and thus bring about the variety of new and innovative products [9]. However, in conceptual design, there is no adapted formal representation to support the modelling of configurable products [10]. Grammars can be considered powerful formal tools to represent the strong structural relationships inside configurable products [7], [8], [10] to [14]. Grammar-based design systems have the potential to automate the design process and allow a better exploration of design alternatives [15] to [17]. This paper proposes and develops a configuration grammar design approach to support computer-aided design for product family modelling. Two interrelated subjects are considered in this research: 1. What are the properties of configurable products? (Properties extraction of configurable products) 2. How can we develop generative configuration grammars capable of handling the structures of configurable products in both the functional and structural design spaces? *Corr.
Author's Address: Université de Technologie de Belfort-Montbéliard, Laboratoire M3M, 90 010 Belfort Cedex, France, d_eugen@hotmail.com (Development of Configuration Grammars Design Approach for Product Family Modelling) This paper is structured as follows. In the first section, the problem of design for configuration of a product family is presented and the use of design grammars during conceptual design is introduced. The second section develops the proposed configuration grammar-based design approach for product family modelling. The theoretical bases of the proposed approach are set out. First, the properties of the structures of configurable products are extracted. Then, using the extracted properties of the configurable structures, the functional and the structural configuration grammars are defined and developed. The inference of configuration grammars and the algorithm of generation are also indicated. The third section presents the case study of the modelling and representation of an external gear-pump family. The design case study illustrates and validates the proposed design approach based on configuration grammars in conceptual design. The conclusions and the perspectives of this research study are finally presented. 2 CONFIGURATION GRAMMAR-BASED DESIGN APPROACH FOR PRODUCT FAMILY MODELLING In this paper, we propose a grammar-based design approach for product family modelling in conceptual design, which is based on two configuration grammars (Fig. 1): - a functional grammar for configuration (FGC) and - a structural grammar for configuration (SGC). These two configuration grammars work in parallel in two design spaces, the functional space and the structural space (Fig. 1. The configuration grammar-based design approach), in order to represent, model and configure a product family.
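The configuration task introduced above - finding component combinations that satisfy a set of specifications and constraints - can be sketched as a brute-force search. This is illustrative only; real configurators use constraint propagation or grammar-driven generation rather than exhaustive enumeration:

```python
from itertools import product

def configure(options, constraints):
    """Enumerate all combinations of component options and yield those
    that satisfy every constraint (each constraint is a predicate over
    a candidate configuration dict)."""
    names = list(options)
    for combo in product(*(options[n] for n in names)):
        candidate = dict(zip(names, combo))
        if all(check(candidate) for check in constraints):
            yield candidate
```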
The physical structure must fulfil the functional structure of the product family; this means that the set of product functions has to be accomplished, in the structural space, by a set of physical solutions. So, the construction of the physical structure of the product family is based on its functional structure. In this paper we propose the development of both structures in parallel, by means of the configuration grammars approach. The functional structures are of great importance in the development of the configurable product family. The functional structure of a product is used to represent the product functions. The first configuration grammar, FGC, is used to construct and to represent the functional structure of the product family in the functional design space. The second configuration grammar, SGC, is used to construct and to represent the physical structure of the product family in the physical design space. The proposed SGC grammar has two forms of representation, respectively: - a structural attributed graph grammar for configuration (SAGGC), and - a structural grammar for configuration based on features (SGCF). The next sections develop the proposed configuration grammars. 2.1 Properties of the Structures of Configurable Products To define the configuration grammars, we have extracted a set of properties of the structures of configurable products [12]. From the engineering design point of view, the feature-component-module-product relationships are adequate structural means for a general product representation. Since such means are recursive, a proper granularity level of representation must be introduced to assess the possibility of design for configuration. We denote by structure(i) the level i of a precedence relationship. For instance, a component is a structure(2) level. The properties defined to represent and generate the structures of a configurable product family are as follows: Property 1.
The significant structures. The significant structures(i) handled by the grammars are: the primitive structures, or terminals; the intermediate structures, or non-terminals; and the final structure.

Property 2. The configuration features. Each structure(i) is provided with a particular set of attaching elements, called the set of configuration features [11] and [12].

Property 3. The configuration connections. Each pair of structures, structure(i)-structure(j), is connected through the configuration features. The association of two or more configuration features belonging to different structures generates new elements called configuration connections [11] and [12].

Property 4. Generation of new structures. From the interconnection between primitive or intermediate structures, a higher-level structure emerges [10]. The association of the configuration features produces joint features on the one hand, and tie features on the other [9] and [10]. The consequences of this property are [12]:
- Addition - a structure(i) can be added to the existing product structure (of the product family);
- Suppression - a structure(i) can be removed from an existing product structure (in a product family);
- Recurrence - the same structure(i) can be added (repeated) several times in order to generate and configure the product configuration;
- Replacement or swapping - a structure(i) of the product structure can be replaced by another structure(i);
- Change of attributes - a structure(i) can change some of its attributes.

Property 5. Geometric and topologic constraints. The interconnection between structures can occur if and only if the structures satisfy conditions defined on the geometric and topological domains (i.e. the geometric and topologic constraints).
For instance, spatial orientation indicates the relative spatial orientation of each structure in the product structure.

2.2 Functional Grammar for Configuration

Several authors have proposed functional grammar approaches to address the functional design space [7], [18] and [19]. In a similar way, we propose a functional grammar for configuration to generate the product functional structure. Let F = {f1, f2, f3, ..., fi, ..., fm}, with i ∈ [1, m], be the set of m functions of the product family FP; these functions define the functional structure, or functional graph, of each product in the family FP. To generate (construct) the functional graph characteristic of each product, we define the functional graph grammar for configuration (FGGC) as the graph rewriting system expressed as the quadruple:

FGGC = (VN, VL, P, S)   (1),

where:
- VN is the alphabet of labelled nodes, which represent the set of elementary functions of the product family;
- VL is the alphabet of node labels, which designate the function names;
- P is the set of graph productions (the graph production rules);
- S is the start symbol of the graph.

The alphabets of nodes and labels are defined as follows:
- The alphabet of labelled nodes is the union of the terminal and non-terminal alphabets of nodes: VN = VN_N ∪ VN_T.
- The alphabet of labels is the union of the terminal and non-terminal alphabets of labels: VL = VL_N ∪ VL_T, where VL_N = {f} is the non-terminal alphabet of labels and VL_T = {function_name_i}, i ∈ [1, m], is the terminal alphabet of labels; m is the total number of functions of the product family, and the variable function_name_i is the specific name of each function.
The set of productions P consists of the following rules:
P1) Generation of the first (non-terminal) node f of the functional graph;
P2) Transformation of a non-terminal functional node f into a terminal functional node function_name_i of the functional graph;
P3) Addition of a non-terminal functional node f to a final functional node function_name_i of the functional graph;
P4) Transformation of a terminal functional node function_name_i into a non-terminal functional node f of the functional graph;
P5) Suppression of a non-terminal functional node f from the functional graph.

Each production rule of the functional graph grammar for configuration (FGGC) is defined as a triple (GL, GR, T), composed of two graphs GL and GR, called the left side and the right side, and the insertion function T. A production rule is applied to a graph G whenever we wish to change the structure of G. More precisely, the production is carried out by locating an isomorphism of GL in G, eliminating GL from G, replacing GL with GR, and inserting GR in G using the function T.

2.3 Structural Configuration Grammar

In this approach, we propose representing and modelling a product family in the structural design space with the help of the structural grammar for configuration (SGC). The structural grammar for configuration has two forms of representation:
A) the first is based on a structural attributed graph grammar for configuration;
B) the second is based on a structural grammar for configuration based on features.
These two forms of the structural grammar work together and are complementary in their purpose of modelling and configuring a product family.
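The application of a production (GL, GR, T) described above can be sketched in code. The following is an illustrative sketch only, not the paper's implementation: the functional graph is simplified to a labelled digraph stored in a dict, the left sides are single labelled nodes (as in P1-P5, so the "isomorphism" search reduces to a label lookup), and all function and variable names are assumptions.

```python
# Hedged sketch: applying FGGC-style productions to a functional graph.
# Graph encoding: {"labels": {node_id: label}, "edges": [(src, dst)]}.

def find_occurrence(graph, gl_label):
    """Locate a node carrying the left-side label GL (trivial isomorphism
    search, since the left sides of P1-P5 are single labelled nodes)."""
    for node, label in graph["labels"].items():
        if label == gl_label:
            return node
    return None

def apply_production(graph, gl_label, gr_label, insert_after=None):
    """Replace an occurrence of GL by GR; the insertion function T is
    modelled here as simply reattaching the new node with one edge."""
    node = find_occurrence(graph, gl_label)
    if node is None:
        return False
    graph["labels"][node] = gr_label            # P2/P4: relabel in place
    if insert_after is not None:                # P3: add a node and an edge
        new = max(graph["labels"]) + 1
        graph["labels"][new] = insert_after
        graph["edges"].append((node, new))
    return True

# Start with the non-terminal node f, then apply P2 (terminalize it)
# and P3 (attach a fresh non-terminal node f to it).
g = {"labels": {0: "f"}, "edges": []}
apply_production(g, "f", "transmit_torque")
apply_production(g, "transmit_torque", "transmit_torque", insert_after="f")
```

After the two applications, node 0 carries the terminal label and a new non-terminal node hangs off it, ready for the next rewriting step.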
2.3.1 Structural Configuration Attributed Graph Grammar

The structural attributed graph grammar for configuration (SAGGC) is defined as the seven-tuple:

SAGGC = {N, E, A, B, S, P, O}   (2),

where:
- N = {NT, NN} is the alphabet of terminal and non-terminal nodes;
- E = {ET, EN} is the alphabet of terminal and non-terminal edges;
- A is the set of node attributes;
- B is the set of edge attributes;
- S is the start symbol;
- P is the set of production and transformation rules for the nodes of the attributed graph grammar.

In the above definition, the sets NT and NN are defined as follows:
- NT = {V^T_structure, V^T_joint-tie features} is the terminal alphabet of nodes and consists of a terminal alphabet of significant configuration structures and a terminal alphabet of joint and tie features of configuration. The two alphabets are defined as follows:
  - V^T_structure = {structure(i)} is a finite, non-empty set of significant, primitive configuration structures of a product, called the terminal alphabet of configuration structures. For example, the terminal structures are the parts of a configurable mechanical product.
  - V^T_joint-tie features = {feature(j)} is a finite, non-empty set of configuration features, called the terminal alphabet of joint and tie features. For example, the generic term feature(j) is defined by the following set: feature(j) = {ext_thread, cone, planar_circ_face, key_groove, parallel_groove, ext_spline, ext_cyl_face, ext_plane_face, gear, ext_cyl_area, ext_prismatic_area}.
- NN = {V^N_structure, V^N_joint-tie connections, S} is the non-terminal alphabet of nodes and consists of a non-terminal alphabet of significant structures, a non-terminal alphabet of joint and tie connections, and the start symbol S.
  - V^N_structure is a finite, non-empty set of non-primitive significant configuration structures, called the non-terminal alphabet of configuration structures.
The non-terminal structures are the modules, products and families of a configurable mechanical product family.
  - V^N_joint-tie connections = {connection(k)} is a finite, non-empty set of configuration connections between the configuration features, called the non-terminal alphabet of joint and tie connections. For example, the generic term connection(k) is defined as follows: connection(k) = {plane_connection, circ_plane_connection, cylindrical_connection, cylindrical_stepped_connection, slot_connection, threaded_connection}.
- E = {ET, EN} is the alphabet of terminal and non-terminal edges.
  - ET is the terminal alphabet of edges. A terminal edge in ET represents a relation between a terminal of V^T_structure and a terminal of V^T_joint-tie features, where ET ⊆ V^T_structure × V^T_joint-tie features.
  - EN is the non-terminal alphabet of edges. A non-terminal edge in EN represents a relation between a non-terminal of V^N_structure and a non-terminal of V^N_joint-tie connections, where EN ⊆ V^N_structure × V^N_joint-tie connections.
- S is the start symbol of the configuration grammar.
- P is the set of graph productions of the attributed graph grammar.

The set of production and transformation rules satisfies the property of generation of new structures of the configuration grammar. The inference of the properties of addition, suppression and recurrence makes it possible to generate new structures; consequently, the corresponding rules define the set of production rules of the configuration attributed graph grammar. The properties of replacement and change of attributes make it possible to transform a structure of a given level (primitive, intermediate or final) into another structure; consequently, the corresponding rules define the set of transformation rules of the configuration attributed graph grammar.
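The pairing of joint/tie features into configuration connections, as defined by the alphabets above, can be sketched as a lookup from feature pairs to connection types. This is an illustrative sketch only: the pairing table below is a small assumption built from the example alphabets, not the paper's complete mapping, and the function name is invented.

```python
# Hedged sketch: deciding which configuration connection (a non-terminal
# from V^N_joint-tie connections) a pair of configuration features (from
# V^T_joint-tie features) can generate. The table is illustrative.

FEATURE_PAIRS_TO_CONNECTION = {
    frozenset(["ext_plane_face"]): "plane_connection",
    frozenset(["ext_plane_face", "planar_circ_face"]): "circ_plane_connection",
    frozenset(["ext_cyl_face"]): "cylindrical_connection",
    frozenset(["ext_thread"]): "threaded_connection",
    frozenset(["key_groove"]): "slot_connection",
}

def connection_for(feature_a, feature_b):
    """Return the connection(k) generated by two joint/tie features, or
    None when no configuration connection is defined for that pair."""
    return FEATURE_PAIRS_TO_CONNECTION.get(frozenset([feature_a, feature_b]))
```

A frozenset key makes the lookup order-independent, matching the symmetric role the two features play in a connection.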
Thus, in our SAGGC approach, we distinguish three types of production rules:
- the production of addition (P_addition),
- the production of suppression (P_suppression),
- the production of recurrence (P_recurrence).
The graph representation of these productions is indicated in Figure 2. Also, in our SAGGC approach, we distinguish two types of transformation rules:
- the transformation of replacement, and
- the transformation of change of attributes.
The transformation of replacement T_replacement has the graph representation of Figure 3. Formally, the transformation T_replacement has the expression:

T_replacement: {X, x} → {Y, y}   (3).

Fig. 2. Set of production rules used in the SAGGC approach
Fig. 3. Example of the transformation of replacement of structures

The transformation of change of attributes T_chg_attb has the graph representation of Figure 4. Formally, the transformation T_chg_attb has the expression:

T_chg_attb: {X(A1), x} → {X(A2), x}   (4).

O = {+, -} is a set of operators applied to the structures. These operators satisfy Property 4.

2.3.2 Structural Configuration Grammar Based on Features

A configuration language describes the generation of configuration structures, joint elements and tie elements [10]. A configuration grammar based on features provides the formal, generic description of this configuration language. The structural grammar for configuration based on features, SGCF, is defined as the 8-tuple:

SGCF = {V^T_structures, V^T_joint-tie features, V^N_structures, V^N_joint-tie connections, S, ∇, Δ, P}   (5),

where:
- V^T_structures = {structure(i)} is a finite, non-empty set of primitive significant structures, called the terminal alphabet of configuration structures, where structure(i) is a terminal configuration structure.
- V^N_structures = {STRUCTURE(i), S} is a finite, non-empty set of non-primitive significant structures, called the non-terminal alphabet of configuration structures, where STRUCTURE(i) is a non-terminal configuration structure and S is a special non-terminal symbol called the start symbol.
- V^T_joint-tie features = {∅, feature(j), ∇} is a finite, non-empty set of configuration features, called the terminal alphabet of joint and tie features. The term feature(j) has the same meaning as in the SAGGC definition.
- V^N_joint-tie connections = {∅, ..., connection(k), ..., ∇, Δ} is a finite, non-empty set of configuration connections between the configuration features, called the non-terminal alphabet of joint and tie connections. The term connection(k) has the same meaning as in the SAGGC definition.
- S, ∇ and Δ are, respectively, the start symbol of configuration structures, the start symbol of joint connections and the start symbol of tie connections.
- P is a finite, non-empty set of production rules.

Fig. 4. Example of the transformation of change of attributes of structures

The following conditions must hold concerning the terminal and non-terminal vocabularies of configuration structures and of joint-tie features and connections:

V^T_structures ∩ V^N_structures = ∅ and V^T_joint-tie features ∩ V^N_joint-tie connections = ∅.

A production rule P then operates on the three levels simultaneously:

P: α → β, Γα → Γβ, Δα → Δβ,

where α → β rewrites configuration structures, Γα → Γβ rewrites the joint elements and Δα → Δβ rewrites the tie elements (over V^T_joint-tie features and V^N_joint-tie connections).

To define the generation of new structures in a non-ambiguous way, we have to state a set of conditions on the production rules. The interconnection between structures can occur if and only if the structures satisfy conditions defined on the geometric and topological domains. These conditions are represented by the geometric constraints and topological constraints. For instance, spatial orientation indicates the relative spatial orientation of each structure in the product structure. The geometric and topological constraints are imposed at each level of the production rules.
This means that the grammar production rules must meet obligatory conditions, or constraints, before being applied. A conditional production rule is then defined as follows:

P: α →_C β, Γα →_C Γβ, Δα →_C Δβ   (6),

where C = {C_{α→β}, C_{Γα→Γβ}, C_{Δα→Δβ}} are the semantic conditions associated with each level of a production rule.

Compared with the graph grammar representation of Fu et al. [20], the main difference is the purpose of our approach, which is to configure a product family. Firstly, our contribution is to define the concept of significant regions as structures with a functional signification in the configuration of the part. Secondly, the introduction of the concepts of configuration features and configuration connections on the one hand, and of conditional grammars on the other, constitutes important contributions that make it possible to handle and connect the structures in order to generate a variety of part configurations.

2.4 Inference of Configuration Grammars

2.4.1 Inference of Configuration Grammars on the Multiple Abstraction Levels of the Product Structures

The configuration grammar design approach is based on two parallel grammars developed to address product family modelling in two design spaces during conceptual design: the functional space and the structural space. Moreover, the two proposed configuration grammars work simultaneously on four abstraction levels of the significant structures:
- the part level,
- the module level,
- the product level,
- the family level.
The functional grammar is used in the functional design space to generate the functional structure of a product; this structure is then translated, in the structural space, into physical structure solutions corresponding to the four levels above. In this paper we present only the inference of configuration grammars on the part level and the product level of configurable product structures.
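A conditional production rule of the kind in Eq. (6) amounts to a rewrite guarded by its semantic conditions C. The sketch below is illustrative only: the coaxiality check stands in for the paper's geometric/topological constraints, and all names and the dict encoding are assumptions.

```python
# Hedged sketch of a conditional production rule: the rewrite fires only
# when every semantic condition in C holds for the candidate structures.

def coaxial(s1, s2):
    """Invented geometric condition: both structures share one axis."""
    return s1["axis"] == s2["axis"]

def conditional_join(s1, s2, conditions=(coaxial,)):
    """Apply the production 'join two structures into an intermediate
    structure' only when all conditions in C are satisfied; otherwise
    the production is not applicable and None is returned."""
    if all(cond(s1, s2) for cond in conditions):
        return {"kind": "intermediate", "parts": [s1, s2],
                "axis": s1["axis"]}
    return None

cyl = {"kind": "cyl_area", "axis": (1, 0, 0)}
gear = {"kind": "spur_gear", "axis": (1, 0, 0)}
tilted = {"kind": "spur_gear", "axis": (0, 1, 0)}
```

Here `conditional_join(cyl, gear)` succeeds, while `conditional_join(cyl, tilted)` is rejected by the guard, mirroring a production whose geometric constraint is not met.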
Next we indicate the application steps of our configuration grammar approach for part-level generation.

2.4.2 Inference of Configuration Grammars on the Part Level

On the part level, our structural configuration graph grammars have, within certain limits, a representation similar to the graph grammar form suggested by Fu et al. [20] for part structure generation. The steps of inference of the configuration grammars on the part level are:
1) Generation of the functional structures of the product.
2) Identification of the set of elementary functions to be accomplished by the part.
3) Identification of the possible configuration features which can accomplish and materialize the required elementary functions.
4) Joining of the features and identification of the regions with a functional signification in the component structure.
5) Joining of the functional significant regions, and generation and validation of the various configurations of the part.

According to the variation of the elementary functions, the structural space is concretized in several alternatives of part regions. These regions are combined in order to generate several part configurations, and the resulting configurations are validated against the main functions defined initially. These steps are summarized in the algorithm of generation of the configuration structures of a part (Fig. 5). The same algorithm is applied to all abstraction levels of a product family in order to generate a variety of configurations, where each level has its corresponding significant structure. Concerning the application of the generation algorithm, an interesting research direction could be how to improve the extraction of design information from the database. In this direction, different methods and approaches could be considered, such as the probability-based approach proposed by Kim et al. [21] and the conceptual design algorithm proposed by Žavbi and Duhovnik [22].
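The five inference steps above can be sketched as a small enumeration pipeline: elementary functions are materialized into candidate features (step 3), feature combinations form region alternatives (step 4), and region alternatives are combined into part configurations (step 5). This is an illustrative sketch under stated assumptions: the function-to-feature mapping is invented for the example (its feature names are drawn from the terminal alphabet feature(j)), and the final validation against the main functions is omitted.

```python
# Hedged sketch of the part-level generation steps (cf. Fig. 5).
from itertools import product

FUNCTION_TO_FEATURES = {          # step 3: candidate features per function
    "tighten": ["ext_thread"],
    "block":   ["key_groove", "parallel_groove"],
    "guide":   ["cone", "ext_cyl_face"],
}

def regions_for(functions):
    """Steps 2-4: one region alternative per combination of features
    materializing the region's elementary functions."""
    feature_sets = [FUNCTION_TO_FEATURES[f] for f in functions]
    return [list(combo) for combo in product(*feature_sets)]

def part_configurations(region_functions):
    """Step 5: combine one alternative of every region into a part
    configuration (validation against the main functions omitted)."""
    alternatives = [regions_for(funcs) for funcs in region_functions]
    return [list(combo) for combo in product(*alternatives)]

# A part with two regions: one accomplishing tighten/block/guide,
# one accomplishing guide only.
configs = part_configurations([["tighten", "block", "guide"], ["guide"]])
```

With two alternatives each for "block" and "guide", the first region yields four alternatives and the second two, giving eight candidate part configurations before validation.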
Fig. 5. Algorithm of generation of the product structures using the configuration grammars (starting from an expert knowledge base, the sets of configuration features and configuration connections are defined; the couples of joint and tie features are identified and compared with the available configuration connections; the significant regions are connected and the intermediate structures generated; the loop repeats until all regions are connected, and the final structures are then generated)

3 INDUSTRIAL APPLICATION

3.1 External Gear Pump Structure

In this section, we present the application of our configuration grammar approach to the design case study of an external gear pump family. External gear pumps are widely used in hydraulic systems due to their simplicity, reliability and very high power ratings. External gear pumps are fixed-displacement pumps used in different applications, such as mobile hydrostatic units, transport equipment, machine tools and other applications with a large number of variants and configurations. The structure of the gear pump is composed of the following main components [23] (Fig. 6): flange (1); body (2); thrust seal (3); bearings (4); shafts (5, 5'); rear cover (6); screws (7); dowel pin (8); lip seal (9); woodruff key (10); washer (11); nut (12).

Fig. 6. Generic structure of the external gear pump

The purpose of this application is to represent, model and generate the structures of an external gear pump family with the help of configuration grammars inferred on different levels of abstraction. In this case study, we indicate the inference of configuration grammars on the part level and on the product level, respectively.

3.2 Inference of Configuration Grammars on the Part Level

First, we present the inference of the configuration grammar approach on the part level. The steps of inference correspond to those presented in Section 2.4.2.
3.2.1 Functional Significant Regions

On the part level, one or more configuration features are joined together in order to generate a region with a functional meaning. We call this region a functional significant region. Let us take as an example the generation of the driving shaft of the external gear pump. The shaft structure is composed of three main functional significant regions:
- Region I - the left area of the shaft;
- Region II - the central area of the shaft; and
- Region III - the right area of the shaft.
Each of these three regions is made up of a collection of one or more configuration features which are "assembled" together in order to accomplish a common function, i.e. the corresponding function of the region (Fig. 7).

Fig. 8. Graph productions to generate the functional significant regions of the driving shaft
Fig. 9. Productions to generate the functional structure of region I

The set P of graph productions describing the generation of the shaft structure, starting from its functional significant regions, is given in Figure 8. In the following section, we present the generation of each of the three significant regions of the shaft.

Generation of region I of the structure: As specified above, the feature association is made with the purpose of accomplishing one or many functions of the product. In this case, the function which must be accomplished by region I (the left area of the part) is "to transmit the torque of the movement", with the elementary sub-functions "to tighten", "to block" and "to guide". The three functions are materialized respectively by three corresponding configuration features. Other geometric alternatives of the configuration features are available, in order to generate various configurations of the same region (Fig. 7).
Thereafter, we present the productions inferred to generate the functional structures of the regions and, respectively, the physical structures of the regions. In the functional space, the functional structure of region I is generated by the inference of the productions of the FGC grammar (Fig. 9). In the structural space, the production rules used to generate region I of the driving-shaft structure, including the productions of suppression and addition, are given in Figure 10.

Fig. 10. Productions to generate region I of the driving shaft

Below are indicated the formal and the graph representations of the first production. The formal representation of the production which generates region I is:

(CONICAL_THREADED_AREA) → (CYL_THREADED_AREA)(CONICAL_AREA)
(PlaneConnection) → (ExtPlaneFace1)(ExtPlaneFace2)
(ExtThread) → (ExtThread)(∅)
(ExtCylFace1) → (∅)(ExtCylFace2)
(KeyGroove) → (∅)(KeyGroove)
(ConicalLatFace) → (∅)(ConicalLatFace)   (7).

The graph representation of this production is given in Figure 11: the CYL_THREADED_AREA and the CONICAL_AREA are joined through a plane connection between ExtPlaneFace1 and ExtPlaneFace2, generating the CONICAL_THREADED_AREA of region I.

Fig. 11. Graph productions to generate region I

Generation of region II of the structure: In the case of the central area of the shaft, the functions to be accomplished are "to turn around its own axis" and "to transmit the rotational movement to the driven shaft". We distinguish two cases of possible solutions: the first, when the two functions are materialized together in the same single solution, and the second, when the functions are materialized by two distinct solutions. In each case, the two functions are materialized by a corresponding set of configuration features. In the functional space, the production used to generate the functional structure of region II is given in Figure 12.
Fig. 12. Productions to generate the functional structure of region II

The formal representation of the production which generates region II is:

(CONICAL_THREADED_AREA) → (CYL_THREADED_AREA)(CONICAL_AREA)
(PlaneConnection) → (ExtPlaneFace1)(ExtPlaneFace2)
(ExtThread) → (∅)(ExtThread)
(ExtPlaneFace2) → (∅)(ExtPlaneFace2)
(KeyGroove) → (∅)(KeyGroove)
(ConicalLatFace) → (∅)(ConicalLatFace)   (8).

In the structural space, the production which generates region II of the structure is shown in Figure 13; its graph representation is given in Figure 14: the CYL_AREA and the SPUR_GEAR are joined through a plane connection between their external plane faces, generating region II.

Fig. 13. Production to generate region II of the driving shaft
Fig. 14. Graph productions to generate region II

Generation of region III of the structure: In the case of region III of the structure, the functions to be accomplished are "to turn around its own axis" and, if required, "to transmit the torque of the movement". For this example, we considered only the first function, so in the functional space the functional structure of region III consists of one function only (Fig. 15). In the structural space, the corresponding feature which materializes this function is the external cylindrical face (Fig. 16).

Fig. 15. Production to generate the functional structure of region III
Fig. 16. Graph production to generate region III of the driving shaft

Generation of the final structure: This stage consists in connecting the structures previously generated for the three main regions into one single structure, i.e. the final structure of the driving shaft.
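The hierarchical rewriting used by productions such as (7), expanding a non-terminal area into sub-areas and finally into terminal configuration features, can be sketched as a small derivation tree. This is an illustrative sketch only: the assignment of terminal features to the two sub-areas below is an assumption for the example, not the paper's exact decomposition.

```python
# Hedged sketch: deriving region I from its non-terminal area down to
# terminal configuration features, recorded as a nested-dict tree.

PRODUCTIONS = {
    "CONICAL_THREADED_AREA": ["CYL_THREADED_AREA", "CONICAL_AREA"],
    "CYL_THREADED_AREA": ["ext_thread", "ext_cyl_face"],      # assumed
    "CONICAL_AREA": ["conical_lat_face", "key_groove"],       # assumed
}

def derive(symbol):
    """Recursively expand a symbol until only terminal features remain;
    a symbol without a production is treated as a terminal feature."""
    if symbol not in PRODUCTIONS:
        return symbol
    return {symbol: [derive(s) for s in PRODUCTIONS[symbol]]}

tree = derive("CONICAL_THREADED_AREA")
```

The resulting tree makes the two-level structure of region I explicit: the non-terminal area splits into the two sub-areas, each carrying its terminal features.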
Generating the configuration of the part structure implies arranging the structures of the significant regions I, II and III, connecting them according to the set of configuration productions (Fig. 8), and validating the final generated structure against the initial functional structure. In the functional space, the production rule to generate the functional structure of region II' is given in Figure 17; in the structural space, the production used to generate this structure is given in Figure 18.

Fig. 17. Production to generate the functional structure of region II'
Fig. 18. Production to generate region II' of the driving shaft

The production rule to generate the final functional structure of region III' is given in Figure 19. In the structural space, the production used to generate the physical structure of the shaft is given in Figure 20; the result of this production inference represents the final structure of the driving shaft.

Fig. 19. Production to generate the functional structure of region III'
Fig. 20. Production to generate the final structure of the driving shaft

Once the final structure is generated, we have inferred the transformation of replacement for each of the three functional significant regions constituting the shaft structure, and we have generated several configurations of the shaft. Some of the generated configurations are indicated in Figure 21. The same configuration mechanism is used to generate the other parts of the external gear pump. The alternative configurations of the parts are then used in the generation of the structures of an external gear pump family.

3.2.2 Inference of Configuration Grammars on the Product Level

The configuration grammars are inferred on the product level in order to generate different instantiations of external gear pumps.
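The transformation of replacement, applied region by region (or part by part at the product level), enumerates a family of configurations from a fixed structure. The sketch below is illustrative only: the alternative region structures listed are assumptions made up for the example, loosely named after the regions discussed above.

```python
# Hedged sketch: enumerating shaft configurations by swapping one
# region alternative for another (the transformation of replacement).
from itertools import product

REGION_ALTERNATIVES = {                       # illustrative alternatives
    "region_I":   ["conical_threaded_area", "cylindrical_threaded_area"],
    "region_II":  ["cyl_area_spur_gear", "cyl_area_ext_spline"],
    "region_III": ["cyl_area"],
}

def shaft_family():
    """Every combination of one alternative per region is one shaft
    configuration; moving between two configurations that differ in a
    single region is exactly one replacement transformation."""
    names = list(REGION_ALTERNATIVES)
    return [dict(zip(names, combo))
            for combo in product(*REGION_ALTERNATIVES.values())]

family = shaft_family()
```

With two alternatives for regions I and II and one for region III, the enumeration yields four shaft configurations, in the spirit of the variants of Figure 21.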
So, based on the significant structures on the one hand, and on the transformation rules on the other, we can produce new and novel product structures. The inference of the replacement rule on the product level produces the replacement of a part in a given product structure. By using this transformation rule on an external gear pump structure to replace several parts, we can generate a variety of gear pump configurations, i.e. a family of pumps. Figure 22 presents some configurations of gear pump structures generated by the inference of the transformation of replacement.

Fig. 21. Various configurations generated for the driving-shaft structure
Fig. 22. Various configurations of gear pumps generated after the inference of the transformation of replacement

4 CONCLUSIONS

In this paper we have proposed a product family modelling approach in conceptual design based on configuration grammars. The proposed approach is based on two configuration grammars: a functional grammar for configuration (FGC) and a structural grammar for configuration (SGC). These two grammars work in parallel and are used to generate simultaneously the functional structure and the physical structure of a configurable product family. The FGC grammar is used to construct and represent the functional structure of the product family in the functional design space. The SGC grammar is used to construct and represent the physical structure of the product family in the physical design space. The SGC grammar has two forms of representation: an attributed graph grammar for configuration and a grammar for configuration based on features. The contributions of our design approach are:
1. The configuration grammars are defined on the properties of the structures of configurable products.
2. The proposed grammars are based on feature-component-module-product relationships, considered adequate structural means for a general product representation.
3.
The functional domain and the structural domain are considered simultaneously in order to generate the functional structure and the physical structure of a configurable product family.
4. The configuration grammars, defined on the functional design space and on the structural design space, are complementary and work in parallel in the two design spaces.
5. An algorithm of generation of the product structures using the configuration grammars was proposed.
6. The configuration grammars work simultaneously on multiple abstraction levels of the significant structures.

We have applied the proposed configuration-grammar-based design approach to the design case study of an external gear pump family. In this design case, the configuration grammars were inferred on the part level and the product level in order to generate the external gear pump family. The results obtained from the inference of the configuration grammars validate the approach and demonstrate its interest. Our current research work concentrates on the possibilities of implementing the configuration grammar design approach in CAD software environments. The proposed method meets functional requirements in a qualified but not necessarily quantified way (e.g. stress-limit constraints). The introduction of the behaviour space in the definition of design grammars could be a new open direction of research. Another direction of research is how the proposed method can facilitate the application of analysis-based examinations.

5 REFERENCES

[1] Tiihonen, J., et al. Modelling configurable product families. Proceedings of the 12th International Conference on Engineering Design (ICED'99), vol. 2, Munich, p. 1139-1142, 1999.
[2] Snavely, G.L., Papalambros, P.Y. Abstraction as a configuration design methodology. Advances in Design Automation, (New York: ASME) DE-(65)-1, p. 297-305, 1993.
[3] Brown, D.C. Defining configuring.
Artificial Intelligence for Engineering Design, Analysis and Manufacturing, Cambridge University Press, vol. 12, p. 301-305, 1998.
[4] Sabin, D., Weigel, R. Product configuration frameworks - a survey. IEEE Intelligent Systems, vol. 13(4), p. 32-85, 1998.
[5] Soininen, T., Tiihonen, J., Männistö, T., Sulonen, R. Towards a general ontology of configuration. AIEDAM, vol. 12(4), p. 357-372, 1998.
[6] Männistö, T., Soininen, T., Sulonen, R. Modeling configurable products and software product families. Presented at the IJCAI'01 Workshop on Configuration, Seattle, USA, 2001.
[7] Siddique, Z., Rosen, D. Product platform design: a graph grammar approach. Proceedings of DETC'99, ASME Design Engineering Technical Conferences, 1999.
[8] Du, X., Jiao, J., Tseng, M. Graph grammar based product family modeling. Concurrent Engineering: Research and Applications, vol. 10(2), p. 113-128, 2002.
[9] Ostrosi, E., Ferney, M. Feature modeling grammar representation approach. AIEDAM, vol. 19(4), p. 245-259, 2005.
[10] Ostrosi, E., Ferney, M., Deciu, E.R., Garro, O. Feature-based reasoning for designing structural configuration in advanced CAD systems. 5th International Conference on Integrated Design and Manufacturing in Mechanical Engineering (IDMME 2004), 5-7 April, Bath, UK, 2004.
[11] Deciu, E.R., Ostrosi, E., Ferney, M., Gheorghe, M. Configuration grammar-based design approach for product family modeling in advanced CAD systems. Proceedings of the 16th International Conference on Engineering Design (ICED'07), ISBN 1-904670-02-4, Paris, France, 2007.
[12] Deciu, E.R., Ostrosi, E., Ferney, M., Gheorghe, M. A configuration grammar design approach for product family modeling in conceptual design. Proceedings of the 7th International Symposium on Tools and Methods of Competitive Engineering (TMCE'08), Izmir, Turkey, 2008.
[13] Schmidt, L.C., Cagan, J. Grammars for machine design. In: Gero, J.S., Sudweeks, F. (eds.), Artificial Intelligence in Design '96, Kluwer Academic Publishers, p.
325-344, 1996. [14] Schmidt, L.C., Cagan, J. Optimal configuration design: an integrated approach using grammars. ASME Journal of Mechanical Design, vol. 120(1), p. 2-9, 1998. [15] Mullins, S., Rinderle, J.R. Grammatical approaches to design. Part 1: An introduction and commentary. Research in Engineering Design, vol. 2(3), p. 121-135, 1991. [16] Rinderle, J.R. Grammatical approaches to engineering design. Part II: Melding configuration and parametric design using attribute grammars. Research in Engineering Design, vol. 2(3), p. 137-146, 1991. [17] Chase, S. A model for user interaction in grammar-based design systems. Automation in Construction, vol. 11(2), p. 161-172, 2002. [18] Starling, A.C., Shea, K. A grammatical approach to computational generation of mechanical clock designs. Proceedings of the International Conference on Engineering Design, ICED'03, Stockholm, Sweden, 2003. [19] Schmidt, L.C., Shi, H., Kerkar, S. A constraint satisfaction problem approach linking function and grammar-based design generation and assembly. ASME Journal of Mechanical Design, vol. 127(2), p. 196-205, 2005. [20] Fu, Z., De Pennington, A., Saia, A. A graph grammar approach to feature representation and transformation. International Journal of Computer Integrated Manufacturing, vol. 6(1&2), p. 137-151, 1993. [21] Kim, S., Ahmed, S., Wallace, K. Improving information extraction using a probability-based approach. Journal of Mechanical Engineering - Strojniski vestnik, vol. 53(7-8), p. 429-441, 2007. [22] Žavbi, R., Duhovnik, J. Conceptual design chains with basic schematics based on an algorithm of conceptual design. Journal of Engineering Design, vol. 12(2), p. 131-145, 2001. [23] External gear-pump catalogues: Hesper, Rexroth Bosch, Enerflux Industrie.
Paper received: 28.2.2008 Paper accepted: 15.5.2008

Coupling Functions Treatment in a Bi-Level Optimization Process
Benoît Guédas* - Philippe Dépincé
Ecole Centrale de Nantes, Institut de Recherche en Communication et Cybernétique de Nantes, France

The optimization of complex engineering systems is often a mix of a multi-objective optimization process - each service or discipline has to fulfil several objectives - and a multidisciplinary optimization process - several disciplines are required. The disciplines are bound to each other: outputs of one discipline are used as inputs of other disciplines (the coupled variables), and a discipline does not have direct access to, or knowledge of, the whole set of variables. An approximation of the coupled variables is thus needed. The Collaborative Optimization Strategy for Multi-Objective Systems (COSMOS) has been developed at IRCCyN to perform multi-objective and multidisciplinary optimization simultaneously while guaranteeing discipline autonomy. It uses a simple method for the approximation of the coupled variables and assumes that the quality of the approximation will increase as the algorithm converges to optimum solutions. In this paper, experiments are made to verify whether this assumption is true. We show that satisfying results are found on some test problems, but the limits of the method are pointed out.
© 2008 Journal of Mechanical Engineering. All rights reserved.
Keywords: multidisciplinary optimization, multi-objective optimization, genetic algorithms, coupled problems

1 INTRODUCTION

Nowadays, the designer has to face not only the continuously growing complexity of engineering problems, but also the increasing economic competition that has led to a specialization and distribution of knowledge, expertise, tools and work sites. Consequently, multi-objective optimization (MOO) and multidisciplinary design optimization (MDO) are more and more used to provide one solution or an optimal set of solutions.
While single-discipline optimization is mature, the design and optimization of complex systems (more than one discipline) is still quite young. Since the white papers provided in 1991 and 1998 by the AIAA [18] and [12], much research has been done in the multidisciplinary optimization domain: initially centred on the aerospace industry, these methods are now used in different kinds of enterprises (automotive, shipbuilding, etc.), which expect from such tools a way to improve their products and their organizations. Alexandrov and Lewis [1] defined MDO as a "systematic approach to optimization of complex coupled engineering systems where 'multidisciplinary' refers to the different aspects that must be included in the design problem". A classical way to describe a multidisciplinary problem is presented in Figure 1. In a multidisciplinary problem, each sub-system (discipline) has its own design variables, objective and constraint functions. Some design variables, common to at least two sub-systems, are called common variables. Disciplinary outputs from one discipline can be needed to evaluate another subsystem; in this case there is a coupling between the two disciplines and these variables are called coupling variables [8] and [5]. The third variable type, state variables, are internal variables particular to one discipline: they represent conditions that have to be satisfied within the discipline. In each discipline, an evaluation/analysis is conducted that allows the outputs to be computed: functions, constraints and, if needed, coupling variables. Frequently, complex systems are non-hierarchical, implying that there is no reason to process the optimization of one sub-system before another [5]. In the optimization process of such systems, the presence of coupling functions and their treatment constitutes a real challenge for researchers. Several methods have been designed to deal with coupling problems (MDF, IDF, AAO, CO, CSSO, BLISS, etc.
- section 2), but they are not suited for the extended enterprise context, where disciplines and tools are distributed over multiple sites.

*Corr. Author's Address: Ecole Centrale de Nantes, Institut de Recherche en Communication et Cybernétique de Nantes, 1, rue de la Noè - BP 92 101, 44321 Nantes CEDEX 3, France, benoit.guedas@irccyn.ec-nantes.fr

Fig. 1. A fully coupled disciplines system

The main drawbacks of these approaches are i) the unique solution given to the designer and ii) their mathematical formulation, which is not always adapted to the industrial context: most of these methods centralize the optimization at the system level while it should be handled at the sub-system level. The Collaborative Optimization Strategy for Multi-Objective Systems (COSMOS) [19] is a method aiming to fill the gap between classical MDO methods and industrial needs by i) taking advantage of multi-objective genetic algorithms to provide the designer with a set of optimum solutions, and ii) giving more autonomy to the disciplines. The next part of this paper presents the MDO methods: the classical ones based on exact mathematical approaches, and some that try to simulate an engineering process and are based on multi-objective genetic algorithms. The third part introduces the problems caused by the autonomy of the disciplines in the resolution process. Parts 4 and 5 describe the test examples and the results obtained by the COSMOS method on coupled problems. Finally, some conclusions and perspectives are given.

2 COUPLING FUNCTIONS IN MULTIDISCIPLINARY OPTIMIZATION METHODS

The ideal optimization process to solve a multidisciplinary optimization problem consists in separating the analysis phase from the optimization phase: the multidisciplinary analysis (MDA) computes the set of feasible solutions, then the optimization selects the optimum from this set.
This approach is not possible in practice because of the high computational cost required to determine the whole set of feasible solutions. Moreover, in most problems the disciplines cannot easily exchange data with each other.

2.1 A Multi-Objective Coupled Problem

Let us consider a simplified model of two disciplines D1 and D2 (it can be generalized to n disciplines). Each discipline D_i has a state equation E_i(x_c, x_i, y_j, u_i) = 0. We will consider that there is an implicit function e_i : X_c × X_i × Y_j → U_i, where x_c ∈ X_c is the vector of design variables shared among the disciplines (common variables), x_i ∈ X_i is the vector of disciplinary variables, y_j ∈ Y_j is a parameter given by the discipline D_j, and u_i ∈ U_i is the vector of state variables given by the state equation e_i. This results in the following coupled problem:

  D1: min f_1(x_c, x_1, u_1)      D2: min f_2(x_c, x_2, u_2)
      u_1 = e_1(x_c, x_1, y_2)        u_2 = e_2(x_c, x_2, y_1)      (1)

with the set A of feasible solutions:

  A = {(x_c, x_1, x_2) | u_1 = e_1(x_c, x_1, y_2), u_2 = e_2(x_c, x_2, y_1)}      (2)

and the coupling functions:

  y_1 = l_1(x_c, x_1, u_1),  y_2 = l_2(x_c, x_2, u_2)      (3)

The set S of solutions to the problem is then:

  S = {s ∈ A | there is no s' ∈ A such that f(s') ≺_p f(s)}      (4)

with ≺_p the order relation of Pareto dominance: given a = (a_1, ..., a_n) and b = (b_1, ..., b_n), a ≺_p b ⇔ ∀i ∈ {1, ..., n}, a_i ≤ b_i and ∃j ∈ {1, ..., n}, a_j < b_j. Unfortunately, such a set of optimum solutions is intractable under the hypothesis of the partition of the variables among the disciplines. Indeed, a discipline i cannot access the variables of another discipline j and does not know its coupling function. Most of the MDO methods reported in the literature are developed specifically for single-objective problems with continuous variables and differentiable objectives. These MDO methods are classified in two groups: mono-level and bi-level. The mono-level (single-level) group implies optimization at the supervisor level only. The bi-level group allows each discipline to manage its own optimization regarding its design variables.
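The Pareto dominance relation used in the definition of the solution set S above is easy to state in code. The following is an illustrative sketch (not part of COSMOS itself): a dominance test and a brute-force filter that keeps only non-dominated objective vectors.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b: a is no worse than b in
    every objective and strictly better in at least one (minimization)."""
    return all(ai <= bi for ai, bi in zip(a, b)) and \
           any(ai < bi for ai, bi in zip(a, b))

def pareto_front(points):
    """Brute-force O(n^2) filter keeping only the non-dominated vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

front = pareto_front([(1, 4), (2, 2), (3, 3), (4, 1)])
# (3, 3) is dominated by (2, 2); the remaining three are mutually non-dominated
```

For realistic population sizes, multi-objective GAs use faster non-dominated sorting, but the dominance test itself is exactly the relation ≺_p defined above.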
Multidisciplinary problems are often written in a simpler form, where the state variables are directly given to the other discipline, so that they are also the coupling variables:

  D1: min f_1(x_c, x_1, y_1)      D2: min f_2(x_c, x_2, y_2)
      y_1 = e_1(x_c, x_1, y_2)        y_2 = e_2(x_c, x_2, y_1)      (5)

This notation will be used in the next section to present some multidisciplinary optimization methods.

2.2 Mono-Level Approaches

The mono-level family contains three multidisciplinary methods: Multidisciplinary Feasible (MDF), All-At-Once (AAO) and Individual Discipline Feasible (IDF) [16], [1], [14] and [6]. The formulations differ in the way they handle the dependency of the coupling functions. Dennis et al. [9] proposed an extension of all these methodologies to the optimization of systems of systems.

2.2.1 Multidisciplinary Feasible (MDF)

MDF is the most used approach to solve an MDO problem. A complete multidisciplinary analysis is performed for each choice of the design variables by the optimizer. This is conceptually very simple, and once all disciplines are coupled to form one single multidisciplinary analysis module, one can use the same techniques as in single-discipline optimization. In this formulation the optimization variables are the design variables, the optimization is global and each iteration gives a feasible solution. Moreover, the evaluations within the disciplines are independent. The drawbacks are the computational effort and the lack of guarantee that the coupling variables converge to a feasible solution.

  min_{x_c, x_1, x_2} (f_1(x_c, x_1, y_1), f_2(x_c, x_2, y_2))
  s.t. y_1 = e_1(x_c, x_1, y_2), y_2 = e_2(x_c, x_2, y_1)      (6)

The optimization variables are x_c, x_1 and x_2. At each optimization step, the set of feasible solutions - described as A in (2) - is computed: the system of coupling equations must be solved. A fixed point iteration (FPI) algorithm, often used in this case, may not converge if the functions are not convex and may miss hidden solutions [3].

2.2.2 All-At-Once (AAO)

All the variables (design, coupling, state) are considered as design variables, and the analysis equations become constraints. Hence, the CPU-consuming iterative analysis of the sub-systems is skipped, but the dimension of the design space increases. The problem formulation can be expressed as:

  min_{x_c, x_1, x_2, y_1, y_2} (f_1(x_c, x_1, y_1), f_2(x_c, x_2, y_2))
  s.t. y_1 - e_1(x_c, x_1, y_2) = 0, y_2 - e_2(x_c, x_2, y_1) = 0      (7)

The optimization variables are x_c, x_1, x_2, y_1 and y_2.

2.2.3 Individual Discipline Feasible (IDF)

IDF is a compromise between AAO and MDF. At each point, each discipline is feasible, but the whole system will only be feasible at the end. In this methodology, coupling variables are added to the design variables, and auxiliary variables z_i are introduced to allow decoupling the disciplines. Equality constraints are added to enforce compatibility between the coupling and auxiliary variables. This substitution relaxes the coupling between the disciplines: for some iterations, a point may not fulfil all the couplings.

  min_{x_c, x_1, x_2, z_1, z_2} (f_1(x_c, x_1, y_1), f_2(x_c, x_2, y_2))
  s.t. y_1 = e_1(x_c, x_1, z_2), y_2 = e_2(x_c, x_2, z_1),
       z_1 - y_1 = 0, z_2 - y_2 = 0      (8)

The optimization variables are x_c, x_1, x_2, z_1 and z_2. In mono-level approaches, the optimization problem is seen as a single global problem and all the variables are accessible. Multi-level approaches give more autonomy to the disciplines by allowing them to solve their own optimization problem locally.

2.3 Multi-Level Approaches

In a bi-level optimization method, the original optimization problem is divided into optimizations at both the system and sub-system levels. Coordination between the sub-systems is managed by an optimizer in charge of solving inconsistencies between the disciplines. Several strategies have been developed; the most discussed are Collaborative Optimization (CO) [7] and Concurrent SubSpace Optimization (CSSO) [20]. Other methods like Bi-Level Integrated System Synthesis (BLISS) [15], Analytical Target Cascading (ATC) [2] or Physical Programming (PP) [17] have been developed but will not be detailed in this paper. The first two are part of the Discipline Feasible Constraint (DFC) group. The primary features of these architectures include: i) the use of heterogeneous hardware or software, specific to the domain, to solve the subspace optimization problems; ii) the decomposition keeps domain-specific constraint information in the sub-problem; iii) the system leaves most of the design decisions (selection of local variables) to the disciplinary groups that understand the local formulation.

2.3.1 Collaborative Optimization (CO)

In CO, subspace optimizers are integrated with each subsystem. Through sub-system optimization, each discipline can control its own set of local design variables and is in charge of satisfying its own domain-specific constraints. Explicit knowledge of the other groups' constraints or design variables is not required. The objective of each subsystem optimizer is to agree with the other groups upon the values of the interdisciplinary variables. A system-level optimizer is employed to coordinate this process while minimizing the overall objective. It promotes disciplinary autonomy while achieving interdisciplinary compatibility. The system-level optimizer does not perform discipline optimization but only tries to reach consistency upon the common and coupled variables; the optimization process remains global at the system level. All the methods described above are not designed for multi-objective problems and give only a single solution to the designer.

2.4 Multi-Objective Multi-Level Approaches

As far as we are concerned, the main advantage of MDO methods should be their ability to decompose a multidisciplinary problem into several sub-problems of manageable size that can be solved simultaneously.
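The multidisciplinary analysis at the heart of MDF (Section 2.2.1) is typically performed by a fixed point iteration on the coupling system y_1 = e_1(x_c, x_1, y_2), y_2 = e_2(x_c, x_2, y_1). A minimal sketch follows; the discipline equations e1 and e2 are made-up scalar examples chosen to be contractive, so the iteration converges.

```python
def mda_fixed_point(e1, e2, xc, x1, x2, y1=0.0, y2=0.0, tol=1e-10, max_iter=200):
    """Gauss-Seidel-style fixed point iteration (FPI) on the coupling system
    y1 = e1(xc, x1, y2), y2 = e2(xc, x2, y1). As noted in the text, FPI may
    fail to converge when the coupled system is not contractive."""
    for _ in range(max_iter):
        y1_new = e1(xc, x1, y2)       # update y1 with the latest y2
        y2_new = e2(xc, x2, y1_new)   # update y2 with the fresh y1
        if abs(y1_new - y1) < tol and abs(y2_new - y2) < tol:
            return y1_new, y2_new
        y1, y2 = y1_new, y2_new
    raise RuntimeError("fixed point iteration did not converge")

# Hypothetical weakly coupled disciplines (coupling strength 0.5):
e1 = lambda xc, x1, y2: xc + x1 + 0.5 * y2
e2 = lambda xc, x2, y1: xc - x2 + 0.5 * y1
y1, y2 = mda_fixed_point(e1, e2, xc=1.0, x1=2.0, x2=0.5)
```

In MDF this analysis is re-run for every design point the optimizer visits, which is exactly the computational-cost drawback mentioned above.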
According to the current complexity and the antagonistic objectives to achieve, it should also be able to provide a set of solutions (not only a single one that relies on an a priori choice of the designers), and finally MDO should be adapted to the structure of the enterprise and to the way the design of systems involving several disciplines is conducted. The three methods presented hereafter are a first answer to such specifications. They can solve MDO problems that are decomposed into a hierarchy of several subsystem-level problems, each of which has multiple objectives and constraints. Among the different optimization algorithms that can be used for solving the subsystem problems, genetic algorithms (GAs) are used in all three methodologies. Using a population-based optimization approach at both levels (i.e., system and subsystem levels) implies that a compromise has to be found at the system level to map the fitness of solutions from multiple Pareto sets to a single system-level candidate solution.

2.4.1 Multidisciplinary Optimization and Robust Design Approaches Applied to Concurrent Engineering (MORDACE)

The MORDACE method [11] is based on a robust design approach: finding solutions that are robust with respect to changes in variable values due to discipline interactions. The MORDACE approach allows the different discipline optimizations to be performed independently. Each discipline aims at finding optimum solutions with respect to its own design variables thanks to a Multi-Objective Genetic Algorithm (MOGA), in order to obtain for each discipline the Pareto frontier as the set of best solution designs. When the independent optimization processes are finished, the designer has to find a compromise on the common variable values. Changes in common variable values made to find the best trade-off may worsen performance levels. This difficulty is solved by adding to the set of design objectives f_i a function that minimizes the effect of variations of the common variable values.
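The extra "sensitivity" objective that MORDACE adds to each discipline can be illustrated with a simple finite-difference proxy. This is only a sketch in the spirit of the robust-design idea described above; the exact function used in [11] may differ, and the objective f1 below is a made-up example.

```python
def sensitivity_objective(f, xc, xi, h=1e-3):
    """Extra objective penalizing how strongly the disciplinary objective f
    reacts to a perturbation of the common variable xc, estimated with a
    central finite difference. Minimizing it alongside f favours designs
    that stay good when the common-variable compromise shifts xc."""
    return abs(f(xc + h, xi) - f(xc - h, xi)) / (2.0 * h)

# Hypothetical disciplinary objective, quadratic in both variables:
f1 = lambda xc, x1: (x1 - 1.0) ** 2 + (xc - 3.0) ** 2

low = sensitivity_objective(f1, 3.0, 2.0)   # at xc = 3 the design is insensitive
high = sensitivity_objective(f1, 8.0, 2.0)  # far from it, sensitivity is large
```

A MOGA minimizing (f1, sensitivity_objective) then produces the always-multi-objective disciplinary problems described in the next paragraph.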
As the disciplines simultaneously minimize the objective functions f_i and the sensitivity function, they are always multi-objective. Among the available designs, the procedure chooses the Pareto designs plus all individuals that dominate the original one with regard to the different disciplines. Then, it defines all possible couples made up of solutions proposed by disciplines 1 and 2, respectively. At this stage, the calculation of a distance parameter allows efficient solutions to be sorted out from the very large set of all possible couples; thus, a limited number of couples are automatically chosen. Those solutions show a small difference between the common variable values of disciplines 1 and 2, and they are robust with regard to changes in those values. Then, the performances and coupling functions of the compromise designs defined by the new vectors of variables have to be verified. Within the MORDACE method the designer needs to use a compromise method limited by the number of evaluations of potential solutions the designer allows. MORDACE uses approximations (response surfaces) of the coupling functions in each discipline.

2.4.2 Entropy-Based Multi-Level Multi-Objective Genetic Algorithm (E-MMGA)

This method relies on a decomposition of the optimization at the disciplinary level. A first proposal was given in [13], but it did not take the coupling functions into account and was limited to hierarchical systems. Each multi-objective GA at the sub-problem level operates on its own population of (x_c, x_i). The population size, n, is the same for each sub-problem. In addition, E-MMGA maintains two populations external to the sub-problems: the grand population and the grand pool. Both are populations of complete design variable vectors (x_c, x_1, ..., x_d). The grand population is an estimate of the solution set of the overall optimization problem. The grand pool is an archive of the union of the solutions generated by the sub-problems.
The size of the grand population is the same as the sub-problem population size, n. The size of the grand pool is d times the size of the subsystem population (d is the number of sub-problems or disciplines). The grand population is used as the initial population for each sub-problem. Since the i-th sub-problem multi-objective GA operates on its own variables (x_c, x_i), only the chromosomes corresponding to x_c and x_i are used in it. After each run of the sub-problem multi-objective GAs there are d populations of n individuals each. As each of the d populations contains only the chromosomes of (x_c, x_i), i = 1, ..., d, they are completed using the rest of the chromosome sequence (x_1, ..., x_{i-1}, x_{i+1}, ..., x_d) from the grand pool. After the chromosomes in all d populations are reconstituted to form complete design variable vectors, they are added to the grand pool. Then, based on an entropy index that preserves the diversity of the solution set, n individuals are chosen within the grand pool and replace the n individuals of the grand population. An important drawback is that the size of the grand pool increases very quickly with the number of disciplines and individuals. A variant has been proposed in [4]: in each subsystem only one solution is selected on the Pareto frontier, and its objective and constraint values are used to assign a fitness value to the system-level individual. The so-called best solution for each discipline is chosen by an algorithm according to the designer's or decision maker's preferences. The coupling functions are taken into account thanks to supplementary constraints - added both at the system and subsystem levels - and auxiliary variables. Note that the shared variables can be treated as parameters in the subsystems, which reduces the dimensionality of the subsystem-level optimization problems. In this last case, the coupling variable values are not passed to the system level.
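The chromosome-completion step of E-MMGA described above can be sketched as follows. The data layout (a dict with the common chromosome and a list of disciplinary chromosomes) and the random choice of a donor are illustrative assumptions, not the exact mechanism of the original method.

```python
import random

def complete_chromosome(i, xc, xi, grand_pool):
    """Complete a sub-problem-i individual (xc, xi) into a full design vector
    (xc, x1, ..., xd) by borrowing the other disciplines' chromosomes x_j
    (j != i) from a grand-pool member. Grand-pool entries are assumed to be
    dicts of the form {'xc': ..., 'x': [x1, ..., xd]}."""
    donor = random.choice(grand_pool)
    x = list(donor["x"])   # copy the donor's disciplinary chromosomes
    x[i] = xi              # overwrite slot i with the evolved chromosome
    return {"xc": xc, "x": x}

pool = [{"xc": 0.0, "x": [1.0, 2.0, 3.0]}]
ind = complete_chromosome(1, xc=0.5, xi=9.0, grand_pool=pool)
# ind == {"xc": 0.5, "x": [1.0, 9.0, 3.0]}
```

Reconstituting all d populations this way before archiving them is what makes the grand pool grow by d * n individuals per generation, the drawback noted above.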
2.4.3 Collaborative Optimization Strategy for Multi-Objective Systems (COSMOS)

Two variants (COSMOS-G and COSMOS-L [19]) have been proposed; the fundamental difference between them resides in the treatment of the common design variables. In the following, COSMOS refers to COSMOS-G. Let n be the size of the population and d the number of disciplines. During the initialization, the supervisor creates a population of common design variables X_c, with x_{c,k} the k-th element of the n-tuple X_c = (x_{c,1}, ..., x_{c,n}). Each discipline i also creates a population of disciplinary variables X_i = (x_{i,1}, ..., x_{i,n}). In order to get a fully determined population, the supervisor sends the n-tuple of common design variables X_c to each discipline. Each discipline i builds a disciplinary population Pop_i = {(x_{c,k}, x_{i,k}) | 1 ≤ k ≤ n} for which it can evaluate its objective and constraint functions. An initial population can be created by the aggregation of the common and disciplinary design variables and saved as Pop_memorized.

Optimization at sub-system level: the supervisor provides a set of common design variables X_c to the disciplines. Each discipline i optimizes the design variables of a population of individuals (x_{c,k}, x_{i,k}), 1 ≤ k ≤ n. The n-tuple X_c is fixed in order to keep the disciplinary population coherent with the other disciplinary populations. At the end of the disciplinary optimizations, each discipline sends the vector of optimized disciplinary design variables X_i* to the supervisor. Since the vector of common design variables has not been modified in the disciplines, a global population can be built and is naturally coherent: Pop_current = {(x_{c,k}, x*_{1,k}, ..., x*_{d,k}) | 1 ≤ k ≤ n}.

Optimization at system level: the goal is to propose new and better common design variables (in order to improve the population). The current population, Pop_current, is assembled with the memorized population, Pop_memorized, in order to provide a double-sized population, Pop_double.
This population is ranked by Fonseca and Fleming's criterion (notion of Pareto domination) according to all the objective functions of the problem. The best individuals are selected to build a normal-sized population, Pop_current, which will be sent to the disciplines. In parallel, cross-overs and mutations are applied to the common design variables of the population. This new population is saved as Pop_memorized; it will be evaluated once by the disciplines in order to determine its objective and constraint functions.

Coupling function treatment: all the values of the coupling variables y_i computed in discipline i at the end of its sub-system optimization are stored in an array. This array contains all the couples (y_{i,k}, x_{c,k}), where x_{c,k} is the vector of common variables of the k-th individual of the population, and y_{i,k} is the value of the coupled variables computed by the coupling function l_i in discipline i with the common variables x_{c,k} and the disciplinary variables x*_{i,k} obtained at the end of the sub-system optimization. When another discipline j needs the value of y_i for one of its x_{c,k}, it searches for the corresponding y_{i,k} - when it exists - in the array. If the array does not contain the desired x_{c,k}, the closest one is picked. In other words, for each discipline i there is a table T_i of couples (x_c, y_i) such that T_i ⊂ X_c × Y_i. Thus, T_i is the graph of the relation R_i = (X_c, Y_i, T_i) which is given to discipline j. Our main objective in this paper is to study the behavior of COSMOS on coupled problems, and more precisely the evolution of the error of approximation of the coupled variables introduced by using the relation R_i instead of the function l_i. We will first introduce the problems caused by the separation of the design problem into more autonomous sub-problems.
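The coupling-table mechanism just described amounts to a nearest-neighbour lookup keyed on the common variables. A minimal sketch, assuming a scalar x_c for simplicity (the class name and the toy coupling function y = 10 * x_c are illustrative, not part of COSMOS):

```python
class CouplingTable:
    """Table T_i of (x_c, y_i) couples stored by discipline i at the end of
    its sub-system optimization, queried by another discipline j."""
    def __init__(self):
        self.entries = []            # list of (x_c, y_i) couples

    def store(self, xc, y):
        self.entries.append((xc, y))

    def lookup(self, xc):
        """Return the y whose stored x_c is closest to the query: an exact
        match when the query belonged to the stored population, otherwise
        the nearest stored x_c is picked, as in COSMOS."""
        nearest_xc, y = min(self.entries, key=lambda e: abs(e[0] - xc))
        return y

table = CouplingTable()
for xc in (0.0, 1.0, 2.0):
    table.store(xc, 10.0 * xc)       # hypothetical coupling function
approx = table.lookup(1.2)           # no exact entry: nearest x_c is 1.0
# approx == 10.0, while the true value would be 12.0
```

The gap between the returned value and the true coupling value is exactly the approximation error whose evolution is studied in Section 5.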
3 PROBLEMS CAUSED BY DISCIPLINE AUTONOMY

In order to study the treatment of coupled functions, we need to point out the types of difficulties that coupled functions and the division of work into subsystems may introduce in multidisciplinary problems. From a global point of view, a multi-objective multidisciplinary problem is composed of two steps: analysis and optimization. Within the natural organization - set A in Figure 2 - the analysis could be performed on the whole system apart from the optimization, but the distribution of work among multiple disciplines implies splitting the problem in a way that suits the industrial context - set B in Figure 2. This raises new difficulties in problem solving, because the optimization and the analysis are no longer fully solved at each step. Indeed, they are split into two sub-problems in which optimization and analysis are partially performed for each discipline.

Fig. 2. Decomposition of the problem

3.1 Dependency Between Optimization and Analysis

The mix of optimization and analysis in each discipline can lead to two problems:
1. optimization disturbs analysis: the optimization could tend towards solutions that may not be feasible;
2. analysis disturbs optimization: the analysis tends towards non-optimum solutions.

1) Optimization leads analysis. The optimization is treated first, and the results are given to the coupling functions. The values computed by the analysis do not influence the objective values; they just indicate whether the design variables lead to a feasible solution or not. The design variables are seen by the coupling functions as simple parameters, so the analysis cannot influence their choice. Multidisciplinary feasibility becomes more difficult to reach.

2) Analysis leads optimization. The analyzers give solutions to the optimizers, which select the optimum. The analyzer may give solutions which are not optimum.
COSMOS uses the first solution: the first goal is to reach the discipline objectives, and the coupled variables are just results, computed at the end of the disciplinary optimization process, that are needed in other disciplines. This can be illustrated by using another notation derived from the standard two-discipline multidisciplinary optimization problem (1), where the objective is to minimize the state variables:

  D1: min u_1 := f_1(x_c, x_1, y_2)      D2: min u_2 := f_2(x_c, x_2, y_1)
      y_1 = l_1(x_c, x_1, u_1)               y_2 = l_2(x_c, x_2, u_2)      (9)

In the notation described in (5), the coupling functions and the state equations coincide, and the evaluation of the objective functions comes after the analysis. In the latter notation (9), however, the objective and state functions coincide, and the evaluation of the coupling functions only comes after the optimization. From this point of view, disciplinary optimization is more important than multidisciplinary consistency. This means that COSMOS is better suited to problems where the disciplinary optimization is difficult but the coupling between the disciplines is weak. Moreover, if we remove u_i from the right-hand side of the coupling functions, the resolution with classical mono-level methods is obvious, because the multidisciplinary analysis reduces to function evaluations. This sort of coupling is only difficult when the optimizers and analyzers cannot access all the disciplinary variables at the same time. These organizational constraints cannot be handled by mono-level methods, only by the multi-level ones presented previously. An approximation of the coupling function has to be found.

3.2 Approximation of Coupling Variables

A discipline D_i needs a coupling variable y_j from discipline D_j to solve its problem. Giving the value of the coupling variable y_j at a time t is not useful without knowing the parameters x_c, x_j, y_i used to find it.
One way to do it is to put the system level in charge of proposing a set of variables to the disciplines and of ensuring that it is consistent with the values computed in the disciplines (as in CO). Another one is to use an approximation of the coupling function l_j in the discipline D_i (CSSO, MORDACE). COSMOS has a different approach, using an array: at each system iteration, the x_c are consistent, the x_i are the ones obtained at the end of the optimization of each discipline, and the y_i are approximated using the table T_i. If we call ŷ_i the approximation of y_i found in T_i, it seems that, as the population converges to the optimum solutions, the distance between ŷ_i and y_i tends toward zero. The next section describes the test problems designed to verify this hypothesis.

4 TEST PROBLEMS

To test the behaviour of COSMOS, we have implemented several examples. Problems #1, #2 and #3 are built to show normal conditions; #4 and #5 were especially designed to exhibit specific behaviours in limit conditions. All the problems given are multi-objective per sub-system. They remain simple because our goal is to understand the behaviour of the approximation method on coupled problems; the results should thus not be hidden by the complexity of the other parts of the problem.

4.1 Problem #1

x_c ∈ [0; 10], x_1, x_2, x_3, x_4 ∈ [0; 20]

Sub-system 1:
  min f_1(x_c, x_1) = (x_1 - 1)^2 + (x_c - 3)^2
  min f_2(x_c, x_2) = (x_2 - 3)^2 + (x_c - 9)^2
  y_1 = …      (10)

Sub-system 2:
  …      (11)

4.2 Problem #2

x_c, x_1, x_2, x_3, x_4, x_5 ∈ [-20; 20]

Sub-system 1:
  min f_1(x_c, x_1, x_2) = (x_1 - 1)^2 + x_2^2
  min f_2(x_c, x_1, x_2) = x_c × (x_1 - 3)^2 × (x_2 - 7)^2
  min f_3(x_c, x_1, x_2) = x_1 + x_2 + 1      (12)
  y_12 = y_13 = 10 × x_1 + x_c

Sub-system 2:
  min f_4(x_c, x_3, x_4) = x_3^2 + (x_4 - 3)^2
  min f_5(x_c, x_3, x_4) = y_12 + (x_3 + x_4)^2 + (x_c - 4)^2      (13)

Sub-system 3:
  …      (14)

4.3 Problem #3

x_c, x_1, x_2, x_3, x_4 ∈ [0; 20]

Sub-system 1:
  min f_1(x_c, x_1) = (x_1 - 1)^2 + (x_c - 3)^2 + (y_3 - 5)^2
  min f_2(x_c, x_2) = (x_2 - 3)^2 + (x_c - 7)^2 + (y_4 - 7)^2
  y_1 = x_1 / (√x_c + x_1 + 0.1)
  y_2 = x_2 / (√x_c + x_2 + 0.1)      (15)

Sub-system 2:
  min f_3(x_c, x_3) = (x_3 - 3)^2 + (x_c - 5)^2 + (y_1 - 3)^2
  min f_4(x_c, x_4) = (x_4 - 5)^2 + (x_c - 9)^2 + (y_2 - 9)^2
  y_3 = x_3 / (√x_c + x_4 + 0.1)
  y_4 = x_4 / (√x_c + x_3 + 0.1)      (16)

4.4 Problem #4

x_c ∈ [-50; 50], x_1, x_2, x_3, x_4 ∈ [-10; 10]

Sub-system 1:
  min f_1(x_c, x_1) = x_1 × (x_c + 0.1)^2 / (x_1 - y_2 + 0.1)^2
  min f_2(x_c, x_2) = x_2 × (x_c + 0.1)^2 / (x_2 - y_2 + 0.1)^2      (17)

Sub-system 2:
  y_1 = x_c      (18)

4.5 Problem #5

x_c, x_1, x_2, x_3, x_4 ∈ [-50; 50]

Sub-system 1:
  min f_1(x_c, x_1) = (x_1 - 1)^2 + (x_c - 3)^2 + (y_3 - 5)^2
  min f_2(x_c, x_2) = (x_2 - 3)^2 + (x_c - 7)^2 + (y_4 - 7)^2
  y_1 = (y_3 + 2)^2 - 22
  y_2 = (y_4 + 2)^2 - 22      (19)

Sub-system 2:
  min f_3(x_c, x_3) = (x_3 - 3)^2 + (x_c - 5)^2 + (y_1 - 3)^2
  min f_4(x_c, x_4) = (x_4 - 5)^2 + (x_c - 9)^2 + (y_2 - 9)^2
  y_4 = (y_2 + 3)^2 - 33      (20)

5 RESULTS

COSMOS has been run on each test problem with two sets of parameters: the first with a population of 20 individuals, 20 system iterations and 20 subsystem iterations (light); the second with 100 individuals, 50 system and 50 subsystem iterations (heavy). Two criteria are verified for the validation of the results: first, the convergence of the approximate coupled variables to their real values, and then the quality of the solutions obtained.

5.1 Convergence of Coupled Variables

During the subsystem optimization, each discipline only knows the coupled variables computed in the other subsystem at the previous system iteration and uses them to find an approximation of their real values. We only evaluate the evolution of the error of the approximation of the coupling functions along time (i.e., at each iteration), and not the evolution of the coupling variables towards their optimum values at the end of the optimization process. We also verify whether the y_i
which is put in the array Ti is the one that would give the best approximation among all the yi computed during the sub-system optimization. Table 1 presents the results computed on problems #1, #2, #3, #4 and #5 with the light and heavy sets of parameters. The results are given as a percentage of the error relative to the size of the domain of the coupled variables. We observe that the error between the approximation and the real value is quite small, except for y12 and y13 in example #2, as shown in Figures 4 and 5. The fact that a coupling function is used in two different disciplines at the same time seems to disturb its approximation. We also notice that the mean value of the error is close to the minimum value, except for this coupling variable, which seems to indicate that there are as many good values as bad ones. Two phenomena could explain these errors:
• The values of xc* do not cover its domain: the optimum xc for disciplines 1, 2 and 3 are respectively in {−20}, [−20; 4] and {20}, so xc* ∈ [−20; 4] ∪ {20}. For values of xc ∈ ]4; 20[ the approximation will not be good.

opt.   test  var.          min      mean     max
Light  #1    y1            0.0007   0.0157   0.1038
       #2    y12 (= y13)   0.0192   9.2963   38.2864
             y32           0.0041   1.4345   21.3760
       #3    y1            0.0047   0.1540   0.4441
             y2            0.0047   0.2923   0.9366
             y3            0.0051   0.1391   0.9548
             y4            0.0972   0.4443   0.8602
       #4    y1            0.0495   2.4839   15.0109
             y2            0.0495   2.4839   15.0109
Heavy  #1    y1            0.0001   0.0030   0.0129
       #2    y12 (= y13)   0.1293   15.5426  41.0447
             y32           0.0002   0.1891   2.4605
       #3    y1            0.0041   0.2539   1.3165
             y2            0.0035   0.2215   1.0609
             y3            0.0022   0.2020   0.8487
             y4            0.0008   0.3336   1.3102
       #4    y1            0.0054   0.4863   2.3182
             y2            0.0054   0.4863   2.3182
       #5    y1            failed   failed   failed
             y2            failed   failed   failed
             y3            failed   failed   failed
             y4            failed   failed   failed

Table 1. Minimum, mean and maximum value of the error of approximation (%)

Fig. 3.
Relative error of y1 on #1 (light)

• The optimum value of x1 is not unique but lies in [−20; 3]. When this value is unique, the other discipline only has one choice for the corresponding x1. Here the discipline increases its probability of picking an xc corresponding to an x1 that does not give the optimum value of y12 for its own discipline.

There is no strict decrease of the error along time, but we notice a global decrease over the first five iterations, after which the error stabilizes (Figs. 3 and 6). The error of the approximation, in the general case, does not decrease but stays quite small along time. Moreover, the tests performed with the heavy set of parameters are less chaotic (compare, e.g., Figures 4 and 5). We explain this by the spread of the values of xc in the coupling table: the larger the population is, the more chance we have of finding an xc in the table which is close to the value we need. This error should also tend toward zero if the population tends to a unique xc.

Fig. 4. Relative error of y12 = y13 on #2 (light)

To summarize, in most cases we distinguish two phases in the evolution of the error of approximation of the coupled variables, illustrated by Figure 7. The first phase is a global decrease of the error due to the convergence of the individuals (disciplinary variables). The second phase is a state where the error is stabilized and does not show noticeable changes. The evolution is chaotic, and its amplitude decreases when we add more individuals to the population; the distance d decreases as the population size increases. This description seems to be relevant for examples where the objective functions are convex. Under this assumption, for each xc there is a single and unique xi* that minimizes fi. Thus, there is a function si : Xc → Xi × Ui that allows us to replace the

Fig. 5. Relative error of y12 = y13 on #2 (heavy)
Fig. 6. Relative error of yi on #3 (light)
Fig. 7.
Scheme of the evolution of the error along time

function li = (Xc × Xi × Ui, Yi, Gi) by a simplified one, ti = (Xc, Yi, G'i), with G'i = {(xc, y) | ((xc, si(xc)), y) ∈ Gi}, where Gf is the graph of the function f: Gf = {(x, y) | y = f(x)}. We have Ti ≈ ti; thus, if the objective functions are convex, Ti is a good candidate for the approximation of li. Moreover, as shown in Figure 6, the approximation at the system level is not always the best one computed at sub-system level. This is true under two conditions:
• the optima of the xi are unique and the system has not yet converged to these solutions;
• the optima of the xi are not unique and, as explained above, the xi value picked by the first discipline does not correspond to the xi that gives the best (xc, xi) that can lead to the optimum solution of the coupled variable for the other discipline.

Example #5 does not fit into the same category of problems as the others. Its coupling functions have been chosen as a system of equations that does not converge with a fixed-point iteration algorithm. On this problem, COSMOS does not converge either: the error of approximation increases over time.

5.2 Quality of Solutions

The solutions found for problems #1 and #3 are optimum. They correspond to the solutions that are Pareto-optimal in both disciplines at the same time; thus, we are sure that they are on the global Pareto front. Indeed, for the first test problem, COSMOS globally converged to the solutions x* = (a, 1, 3, 3, 5) with a ∈ [3; 7], which are optimum solutions of the problem. Results of the third test problem are satisfying too (Figs. 8 and 9). The thin lines at the bottom of the figures are the Pareto frontiers of the two sub-problems solved independently. The larger line corresponds to the Pareto front for fixed values of the other sub-problem at its optimum (i.e., x3 = 3 and x4 = 5 in the first discipline, and x1 = 1 and x2 = 3 for the second one). The solutions found by COSMOS are close to this second Pareto front.
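The table-based approximation described in Section 5.1 can be sketched as follows. This is a minimal illustration, not the authors' implementation: the coupling function l1 borrows the form used in problem #3, the disciplinary optimum is assumed fixed, and the nearest-neighbour lookup over xc is an assumption about how the table Ti is queried.

```python
import math

def l1(xc, x1):
    # coupling function in the style of problem #3 (illustrative)
    return x1 / (math.sqrt(xc) + x1 + 0.1)

class CouplingTable:
    """Stores (xc, y) pairs from previous sub-system optimizations and
    answers queries with the y of the nearest stored xc (assumed policy)."""
    def __init__(self):
        self.entries = []               # list of (xc, y)

    def update(self, xc, y):
        self.entries.append((xc, y))

    def approx(self, xc):
        # nearest-neighbour lookup over the shared variable xc
        _, y = min(self.entries, key=lambda e: abs(e[0] - xc))
        return y

T1 = CouplingTable()
# discipline 1 fills the table at the end of its own optimizations
for xc in [0.0, 5.0, 10.0, 15.0, 20.0]:
    x1_opt = 1.0                        # assume a fixed disciplinary optimum
    T1.update(xc, l1(xc, x1_opt))

# discipline 2 queries the table instead of calling l1 directly
err = abs(T1.approx(12.0) - l1(12.0, 1.0))
```

As the population spreads more xc values into the table, the nearest stored xc gets closer to the queried one and err shrinks, which is the mechanism invoked above to explain why the heavy parameter set behaves less chaotically.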
On example #4 the coupled variable converged, but COSMOS did not find the right solution. Indeed, the optimum solution is unique, with (f1*, f2*, f3*, f4*) = (−10, −10, −10, −10), but COSMOS converged to a unique point (f1, f2, f3, f4) = (0, 0, 0, 0). We explain this behavior by the fact that the population is used at the same time to fit the objectives and to fit the approximation of the coupling functions. Here, this interference is explicit because the common variables are also the coupling variables.

Fig. 8. Pareto front f1-f2 on #3
Fig. 9. Pareto front f3-f4 on #3

If we call P the set of solutions found by COSMOS, and St the set of theoretical solutions of the global optimization problem, the solutions in P are all in the Pareto set of the global problem (i.e., P ⊂ St), but they are not spread over the whole Pareto front (i.e., St \ P ≠ ∅). This comes from the decomposition of a multi-objective problem into multiple multi-objective sub-problems. One way to recover the lost solutions is described in [10].

6 CONCLUSION

Many methods have been proposed to solve multidisciplinary problems. Mono-level ones (MDF, IDF, AAO) are global methods and thus are not adapted to the context of the extended enterprise, where each discipline has to solve its own optimization problem without direct access to the other disciplines' variables. Multi-level methods give more autonomy to the disciplines, but most of them are not specifically designed for multi-objective problems and need adaptations to propose more than one solution to the designer. COSMOS has been developed to address these problems. In this paper, we pointed out problems that can arise in the treatment of coupling variables in multi-objective multidisciplinary optimization. COSMOS, as a first step, has only been tested on simple example problems, and it would be interesting to perform more tests, particularly on real industrial problems.
Nevertheless, it appears that in this case, and under the assumption that "for a given common variable xc, there exists one and only one value xi* of the disciplinary variable xi that minimizes the objective function fi", the array that COSMOS uses to approximate the coupling functions is a good approximation of them. This assumption holds if the objective functions are convex. In the general case, however, our tests indicate that if this condition is not verified, the method is not capable of determining the correct value for the approximation. Moreover, we showed that the values kept for the approximation, the ones found at the end of sub-system optimization, are not always the values which give the best results: the system tends to solutions that are optimum for the objective functions, but not enough to solutions that respect the coupling functions. Taking these results into account, future research will focus on improving the method of approximation of the coupling functions. All the above methods have been intensively studied in the literature; many classifications of methods exist, but nothing on the problems themselves. The difficulties pointed out in this paper could be used as the basis of a classification of multi-objective multidisciplinary design problems. This would permit running tests on specific classes of problems in order to test the methods.

7 REFERENCES

[1] Alexandrov, N.M., Lewis, R.M. Comparative properties of collaborative optimization and other approaches to MDO. Technical report, ICASE-99-24; NAS 1.26209354; NASA CR-1999-209354, 1999.
[2] Allison, J.T., Kokkolaras, M., Papalambros, P.Y. On the use of analytical target cascading and collaborative optimization for complex system design. 6th World Conference on Structural and Multidisciplinary Optimization, 2005.
[3] Allison, J.T., Kokkolaras, M., Papalambros, P.Y. On selecting single-level formulations for complex system design optimization. Journal of Mechanical Design, 129(9):898-906, September 2007.
[4] Aute, V., Azarm, S. A genetic algorithms based approach for multidisciplinary multiobjective collaborative optimization. 11th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, 2006.
[5] Balling, R.J., Sobieszczanski-Sobieski, J. Optimization of coupled systems: A critical overview of approaches. AIAA Journal, 34(1):6-17, 1996.
[6] Behdinan, K., Perez, R.E., Liu, H.T. Multidisciplinary design optimization of aerospace systems. Proceedings of the 2nd International Design Conference CDEN, 2005.
[7] Braun, R., Cage, P., Kroo, I., Sobieski, I. Implementation and performance issues in collaborative optimization. 6th AIAA/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, vol. 1, p. 295-305, 1996.
[8] Cramer, E.J., Dennis, J.E., Frank, P.D., Lewis, R.M., Shubin, G.R. Problem formulation for multidisciplinary optimization. SIAM Journal on Optimization, 4:754-776, 1994.
[9] Dennis, J.E., Arroyo, S.F., Cramer, E.J., Frank, P.D. Problem formulations for systems of systems. IEEE International Conference on Systems, Man and Cybernetics, 2005.
[10] Engau, A., Wiecek, M.M. 2D decision-making for multicriteria design optimization. Structural and Multidisciplinary Optimization, 34(4):301-315, October 2006.
[11] Giassi, A., Bennis, F., Maisonneuve, J.-J. Multi-disciplinary design optimization and robust design approaches applied to concurrent design. Structural and Multidisciplinary Optimization, 27:1-16, 2004.
[12] Giesing, P.J., Barthelemy, M.J.-F. A summary of industry MDO applications and needs. 7th AIAA/USAF/NASA/ISSMO, 1998.
[13] Gunawan, S., Farhang-Mehr, A., Azarm, S. Multi-level multi-objective genetic algorithm using entropy to preserve diversity. Proceedings of the Second International Conference on Evolutionary Multi-Criterion Optimization, vol. 2632, p. 148-161, Faro, Portugal, April 8-11, 2003.
[14] Hulme, K.F., Bloebaum, C.L.
A simulation-based comparison of multidisciplinary design optimization solution strategies using CASCADE. Structural and Multidisciplinary Optimization, 19(1):17-35, 2000.
[15] Sobieszczanski-Sobieski, J., Agte, J., Sandusky Jr., R. Bi-level integrated system synthesis (BLISS). Technical report, NASA/TM-1998-208715, 1998.
[16] Kodiyalam, S. Evaluation of methods for multidisciplinary design optimization (MDO), phase 1. Technical report, NASA CR-1998-20871, 1998.
[17] McAllister, C.D., Simpson, T.W., Lewis, K., Messac, A. Robust multiobjective optimization through collaborative optimization and linear programming. 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, 2004.
[18] MDO Technical Committee. Current state of the art in multidisciplinary design optimization. Technical report, AIAA, 1991.
[19] Rabeau, S., Dépincé, Ph., Bennis, F., Janiaut, R. Comparison of global and local treatment for coupling variables into multidisciplinary problems. 11th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, 2006.
[20] Wujek, B., Renaud, J., Batill, S. A concurrent engineering approach for multidisciplinary design in a distributed computing environment. Multidisciplinary Design Optimization: State of the Art. SIAM, 1997.

Paper received: 28.2.2008
Paper accepted: 15.5.2008

How to Adapt Information Technology Innovations to Industrial Design and Manufacturing to Benefit Maximally from Them

Bart H. M. Gerritsen
TNO Netherlands Organization for Applied Scientific Research, The Netherlands

IT developments come at a dazzling pace. Industry feels it has no option but to keep up. All too often, in hindsight, the conclusion is that IT innovation might have been better exploited if... This paper surveys IT trends and seeks to assess their impact on industrial design and manufacturing in the near-term future of 5 to 20 years from now. This paper assumes that IT technology is leading and that design and manufacturing follow.
To that extent, this paper discusses what complementary technology the industry should develop in order to prepare for, and benefit optimally from, these IT trends. It presents a joint academic-industry research framework to be settled, with the aim of attaining collective innovation.
© 2008 Journal of Mechanical Engineering. All rights reserved.
Keywords: information technology, CAx technology, smart objects, epistemological research

1 INTRODUCTION

Among experts, there is a firm belief that in the near future, around 2030, say, only two IT environments will prevail: a personal environment descended from the current office suite, and a 'professional', technical application environment. The personal suite is the one almost everyone will share, no matter what position. It supports personal and business communication in all its forms, and for many basically clerical jobs (lawyers, accountants, journalists, and the like) this environment will contain everything needed. It will consist of small modular pieces of software, distributable across multiple small smart devices such as handhelds, and able to set up communications with intelligent environments such as cars, offices, shops, traffic systems, homes, etc. It will tailor itself, slowly fading out what is not used and strengthening what is used frequently and intensively. Smart content management features will remember what has been written before and recognize semantic intentions and reasoning patterns. Documents will no longer flow around organizations; instead, amendment rights and certificates will, managed by online collaborative content management tools. Roaming documents are available worldwide. We will no longer key in every single word: we organize our thoughts, findings and messages, and a smart mind-mapping tool turns them into plain English.
A second, 'professional' environment will be available, with which all kinds of virtual worlds can be crafted: landscapes, buildings, urban and rural patterns, water, crowds, avatars, noise, smells, etc. Geographical information models and virtual and real urban and rural environments are the common denominator. Location and time snapshots can be assigned knowledge and documented, and scenarios and scenes that live somewhere in a virtual world can be shared with others in a variety of forms. Behavior can be restricted to follow verified patterns, as well as patterns mentally possible but physically impossible. Smart natural-like pervasive substances and materials interact with the human-controlled smart environments, not only in the virtual world but also in the real physical world. Designers will no longer design our artifacts; they will organize knowledge and information for us, so that we ourselves can decide on details and behavior. Fed by snapshots, mental maps, fuzzily matched shapes, or whatever representation, a first impression of a new product can be presented to potential customers. Customers can take part in further conceptualization and in equipping the product with intelligent behavior. Designers can verify and validate customer use cases by querying the collected knowledge smart objects have acquired about their own functioning. Although acting autonomously, pervasive products remain supervised through world-wide clouds of near field communication networks and continuously adapt their function to customer habits and preferences.

*Corr. Author's Address: TNO Netherlands Organization for Applied Scientific Research, The Netherlands, Bart.Gerritsen@tno.nl

Future scenarios like the above have been delineated by many futurists. Pondering on the near term [51], medium term [3], and far future [45] and [12] is inspiring¹. It may help us in finding consensus on development paths to open up, in targeting development resources, and in uncovering obstacles.
It is the domain of epistemological research [40], [44] and [45]. Institutional thinking about the future is by no means new: the World Futures Studies Federation was established some forty years ago and is still very active today. Indeed, today, the need to foresee and anticipate the future is more compelling than ever. Epistemic research seeks to present argued projections of the future, not fantasies. In this paper, we will not dwell on epistemic research any further, but we will use it as a teaser showing the power and potential of interdisciplinary thinking about a world we can help shape ourselves. At the end of the day, however, technology is needed to materialize desired futures. This paper surveys the brewing IT trends inducing the '2030' scenarios and assesses their impact on industrial design and manufacturing in the near-term future of 5 to 20 years from now. This paper assumes that IT technology is leading and that design and manufacturing follow. This has been the case for the last three decades, and there is no reason to believe this will change in the near-term future. Of particular interest, and central to the discussion here, is how industry can adapt to unrolling IT trends so as to benefit optimally. Supporting technological developments may have to be initiated and adopted by the industry; this paper will seek to find out how to identify these 'missing' technological developments. Cost and financing are left out of the discussion, not because these aspects have no impact or play no role (on the contrary), but primarily because this paper focuses on technology. This paper is organized as follows. Section 2 discusses the current state, as a starting point for the discourse into the future; it discusses generic IT and more specific industrial IT and business developments. Next, section 3 presents a 'missing' technology inventory; section 4 frames future research developments in a framework providing conditions and controls to foster developments.
This section also prospects the potential and merits, and the technical risks. The paper is concluded by section 5, presenting conclusions and further suggestions.

2 CURRENT STATE OF IT

2.1 Current State in Generic IT

In this paper, we refer to information technology not specifically targeted at engineering as generic IT (Fig. 1). Generic IT is notoriously 'dynamic' and, in many respects, "technology drives applications". The need for alignment of business and IT processes is widely perceived as a necessary condition to increase efficiency and utilization of IT technology at the strategic level [9]. The following trends are observed.

Fig. 1. Various trends and their technology-driven causal-loop upward interactions

1 Which does not mean that it foresees only positive prospects; e.g. [51] and [45]

Generic IT split up
Generic IT tends to fall apart into two main fields:
• generic IT as a generic facility service;
• generic IT as part of product design.
IT as a facility service tends to become a commodity like fresh water and electrical power. In the near-to-medium long term, IT service departments will gradually be replaced by a service contract with some remote provider. IT as a facility will no longer be a competitive instrument, just a conditio sine qua non. On the other hand, we have IT as an integral part of the product design: IT as a product enabler². This IT will further stand out in its capacity of enabling the design of highly competitive products [10] and [51], and of processes to craft such products. Future design will show logic and intelligence programming and autonomy constraining on a much more intense scale. As with robotics, self-learning and group intelligence will be exploited to create smart collectives, bounded and directed by designers.
Correspondingly, manufacturing will grow into fitting logic and function and setting smart-object evolutionary learning in motion. Nano-layers, sheets, films and fibers in construction composite materials store and process data probed from the environment.

Rapid advances in near field communication
Advances in near field communication (NFC) support, and are supported by, the advent of pervasive smart products and environments [2] and [51]. Products (consumer and professional alike) are endowed with logic and NFC, with significant consequences for design, ownership and use. Familiar current examples are PDAs, cellular phones, digital cameras, video and audio devices, barcode scanners, smart cards and labels, laser-equipped handheld measurement devices, hospital beds and AGVs (automated guided vehicles, Fig. 2), but many more are still to come. Bluetooth-, WiFi- and RFID-based handhelds invade our cars, homes, offices, shops and factories. Inter-communication can make products smart, i.e., capable of autonomously performing tasks according to predefined logic and depending on input data sampled from the environment, alone and in collectives.

Fig. 2. Autonomous AGVs (automated guided vehicles) in the Rotterdam Port ECT sea container yard, operational for almost two decades now [source: www.ECT.nl]

2 The term enabler is also used in other meanings, e.g. [4], but is in this paper exclusively used in the sense of a precondition to product functions and features.

Smart products are aware of each other's presence within a predefined range and are capable of exchanging data and services, unilaterally, bilaterally or through the Internet. To do so, they must share some protocol, for instance Bluetooth or WiFi, both belonging to the IEEE 802 family of protocols. Smart products can also negotiate to collectively perform a task. To learn of each other's features, a catalogue is usually exchanged as a first step in the negotiation protocol.
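The catalogue-first negotiation step described above can be sketched as follows. This is an illustrative toy only, not a real NFC protocol: the class, the product names and the services are all invented for the example.

```python
# Sketch: two "smart products" exchange service catalogues as the first
# step of a negotiation, then one delegates a task to the other.
class SmartProduct:
    def __init__(self, name, services):
        self.name = name
        self.services = dict(services)      # service name -> handler

    def catalogue(self):
        # first negotiation step: advertise what this product can do
        return sorted(self.services)

    def request(self, peer, service, *args):
        # only invoke services the peer has advertised in its catalogue
        if service not in peer.catalogue():
            return None
        return peer.services[service](*args)

camera = SmartProduct("camera", {"take_picture": lambda: "jpeg-bytes"})
printer = SmartProduct("printer", {"print": lambda data: "printed " + data})

# the camera discovers the printer's catalogue and delegates a task
pic = camera.services["take_picture"]()
job = camera.request(printer, "print", pic)
```

Refusing un-advertised services (the `None` branch) is one simple way to model the bounded behaviour the text argues designers must impose on self-learning smart objects.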
Many smart products are reconfigurable, i.e., their service catalogue is not fixed for life but can be updated lifelong. That is not to say that smart objects can adopt just any behavior and any task they happen to learn: designers will have to limit the room for self-learning and bound polymorphism. They will have to balance the object functions they would like to attribute to the designed object against services obtainable from neighboring objects encountered during usage. Nanotechnology is advancing rapidly, but even today considerable intelligence can be compressed into less than 1 mm³ of material, so for designers this brings new challenges and opportunities to design low-cost disposable microdevices and logic-endowed consumer products. To construct such species of objects, smart materials are needed, in the form of foils, varnishes, embedded composites, micro-connectors and molecular frameworks. Apart from state-capturing, chargeable/dischargeable materials, embeddable logic wiring will be needed, manufactured and assembled at an industrial scale. For manufacturing, this invokes new lithographic processes, new gluing and surface-mounting techniques, nano- and vaporizing techniques, logic-initializing processes, but also new protection foils, malware protection, electromagnetic 'clean rooms', etc. More and more, construction materials will be organic, like bio-sensors, bio-tracers, etc., stable under the right environmental conditions and bio-degrading when out of conditions.

Computer-to-computer web services
Web services are SOAP/XML-based data and services that can be exchanged between computers. The SOAP protocol exchanges data and service request messages in the form of a self-describing XML text file.
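Such a SOAP request message can be built and parsed with a few lines of code. A minimal sketch: the SOAP 1.1 envelope namespace is real, but the application namespace, the GetPartPrice operation and the part number are invented for illustration.

```python
# Sketch: construct a self-describing SOAP/XML request message and parse
# it back, as a receiving computer would.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"   # SOAP 1.1 envelope
APP_NS = "http://example.org/parts"                     # hypothetical service

envelope = ET.Element("{%s}Envelope" % SOAP_NS)
body = ET.SubElement(envelope, "{%s}Body" % SOAP_NS)
req = ET.SubElement(body, "{%s}GetPartPrice" % APP_NS)  # invented operation
ET.SubElement(req, "{%s}PartNumber" % APP_NS).text = "BRG-6204"

message = ET.tostring(envelope, encoding="unicode")

# the receiving computer parses the request without human intervention
parsed = ET.fromstring(message)
part = parsed.find(".//{%s}PartNumber" % APP_NS).text
```

In a real deployment the message would be posted over HTTP to an endpoint whose operations and types are described in a WSDL file, as the text goes on to explain.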
Services can be registered (published) for public use in a UDDI registry on the Internet, a Universal Description, Discovery and Integration registry, and the invocation of the service (input, output description) is specified in an accompanying WSDL file, a Web Service Description Language file. The idea of public web services is to anonymously and publicly offer a generic service that others can use. Web services need not be public, but can also be restricted to use within a supply chain, for instance. British Petroleum, for instance, deals with over 1500 suppliers [27]. The UDDI registry commonly has a white pages part, a yellow pages part and a green pages part, describing the taxonomy and the category of the offered service and where to get access. A UDDI entry enables discovery of the service on the Internet, describing the identity of the publishing party and where to get access; the WSDL file describes definitions, types, bindings, services, messages, etc. Perhaps the most important feature of web services is that they allow computers to talk to computers directly (no human intervention), invoking operations on the receiving computer by sending it a SOAP/XML request message. This paves the way for a new type of interoperability. Unlike a tight coupling such as sharing a distributed object framework like CORBA, (D)COM, ADO or Java RMI, web services lead to loosely coupled (file-based and offline rather than runtime and online) interoperability. In combination with their self-description capacity, and well-designed adaptive business processes behind them [39], operations and data sharing can remain relatively stable. Compared to traditional forms of neutral file-based exchange (STEP, ISO 15926, etc.), web services seem to have a number of advantages, but the "lingua franca of the business internet" (Bill Gates, 2000) also has a few drawbacks.
To illustrate this, envision the exchange of design data:
• Being loosely coupled, the exchange of model data in an XML file spawns a new offline copy of the model to a remote application (Fig. 3). This may give rise to subsequent change management and release management problems. These problems do not occur when sharing objects within a distributed object framework: such objects are modified in real time, online, and typically have transaction-locking mechanisms;
• Upon exchange, data, including references to other objects and operations, have to be spelled out explicitly, in contrast to distributed object frameworks. Frequently, data transformations are necessary between sender and receiver;
• The extent of self-description is limited in practice when not based on a shared model (like STEP) or a taxonomy/ontology;
• The ability to exchange design intent and to get semantics across is severely limited. The use of ontologies may help, but everything beyond simple, well-contained data remains difficult;
• Exchanging large models leads to verbose text files. Today, XML files can be stored (e.g. as a BLOB) in XML and other databases, but nonetheless this remains a problem. Inline compression may partly remedy this.
SBVR, the Object Management Group's Semantics of Business Vocabulary and Business Rules (see [http://www.OMG.org/]), may help to define semantics down from the business level. XPDL and BPEL4WS are languages that support business process alignment [37]. XSLT, fuelled with appropriate templates, can assist in intermediate data (format) transformations. Web services are capable of running over various protocols, but are almost exclusively run on top of HTTP for security reasons. They are no solution to the need for secured exchange of information; this general problem is often remedied by the use of VPNs (Virtual Private Networks), certificates and/or secure peer-to-peer socket connections.

Fig. 3. With all XML files around, are we all looking at the same model version?
In addition, web-services-based modifications to the data can be accompanied by an audit trail, to keep track of modification records. Web services are expected to have more impact on manufacturing than on design. They may be used for intelligent and autonomous e-procurement systems for catalogue parts (bearings, shims or stop nuts, for instance), stock-balancing systems across supply chains, 24/7 order tracking, customer care, QA, etc., but also for scheduling milling capacity with a supplier, recruiting contractors from a personnel hub on the Internet, etc. As far as design is concerned, web services are expected to be primarily applied to the exchange of product data (PDM data).

Dispersion of storage
The storage and management of massive amounts of distributed data, still growing at a rate of tens of percents a year, across networked volatile media is a complex problem. It is not uncommon for a company to have terabytes of data nowadays, e.g. [25]. With increasing design variants and more demanding PDM/PLM procedures (e.g. Katzenbach in [16]), this trend has an evidently negative impact on controllability. Conditions for appropriate intellectual property protection and security are also negatively affected. Not the mere amount of data, but its dispersion makes handling a challenge. Distributed design leads to a serious data and release synchronization problem that can only be solved by adequate change and configuration control, using CVS-like and SourceForge-like environments. The next generation of change and configuration control software will have to be real-time, online, up- and downstream synchronizing, and distributed, and must be able to present change consequences and possible response scenarios to designers. With the advent of pervasive smart objects and environments, streaming data to central stores is no longer an option. In the near-term future, data will live within smart objects and stay there.
Objects will process data into knowledge on their own performance and functioning in a pre-designed manner. Upon request, objects report knowledge back to designers and other professionals with adequate access rights. Data, and also knowledge, lives only as long as the object lives, so knowledge must be transmitted before the end of lifetime. This requires adequate methods for end-of-lifetime estimation; see Xing et al. in [29]. The Internet, qualitate qua, permits 'lazy' forms of storage: not everything of interest needs to reside in our own database. All it takes is the storage of a hyperlink pointing to a source location (URL) on the Internet; that is in fact the Internet by its very nature. The problem is that we cannot impose any form of data management on data we do not own. On the other hand, copying in all external data invokes another problem: when the information is modified by the owner, we miss the changes; we look at an obsolete copy. There is the tradeoff. Data management standards and agreements may help to remedy the remote data management problem (hyperlinked data); update alerts and automated updates (e.g. through web services) may solve the latter old-copy-in-own-database problem. The introduction of 'base registrations' may cut the storage and exchange of data tremendously. Basic data, regularly shared among applications, may be stored in a single, shared base registration, to prevent every single database from having to do so itself. Personal data, car data, electronic and mechanical part data, compliance data, consumer data, real estate data, drug data, etc. lend themselves to some extent to this type of registration (Fig. 4). An appropriate form of data sharing and interoperability is required for this type of storage. The idea of base registration is not new, but as yet industrial uptake and exploitation is limited.

Desktop operating system vanishes
The operating system gradually disappears from the desktop.
Configuration management costs (almost exclusively software) gave rise to a trend towards 'thin clients', blade-server-based clients and fully web-enabled software at the desktop. Server-side virtualization further accelerates this development. Eventually, software applications are believed to become dominantly or even fully networked and roaming, running in their own adaptable run-time containers. Current thin clients (mostly booting an embedded XP), web interfaces and software-as-a-service (SaaS) trends can be seen as precursors. For industrial design offices, this trend need not have severe consequences, and may even have a positive (cost-reducing) impact. It may help to synchronize software platforms across supply chains, as well as design office and shop floor alignment. This trend may further help to circumvent the software application lock-in problem, making the occasional use of the software best fitting the purpose feasible.

Geo-mapped information
For many of us, the Google homepage is the access point to the Internet. Tomorrow's access point will be a 'home location' on Google Earth or Virtual Earth. Information on the Internet will become attached to a location, assigned to a building, a street, a local event, a restaurant, a memorial statue, a stationary camera, etc.: not just geo-information per se, but all information. The map environment is not just for navigating the world map itself; it is also an overlay over the Internet [8].

Fig. 5. Simple ontology framework, modified after [24]

Pointing with a mouse will be replaced by (a stream of) location data from a handheld device, a car, an avatar, an Internet access point, or a Bluetooth parrot. This will drastically change the way information is added to and retrieved from the Internet, and how knowledge will be organized. It is expected to reflect on the design of real-world objects as well.
Virtually any object designed in the near term future will continuously exchange information with the Internet, directly or indirectly. Long after it has left the workshop floor where it was manufactured, it can still be reached through the Internet across the globe. It will be capable of scheduling its own maintenance, for instance, and of reporting on its own performance. During its lifetime, it can serve as an agent, returning useful usage data records to the designer. The introduction of virtual worlds is accompanied by the advent of an entire generation of emerging urban and rural simulation approaches [8] and [28]. Servicing and servicing coverage is a central theme in this work. Today, authors assume single-step service-to-client types of servicing, as in telecom; in the near term future, servicing will be cloud-based and multi-step: not every object needs to have access to the correct time, for instance, as long as at least one object in a cloud has it and is programmed to share this service with other cloud members.
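The multi-step servicing example just given (at least one object in a cloud holding the correct time and sharing it with its peers) can be sketched minimally as follows; all class and method names below are invented for this illustration, not taken from any real middleware:

```python
class SmartObject:
    """A cloud member that may or may not hold a service locally.
    Illustrative sketch only; names are invented for this example."""

    def __init__(self, name, services=None):
        self.name = name
        self.services = services or {}   # service name -> provider function
        self.cloud = []                  # peers this object can reach

    def request(self, service):
        # Serve locally if possible, otherwise ask cloud peers (multi-step).
        if service in self.services:
            return self.services[service]()
        for peer in self.cloud:
            if service in peer.services:
                return peer.services[service]()
        raise LookupError(f"service '{service}' not available in cloud")

# One object holds a clock service; the others obtain the time through it.
clock = SmartObject("gateway", {"time": lambda: "2008-06-01T12:00:00"})
sensor = SmartObject("sensor")
sensor.cloud.append(clock)
```

The sensor object itself stores no clock; it resolves the request through the cloud, which is exactly the coverage idea described above.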

Fig. 6. The ISO 15926 data model and the Reference Data System containing the Reference Data Libraries; here, the data of the object Hydraulic Pump. Source: http://15926.org/

Software suite re-bundling
Software deployed in personal environments tends to cluster in a single suite; see the Introduction. Particularly if present industry standards can be swapped for open standards, multiple such suites may appear. This trend has no specific fundamental implications, neither for design nor for manufacturing. All other software re-bundles in a second (SOAP/)XML-based environment: the professional suite. As with office suites, where the market welcomed OpenOffice.org, inspired by Sun Microsystems, Inc., genuine interoperability will appear sooner or later, and professional environments will follow. Professionals want to be able to use such suites on their home infrastructures and in mobile computing, companies want them, governmental bodies may demand them, and open consortia will develop them. In the future, electronic PDM/PLM dossiers may be sent off to customers, too. At present, it is customary to distribute separate 'viewers' (e.g. Acrobat Reader for reading PDF), but this will fade out. Software as part of the provider's portfolio will change dramatically. Software 'seats' will become no more than attractors for something much more valuable: knowledge and excellence centers, providing niche strategies and best-practice solutions. As yet, neither ISO 10303 (STEP) nor ISO 15926 has brought a fundamental, omnipotent solution to the general data exchange problem. STEP "will go XML", and as far as ISO 15926 (POSC/Caesar) is concerned, the ultimate goal is to develop a general-purpose, computer-interpretable framework providing a single solution for the industry at large. Domain-specific RDLs (Reference Data Libraries), e.g. for construction, shipbuilding, mechanical engineering, the process industry, E&P, etc., together with a common grammar and a generic data model, may constitute a framework conquering ground up to now reserved for ISO 10303 (STEP); refer to Figure 6. Recall that ISO 10303-221 (AP 221) and ISO 15926 are entangled. The use of ontology may be woven in, and that is what is happening. Generally, domain ontology (Fig. 5) matures most naturally, as with object-oriented technology. On top of that, domain ontology, like domain object classes, may be better re-usable than application-based ontology. What holds true for a domain ontology also holds for low-level ontology, lexicons, definitions, generic data formats, etc. [23]. The real touchstone will be cross-discipline application and upper-level ontology [6]. Integrated professional geo-mapped applications are to arrive soon (a Google avatar API, or something even more innovative?). Integration with design data may not be immediate, but will ultimately be the case. In manufacturing, shop-floor ERP might be one of the first areas to show uptake. The lack of genuine interoperability is one of the biggest technological challenges at the moment: basic low-level IT technology exists, but present data and knowledge modeling techniques at the semantic level fall short.

Fig. 7. Example mind map: thoughts are represented by boxes and associated by arcs; attributes and objects can be attached.

Mind mapping
Mind mapping is the expression of thoughts and the free associations between them, in a diagramming style called a mind map (Fig. 7). Mind mapping provides great freedom in expressing thoughts, without obstructive formalisms. Mind mapping will become image-based and eventually emotion-based, expression-based, etc. Finally, mind maps will become personal and computer-interpreted. The impact of mind mapping is thus expected to stretch far beyond generic IT, the more so when combined with artificial intelligence and ontology.
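A mind map as described, with thoughts as nodes, free associations as arcs and attributes attachable to any thought, can be sketched as a simple, computer-interpretable graph. The data layout below is invented for illustration:

```python
class MindMap:
    """Thoughts as nodes, free associations as undirected arcs,
    attributes attachable to any thought. Illustrative sketch."""

    def __init__(self):
        self.thoughts = {}    # thought label -> dict of attributes
        self.arcs = set()     # frozensets holding associated thought pairs

    def add_thought(self, label, **attributes):
        self.thoughts.setdefault(label, {}).update(attributes)

    def associate(self, a, b):
        # Associations are free-form: both thoughts are created on demand.
        self.add_thought(a)
        self.add_thought(b)
        self.arcs.add(frozenset((a, b)))

    def neighbours(self, label):
        # All thoughts directly associated with the given one.
        return {t for arc in self.arcs if label in arc for t in arc} - {label}

m = MindMap()
m.add_thought("lightweight", material="aluminium")
m.associate("bicycle", "lightweight")
m.associate("bicycle", "commuting")
```

Because the structure is an ordinary graph, it remains open to the computer interpretation (and later AI/ontology coupling) anticipated in the text.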
It can help bridge the gap between designers and non-technical customers. Mind mapping can also serve as a personal way of organizing information. The impact on design and on manufacturing will be enormous.

Agent- and ontology-based knowledge management
The research fields of knowledge management, ontology, and multi-agent technology tend to overlap more and more [4], [7], [19], [41] and [43]. Organizing, classifying and inferring knowledge is expected to become a principal field of research in the near future. Further to this, [42] relates this to individual and organizational learning. Here, too, the impact on design will be enormous. Many design tasks, particularly in the conceptual phase, benefit strongly from knowledge. Personalized use cases, queried from smart objects, may support design. The design process is expected to become more 'rich', and 'soft' information can be merged more easily. Similarly, agent technology also has a profound impact on manufacturing, in production process data gathering. Agents, pieces of software programmed to autonomously crawl and prune the Internet in search of targeted data and information, map their collected data and information through ontological schemes into a knowledge framework. Agents are nowadays equipped with the latest artificial intelligence. Multi-agent technology (MAS) allows for collective, self-coordinating acting, outperforming humans in simple and moderately complex tasks [50].

Open standards, platforms and consortia
The trend towards open platforms will further support interoperability on a global scale. Open standards force dominant technology providers to reconsider their technology; beyond a critical industry uptake, as more open platforms enter the market, this process is expected to accelerate. Counteracting movements are also seen, for instance in the US, where business models, algorithms, and living entities can be patented, leading to what is discussed as "open science" versus "private science" in [14].
The impact on design, in particular on the CAx tool suite, and on data exchange in manufacturing is great.

2.2 Current State in Design
In recent years, design activities have intensified, expanded, become more flexible, geographically dispersed and a team performance under increasing time pressure, as well as holistic, global and inter-cultural.

Fig. 8. Design anywhere-build anywhere in consumer electronics: consumers purchase from and pay OEMs (HP, IBM, Sony, etc.), who forward (solid lines) an order-to-build to manufacturers (Flextronics, Foxconn, etc.), who ship to customers worldwide (dashed lines). Headquarters locations may change in the near term future; however, the DA-BA principle is not expected to disappear soon.

Design can be subdivided into a number of stages, in various ways, according to various criteria. Here, we are not going to enter that discussion; see for instance Fujita and Kikuchi, and Clement et al. in [29]. In the early stages of design, the emphasis is on conceptual ideas requiring a large degree of (mental and technical) freedom. During global and detailed (component) design, more formal and interoperating tools and techniques are needed. Data volumes explode and detailed knowledge communication spreads across the supply chain. Data and release management and communication are becoming dominant factors. Cost estimation and production planning are at hand. Besides the traditional waterfall-like design process, more and more evolutionary approaches are now in use. Some of them are spiral-like, spawning evolving prototype-to-mature model versions after each development cycle, Clement et al. in [29]. Others involve the (possibly unskilled) customer, [24].
Some are co-designed, usually at part or component level, using online collaborative design environments that can have an inter-regional, international or inter-cultural setting, Katzenbach et al. in [15]. Some target (configurable) product family design rather than a single product [43], some target make-to-stock production, some make-to-order, and some deal with compliance restrictions. The major IT-related trends are as follows.

Design anywhere - build anywhere
Cheap communications have led to design anywhere-build (make, produce, manufacture) anywhere schemes, e.g. Marais and Ehlers in [29]. For repetitive design or manufacturing tasks, potential partners, contractors, etc. can even be recruited online, e.g. Klamann in [15]. A clear example is the global spread of consumer electronics OEMs, like HP and Sony, and of the actual manufacturers, like Flextronics and Foxconn; see Figure 8. In earlier days, manufacturers just built as specified; nowadays they co-design and even deliver ready-to-re-label electronic equipment to OEMs. The design of high-end equipment, such as iPods and Xboxes, however, is still typically done by the OEMs themselves. In future strategic alliances, partners are expected to bundle their collective knowledge over the entire object lifecycle.

Mass customization and e-consumer services
Mass customization (MC) is the ability for customers to adapt products and services to personal needs and preferences at (approximately) mass-production price and delivery conditions. MC thus seeks to match the production efficiency of mass production with the flexibility and added customer value of customization. MC is, however, more than just providing the customer with a few optional features in an order form. It is the delicate interplay between marketing, design and manufacturing staff across the entire supply chain on the one hand and customers on the other.
Mass customization is now rapidly becoming common practice, not only for relatively simple goods, such as eyeglasses [46], but also for more expensive products like cars [34]. The need for consumer intelligence is emerging strongly, Seelmann-Eggebert and Schenk in [29] and [22], [24], [32] and [33], and is expected to be one of the decisive competitive capacities in the near future. This includes knowledge of production machines and processes [38].

Design to be broken down for manufacturing
As a result, for a flexible and adaptive manufacturing process, a design needs to be parameterized and/or broken down into a base of more or less standard components that can be produced make-to-stock (or commissioned to a supplier), and a customized finishing, to be conducted at delivery time in a make-to-order atmosphere or even at installation at the customer's site [19] and [43]. Time and form postponement [46] and unifying and serializing operation principles [21] will have to be envisioned and anticipated during design. Furthermore, designers will have to assemble and maintain catalogs of more or less standardized components from which customers can assemble their preferred product configuration.

Expanding CAx tool suite
CAx environments contain a growing number of tools: for sketching, RE tools, feature recognition and feature description, soft computing, fuzzy design, augmented-reality rendering and photo-realistic visualization, Siodmok and Cooper, and Woksepp and Tullberg in [29]. CAD-CAPP preproduction coupling has been introduced, like geometric dimensioning and tolerancing tools, and QA support, Roller et al. in [29] and Katzenbach et al. in [16].

Compliance
Compliance has become a major issue, to be supported by tools, with online data submission for approval, in parallel to the evolving design, e.g. Calder and Sivaloganathan in [29].
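The break-down described under 'Design to be broken down for manufacturing', a make-to-stock base plus customer-selected options from a catalog of standardized components, can be illustrated with a toy configurator; the catalog contents, slot names and rules below are invented for this sketch:

```python
# Illustrative catalog of standardized components; all names invented.
CATALOG = {
    "frame":  {"standard", "reinforced"},    # make-to-stock base part
    "finish": {"matte", "gloss"},            # applied at delivery time
    "logic":  {"none", "smart-module"},      # optional embedded logic
}

def configure(**choices):
    """Assemble a customer configuration from catalog options only;
    invalid slot/option combinations are rejected."""
    config = {}
    for slot, option in choices.items():
        if slot not in CATALOG or option not in CATALOG[slot]:
            raise ValueError(f"no catalog option {option!r} for slot {slot!r}")
        config[slot] = option
    return config

# A customer assembles a preferred configuration from the catalog.
order = configure(frame="reinforced", finish="matte", logic="smart-module")
```

The standardized base ("frame") can be produced make-to-stock, while the remaining slots are resolved make-to-order, mirroring the postponement idea in the text.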
The above trends do not specifically zoom in on important trends in the domain of product data management (PDM) and product life cycle management (PLM). Within a context of mass customization, the building up of a solid electronic personalized product dossier is critical for future support, maintenance, amendments, and end-of-life strategy. All these trends are noteworthy, but do not pose IT problems not yet addressed.

2.3 Current State in Manufacturing
Over the past decade, manufacturers have replaced their traditional make-to-stock production by a make-to-order production strategy, in order to respond to the growing demand for mass-customized consumer goods and services. Successful implementation of these inherently dynamic processes requires thorough knowledge of the (potential) customer, as stated. The better the knowledge of who customers are, where they are, what they want and when they want it, the easier the mastering of the varying design, production, delivery and customer service processes. The major challenge is to design e-consumer services and customer decision support systems (CDSS) that allow for the entrance of a new group of customers, typically those that do not yet have the affinity and the technical background knowledge [24]. Consultative selling and cross-selling will navigate potential customers through a seemingly endless virtual shop. Strategic bundling of complementary and perceived-value-enhancing products into product suites should also comfort novice e-customers. Customer knowledge across the entire supply chain is expected to become a decisive factor [13]. Apart from what customers want beforehand, at the time of purchase, it is also vital to have a thorough understanding of how they really use the product afterwards and to determine their satisfaction.
Most workers in this field have proposed solutions based on the idea of postponing the moment of delivery, often called time postponement, or the moment of differentiation of the product into a variant, called form postponement, and pushing it as far downstream as possible [46]. Form postponement may even take the form of assembly at the customer site. Su et al. point out that, above a certain production floor, time postponement is more effective for larger numbers of products and higher interest costs. Form postponement is less vulnerable to disturbances, such as a late component arrival. In practice, manufacturers tend to follow a mixed small-series/large-series production scheme, often mixing make-to-stock (e.g. standard components) and make-to-order strategies [13]. IT-related manufacturing trends are as follows.

Pull production in a digital factory
Manufacturers adopt a pull-production, mixed make-to-order and make-to-stock, adaptive 'digital factory' model, Schneider in [16] and [47] and [49].

Consumer and production data dispersion
The dispersion of customer and production data down the supply chain has come down as far as the level of cellular production units. The data exchange demands that come with it not only give rise to change and release management problems and intellectual property concerns [14], Klamann in [15]; they also bring up the interoperability problems and alignment issues discussed earlier [39] and [41] and Katzenbach in [15].

A need for responsiveness
There is an increasing need for responsiveness to consumer and production data that commonly arrive just-in-time. Not only must logistics itself be just-in-time, but also the data accompanying it. In n-tier supply chains, early global production and delivery schedules are commonly refined and updated up to the deadline, with only small tolerances allowable.
Production adaptation data stream
Adaptive production requires a constant stream of resource, performance and quality-related data, as well as maintenance and reconfiguration planning data. The difference from the traditional organization of production is that scheduling maintenance is not 'invariant', as the production configuration changes from hour to hour. Production line swaps must be planned machine-by-machine, cell-by-cell, etc. [47] and Novak in [29].

2.4 Current State in Business
Whereas the eighties are often associated with quality, and the nineties with business process re-engineering (BPR), the 2000s are associated with responsiveness (or agility); e.g. [30]. This requires strategic thinking up to the level of board members. Various workers have analyzed business at a strategic level. Bergeron et al. [9] discuss the relationship of strategy, structure and technology. They found that this relationship is a strong determinant of the performance of the company, at a given contribution of IT technology. Their results are in agreement with results found in [13] and in [17] and [48]. At board level, the introduction, application and management of IT technology can be split into two principal challenges: 1. Applying 'off-the-shelf' IT for the regular business processes. ERP, CRM and office suites fall into this category. Managing this type of IT in conjunction with BPR is not believed to be optimal yet [10] and [48]. This has been referred to as IT applied as a facility service in this paper; 2. Applying functional IT technology as an essential aspect of the company's product or product development process, in this paper denoted as enabling IT. Board members are not always capable of overseeing the underlying product technology at the level of strategic business planning [10]. Increasingly, board members have to handle acquisitions and mergers. Integration of the information infrastructures (i.e.
IT as a facility service) of merging companies is generally seen as the key to a successful merger. A supportive framework for mergers has been proposed in [35]; also see [5]. No study was found on IT as an enabler in relation to acquisitions and mergers. Only a single global IT-relevant trend will be listed here: the need for process alignment and regular evaluation of the company's strategy, so as to prepare for and respond to external and internal change. This includes an increasing need for codes of conduct, governance, etc. [48].

2.5 Current State in Education
CAx education in earlier days came with the applications and access to them [15]. Gradually, in the seventies and eighties, education in IT and CAx entered the academic curriculum of engineers and designers. CAx education for business people, for legal workers and for business administration is still sparse. Education in future sciences is more apt than ever, e.g. [44]. Futurist outlooks on IT-related developments were not found in the literature. Dankwort underlines the virtualization of design: the trend for product designs to be represented almost exclusively in virtual structures and models. Indeed, the role of drawings, mock-ups, paper models, etc. vanishes. Nonetheless, sketches remain a preferred way to represent conceptual car designs, and rapid prototyping is still a very active field of research; Tovey and Campbell, respectively, in [29]. Moreover, many workers have proposed virtual analogs of the former clay, wood and paper models. By virtue of e-learning, education penetrates more and more rural areas, developing countries, etc. [1], [11], [28], [31] and [42]. Lin in [36] also stipulates that teaching information technology to people lacking a technical background is regarded as an important but difficult challenge for the future. Most workers (e.g. Mokyr, Fountain) subdivide IT expertise into the knowledge to design, to own and to use IT.
According to the literature, the barrier to owning and using IT technology is sufficiently low for most people to benefit from it. In the US, for instance, women represent only 28% of the 'designers' of IT technology but make up the majority of IT users, with some 57% [20]. Education in CAx technology has to respond to the demands of ever more professional and job profiles [15] and will take place partly in the universities, partly in industry. Apart from IT aspects, product engineering, CAD- and FEA-related technology and a wealth of other aspects are also involved. Many workers stress the importance of 'human factors'.

3 WHAT WE NEED FOR THE FUTURE

3.1 Methods and Approach
In this section, an analysis is made of 'missing' technology: supporting technology needed for the industry to adapt to the IT trends signaled. This can be both engineering technology and auxiliary IT technology. Next, we determine what scientific knowledge is needed and in which form it is to be delivered. An Ω-λ-diagram, Figure 10 [40], frames this. This section starts, however, with an inventory of inspirer, actor and supporter roles and of typical technology development cycle times, the building blocks of our framework.

Table 1. Roles and where to recruit
                                     Technology consumers
                                     Dominant     Followers
Technology providers   Dominant      I, A, S      S, a
                       Followers     S, a         S

3.2 Definitions and Terminology
In the following, by organizational entity we mean a company, enterprise, governmental body, academic institution or any other entity capable of inspiring, acting in or supporting³ a technological development. A precondition is an essential development condition that needs to be met for the development to be conducted successfully. Cycle times are the elapsed times, durations, of technological developments. An inspirer is an organizational entity initiating, demanding, enforcing or otherwise causing technology development to happen.
An actor is a conducting, acting organizational entity, designing, developing or realizing the technological development, or part of it. A supporter is a promoter, advocate, stakeholder (other than a shareholder), user, or party otherwise in favor of the newly developed technology, prior to, during or after development. A distinction made at the company level is between technology providers and technology consumers, each of which is further subdivided into dominant and following providers and dominant and following consumers, respectively (Table 1). Dominance may also come from a collective or a community.

3.3 Inspirers, Actors and Supporters
In Table 1, inspirers ('I') are primarily projected among dominant technology providers and consumers. In the near term future, we foresee the removal of dominant industrial standards in favor of more open standards, a development that has already started (Section 2). Neither individual technology providers nor individual technology consumers are believed to be in a position to enforce a breakthrough alone; as the table suggests, inspiring technological developments is a joint effort. Follower technology consumers can support and, likewise, following technology providers can take part. Of course, dominant parties can also choose to support, rather than adopting an inspiring role. Acting is primarily up to dominant providers (capital 'A'), particularly for 'kernel' CAx technology, but contributions may come from specialized SMEs (small and medium-sized enterprises) or from active participation by smaller consumers that co-develop (lowercase 'a').

³ Since cost and financing are not a primary topic in this paper, financing is not encompassed here either, although it of course plays an important role in this regard.

How to Adapt Information Technology Innovations to Industrial Design and Manufacturing 437

3.4 Cycle Times
Development life cycle times for generic IT technology development vary from approx. 2 years (e.g., Moore's law, Java packages, ...)
up to 20 years (e.g., Java, XML, radio protocols, ...). Dankwort in [15] shows that typical cycle times for CAx technological developments vary from 10 to well in excess of 20 years. Development life cycles are frequently terminated prematurely, at the birth of a competing new technology. This implies that, in the time span of the near term future as set out in this paper (Fig. 9): 1. Running generic IT developments may on average be followed by at most one more complete development and part of a second; 2. Running CAx developments may on average be followed by at most one more development. The reasoning is as follows. CAx developments take on average 15+ years, and at time zero (now) running developments will have advanced halfway on average, taking another 7.5+ years to complete. The succeeding CAx technology will require another 15+ years on average to unroll. Assuming a small overlap of 10-20%, we see that in the near term time span up to 2030, as set out in this paper, we may foresee running CAx developments maturing during the next couple of years and a next generation of CAx developments starting, maturing roughly around 2030.

Fig. 9. Near term future with max. 1-2 generic IT development cycles (dashed lines) and approx. 1 CAx development cycle, at max.

Fig. 11. Moore's law and the projected development of many-core processors (transistors on chip, N cores, N threads)

Of course, in practice, developments form a more or less continuous spectrum over time, but our goal here is to model the room for a single research and development program in the near term future. Also, reasoning this way, we immediately see that CAx developments typically take twice as long to complete as generic IT developments.
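The back-of-the-envelope reasoning above can be reproduced numerically, with the figures taken from the text; the function and its parameter names are merely illustrative:

```python
def cax_maturity_year(now=2008, cycle=15.0, progress=0.5, overlap=0.15):
    """Check of the cycle-time reasoning in the text: a running CAx
    development is halfway (7.5+ years to go), the next generation takes
    another 15+ years, started slightly early (10-20% overlap)."""
    remaining = cycle * (1.0 - progress)        # 7.5 years left on average
    next_generation = cycle * (1.0 - overlap)   # overlaps the running cycle
    return now + remaining + next_generation

# With the average figures, the next CAx generation matures around 2028.
```

A larger overlap pulls the maturity year forward, which is consistent with the 10-20% band yielding "roughly around 2030" in the text.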
Also, remark that the model in Figure 9 expresses the room to program developments, not the actual industrial uptake. For the main goal of this paper, to find out how to adapt to IT trends so as to benefit optimally from them in design and manufacturing, the model suffices.

3.5 Missing Technology Inventory
Having identified IT, design and manufacturing trends, we are now geared up to compile the 'missing' technology. This step comes down to collecting the consequences of IT trends for design and manufacturing and defining possible actions and activities in response. For the research of this paper, this was organized using a brainstorm session.

Fig. 10. Mapping between the Ω- (how-come) domain and the λ- (how-to) domain, linking the domain of fundamental (understanding) knowledge with applied (application) knowledge

This resulted in a long list, of which Table 2 displays only a part. Once this list has been compiled, the next question is how to program and schedule research developments that deliver this 'missing' technology. This is the subject of the remainder of this section and of Section 4. Development cycle times determine the room that we have for such a program, and the roles defined help to assign actions and roles to the parties involved. An example may illustrate this. For quite some years, the development of Intel CPUs (central processing units, the heart of a computer) and the development of Microsoft's operating systems have been to some extent intermingled. On the one hand, exploiting hardware capabilities requires applications, which require an operating system to run on; on the other hand, operating system limitations are directly related to hardware limitations. Figure 11 displays the projected (source: Internet publications) development of Intel CPUs; the lower lines display the number of cores and threads. Moore's law predicts a doubling of transistors on chip every 18 to 24 months. Present-generation CPUs have multiple cores, giving rise to the term many-core CPUs.
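The multi-threading counterpart of many-core hardware, discussed next, can be sketched as splitting one task into per-core worker threads. Python's standard concurrent.futures module is used for brevity; note that CPU-bound Python threads do not run truly in parallel because of the interpreter lock, so this only illustrates the programming structure, not the speed-up:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(chunk):
    # Worker task: one thread sums one slice of the data.
    return sum(chunk)

def parallel_sum(data, workers=None):
    """Split the work into one chunk per worker thread, which the
    operating system may schedule across CPU cores."""
    workers = workers or (os.cpu_count() or 2)
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(chunk_sum, chunks))
```

The hard part, as the text points out, is not spawning threads but finding decompositions that keep tens of cores usefully busy.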
Each core has its own resources and can carry out separate tasks. Intel turned to this strategy mainly for technical reasons. The question is then: how to exploit many-core technology? A software counterpart is to program using multi-threading programming techniques. A thread is a mini-process spawned by an application program that might typically be assigned to a single CPU core. With CPUs having up to tens of cores, programmers lack paradigms and methods to design programs making clever use of so many threads. In response, Microsoft and Intel established two University Parallel Computing Research Centers to develop parallel computational algorithms and methods. This example shows a generic hardware IT trend whose uptake is hindered by missing technology. It also shows how a coordinated response to program the development of the missing technology can take place in parallel, so that the problem is solved by the time such CPUs come to the market. Not yet announced, but well conceivable, CAx vendors might launch a similar initiative to adapt their applications to multi-threading on many-cores. Real-time online change and release management, as formulated earlier, might in fact require multiple simultaneous threads running. The end goal of having such advanced change and release management tools available might support the reaching of more strategic goals, such as cross-supply-chain mass customization support and adaptive manufacturing. The Microsoft and Intel initiative was timed with typical development cycle times (Fig. 9) in mind, as discussed above. The same figure says that CAx vendors should program their initiatives now, too. Brainstorming on the above-described IT trends and their impact on industrial design and manufacturing resulted in Table 2. The following remarks apply: • Only inspirers are indicated in the table (rightmost column); actors and supporters have been left out, for clarity. They can be 'matched' following Table 1.
Among LSEs (large-scale enterprises), there is a desire to shift responsibilities and risk down the supply chain while increasing information and knowledge transfer upstream [26]; see for instance the example given for the consumer electronics industry (Fig. 8). This may affect the positioning of actors and inspirers; • Designers generally fall into the category of CAx consumers, being primarily actors or supporters; • Legal, business administration, managerial, etc. expertise is also needed in several developments. Table 2 clearly calls for a map of multi-discipline development tracks, exploring new collective thinking patterns. The framework to be developed in Section 4 will indeed show this. Stevenson, in [45], analyses group thinking, emphasizing that to liberate oneself from the 'converged epistemology of the group', trans-epistemological thinking is needed: the exchange of new, often radical thoughts among scientists from various backgrounds.

3.6 Linking Science and Technology
The rightmost column in Table 2 indicates whether any academic research is foreseen. Here, we adopt the Ω-λ-diagram technique (Fig. 10), interpreting the notions proposed in [40] with some freedom to make things work. The propositional Ω-domain represents background science ('how come'), while prescriptive technique ('what/how to') is depicted in the λ-domain, with a mapping in between. Ω-knowledge is more fundamental, λ-knowledge more application-oriented. Notice that science domains like IT, design, mechanical engineering, etc. may be represented in both the Ω- and the λ-domain, much like chemistry can be both fundamental and applied in the process industry. Missing technology development (in response to IT trends) generally requires additional pieces of Ω-knowledge from fundamental research, in order to assemble the complete λ-knowledge needed.
The Ω→λ mapping can take the form of collected corollary knowledge, an established relationship ('a law'), a methodology, a paradigm, an algorithm, etc.: the academic products, say. The more recipe-like (how-to) the knowledge, the less likely it is to appear on the Ω-side, as how-to knowledge belongs to the λ-domain; but hard boundaries will not be proclaimed here. Figure 12 shows the resulting Ω-λ-diagram of the missing technologies listed in Table 2. Not all technologies and mappings have been entered, for readability. The relationships for three technological development cases have been worked out in Figure 12.

Table 2. Inventory of missing technology
IT trends: split up of IT; Near Field Communication (NFC); web services; scattering storage; vanishing OS; geomapping paradigm; software re-bundling; mind mapping; fusion of knowledge, ontology, and MAS.
Impacts — Design: more embedded logic and intelligence in design; Manufacturing: fitting logic and function through form; Design: low-cost dispensable and disposable micro-devices, miniaturization; Manufacturing: surface mounting and protection; Manufacturing: web services for e-logistics and procurement of parts and for cross supply chain e-resource planning and management; Design: consistent design repository, PDM/PLM, minimizing risk of IP violation; Design and Manufacturing: synchronized application tools suite; Design: adapted design paradigms for geomapping products; Design and Manufacturing: open standards; Manufacturing: process monitoring; Manufacturing: lifetime usage support over the Internet and functional reconfiguration; Design: CAx tools suite re-bundling; Manufacturing: re-bundling; Design: involvement of non-experts; Design: more 'rich' and 'soft' information, preference scores, etc. in conceptual design.
Initiatives/responses/actions, with inspirer roles (I:):
1. CAx curriculum expansion (I: academic institutions)
2. AI-based logic design tools (I: academic/CAx providers)
3. Virtual smart mockups (I: academic/CAx tech providers)
4. Mind mapping in design (I: academic/CAx consumers)
5. Smart parts catalogue (I: CAx consumers)
6. Universal parts bus/wireless (I: generic IT providers)
7. Biodegradable construction material for disposable micro-devices (I: academic/generic technology providers)
8. Smart parts catalogue (I: CAx consumers)
9. Catalog complex logic hardware (I: generic IT providers)
10. Protection foils (skin technology) (I: academic/CAx consumers)
11. Electromagnetic 'clean rooms' (I: academic/CAx consumers/housing and construction)
12. Standard for catalog part taxonomy (I: part suppliers/academic)
13. Automated part supplier procurement/automated e-trading (I: generic IT providers)
14. Advanced stock optimization (I: academic/CAx consumers)
15. Cross supply CAPP/optimization (I: CAx providers/consumers)
16. Cross supply chain QC/QA (I: CAx providers/consumers)
17. Cross supply chain e-ERP (I: generic IT providers)
18. Database alliance technology (I: generic IT providers)
19. Trust/deontic/normative intelligence (I: legal/generic IT providers)
20. Message certificate technology (I: legal/generic IT providers)
21. Intelligent data integrity, patch, version and release control systems (I: academic/generic IT providers)
22. Roaming configuration management (I: generic IT providers)
23. Compliance roles and access rights (I: legal/generic IT providers)
24. Agent-based compliance V&V (I: legal/generic IT providers)
25. Distributed document management (I: generic IT providers)
26. IP encryption technology (I: academic/generic IT providers)
27. Cross supply chain SOA/SaaS (I: generic IT providers)
28. Pervasive product technology (I: academic/CAx consumers)
29. Enhanced PLM (short/long life) (I: CAx providers/consumers)
30. Self-organizing 'data rivers' and 'data precipitation' technology (I: academic/generic IT providers)
31. Product-middleware technology (protocols for reconfiguration, intercommunication) (I: academic/generic IT providers)
32. Product-as-an-agent technology (I: academic/CAx consumers)
33. Product security (malware, etc.) (I: generic IT providers)
34. Auto-service scheduling technology (I: CAx consumers)
35. Reconfiguration technology (I: CAx consumers/providers)
36. Enhanced PLM (I: CAx providers/consumers)
37. Generic interoperability technology at knowledge level (I: academic/CAx providers)
38. Shop floor ERP (I: CAx providers)
39. Easy associative specification language (I: academic/CAx providers)
40. Cognitive, non-verbal mental expression and association protocols (I: academic/CAx providers)
41. Mind mapping e-consumer decision support systems (I: academic/generic IT providers)
42. Improved soft computing technology (I: academic/CAx consumers)
43. New knowledge-intensive design paradigms (I: academic/CAx consumers)
44. Design agent technology (I: academic/CAx consumers)
45. Supervising agent technology (I: CAx providers)
46. Various open standards (I: academic/generic IT providers/CAx providers/consumers)

Case 1: Initiative: Virtual smart mock-ups. The trend towards more autonomous and smart objects implies, for design, that designers will apply more embedded logic and intelligence in their designs. The diagram shows four possible actions in response, brought up during the brainstorming session: 1. CAx curriculum expansion: teach student
designers the principles and advanced skills of designing with embedded logic and intelligence; 2. Develop AI-based logic design tools for designers of customer products; 3. Create virtual smart mockups that virtualize smart products in the design stage, with which adequate impressions and simulations can be presented to designers; 4. Develop mind mapping methods for designers. In this Case 1, we work out Virtual smart mockups in greater detail, as an example. The basic question is: how to program and schedule research such that we will have Virtual smart mockups at our disposal within the next generation of CAx design tools? First of all, we will need a valid and complete system-theoretical behavior model, so that research conclusions obtained from the Virtual smart mockup are representative of the real product being designed. That is: the possible states (state space) must be identical, and state transitions must be triggered by the same events and conditions and lead to the same stable state (possibly through identical non-stable states). This requires fundamental research from cybernetics and a fitting CAx design application in the applied domain. Observed product behavior (e.g. in response to environmental stimuli) needs to be verified and validated against this theoretical model, for which we need a validation protocol. So, in summary: 1. Cybernetics delivering a system-theoretical finite state behavioral model plus a protocol for validation of the product's behavior. Likewise, we find: 2. The humanities delivering a paradigm to estimate consumers' appreciation of the product behavioral model: is the product acting as humans expect? 3. Computer science delivering an algorithm to robustly mimic internal logic and to (re)play a simulated product behavior session in a behavior player; 4. Finally, Design and Engineering delivering a methodology to design and create the virtual mock-up.
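The requirement that the virtual mock-up and the real product share the same state space and event-triggered transitions can be made concrete with a toy finite-state model. The lamp example, its states and its events below are invented for illustration; they are not taken from the paper.

```python
# Minimal sketch (an assumption, not the paper's model): a finite-state
# behavioural model in which events trigger transitions, so that a
# replayed session in the mock-up visits the same states as the product.
class BehaviourModel:
    def __init__(self, states, transitions, initial):
        self.states = set(states)        # the state space
        self.transitions = transitions   # maps (state, event) -> next state
        self.state = initial

    def fire(self, event):
        # A transition is triggered by an event in the current state.
        self.state = self.transitions[(self.state, event)]
        return self.state

# Hypothetical touch-controlled smart lamp: the same 'touch' events must
# lead to the same stable states in the virtual mock-up as in the product.
lamp = BehaviourModel(
    states={"off", "dim", "bright"},
    transitions={("off", "touch"): "dim",
                 ("dim", "touch"): "bright",
                 ("bright", "touch"): "off"},
    initial="off")
```

Validating the mock-up then amounts to checking that both models accept the same events in each state and agree on the resulting states, which is what the validation protocol from cybernetics would formalize.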
This somewhat simplified list of missing technologies already provides an interesting coordinated cross-discipline research and development 'program'. Recall that the ultimate goal in this case was to prepare for and benefit maximally from the generic IT trend of splitting up IT into IT facility services (not so relevant for designers) and pervasive embedded IT in smart objects (with great consequences for designers).

Case 2: Initiative: Smart parts catalogue. In a similar fashion, for manufacturing, more embedded logic leads to the demand (action) for more standard smart components: a smart parts catalogue and a way to wire the parts. See the diagram below. Missing technology inventory: 1. Design and Engineering delivering an ontology to create a smart parts catalogue; 2. Computer science delivering a web-services-based computer-to-computer e-procurement framework; 3. Business Administration delivering trading models, e-payment models, supply conditions, and trading trust standards.

Case 3: Initiative: Bio-degradable disposable micro-device construction materials. Design: low-cost disposable micro-devices. Missing technology inventory: 1. Ecology & Life Sciences delivering ecological knowledge on the requirements such construction material should possess; 2. Chemistry, Physics & Mathematics delivering the composition of the basic substance for the material and the industrial process plant to compose the material on an industrial scale; 3. Computer Science delivering standards for the computational conditions of the device (object) and the environmental sensing characteristics needed; 4. Business Administration delivering an industrial business case. Recall that in this case the ultimate goal was to prepare for and benefit from the generic IT trend of advancing NFC and pervasive computing in smart objects and environments, and the demand to design and manufacture disposable biodegradable devices.
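The smart parts catalogue of Case 2 can be pictured as parts annotated against a shared taxonomy that procurement software queries computer-to-computer. The sketch below is an invented illustration: the part ids, attribute names and the query function are assumptions, not anything specified in the paper.

```python
# Hedged sketch of the 'smart parts catalogue' idea: catalogue entries
# carry taxonomy slots so that an e-procurement client can select
# candidate components automatically. All data here is fabricated.
CATALOGUE = [
    {"id": "P1", "kind": "sensor",   "interface": "universal-bus", "logic": True},
    {"id": "P2", "kind": "sensor",   "interface": "wired",         "logic": False},
    {"id": "P3", "kind": "actuator", "interface": "universal-bus", "logic": True},
]

def find_parts(kind=None, interface=None, smart=None):
    """Return ids of catalogue entries matching the requested taxonomy slots."""
    hits = []
    for part in CATALOGUE:
        if kind is not None and part["kind"] != kind:
            continue
        if interface is not None and part["interface"] != interface:
            continue
        if smart is not None and part["logic"] != smart:
            continue
        hits.append(part["id"])
    return hits
```

The missing pieces listed for Case 2 map directly onto this sketch: the ontology fixes the taxonomy slots, the web-services framework carries the query, and the business models govern the resulting transaction.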
Fig. 12. Ω-λ-diagram of the missing technologies, cases 1-3

Figure 12 depicts the Ω-λ-diagram of the missing technologies of these three cases. Per-discipline work packages can be taken from Figure 12 by zooming in on the arrows (flows of academic products) that leave each box in the Ω-domain. Spelling out the whole Ω-side yields the full research program.

4 A DEVELOPMENT FRAMEWORK

Traditionally, academia and industry have their own disjoint development agendas; there are only few effective knowledge generation chains, the preferred solution. The key to successful interfacing between academic and applied research lies in a painstakingly accurate and formal specification of the results to be delivered at the 'interface', i.e. in the central column in Figure 12. Figure 12 also shows interdependencies in the Ω-knowledge developments. Fundamental research generally takes place at universities; technological institutes are generally equipped to conduct applied research. Of course, dominant technology producers and consumers may also have their own facilities and resources. As in Table 2 and Figure 12, developments may be organized in work packages, assigning fundamental research parts to universities, defining the output academic products, and applying the academic products in an evolutionary prototype application. EU Frameworks might be organized like that. The CERN model also seems applicable. As in car manufacturing, where concept cars are prototype applications to explore and demonstrate the state of technology, concept products, services and environments might emerge in response to driving IT trends. The knowledge generation chain should stretch out to demonstration projects, in which technology and societal consequences can still provide feedback on the development process. Demonstration projects can to a large extent (but not entirely) be virtualized.
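The idea of reading per-discipline work packages off the arrows that leave each box on the fundamental-science side of the diagram can be sketched as grouping labelled edges. The arrow list below loosely paraphrases Cases 1 and 2; the data structure and function name are assumptions made for illustration.

```python
# Illustrative sketch: a diagram modelled as labelled arrows
# (science-side box, academic product, target technology), from which
# work packages are formed by grouping each box's outgoing arrows.
ARROWS = [
    ("Cybernetics", "finite-state behavioural model", "virtual smart mockups"),
    ("Cybernetics", "behaviour validation protocol", "virtual smart mockups"),
    ("Computer science", "behaviour replay algorithm", "virtual smart mockups"),
    ("Design & Engineering", "smart part ontology", "smart parts catalogue"),
]

def work_packages(arrows):
    # One work package per discipline: the academic products it must
    # deliver, each tagged with the technology that consumes it.
    packages = {}
    for discipline, product, tech in arrows:
        packages.setdefault(discipline, []).append((product, tech))
    return packages
```

Spelling out the full research program, as the text puts it, corresponds to enumerating every such package rather than only those of the three worked-out cases.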
5 CONCLUSIONS

The impact of IT technology change on design and manufacturing is significant and driving, but can be adapted to. This can only be done within the near-term future time span, in a joint academic-industry effort in which dominant technology providers and consumers should fulfill an inspiring role. However, immersive technological innovation may threaten the market positions of presently dominant technology providers and consumers alike. History records enough examples of once dominant players that no longer exist, and only a collective innovation push may deliver the 2030 scenarios portrayed in this paper. Academic engagement and enrollment might be in the form of an open, for instance CERN-like, initiative. Supervision and management of such initiatives require careful thought and the right conditions, as we have learnt from evaluations of large EU programs. Designers might take full advantage of the new IT technology underway, but current design and engineering curricula need to be revised and enriched. Open standards may be both a means and a goal. Deployment of (genuine) interoperability is critical in the whole palette of developments. This is seen as the ultimate touchstone for trans-epistemological shaping of the future to occur. Purely technologically speaking, interoperability IT technology at the data level is already in place. The difficulties arise from semantics and different understandings of the precise meaning of data. At present, CAx technology providers don't always assign priority to interoperability issues. Moreover, CAx technology providers generally seek to optimize the performance of their technology through advanced but proprietary storage schemes, each with its own internal representation. In addition, on the technology consumer side, OEMs, generally large-scale enterprises at the top of the supply chain, should no longer negotiate the use of tools and application suites with their preferred suppliers, but adopt new, open technology.
The principal question is of course: how to capitalize on these opportunities? Not all 'IT progress' can be transformed into productivity increase, and not all productivity increase is due to IT developments alone [18]. Industrial uptake is not immediate. Basically, however, IT has the potential to induce economic output growth [12]. Also, on a much smaller scale, Claycomb et al. [13] show that applying knowledge about the customer pays. Availability of the mere technology is generally not enough. Education and adaptive business strategies are preconditions for the technology to deliver its full potential, plus a research and development horizon spanning the near-term future, overlooking current and next generation developments.

6 REFERENCES

[1] Albadvi, A. Formulating national information technology strategies: a preference ranking model using the PROMETHEE method. European Journal of Operational Research, 153, 2004, p. 290-296.
[2] Alvarado, M., Cantu, F. Autonomous agents and computational intelligence: the future of AI application for petroleum industry. Expert Systems, 26, 2004, p. 3-8.
[3] Amaravadi, Ch.S. The world and business computing in 2051. Journal of Strategic Information Systems, 12, 2003, p. 373-386.
[4] Attaran, M. Exploring the relationship between information technology and business process reengineering. Information & Management, 41, 2004, p. 585-596.
[5] Aversano, L., Bodhuin, Th., Canfora, G., Tortorella, M. Technology-driven business evolution. The Journal of Systems and Software, 79, 2006, p. 314-338.
[6] Smith, B. Against idiosyncrasy in ontology development. In: Bennett, B., Fellbaum, C. (Eds.), Proceedings of FOIS 2006.
[7] Barthes, J-P.A., Tacla, C.A. Agent-supported portals and knowledge management in complex R&D projects. Computers in Industry, 48, 2002, p. 3-16.
[8] Benenson, I., Torrens, P.M. Geosimulation: object-based modeling of urban phenomena (Editorial). Computers, Environment and Urban Systems, 28, 2004, p. 1-8.
[9] Bergeron, F., Raymond, L., Rivard, S. Fit in strategic information technology management research: an empirical comparison of perspectives. Omega, 29, 2001, p. 125-142.
[10] Bjelland, O.M., Wood, R.Ch. The Board and the Next Technology Breakthrough. European Management Journal, 23, 2005, p. 324-330.
[11] Breathnach, P. Globalisation, information technology and the emergence of niche transnational cities: the growth of the call centre sector in Dublin. Geoforum, 31, 2000, p. 477-485.
[12] Cette, G., Mairesse, J., Kocoglu, Y. ICT diffusion and potential output growth. Economic Letters, 87, 2005, p. 231-234.
[13] Claycomb, C., Dröge, C., Germain, R. Applied customer knowledge in a manufacturing environment: flexibility for industrial firms. Industrial Marketing Management, 34, 2005, p. 629-640.
[14] Coriat, B., Orsi, F. Establishing a new intellectual property rights regime in the United States: origins, content and problems. Research Policy, 31, 2002, p. 1491-1507.
[15] Dankwort, C.W. Holistic product development. Challenges in interoperable processes, methods and tools. 5th International Workshop on Current CAx-Problems, 2005. Aachen: Shaker Verlag, Berichte aus der Informationstechnik.
[16] Dankwort, C.W., Weidlich, R., Guenther, N., Blaurock, J.E. Engineers' CAx education — it's not only CAD. Computer-Aided Design, 36, 2004, p. 1439-1450.
[17] Earle, J.S., Pagano, U., Lesi, M. Information technology, organizational form, and transition to the market. Journal of Economic Behavior & Organization, 60, 2006, p. 471-489.
[18] Feldstein, M. Why is productivity growing faster? Journal of Policy Modeling, 25, 2003, p. 445-451.
[19] Fogliatto, F.S., da Silveira, G.J.C. Mass customization: a method for market segmentation and choice menu design. Int. J. of Production Economics, 2007.
[20] Fountain, J.E. Constructing the information society: women, information technology and design. Technology in Society, 22, 2000, p. 45-62.
[21] Frederiksson, P., Gadde, L-E.
Flexibility and rigidity in customizing and build-to-order production. Industrial Marketing Management, 34, 2005, p. 695-705.
[22] Frutos, J.D., Borenstein, D. A framework to support customer-company interaction in mass customization environments. Computers in Industry, 54, 2004, p. 115-135.
[23] Gomez-Perez, A., Fernandez-Lopez, M., Corcho, O. Ontological Engineering; with examples from the areas of Knowledge Management, e-Commerce and the Semantic Web. London: Springer-Verlag, 2004.
[24] Grenci, R.T., Watts, Ch.A. Maximizing customer value via mass customized e-consumer services. Business Horizons, 50, 2007, p. 123-132.
[25] Harrison, G.H., Safar, F. Modern E&P data management in Kuwait Oil Company. Journal of Petroleum Science and Engineering, 42, 2004, p. 79-93.
[26] Hassan, T.M., McCaffer, R. Vision of the large scale engineering construction industry in Europe. Automation in Construction, 11, 2002, p. 421-437.
[27] Holland, Ch.P., Shaw, D.S., Kawalek, P. BP's multi-enterprise asset management system. Information and Software Technology, 47, 2005, p. 999-1007.
[28] Hollifield, C.A., Donnermeyer, J.F. Creating demand: influencing information technology diffusion in rural communities. Government Information Quarterly, 20, 2003, p. 135-150.
[29] Horvath, I., Li, P., Vergeest, J.S.M. (Eds.). Proceedings of the TMCE 2002. Wuhan: HUST Press, 2002.
[30] Huang, Ch-Y., Ceroni, J.A., Nof, S.Y. Agility of networked enterprises — parallelism, error recovery and conflict resolution. Computers in Industry, 42, 2000, p. 275-287.
[31] James, J. Low-cost information technology in developing countries: current opportunities and emerging possibilities. Habitat International, 26, 2002, p. 21-31.
[32] Jiao, J., Helander, M.G. Development of an electronic configure-to-order platform for customized product development. Computers in Industry, 57, 2006, p. 231-244.
[33] Jiao, J., Zhang, Y. Product portfolio identification based on association rule mining.
Computer-Aided Design, 37, 2005, p. 149-172.
[34] Kasprzak, E.A., Lewis, K., Milliken, D.L. Steady-state vehicle optimization using Pareto-minimum analysis. SAE Technical Paper 983083. Warrendale, US: SAE International, 1998.
[35] Ku, K-Ch., Kao, H-P., Gurumurthy, Ch.K. Virtual inter-firm collaborative framework — an IC foundry merger/acquisition project. Technovation, 27, 2007, p. 388-401.
[36] Lin, H. Fluency with Information Technology. Government Information Quarterly, 17(1), 2000, p. 69-76.
[37] Lopez, J., Montenegro, J.A., Vivas, J.L., Okamoto, E., Dawson, E. Specification and design of advanced authentication and authorization services. Computer Standards & Interfaces, 27, 2005, p. 467-478.
[38] Matthews, J., Singh, B., Mullineux, G., Medland, T. A constraint-based limits modeling approach to investigate manufacturing-machine design capability. Strojniški vestnik — Journal of Mechanical Engineering, 53(2007), 7-8, p. 462-477.
[39] Moitra, D., Ganesh, J. Web services and flexible business processes: towards the adaptive enterprise. Information & Management, 42, 2005, p. 921-933.
[40] Mokyr, J. The Gifts of Athena: Historical Origins of the Knowledge Economy. Princeton University Press, 2005.
[41] Raghu, T.S., Vinze, A. A business process context for knowledge management. Decision Support Systems, 43, 2007, p. 1062-1079.
[42] Ruiz-Mercader, J., Merono-Cerdan, A.L., Sabater-Sanchez, R. Information technology and learning: their relationship and impact on organisational performance in small business. Int. J. of Information Management, 26, 2006, p. 16-29.
[43] Salvador, F., Forza, C. Configuring products to address the customization-responsiveness squeeze: a survey of management issues and opportunities. Int. J. of Production Economics, 91, 2004, p. 273-291.
[44] Slaughter, R.A. Why is the future still a 'missing dimension'? Futures, 39, 2004, p. 747-754.
[45] Stevenson, T. Will our futures look different, now? Futures, 32, 2000, p. 91-102.
[46] Su, J.C.P., Chang, Y-L., Ferguson, M. Evaluation of postponement structures to accommodate mass customization. Journal of Operations Management, 23, 2005, p. 305-318.
[47] Tu, Q., Vonderembse, M.A., Ragu-Nathan, T.S. The impact of time-based manufacturing practices on mass customization and value to customer. Journal of Operations Management, 19, 2001, p. 201-217.
[48] Warhurst, A. Future roles of business in society: the expanding boundaries of corporate responsibility and a compelling case for partnership. Futures, 37, 2005, p. 151-168.
[49] Woerner, J., Woern, H. A security architecture integrated co-operative engineering platform for organised model exchange in a digital factory environment. Computers in Industry, 56, 2005, p. 347-360.
[50] Wu, D.J. Software agents for knowledge management: coordination in multi-agent supply chains and auctions. Expert Systems with Applications, 20, 2001, p. 51-64.
[51] Zambonelli, F., Gleizes, M-P., Mamei, M., Tolksdorf, R. Spray computers: explorations in self-organization. Pervasive and Mobile Computing, 1, 2005, p. 1-20.

Paper received: 28.2.2008
Paper accepted: 15.5.2008

A Feature-Based Framework for Semantic Interoperability of Product Models

Ravi Kumar Gupta* - B. Gurumoorthy
Indian Institute of Science, Centre for Product Design and Manufacturing, Bangalore, India

The paper addresses the problem of exchanging product semantics along with other product information such as shape. Exchanging the semantics/meaning associated with shape data enables manipulation of and reasoning with the shape model at higher levels of abstraction. The semantics associated with shape data can convey design intent, inter-relationships between entities in the shape, and other data important for downstream applications such as manufacturing. As the product model does not support semantics, its use in other systems/domains leads to the construction of new models.
Using a single product model across the product lifecycle is beneficial from the point of view of maintaining the integrity of the data and avoiding the effort of creating multiple models. The paper first identifies the different types of semantic interoperability problems arising during the exchange of product models in product development. These are: different terms referring to the same shape; different representations for a shape; meanings of terms that are context dependent; and a mismatch in the entities supported by the two systems. We present a one-to-many framework for the exchange of the product information model, product semantics in particular. This framework is built using the Domain Independent Form Feature (DIFF) model as the representation of features in the shape model, along with an ontology that captures the vocabulary in use in feature models. A reasoning module that can extract multiple constructions/views of a feature has also been developed. This reasoning module is used to associate multiple construction paths with the features and to associate all applicable meanings from the ontology with the DIFF model. Each CAD system can then use the semantics and construction history it supports to further manipulate the product model. Results of the implementation of the above solution in exchanging product semantics with a commercial CAD system are presented and discussed. © 2008 Journal of Mechanical Engineering. All rights reserved.
Keywords: information exchange, product lifecycle management, product data exchange, semantic interoperability

1 INTRODUCTION

Given the sheer complexity and variety required in products today to meet the requirements of an increasingly savvy and aware customer, it is impossible for any organisation to manage the product development process without collaboration [1]. Collaboration across multiple locations and multiple domains/disciplines is required to be able to deliver the right product at the right time and the right cost.
For such a collaboration to be successful, not only data but information and knowledge must be exchanged. Product Lifecycle Management (PLM) is emerging as a "computational framework which effectively enables capture, representation, retrieval and reuse of product knowledge" across the product lifecycle to support such a knowledge-intensive product development environment [1]. If PLM as a solution is to include all phases in the product lifecycle and all the stakeholders, then the exchange of data and information between the different phases and stakeholders becomes a critical element of PLM. The exchange of product information (including data), as opposed to data alone, is a key differentiator of PLM over earlier approaches. The motivation is that if product information is exchanged, it is possible to have knowledge-based solutions in each phase that reason about the information to arrive at decisions. Currently, the exchange of data requires the use of dedicated translators or the recreation of models. Product information (data at a higher level of abstraction), however, is exchanged only through human intervention. With product development happening in multiple locations with multiple tools/systems, semantic interoperability between these systems/domains becomes important.

*Corr. Author's Address: Indian Institute of Science, Centre for Product Design and Manufacturing, Bangalore, India, rkg@cpdm.iisc.ernet.in

Semantics is the meaning associated with a terminology in a particular context, and interoperability means the ability to work together to accomplish a common task. So, semantic interoperability of product models refers to the automatic exchange of the meaning associated with product data among application domains throughout the product development cycle. Application domain refers to any of the following: product design, manufacturing, ERP, CRM, and SCM.
Semantic interoperability implies the existence of a common and shared understanding of the meaning underlying the information that is being exchanged [4]. In contrast to the common usage of the term "product semantics" in the design community, our interest is in the semantics of the product information that is being exchanged and not the semantics communicated by the product itself. Exchanging the semantics/meaning associated with shape data enables manipulation of and reasoning with the shape model at higher levels of abstraction. The semantics associated with shape data can convey design intent, interrelationships between entities in the shape and other data important for downstream applications such as manufacturing. As the product model does not support semantics, its use in other systems/domains leads to construction of new models. Using a single product model across the product lifecycle is beneficial from the point of view of maintaining integrity of the data and avoiding the effort in creating multiple models. Lack of semantic agreements is due to several reasons. Semantics associated with data and procedures is not explicitly represented and is often context-dependent. Mismatch in terms and meanings also arise due to independent development efforts often aimed at establishing proprietary naming and other conventions. Resolving the semantic mismatch in most domains requires the involvement of people. In the product development cycle several different domains (engineering design, industrial design, manufacturing, supply chain, marketing) come into play making the ability to exchange product data with semantics very critical. 1.1 Product Data Exchange Exchange of product data has undergone considerable evolution since the days of annotated engineering drawings. At that point the focus was to exchange primarily shape/geometric data between design and manufacturing. 
With the advent of computer-aided design and drafting systems, the exchange of shape models between different CAD/CADD systems was required. Different approaches are used to handle the interoperability problem between product models: a single CAD environment for all tasks; direct data transfer between different systems, which requires n(n-1) translators for n tools; and neutral file formats, which require 2n translators for n tools, as depicted in Figure 1. The use of a neutral format therefore became the preferred framework to solve the data exchange problem. The Drawing Exchange Format (DXF) is the de facto neutral format used to exchange 2D drawing data across different drawing tools. The Initial Graphics Exchange Specification (IGES), another neutral format, was then introduced for the exchange of geometry information between dissimilar systems. IGES, however, is capable of transferring only the geometry of the product; the non-geometry data and design intent are lost. The Standard for the Exchange of Product model data (STEP, formally ISO 10303) evolved to interrelate all geometric and non-geometric data in a useful and meaningful way to represent the product content model, so that the complete description can be exchanged between CAD systems. STEP is at present the most comprehensive standard addressing the needs of geometric data exchange. A major advantage of STEP (yet to be fully exploited) is that it is possible to develop standards for the exchange of data between different domains in the product lifecycle. Analysis and manufacturing are two of the domains that have been handled so far. With the emergence of features technology in CAD systems, the problem of exchanging feature models (geometric data at higher levels of abstraction) became important.

Fig. 1. Product data exchange: by direct translators and by neutral format
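The translator counts quoted above follow from simple combinatorics, sketched here as a quick check: with n tools, point-to-point exchange needs one translator per ordered pair of tools, while a neutral format needs only one exporter and one importer per tool.

```python
# Translator counts for exchanging data among n CAD tools.
def direct_translators(n):
    return n * (n - 1)   # one one-way translator per ordered pair of tools

def neutral_translators(n):
    return 2 * n         # one exporter and one importer per tool
```

The break-even point is n = 3 (six translators either way); for any larger tool count the neutral format needs fewer translators, e.g. 20 instead of 90 for ten tools, which is why it became the preferred framework.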
Current art in the exchange of product data is at the level of exchanging feature models. Feature-based product data exchange is emerging to accommodate design intent in data exchange. Features are capable of carrying constraints, parameters and application attributes. A working draft on construction history features has been developed by the STEP group [6]. Current art in exchanging part geometric data (shape models) does not solve the semantic interoperability problem, as the shape model does not convey the product semantics.

1.2 Issues in Semantic Interoperability in Product Development

In the present paper, the focus is restricted to the exchange of shape models. Semantic interoperability problems arise due to the use of a shape model in different systems and different domains. The different types of semantic interoperability problems arising during the exchange of shape models in product development are first identified.

1.2.1 Different Labels Referring to the Same Shape

Two or more terms may refer to the same shape in a product development environment. A cylinder removed from another cylinder can be defined in terms of a bush or a circular hole, both referring to the same shape, as shown in Table 1. Similarly, one can define the other models in Table 1.

Table 1. Shape has different labels

1.2.2 Different Representations for the Same Shape

The representation associated with a shape may differ. For example, a cylinder removed from another cylinder can be obtained by revolution, sweep or extrusion, as described in Table 2. Similarly, the other models in Table 2 can be obtained by extrusion or sweep.

1.2.3 Meaning of a Term is Context Dependent

The term condenser has a different meaning in the heat transfer domain and in the electrical engineering domain. Examples in product design and manufacturing are shown in Table 3.

2 LITERATURE REVIEW

As mentioned earlier, the need for sharing and exchanging product data between various domains has been around for a while now [17].
Owen [10] and Pratt [12] review the work on the exchange of data and features between CAD systems. Most efforts in exchanging semantics involve features. This is only natural, as features evolved to carry semantic information about form, function and behaviour [2]. In this section we focus only on those efforts that address the exchange of product semantics using features. There have been several attempts at defining ontologies for features [5] and [14]. The focus there is to extend feature specification by using an ontology of design concepts (as high-level modelling entities) to link product function to shape. Most efforts in building feature ontologies have focused on capturing taxonomy and not on any reasoning based on the ontology [2].

Table 2. Shape has different representations

Table 3. Semantics in product design and manufacturing [11]
Term      | Meaning in solid modelling         | Meaning in 2.5D machining
Extrusion | extruded geometry                  | 2.5D machined object
Shape     | has 3D geometric shape             | has machining contour
Cube      | cubical shape                      | rectangular machining contour
Block     | extruded object with cubical shape | 2.5D object machined with rectangular contour

Brunetti et al. [2] propose the use of features to achieve a semantic interface to different CAx applications. They describe a conceptual framework for how an ontology of features and shape can be used to provide a semantic retrieval system or semantic interface to 3D modelling systems. The framework prescribes ontologies at different levels of abstraction, namely the model, features, constraints, topology and geometry available in a CAD system. The paper only describes the conceptual model; no implementation is described. Patil et al. [11] also present an ontology-based approach to enable semantic interoperability.
They propose the use of an ontology defined in the Product Semantic Representation Language (PSRL) as an intermediate layer between the two systems that need to exchange product data and semantics. Semantic translation then becomes a problem of mapping from one system to the ontology in PSRL, and from the ontology to the target system. The axioms and definitions that form the ontology in PSRL have to be a superset of the terms in the systems exchanging data. Therefore, for every new system to be included, the ontology in PSRL has to be extended with any new terms or labels not already present. Mostefai et al. [8] propose an ontology-based approach to enable collaboration. The proposed ontology supports queries on the product model across three views (design, assembly and manufacturing). They also mention the concept of equivalence between entries in the ontology, which is similar to the first type of interoperability problem identified in this paper. In their approach, the linkages between the entries in the ontology have to be specified, and the ontology editor then uses these linkages to answer queries and establish equivalence. The proposed ontology would have to be significantly expanded to address the semantic mismatches identified in the present work. Subramani [16] describes another approach, exchanging product data via feature models. In this work, feature-volume-based product data exchange is proposed. Feature-based modelling captures semantics and the designer's intent through parameters and constraints. The method transfers product data as feature volumes; a feature volume contains feature faces and their attributes. The STEP definition of faces and geometry is used to represent the feature volume. The construction history of the feature model is recreated using the face attributes.
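The ontology-as-pivot idea shared by several of these approaches can be sketched as two dictionaries: one from each system's vocabulary into the shared ontology, one back out. The term and system names below are illustrative only (borrowing the bush/circular-hole example from Section 1.2.1), not PSRL's actual vocabulary:

```python
# Map each system's local term to a shared ontology concept...
TO_ONTOLOGY = {
    ("SystemA", "bush"): "hollow_cylinder",
    ("SystemB", "circular_hole"): "hollow_cylinder",
}
# ...and the concept back to the target system's local term.
FROM_ONTOLOGY = {
    ("SystemA", "hollow_cylinder"): "bush",
    ("SystemB", "hollow_cylinder"): "circular_hole",
}

def translate(term, source, target):
    """Source term -> ontology concept -> target term."""
    concept = TO_ONTOLOGY[(source, term)]
    return FROM_ONTOLOGY[(target, concept)]

print(translate("bush", "SystemA", "SystemB"))  # circular_hole
```

With n systems this needs only n mappings into the ontology, but, as noted above, the ontology itself must be extended whenever a new system introduces terms it does not yet cover.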
Unlike current methods for data exchange, the proposed scheme allows exact representation of 2D and 3D constraints through face classification and multiple construction procedures for each feature instance. The latter allows handling of situations where the receiving system does not support some of the procedures in the source system. Since individual feature volumes are transferred, constraint and parameter representation is preserved and validation of features with respect to the part model is avoided. The proposed method has been implemented using the eXtensible Markup Language (XML), which carries the semantic representation. At present, the use of XML schemas has been proposed to enable the exchange of data between different systems and applications. Several XML schemas have already been proposed by researchers [7] and [16] and by vendors (3DXML, PLM-XML, X3D). However, these focus only on enabling the exchange of data and the visualization of shapes.

3 OVERVIEW

Our research is focused on enabling seamless exchange of product information (as opposed to only shape data) across the entire product lifecycle. As a first step towards this goal, the present study aims at exchanging product semantics along with product shape. In earlier work, a framework based on domain independent form features (DIFF) (Fig. 2) was proposed to enable the exchange of feature models between CAD systems [16]. In the present work, we present a one-to-many framework for the exchange of the product information model, product semantics in particular. This framework is built using the DIFF as the representation of features in the shape model, along with an ontology that captures the vocabulary in use in feature models. Though Figure 3 shows the schematic for only one source and one target system, there can be any number of source and target systems.

Fig. 2. Feature-based data exchange architecture [16]

Fig. 3. Schematic diagram for semantic interoperability of product model
An interface is used to select and read the features and construction history of a product for a target system. Given the features or construction history in the source system, feature volumes in the DIFF format can be constructed [15]. From the DIFF model, alternate labels for use in the target system can be identified. If there is no label match (the target system is new), then matching involves constructing DIFF models for all the labels available in the target system and finding a match by comparing these DIFF models with the DIFF model corresponding to the feature to be exchanged. A similar procedure is followed to exchange the construction history associated with a feature label. A reasoning module that can extract multiple constructions/views of a feature has been developed. This module is used to associate multiple construction paths with each feature and to attach all applicable meanings from the ontology to the DIFF model. Each target system can then use the semantics and construction history it supports to further manipulate the product model. In case of a mismatch in the labels of terms in the construction history, the correct construction path is identified by matching the corresponding DIFF features. The present implementation has been done using an ontology editor in the interests of quick prototyping: a prototype of the ontology and of the reasoning on the ontology has been built using the Protégé ontology editor [13]. In the following sections we first briefly describe the DIFF feature structure, followed by a description of the tool developed.

4 DOMAIN INDEPENDENT FORM FEATURE (DIFF) MODEL

A feature is defined in terms of faces and face adjacency relationships. Features are viewed as formed by subtracting or adding a single solid-piece from or to a base-solid, as depicted in Figure 4.
The solid existing before the subtraction or addition is referred to as the base-solid, and the solid subtracted or added is referred to as the solid-piece [9]. The created feature inherits the structure of the solid-piece. The classification of faces and face adjacency relationships in the DIFF model is described in the following section.

Fig. 4. Feature as formed by subtracting/adding a single solid-piece from/to a base-solid (base-solid + solid-piece = created feature (final solid))

4.1 Classification of Feature Faces

The faces that form the closed shell of the solid-piece are classified as shell-faces, and the two faces which close the ends of the shell are classified as end-faces, as shown in Figure 5. Addition or subtraction of the solid-piece leaves an impression (feature) on the base-solid. The faces in the impression which did not exist in the base-solid before the addition or subtraction operation are classified as created faces (newly created faces). The neighbouring faces of the impression, which existed in the base-solid before the operation and are shared by the solid-piece and the base-solid, are classified as shared faces (modified faces), as shown in Figure 5.

Fig. 5. Classification of feature faces

The faces in the final solid associated with an individual feature are classified as follows:
• Created shell faces (CSFs): newly created faces in the base-solid corresponding to the shell-faces of the solid-piece.
• Shared shell faces (SSFs): already existing faces in the base-solid corresponding to the shell-faces of the solid-piece.
• Created end faces (CEFs): newly created faces in the base-solid corresponding to the end-faces of the solid-piece.
• Shared end faces (SEFs): already existing faces in the base-solid corresponding to the end-faces of the solid-piece.
Since features are defined in terms of these four types of faces, feature definitions are consistent and machine-understandable.
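The four face classes can be held in a minimal container like the following (a sketch with names of our own choosing, not the paper's data structures):

```python
from dataclasses import dataclass, field

@dataclass
class FeatureFaces:
    """The four DIFF face classes of one feature in the final solid."""
    csf: list = field(default_factory=list)  # created shell faces
    ssf: list = field(default_factory=list)  # shared shell faces
    cef: list = field(default_factory=list)  # created end faces
    sef: list = field(default_factory=list)  # shared end faces

# A circular through hole: one cylindrical created shell face, and both
# ends coincide with existing faces of the base-solid (two SEFs, no CEFs).
through_hole = FeatureFaces(csf=["cylindrical_face"],
                            sef=["top_face", "bottom_face"])
print(len(through_hole.sef), len(through_hole.cef))  # 2 0
```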
These four types of faces of each feature are stored in the feature model together with their face adjacency relationships.

4.2 Semantics of Product Model

Feature definitions are structured to separate the generic content from the non-generic content. The overall form and shape of a feature are separated into type and shape: the type of the feature is specified by its generic type, and the shape of the feature by its cross-section. Classes of similar features are identified based on faces and face adjacency relationships: features having the same types of faces and the same face adjacency relationships as a class belong to that class. An instance of a class has the same meaning as the class. Instances of a class are created by specifying values for its parameters. A class of objects is often called a family of objects, and an instance is a member of the family. A member of a class is referred to as a feature or generic feature. We propose to define an ontology of form features in terms of the DIFF representation of a feature. Given any feature or construction history, the volume associated with the feature can be obtained and the DIFF representation of the feature volume captured. Once the DIFF representation of a feature is available, entities with the same DIFF representation can be searched to find the label or construction of interest. The ontology of features in the present approach is implemented using the Protégé ontology editor [13].

Fig. 6. Hierarchy of classification criteria in feature definition [9]: numbers and arrangement of SSFs (holes (0), slots (1), corner slots (2), virtual corner slots (n)); number of shared ends (SEFs) (through (2), blind (1), double blind (0), closed (0)); numbers of SEFs at each end of the feature (single end-face features, multiple end-face features); angle between adjacent faces (depression, protrusion); cross-sectional shape of the feature (rectangle, triangle, circle, L, U, T, I, ...)
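The family/member distinction above amounts to a class whose parameters are fixed per instance; a hypothetical sketch:

```python
class CircularThroughHole:
    """Family: two shared end-faces, one cylindrical created shell face.
    The generic type is fixed; only the parameter values vary per member."""
    generic_type = "through hole"

    def __init__(self, diameter: float, depth: float):
        self.diameter = diameter  # non-generic content: cross-section size
        self.depth = depth

# Two members of the same family, differing only in parameter values.
hole_a = CircularThroughHole(diameter=8.0, depth=20.0)
hole_b = CircularThroughHole(diameter=10.0, depth=20.0)
print(hole_a.generic_type == hole_b.generic_type)  # True
```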
A screen shot of the Protégé editor for the DIFF model is shown in Figure 8 in the next section. Since our objective is to enumerate the generic form features, the generic and non-generic content of a form feature are separated as "type" and "shape". For example, in a circular through-hole feature, the generic aspects are the two shared end-faces and the concave angle between adjacent CSFs; the non-generic content is the circular cross-sectional shape of the cylindrical created face. The hierarchy of classification criteria in feature definition is depicted in Figure 6.

4.3 Features Classification and Feature Taxonomy

A feature can be defined as a set of faces with adjacency relationships, which enables the association of knowledge. The four types of faces described in the DIFF model capture the form of the feature-solid and the feature creation process. Features are classified in terms of the numbers of faces and the face adjacency relationships of the four face types, characterized by the following four factors.

4.3.1 Numbers and Arrangement of SSFs

Based on this factor, features are divided into four classes, defined as follows:

Hole, zero shared-shell-faces: this case arises when the shell of the feature-solid is completely inside the base-solid. This class corresponds to features commonly referred to as holes; in the proposed taxonomy it is also referred to as hole.

Slot, one shared-shell-face: this class of features results from the coincidence of a single shell-face of the feature-solid with the base-solid. It corresponds to features commonly referred to as slots; in our taxonomy it is also referred to as slot.

Corner slot, two adjacent shared-shell-faces: this class of features results when any two adjacent shell-faces of the feature-solid coincide with two adjacent faces of the base-solid. Since two faces meet at a corner, we have named this class corner slot in our taxonomy. The feature referred to as step in the literature belongs to this class.
Virtual corner slot, three or more adjacent shared-shell-faces: these features result from the coincidence of three or more adjacent shell-faces of the feature-solid with three or more adjacent faces of the base-solid. Though these features are not cited in the literature as individual features, their combinations are referred to as virtual slots and virtual pockets. This class of features is named virtual corner slot in the proposed taxonomy.

4.3.2 Type of End Faces

Each class (holes, slots and corner slots) is further divided into the sub-classes through, blind and double blind, based on the type of the two end faces.

Double blind, zero shared end-faces: this class corresponds to the set of features generated such that the two ends of the feature-solid are totally inside the base-solid; hence there are two CEFs and no SEFs. This class is referred to as double blind in our taxonomy.

Blind, one shared end-face: this class of features is generated when one end of the feature-solid coincides with face(s) of the base-solid; hence there is one SEF and one CEF. This class is referred to as blind in our taxonomy.

Through, two shared end-faces: this class of features arises when both ends of the feature-solid coincide with face(s) of the base-solid; hence there are two SEFs and no CEFs. This class is referred to as through in our taxonomy.

Closed, no ends: when the feature-solid is the result of a sweep about a closed path, such as a toroid, there are no ends; there are no SEFs and no CEFs. Features of this class are referred to as closed features in the proposed taxonomy.

The combination of the above two steps of classification results in generic types of features such as through hole, blind slot, double blind corner slot, etc. The variation in the number of SEFs at one coinciding end is broadly classified into single-shared-end-face (SSEF), corresponding to a single SEF, and multiple-shared-end-face (MSEF), corresponding to more than one SEF.
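The first two classification factors reduce to lookups over the face counts; the sketch below writes them out (illustrative only; the full taxonomy also checks face adjacency and the SSEF/MSEF distinction):

```python
def ssf_class(n_ssf: int) -> str:
    """Factor 1: number (and arrangement) of shared shell faces."""
    return {0: "hole", 1: "slot", 2: "corner slot"}.get(
        n_ssf, "virtual corner slot")  # three or more adjacent SSFs

def end_class(n_sef: int, closed: bool = False) -> str:
    """Factor 2: number of shared end faces (or no ends at all)."""
    if closed:
        return "closed"
    return {0: "double blind", 1: "blind", 2: "through"}[n_sef]

def generic_type(n_ssf: int, n_sef: int, closed: bool = False) -> str:
    """Combine the two factors into a generic feature type name."""
    return f"{end_class(n_sef, closed)} {ssf_class(n_ssf)}"

print(generic_type(0, 2))  # through hole
print(generic_type(2, 0))  # double blind corner slot
```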
4.3.3 Cross-sectional Shape of a Feature Based on Numbers of CSFs and SSFs

Feature definitions are structured to separate the generic content from the non-generic content. The overall form and shape of a feature are separated into type and shape. The type of the feature is specified by the generic type, and the shape of a feature is the cross-sectional shape of its CSFs and SSFs. Some common shapes are rectangle, triangle, circle, L, U, T and I.

4.3.4 Type of Angle Between Adjacent Faces

Each class is further divided into the sub-classes depression and protrusion, based on the angle between adjacent faces of a feature.

Depression: the angle between two adjacent CSFs, or between adjacent CEFs and CSFs, is concave.

Protrusion: the angle between two adjacent CSFs, or between adjacent CEFs and CSFs, is convex.

If there is more than one CSF, then the angle between adjacent CSFs is sufficient to decide whether a feature is a protrusion or a depression. If there is exactly one CSF, then the angle between the adjacent CEFs and the CSF is required to decide.

5 ONTOLOGY FOR DIFF MODEL

The structure defined above is used to develop an ontology of features. The Protégé editor [13] is used to develop the ontology for the DIFF (domain independent form feature) model with semantics. A high-level view of the ontology is shown in Figure 7. All features are classified in terms of the criteria described in Section 4 and Figure 6. Figure 8 shows the class structure for the generic feature type (marked in the left panel). Some instances of the generic feature type are shown in the middle panel. The attributes that are associated with each feature instance and used in the reasoning are shown in the right panel. The DIFF feature "Through Slot" (marked in the middle panel) with its attribute values (in the right panel) is also described in Figure 8. The construction history associated with a feature refers to the possible ways the feature can be modeled or constructed.
The user defined feature subclass is a placeholder for features with different labels, and also for further extensions of the feature ontology to handle features that are not yet described or not shape based. Figure 9 shows the instances of user defined features (marked in the figure), such as those used in the product model shown in Figure 10 (marked in the figure). A user defined feature is stored as a new feature if it differs from those in the DIFF model; the DIFF structure can be obtained for such features, as shown in Figure 11. The user defined feature "Boss-revolve4" is not present in the DIFF model, so it is stored as a new feature as well (marked in the figure). This feature has the same DIFF representation as the features "Through Corner Slot" and "Protrusion" (see the screen shot in Fig. 11). Mismatches in feature labels between different applications are described in the next section. Mismatches in representation/construction history are described in the sections that follow.

Fig. 7. Structure of DIFF model and developed ontology

Fig. 8. Structure of DIFF model with classes, slots and feature instances

Fig. 9. Instances of user defined feature

5.1 Handling Different Labels Referring to Same Shape

Given a feature from a host system (Fig. 10), the feature volume corresponding to each feature in the source system is used to identify its corresponding DIFF structure. Figure 12 shows the DIFF structure identified for one such feature, say the label "boss-extrude2". Using the query facility in the ontology editor, the label in the source system is first matched with the label (feature name) in the DIFF structure. The query for the user defined feature "boss-extrude2" is depicted in Figure 13; it is used to find the same feature in the ontology (marked in the left panel). The feature corresponding to boss-extrude2 is present in the DIFF model with a different label, namely "Rectangular double blind slot - protrusion (boss)" (marked in the right panel). The other labels associated with this DIFF feature are then searched to check if there is a match with the target system.

Fig. 10. Example of user defined features [11]

5.2 Handling Features with Different Construction History/Representation

As mentioned earlier, the construction history or representation of each DIFF feature is stored in the DIFF structure.
Figure 14 shows the different construction possibilities for a particular feature. Consider the construction method "Sweep_Blind_Hole_Protrusion" in the DIFF model. Figure 14 shows the different constructions/representations for "Sweep_Blind_Hole_Protrusion" as "Pad1" and "Extruded_BossBase1" (marked in the right panel): "Pad1" in Unigraphics and "Extruded_BossBase1" in Solid Works are equivalent to each other. Given a user-defined feature for which the matching features in another system have been identified, the next task is to resolve any mismatch in the construction process/representation of the feature. First it is checked whether the target system supports any of the construction histories associated with the DIFF feature corresponding to the feature being exchanged. Otherwise, the DIFF representations of the different construction methods available in the target system are obtained and matched against the feature being exchanged. Figure 14 shows the output for a query for other construction methods for a given feature.

Fig. 11. Instances of new feature in the ontology

Fig. 12. DIFF feature information extracted for user defined feature label "boss-extrude2"

Fig. 14. Query "Find representation methods in different applications for "Sweep_Blind_Hole_Protrusion" in DIFF representation?"

6 DISCUSSIONS

Using a single product model across the product lifecycle has been suggested to maintain the integrity of the data and to avoid the effort of creating multiple models. The product model is created only once, in any modeling software. The same product model can then be used for further manipulation and editing throughout the product lifecycle, and can also be used among different vendors to share knowledge. We have identified different types of semantic interoperability problems arising during the exchange of product models in product development. These are: different terms may refer to the same shape; the representation associated with a shape may differ; the meaning of a term is context dependent; and a term with a given meaning in one domain may not exist in another domain. Once an ontology of the DIFF model for a product model (for any source system) is developed, the features and construction history for any target system can be obtained. There is no need to enumerate a separate feature set and construction history for a new system; the features and construction history for a product model can be obtained through the DIFF model. We have presented a one-to-many framework for the exchange of the product information model, product semantics in particular, as semantics associated with shape data can convey design intent, inter-relationships between entities in the shape, and other data important for downstream applications such as manufacturing. Feature-based product data exchange has been used because features (geometric data at higher levels of abstraction) are capable of carrying constraints, parameters and application attributes. The DIFF structure has been described, and an ontology has been developed that captures the vocabulary used in feature models. A prototype of the ontology and of the reasoning on the ontology has been built using the Protégé ontology editor. The method has been demonstrated to handle mismatches in labels and construction history using the Protégé ontology editor.
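The resolution pipeline of Sections 5.1 and 5.2 can be summarized in code: the DIFF representation acts as the pivot, and a signature comparison is the fallback when the target system's labels are unknown. The label mirrors the boss-extrude2 example above, but the face counts and data layout are our own illustration, not values from the prototype:

```python
def signature(feature: dict) -> tuple:
    """Simplified DIFF signature: counts of the four face classes.
    (The real model also compares face adjacency relationships.)"""
    return (feature["CSF"], feature["SSF"], feature["CEF"], feature["SEF"])

def resolve_label(source_feature: dict, target_catalog: dict):
    """Find the target-system label whose DIFF signature matches."""
    wanted = signature(source_feature)
    for label, feat in target_catalog.items():
        if signature(feat) == wanted:
            return label
    return None  # target system supports no equivalent feature

# Source feature "boss-extrude2" with hypothetical face counts, and a
# hypothetical target catalog keyed by that system's feature labels.
boss_extrude2 = {"CSF": 3, "SSF": 1, "CEF": 2, "SEF": 0}
target = {
    "Rectangular double blind slot - protrusion (boss)":
        {"CSF": 3, "SSF": 1, "CEF": 2, "SEF": 0},
    "Through hole": {"CSF": 1, "SSF": 0, "CEF": 0, "SEF": 2},
}
print(resolve_label(boss_extrude2, target))
```

The same comparison serves Section 5.2: construction methods of the target system are converted to DIFF representations and matched against the feature being exchanged.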
7 CONCLUSIONS

A new feature-based ontology has been proposed to address the problem of semantic interoperability between shape models. In contrast to the present art, the proposed ontology enables reasoning to handle situations where the equivalence between terms is not already captured in the existing ontology. A prototype implementation that is able to handle mismatches in labels and construction history has been described. Handling other mismatches, and incorporating the feature model and ontology in the core product model [3], have been identified as future work.

8 REFERENCES

[1] Ameri, F., Dutta, D. Product lifecycle management: closing the knowledge loops. Computer-Aided Design & Applications, vol. 2, no. 5, 2005, p. 577-590.
[2] Brunetti, G., Grimm, S. Feature ontologies for the explicit representation of shape semantics. International Journal of Computer Applications in Technology, vol. 23, no. 2, 2004, p. 192-202.
[3] Fenves, S., Foufou, S., Bock, C., Bouillon, N., Sriram, R.D. CPM2: A revised core product model for representing design information. National Institute of Standards and Technology, NISTIR 7185, Gaithersburg, MD 20899, USA, 2004.
[4] Heiler, S. Semantic interoperability. ACM Computing Surveys, vol. 27, no. 2, 1995, p. 271-273.
[5] Horvath, I., Pulles, J., Bremer, A.P., Vergeest, J.S.M. Towards an ontology-based definition of design features. Proceedings of the SIAM Workshop on Mathematical Foundations for Features in Computer Aided Design, Engineering and Manufacturing, October 22-23, 1998, Troy, Michigan, USA.
[6] ISO 10303. ISO/CD 10303-Part 111: Product data representation and exchange: Integrated application resource: Construction history features. International Organization for Standardization, 2004.
[7] Lee, C.K.M., Lau, H.C.W., Yu, K.M., Ip, W.H. A generic model to support rapid product development: an XML schema approach. International Journal of Product Development, vol. 1, no. 3/4, 2005, p. 323-340.
[8] Mostefai, S., Bouras, A., Batouche, M.
Effective collaboration in product development via a common sharable ontology. International Journal of Computer Intelligence, vol. 2, no. 4, 2005, p. 206-212.
[9] Nalluri, S.R.P.R. Form feature generating model for feature technology. PhD thesis, Indian Institute of Science, Department of Mechanical Engineering, Bangalore, India, 1994.
[10] Owen, J. STEP: An introduction. Winchester, UK: Information Geometers Ltd., 1993.
[11] Patil, L., Dutta, D., Sriram, R. Ontology-based exchange of product data semantics. IEEE Transactions on Automation Science and Engineering, vol. 2, no. 3, 2005, p. 213-225.
[12] Pratt, M.J. Introduction to ISO 10303 - the STEP standard for product data exchange. Journal of Computing and Information Science in Engineering, vol. 1, no. 1, 2001, p. 102-103.
[13] Protégé. Protégé ontology editor. Stanford University, 2007, http://protege.stanford.edu/
[14] Pulles, J.P.W., Horvath, I., van der Vegte. Beyond features: an ontology-oriented interpretation. Proceedings of the International Conference on Engineering Design (ICED 99), August 24-26, 1999, Munich, p. 1761-1764.
[15] Subramani, S., Nalluri, S.R.P.R., Gurumoorthy, B. 3D clipping algorithm for feature mapping across domains. Computer-Aided Design, vol. 36, no. 8, 2004, p. 701-721.
[16] Subramani, S. Feature mapping, associativity and exchange for feature-based product modeling. PhD thesis, Indian Institute of Science, Department of Mechanical Engineering, Bangalore, India, 2005.
[17] Szykman, S., Fenves, S.J., Keirouz, W., Shooter, S.B. A foundation for interoperability in next-generation product development systems. Computer-Aided Design, vol. 33, no. 7, 2001, p. 545-559.
Paper received: 28.2.2008 Paper accepted: 15.5.2008

Enabling Interactive Augmented Prototyping by a Portable Hardware and a Plug-In-Based Software Architecture

Jouke Verlinden* - Imre Horvath
Delft University of Technology, Faculty of Industrial Design Engineering, The Netherlands

Interactive Augmented Prototyping (IAP) combines digital and physical modeling means to support design processes. Although pilot implementations indicate a possible added value, practical use is hindered by the fact that no off-the-shelf solution exists. Based on empirical studies and an assessment of emerging technologies, this article introduces a projector-based IAP hardware platform called the I/O Pad. Furthermore, a flexible software architecture is presented that supports a multitude of input devices and usage scenarios. In this architecture, an existing 3D CAD system is extended by a collection of plug-ins. The plug-ins are responsible for managing specific elements of the interactive augmented prototyping process. A first implementation has been developed, using a small projector and a handheld PC, which proves the wireless versatility of the hardware platform. The proposed software architecture allows the designer to work in a familiar modeling environment yet includes powerful concepts from tangible user interfaces to support several types of interaction with physical components.
© 2008 Journal of Mechanical Engineering. All rights reserved.
Keywords: augmented reality, interactive prototyping, software architecture, CAD

1 INTRODUCTION

In supporting design and prototyping activities, Augmented Reality technology provides an appealing solution that combines physical and virtual reality. This combination might eliminate some of the problems associated with an entirely virtual or physical application. Several researchers have explored this concept of Interactive Augmented Prototyping, e.g. [2], [7], [14] and [23].
In an earlier publication [22], existing applications and enabling technologies were surveyed. The presented augmented prototyping systems showcased the power of tangible computing as natural, embodied interaction. Several augmentation techniques from Milgram and Kishino's reality-virtuality continuum [13] can be used to mix physical prototypes with digital imagery. At present, the required technologies constitute a wide palette of input, processing, and output principles; devices and algorithms that are unknown to traditional CAD developers and users. Furthermore, these solutions typically rely on custom-coded applications that are incompatible with existing CAD systems (both user interface and modeling capabilities differ). With some exceptions, typical Augmented Reality systems are bulky and prone to noise.

This article presents a system to support IAP, based on three extensive case studies in which design processes were empirically followed. The proposed solution comprises both hardware and software and is targeted at enriching the design process.

1.1 Empirical Findings

In order to obtain insight into the possibilities and limitations of current prototyping practice in industrial design, three design projects in different sub-domains have been monitored: the design of a tractor, a handheld oscilloscope, and the interior of a museum [21]. These represent a range of industrial design engineering domains that were considered to be susceptible to support by IAP (automotive, information appliances, and furniture design). Our objective was to produce a deep and accurate account of prototyping and modeling activities, with a primary focus on product representation and design reviews. The field studies were used as a starting point to compile characteristics and specific events that influenced the design processes, which we grouped in four different perspectives: functionalist, interpretive, emancipatory, and postmodern. The four perspectives originate from organizational sciences and relate to different objectives that designers might have for a particular representation, respectively: i) efficiency, ii) increasing shared understanding, iii) influencing decision making, and iv) creativity [24]. This body of findings was then used to identify specific IAP functions, grouped per scenario as discussed in the following section. Furthermore, the present physical prototypes have been analyzed to identify non-functional requirements. Some of the requirements pertaining to hard- and software will be reiterated and expanded in Sections 2 and 3.

*Corresponding author's address: Delft University of Technology, Faculty of Industrial Design Engineering, Landbergstraat 15, 2628 CE Delft, The Netherlands, j.c.verlinden@tudelft.nl

1.2 Scenarios

Based on the three case studies and the four perspectives, a number of IAP functions were identified. The main categorization aspect is the usage scenario, which differs among the domains and intended prototyping goals:
• User studies - evaluating intermediate designs by using the prototype as a stimulus. The prototype also acts as an excuse to study the user in its natural habitat and to provoke comments on product specifications.
• Exploration - probing various aspects of the design to diverge or to understand underlying relationships; effectively creating design proposals, sometimes in combination with extensive simulation means.
• Design review - making design decisions, discussing design alternatives, and considering the strengths and weaknesses as perceived by different stakeholders.
• Presentations to customers and higher management - inspiring and possibly overwhelming distant stakeholders or the public with (intermediate) results and possibly showing user studies.
At present, the collection includes over 29 functions, summarized in the appendix of this article. However, in developing IAP support, we do not strive to deliver a Swiss army knife that covers all of these functions in a single module. Instead, smaller subsets of functions can be linked to the situation at hand, and this will determine what IAP hard- and software configuration is most appropriate; this customization is discussed in Section 4.3.

1.3 Related Work

In earlier publications, we have selected projector-based augmented reality systems as the most likely candidate to support design [21]. The principle of projector-based augmented reality is treated by Bimber and Raskar [3]; their work provides computational solutions to the challenges of projector-based Augmented Reality, such as registration of virtual and physical coordinate systems, calibration of colors, and the simultaneous use of multiple projectors. However, the solutions are fragmented and not implemented in a single platform. In parallel, a number of Augmented Reality software architectures have emerged, for example Studierstube [17], Avalon [1], and Avocado [18]. These deal with position tracking and virtual camera updating, and provide hooks to script interactivity. All existing software platforms focus on video-mixing or see-through systems, and require adaptation to support projectors. Furthermore, the integration with modeling and simulation has not been addressed at a generic level; the main focus is on supporting OpenGL-based rendering libraries or X3D/VRML-based scene graph management. Story-based AR systems like DART [12], Geist [10], AMIRE [28] and the APRIL language [11] focus on playing narrative experiences in AR systems.
None of these solutions directly fits high-level CAD operations and model conversions, while engineering simulations have to be hard-coded, which makes the employment of, for example, injection-molding flow, finite element analysis or fluid dynamics simulations difficult to implement and adapt. Similarly, middleware to run shared AR, like Muddleware [26], focuses on multi-user game playing and level-of-detail management and does not support interactive visualization and adaptation of objects. Several systems and architectures have been devised to support design activities by advanced 3D graphics interfaces, based on completely virtual paradigms. Although some of these are readily available, like Open Cascade (www.opencascade.org), they will not be easily adopted by existing design studios, which are committed to a specific commercial CAD package that offers distinct features and a familiar user interface.

2 IAP HARDWARE

To establish augmented reality for design, a growing selection of output, input, and physical prototyping technologies has to be considered. A treatise of these enabling hardware technologies was published in [21]; as output means, our first preference is the projector-based display. For input and physical model making, a wide variety of options is available, none of which provides a complete solution. Based on the situation at hand, a final selection will have to be made.

2.1 Hardware Requirements

In considering current design practice and future support scenarios, we identified the following requirements regarding IAP hardware:
• Mobility: design reviews often take place at the location of the client or other stakeholders; the IAP apparatus should fit in two reasonably sized suitcases and should withstand unsupervised transport (by air, in the trunk of a car).
• Installation: during the execution of the scenarios presented in Section 1.2, installation time and effort should be kept to a minimum (max. 15 minutes, with self-starting facilities).
If calibration is required, the system should provide step-by-step guidance. We acknowledge the constraint stated by [26] that such devices should be self-contained units with no loose parts, which should auto-start if a failure occurs.
• Fixation: the position and orientation of the projector systems should be fixable without creating hazardous or erroneous situations. In the case that a projector is a hand-held system, it should have a facility to stay in a particular posture when the user releases it.
• Portability: during use, the projector systems may be moved, and this should be doable by one person. To enable this, the number of cables should be kept to a minimum.
• Time performance: as IAP concerns model inspection as well as model adaptation and simulation, the update frequency is important to keep an interactive 3D experience. The complete system must run at least at 10 Hz, with as little lag as possible.
• Accuracy performance: constraints on tracking accuracy and projector resolution are situation dependent. They also depend on the scale of the physical model and the level of detail of the projected information. This issue requires revisiting during evaluation.
• Environment interaction: IAP systems should not adversely influence general environmental conditions, in particular regarding noise and lighting. As projectors typically contain fans, noise should be minimized in order to support regular conversations (max. 30 decibels, the level of whispering). Furthermore, IAP systems might require a dimmed room, but it should still be possible to see and interact with other persons and objects in the environment. A minimum level of 200 lux is allowed (a dimmed training room).
• Projector performance: a lot of variation exists in projector specifications, such as resolution, zoom range, depth of field and light intensity. However, little can be constrained regarding these characteristics, as the application depends strongly on, for example, the distance to the object.

2.2 System Framework: the I/O Pad

As a foundation for the hardware platform, we would like to adopt the paradigm of the I/O bulb as presented by Underkoffler and Ishii [19]. The I/O bulb (Input-Output bulb) views the input (camera or other sensors) and output (projector) as one single unit. This bulb can be switched on and off at will, can be configured in groups, and so on. For example, each I/O bulb could perform a particular task: 3D modeling, simulation analysis or annotation management. In this fashion, dedicated projector modules can be viewed as physically addressable (i.e. tangible) components. As demonstrated by the so-called procams community (projector-camera systems, www.procams.org), many algorithms and applications have evolved that can be employed in this setup, including calibration of color temperature, 3D scanning, and visual echo-canceling. We extend the I/O bulb concept by including processing power and a touch screen interface, the result of which we would like to label the I/O Pad. As its name suggests, it is supposed to fit within the series of tangible user interfaces devised to blend physical and virtual realms, like the I/O bulb and I/O brush [16]. The I/O Pad is a self-sufficient, untethered device (if battery operated). Collaboration of multiple pads is facilitated through wired or wireless network connections. In essence, our concept overlaps with iLamps [15]; both add handheld, geometrically aware projection and allow ad-hoc clustered projections.

Fig. 1. Networked I/O Pads
However, the I/O Pad differs in three ways: i) each I/O Pad contains a touch screen to interactively capture and display sketches and gestures from designers, ii) each pad is equipped with recording devices (webcam) to pick up discussions and usability assessment sessions, and iii) the I/O Pad network architecture encompasses a distributed structure to facilitate data sharing, dialogue management and, in particular, session recording. As shown in Figure 1, different instantiations of the I/O Pad might be used concurrently. To support particular activities, some pads might be switched on or off, or moved according to the user's wishes. I/O Pads can be small and portable, or they can carry increased projection and computing power. Two extreme versions can be conceptualized, as specified in Table 1.

Table 1. Characteristics of two Smart I/O Pads

                              Handheld I/O Pad              Large I/O Pad
Projector                     LED-based, battery operated   Silent standard video projector
Projector power               30 lumen                      3000 lumen
Projected resolution          800x600 pixels                1280x768 pixels
Working distance from object  10-50 cm                      100-300 cm
Processing unit               UMPC                          Tablet PC
Touch screen diameter         5-7 inches                    12-15 inches
3D tracking                   Marker-based (ARToolkit)      Active/passive infrared tracking
                                                            (motion capture system, camera
                                                            includes infrared lamp)
Estimated total weight        1 kg                          2.5-3 kg

For the handheld system, a small, LED-based projector seems the most appropriate; these can run on batteries and are almost silent. As a processing unit, an Ultra-Mobile PC (UMPC) is a good candidate; it contains a touch screen and is in fact a miniaturized PC that runs standard Windows or Linux software. Due to its limited computing power, a lightweight 3D tracking system should be selected, for example ARToolkit.
This is an open-source library for optical 3D tracking and tag identification that employs flat rectangular markers [9]; our field tests suggest it will perform well on the UMPC platform (approximately 20 Hz at 640x480 camera resolution). The larger I/O Pad is equipped with more powerful constituents, to offer improved projection, processing, and 3D tracking. Recent video projectors offer XGA or higher resolutions and produce over 3000 lumen. As the processing and interaction unit, we propose the employment of a high-end Tablet PC with a touch screen option. Such tablets harbor both active and passive touch technologies and can be operated by fingers and special pens. In the latter case, the tablet is pressure sensitive, which supports the natural expressiveness of a designer's sketching abilities. To support 3D tracking and user events, this system can be equipped with an infrared camera and infrared lamp, as found in typical motion capture systems like Motion Analysis and Vicon. By deploying retro-reflective passive markers in combination with active, LED-based tags, both fine-grained 3D component tracking and user interaction with physical components (by, for example, phidgets) will be supported. This I/O Pad is meant to offer global lighting of a design/environment from a larger distance. Due to its weight, a proper fixture such as a professional tripod is essential.

2.3 I/O Pad Implementation

Based on the hardware specifications described above, two I/O Pads have been developed, as depicted in Figure 2. The smallest version contains a LED-based projector from Toshiba (FF-1), which weighs 750 grams including battery. This projector is connected to an Asus R2H UMPC (900 MHz ULV Celeron processor, 500 MB RAM), which has a 7-inch passive touch screen. A Microsoft NX-6000 web camera delivers up to 2-megapixel resolution video images. This package weighs approximately 1.5 kilograms.

Fig. 2. Pilot versions of I/O Pads: handheld (top) and large (bottom) I/O Pads
The larger I/O Pad is based on a standard video projector (Epson EMP-811) that has 2000 lumen and is capable of projecting at XGA resolution (1024x768 pixels). A Tablet PC delivers processing and a passive touch screen (HP TX1000, AMD dual-core TL50 processor, 1 GB RAM, 12.1-inch screen). For infrared motion capture, the system currently employs a Wii remote controller (also known as WiiMote), which is able to track four infrared light sources simultaneously at a resolution of 1024x768 pixels at 100 Hz. This WiiMote game controller is connected wirelessly to the I/O Pad by Bluetooth. A converted pen containing an infrared LED at its tip, plus a battery and a switch, serves as a light pen that can be tracked by the WiiMote in two dimensions; contact of the pen tip with the object surface can be reconstructed to a 3D point, as the physical surfaces are known, given the exact location and orientation of the controller. This complete bundle weighs approximately 2.5 kg and requires a professional tripod to aim toward the area of interest. The user interaction is performed by operating the touch screen of the I/O Pad, by drawing on the physical object's surface with the light pen, or by moving the model and the I/O Pad.

3 IAP SOFTWARE

As stated in Section 1.3, a growing collection of Augmented Reality software exists; yet the solutions are not focused on design support and require extensive adaptation and configuration. In order to establish IAP, algorithms are necessary to connect input and output, and to allow a variety of modeling and simulation applications. In [20], we devised the WARP framework to support the complete prototyping pipeline, shown in Figure 3. It encompasses both manufacturing (Generator) and usage (Simulator) of IAP. In this publication we elaborate on the second module.

Fig. 3. Workflow of the WARP system
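The light-pen reconstruction described in Section 2.3, recovering a 3D contact point from a 2D tracked position because the physical surfaces are known, amounts to a ray-surface intersection. The following is a minimal sketch for a planar surface patch; the function and variable names are ours for illustration, not taken from the actual system:

```python
import numpy as np

def pen_tip_3d(cam_pos, ray_dir, plane_point, plane_normal):
    """Intersect the viewing ray of the tracked pen tip with a known
    planar patch of the physical model; returns the 3D contact point."""
    n = plane_normal / np.linalg.norm(plane_normal)
    denom = ray_dir @ n
    if abs(denom) < 1e-9:        # ray parallel to the surface: no contact
        return None
    t = ((plane_point - cam_pos) @ n) / denom
    if t < 0:                    # intersection behind the camera
        return None
    return cam_pos + t * ray_dir
```

For a curved model, the same idea applies per facet of the tessellated surface; the known pose of the WiiMote supplies `cam_pos` and the per-pixel `ray_dir`.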
Based on the case studies, a number of functional requirements have been formulated; afterwards, we propose elements of the resulting software architecture.

3.1 IAP Functional Requirements

In devising a software architecture for IAP, the following requirements were identified:
• Operation of IAP should be compatible with the user interface and the conceptual modeling/simulation the designer is familiar with.
• A wide range of options to calibrate the I/O Pads will be offered (coordinate systems, color, optical distortions).
• The architecture should be open to (future) 3D tracking and event sensing methods.
• The system should auto-start and should offer a number of preset configurations that fit the scenarios.
• The IAP can relate various physical components to virtual counterparts. The user should be able to attach and detach these in an easy fashion.
• IAP will also recognize certain physical behavior as actions (gestures, button presses etc.), which can be connected to various functions.
• The architecture will support recording of all input events and the corresponding 3D models. Different levels of granularity might be selected to optimize recording performance (time, level of detail, channels).

3.2 WARP 2.0 Architecture

The resulting WARP 2.0 architecture is shown in Figure 4. In the center, the IAP Session Manager is shown. It is responsible for setting up sessions at one or more I/O Pads. This includes model sharing, session recording, and configuration management. As stated in the final requirement, the recording function will combine the modeling history with the discussion by recording video and audio as well. On the right, the set of input and output devices is shown. Processing of input signals and 3D tracking is performed by the Tracker subsystem, which supports an arbitrary number of commercial and research position sensing devices. It is based on (networked) data streams.
The data-flow-based paradigm also enables easy recording of movement and configurations by storing the streams to persistent memory. A key ingredient in the IAP architecture is a third-party 3D modeler or simulation package, depicted on the left. Instead of creating our own proprietary visualization solution, we want to exploit the fact that most designers already have some type of 3D modeling package, like Catia, Solidworks or Rhinoceros. Most of these are capable of rendering the virtual components in real-time, adjusted for the projector by means of configuring and maintaining a virtual camera. Furthermore, most modeling packages can be extended by scripting, macros or other automation mechanisms (like ActiveX). For supporting IAP, we have defined four plug-ins that need to be implemented for a particular package: i) Configurator, ii) 3D Viewer, iii) TUI Management, and iv) Watcher. Their responsibilities are discussed below. The Configurator plug-in can be viewed as the local liaison of the IAP Session Manager; it is responsible for the local setup and execution of the other plug-ins, and for loading/saving models and sharing these with other IAP instances. Furthermore, it offers an auto-start function and a GUI to arrange the IAP in line with the defined application scenarios and the related functions. The 3D Viewer is responsible for defining and updating a virtual camera that copies the internal and external parameters of the attached projector. Internal parameters include field of view, aspect ratio and projection center; external parameters correspond to the translation/rotation of the projector and the scale of the virtual and real-world coordinate systems. In terms of 3D computer graphics concepts, these are specified in two transformation matrices: a projection and a model matrix [5]. In some cases, these transforms need to be mapped to different units for the CAD package (e.g. CATIA requires a focal point instead of a field of view).
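To make this parameter mapping concrete, the sketch below builds an OpenGL-style projection matrix from a projector's field of view and converts a field of view into an equivalent focal length, the kind of translation a 3D Viewer plug-in has to perform for packages that expect a focal length. This is an illustrative sketch of the standard graphics math, not the plug-in's actual code:

```python
import math
import numpy as np

def fov_to_focal(fov_deg: float, image_height: float) -> float:
    """Equivalent focal length (in the units of image_height) for a given
    vertical field of view; some CAD packages expect this instead of an angle."""
    return (image_height / 2.0) / math.tan(math.radians(fov_deg) / 2.0)

def projection_matrix(fov_deg: float, aspect: float, near: float, far: float):
    """OpenGL-style perspective projection matrix from the projector's
    internal parameters (field of view, aspect ratio, clip planes)."""
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])
```

The external parameters (projector pose and scale) go into the model matrix in the same way a virtual camera pose would.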
When the projector is moved, the virtual camera updates the model transform accordingly, based on the input from the IAP Session Manager. Ideally, the 3D Viewer plug-in should sense alterations in projector zoom (focal point) and adjust the projection matrix accordingly. Furthermore, the 3D Viewer module is responsible for determining the appropriate depth of field and should be capable of adjusting the focus of the projector when required (based on the distance between the projector and objects in virtual space). The motion capture system will track individual physical elements, including identification, position and possibly state (e.g. a button press). The TUI Management plug-in is in charge of mapping these actions to the corresponding virtual components in the modeling or simulation package. This might result in showing/hiding and translating/rotating objects, but also in steering additional virtual simulation modules (like physics behavior or screen navigation). The Watcher plug-in is responsible for supporting the recording functions, which can be either saved to file or streamed to a centralized session recorder through a network connection. This plug-in offers a number of services, including capturing screenshots, full 3D models per stage, or a hybrid of both based on the modeling events; the hybrid option could, for example, encompass capturing full 3D models after alterations of the model and screenshots during model viewing. Furthermore, the update frequency can be set by time- or event-based triggers. Specific interfaces are being determined at this moment. Storage of data at the IAP Session Manager will be based on tuplespaces (also known as blackboards). This type of associative memory is highly flexible and supports various mechanisms to share data with clients. In particular, the IAP Session Manager uses a publish-and-subscribe protocol in order to propagate events and data streams to those modules that have shown interest.
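A minimal sketch of such a tuplespace with publish-and-subscribe is shown below; it illustrates the concept only, with invented names, and is not the Session Manager's actual interface:

```python
from collections import defaultdict

class TupleSpace:
    """Associative blackboard: tuples are stored and matched by topic,
    and subscribed modules are notified of new tuples as they arrive."""

    def __init__(self):
        self.tuples = []
        self.subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        """Register a module's interest in a topic."""
        self.subscribers[topic].append(callback)

    def write(self, topic, *fields):
        """Store a tuple and push it to all interested modules."""
        entry = (topic, *fields)
        self.tuples.append(entry)
        for cb in self.subscribers[topic]:
            cb(entry)

    def read(self, topic):
        """Associative lookup: all stored tuples matching the topic."""
        return [t for t in self.tuples if t[0] == topic]
```

A Watcher plug-in, for instance, could subscribe to a tracker topic and persist every pose tuple it receives, while a 3D Viewer on another pad subscribes to the same topic to update its virtual camera.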
Furthermore, the IAP Session Manager will offer a calibration routine to determine the projection matrix and the (fixed) transformation between the 3D tracking and projector positions.

3.3 Plug-in Factory Concept

Although the plug-in architecture offers a lot of flexibility, it yields challenges in the implementation and maintenance of these plug-ins. First, all modeling applications have their own automation solution, which requires different (dialects of) scripting languages, for example Visual Basic for Applications versus C++. Second, each application offers a different set of operations and data structures, which evolve at each - typically annual - update. Configuration and version management needs to be addressed in the WARP architecture. A solution to this problem can be found in the Abstract Factory design pattern [6], as depicted in Figure 5 for the 3Dviewer and Configurator classes. Abstract classes for each plug-in type define its public interface and contain the basic functions that can be shared among instantiations; for each dialect, the plug-ins are subclassed to adapt to the particular version, for example 3Dviewer_Solidworks. Additionally, a Factory class for each of the configurations is defined, based on the AbstractPluginFactory class. These factory classes are responsible for instantiating the actual plug-ins by means of Create() function calls. These instantiation calls use the publicly available GNU make application (www.gnu.org/software/make) to support file deployment, shell scripting and compilation/linking in different languages.

Fig. 4. WARP 2.0 Software Architecture for Interactive Augmented Prototyping
Fig. 5. UML diagram of plug-in factories (only the 3Dviewer and Configurator plug-ins are illustrated)
From personal experience, we found several ways to hide and show components in Catia; the most straightforward implementation worked, but resulted in poor performance (consuming 200 milliseconds for one cycle and causing extensive flashing of component wireframes). After some effort spent optimizing this operation, we switched to a less elegant but better working strategy of simply translating objects outside/inside the camera's reach. Selecting and implementing such strategies requires tuning, and these principal solutions should be encapsulated in the Factory classes, in order to share these basic functions among all plug-ins for that particular dialect.

3.4 Deployment and Calibration

In the case of using a single I/O Pad, all processes outlined before can run on the same machine. In order to enable concurrent use of multiple I/O Pads, a networked system layout is necessary. An overview of a typical setup is shown in Table 2. As the main communications solution, we have selected the OpenSound Control protocol [27], which is currently supported by emerging Tangible User Interface APIs. For each projector unit, a single application instance should run with its corresponding plug-ins. The same holds true for the Tracker stack, although these can be combined through the data flow definitions.

Table 2. Software deployment if using Smart I/O Pads

                               Handheld I/O Pad                        Large I/O Pad
Application                    Same on both, e.g. SolidWorks or Catia
OpenTracker input              Webcam (AR patterns)                    Motion capture system
Data streams to recorder       Webcam, projector position,             3D positions of components
                               user interaction with Pad
TUI Management                 Limited capability to run simulations   Optional
IAP session manager            Preferred at handheld                   Only when computationally too
                                                                       heavy at handheld
Calibration required:
  Camera->projector transform  Yes (fixed)                             Yes (fixed)
  Projector parameters         Yes (fixed)                             Yes (variable)
  Camera internal parameters   Yes (fixed)                             No
  Camera position/orientation  No                                      Yes (variable)
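For reference, an OpenSound Control message has a simple binary layout: a null-padded address string, a type-tag string, and big-endian arguments. A minimal hand-rolled encoder for, say, a tracked pose might look like this; it is a sketch of the OSC 1.0 wire format, not the library code actually used in WARP:

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary (OSC 1.0 string rule)."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Encode a minimal OSC message carrying float32 arguments,
    e.g. a component pose streamed between I/O Pads."""
    data = osc_pad(address.encode())                    # padded address
    data += osc_pad(("," + "f" * len(floats)).encode()) # type-tag string
    for v in floats:
        data += struct.pack(">f", v)                    # big-endian float32
    return data
```

The topic-like address space (`/tracker/...`, `/watcher/...`) maps naturally onto the publish-and-subscribe scheme of the Session Manager.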
It seems logical to employ a networked file system on all I/O Pad systems, hosted by the same machine that runs the IAP Session Manager (of which only one instance should persist). This can be one of the I/O Pads or a separate server. Another dependency at this point is the fact that tracking of multiple physical components and events is only supported by the larger I/O Pad, through the motion capture equipment. Tracking such cues needs at least two IR cameras, of which one is integrated in the large pad. Calibration of the I/O Pads deals with a number of elements, most notably the transformation between the camera and projector world coordinate systems, defined by the projector internal parameters (field of view, center position), the camera internal parameters, and the vector (translation/angle) between the two components. As the two types of I/O Pads are equipped with different hardware, both require separate sorts of calibration, as summarized in the lower part of Table 2. The handheld unit typically requires one single calibration step, which can happen after assembly of the unit. The large I/O Pad, however, requires calibration before each installation, due to the registration of the IR cameras of the motion capture system and fluctuations in both projector focal distance (zoom) and color temperature.

4 FIRST APPLICATION AND DISCUSSION

4.1 Application and Findings

The I/O Pad configurations specified in Section 2.3 were constructed, with a specific focus on the handheld system. An impression of this system in use is given in Figure 6. Four standard ARToolkit markers were placed around a simple object (a cup resembling a pyramid with its top cut off). The geometry was modeled in Catia and decorated with texture maps. The only interaction supported by the application was the combination of the digital and physical models. At present, the fundaments of the WARP 2.0 software architecture have been developed; much effort was necessary to devise and implement the calibration routines.
The calibration procedure was developed and tuned for the simple object: the projection (u,v coordinates) of each of the 8 vertices (x,y,z coordinates) has to be indicated on the touch screen, and the camera calibration algorithm specified in Faugeras [4, Chapter 1] is used to compute the projection matrix. In this simple application, the computing power of the small PC was sufficient. Although it is a bit bulky and heavy to lift for longer periods of time, the system was easy to transport and to move while running. In the empirical case studies, different applications were used: CATIA (automotive), Vectorworks (interior), and Unigraphics (information appliances). For reasons of availability, the minimal collection of target applications for WARP 2.0 has been set to CATIA and Solidworks. Although this is less compatible with the interior design domain, these still offer similar modeling functions while being based on completely different representation and automation mechanisms.

4.2 Discussion

Software-related issues - when multiple I/O Pads are used simultaneously, synchronization between the running CAD or simulation packages is necessary. This can be solved in several ways, for example by application sharing (i.e. running the same synchronized instance on all pads), by model sharing among different applications (for example one pad dedicated to modeling and one to simulation), or by hosting diverse models of the same product. More investigations are necessary to determine which option is the most appropriate. In order to record sessions, the required bandwidth to capture all video and application states can be large. As with traditional capturing systems, a tradeoff has to be made between quality and size/bandwidth; this holds even more when multiple I/O Pads are used concurrently. One probable option is to store data locally in real-time and to upload/share it after use.

Fig. 6. Handheld I/O Pad in use
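The calibration step of Section 4.1, estimating a 3x4 projection matrix from known 3D-2D vertex correspondences, is in essence a direct linear transform. A compact sketch of that computation follows; it is the standard DLT, not the exact Faugeras formulation used in the system:

```python
import numpy as np

def dlt_projection_matrix(points3d, points2d):
    """Direct Linear Transform: estimate the 3x4 projection matrix from
    six or more (x,y,z) -> (u,v) correspondences, such as the eight model
    vertices indicated on the touch screen."""
    A = []
    for (x, y, z), (u, v) in zip(points3d, points2d):
        A.append([x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z, -u])
        A.append([0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z, -v])
    _, _, vt = np.linalg.svd(np.asarray(A, float))
    return vt[-1].reshape(3, 4)      # null-space vector = matrix entries

def project(P, xyz):
    """Apply a projection matrix to a 3D point, returning (u, v)."""
    ph = P @ np.append(xyz, 1.0)
    return ph[:2] / ph[2]
```

With exact correspondences the reprojection error is essentially zero; with noisy touch-screen input, the least-squares nature of the SVD solution averages the error over all indicated vertices.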
Projector-related issues - in terms of luminosity, the projector is useful when it produces 50 to 100% more light at the surface than its environment (cf. www.dvmg.com.au/iti-f1.html); this would equal 300 lux at the ambient light level of 200 lux given in Section 2.1. Here, we have to take into account that the reflectance of the (white) object is approximately 80%. The small projector (Toshiba FF-1) has a measured power of 16 lumen (www.pcmag.com/article2/0,2704,2099318,00.asp); as lux equals lumen per square meter, the conversion has to take into account the maximum envelope of projection at a certain distance from the object. For the furthest distance of 40 cm, the projected area is 23.1 by 17.3 centimeters or 0.04 m2; this results in approximately 400 lux of light at the object's surface, or 320 lux when corrected for reflectance, which is sufficient. For the minimal distance of 20 cm, the area is 12 x 9 cm (0.01 m2), which results in 1185 lux adjusted for 80% reflectance. In the case of the larger projector (Epson EMP-811, 2000 lumen) in the wide-angle setting at a 1-meter distance, the projected area is 67 x 50 centimeters (0.33 m2); this yields approximately 4800 lux corrected for reflection. At the furthest specified distance of 3 meters, the area is 1.99 x 1.47 meters (2.93 m2), resulting in 547 lux. This means that effectively in all cases the projector brightness is sufficient.

Depth of field - in experimental studies, we found few issues in using a single, fixed focus on the large projector in the range specified (1-3 meters). For the small projector, the focus remained acceptable in the following ranges: i) between 20 and 30 cm, ii) between 30 and 50 cm. Its focus ring is not easy to operate, at least not when the projector is assembled in the I/O Pad. Instead, we could imagine a manual or automatic switch between these two ranges.
The automatic option seems viable, as the distance between projector and object is constantly measured.

4 http://www.dvmg.com.au/iti-f1.html
5 http://www.pcmag.com/article2/0,2704,2099318,00.asp

Noise - for cooling purposes, projectors are equipped with a fan that generates noise. For the small projector this is negligible, but for the larger one it is an issue. New "whisper" video projectors with better characteristics are currently being marketed, but they are still not completely silent (yielding approximately 28 dB).

4.3 Customization of Functions

As mentioned in Section 1.1, the basis for interpreting the case studies was the framework of Critical Systems Thinking [8]. Its philosophy of Total Systems Intervention (TSI) will be used in customizing the IAP system to a particular design situation. TSI has three phases: creativity, choice, and implementation. The creativity phase focuses on selecting a number of perspectives to assess the situation and to identify concerns and problems. During the second phase, a suitable intervention methodology is selected to deal with the problem at hand, originally based on the collection of methods of the organizational sciences (covering functionalist, interpretive, emancipatory, and postmodern theories); the selected intervention is then implemented in the subsequent phase. In a similar fashion, the use of IAP should be preceded by such an assessment and selection. The scenarios of IAP have been characterized to cover one or more of the four theories mentioned above; an overview is given in the Appendix. This ensures coverage of a number of concerns and issues during design when either product shape or product behavior plays a role. The application of TSI in selecting IAP functions can be shaped in several ways, for example as a wizard dialogue, as a flowchart or map, or encapsulated in templates.
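As one illustration of the template option, the TSI "choice" phase could be encapsulated in a simple lookup that filters scenarios by the theories they cover. The tags below are an invented subset for illustration only, not the actual classification from the Appendix:

```python
# Hypothetical template for the TSI 'choice' phase: scenarios tagged with
# the theories they cover (F=functionalist, I=interpretive,
# E=emancipatory, P=postmodern). Tags are illustrative, not the paper's.
SCENARIOS = {
    "design review": {"F", "I"},
    "user studies": {"F", "E"},
    "exploration": {"I", "P"},
    "presentation": {"E", "P"},
}

def choose_scenarios(concerns):
    """Return scenarios whose theoretical coverage meets any concern."""
    return sorted(name for name, tags in SCENARIOS.items() if concerns & tags)
```

A wizard dialogue would collect the concerns from the design team during the creativity phase and feed them to such a selection step.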
5 CONCLUSION

Although the concept of interactive augmented prototyping offers potential benefits to design processes, employing this technique involves a large threshold. In this paper, we have proposed a combination of hardware and software solutions. Based on empirical findings from three different design processes, functional requirements and usage scenarios were specified. This paper introduced a hardware solution called the I/O Pad. Equipped with a video projector, optical 3D tracking, and a tablet PC, the system represents a fully equipped IAP system. Two versions were presented: a larger, more powerful I/O Pad and a smaller, more mobile one. Multiple I/O Pads can be used concurrently. An initial implementation has been developed, using a LED-based projector and a UMPC. The first application of the I/O Pad demonstrates the wireless versatility of the hardware platform. The software architecture called WARP 2.0 was proposed, based on a plug-in architecture to empower existing 3D modeling and simulation applications and thus remain compatible with existing design practice. The architecture was designed to connect several 3D tracking and sensor devices through a centralized IAP session manager to an arbitrary number of I/O Pads. The proposed software architecture allows the designer to work in a familiar modeling environment, yet includes powerful concepts from tangible user interfaces to support several types of interaction with physical components. Secondly, the application of the Abstract Factory design pattern solves the configuration and version management of the plug-ins. Technical issues involve application sharing and projector characteristics. As development is still at an early stage, it has to be determined to what degree the usability of the system and the application of IAP influence the design process at hand. The resulting I/O Pads will be tested in a series of field experiments in varying domains of industrial design.
6 REFERENCES

[1] Becker, M., Bleser, G., Pagani, A., Stricker, D., Wuest, H. An architecture for prototyping and application development of visual tracking systems. Int. Conf. on 3DTV, 2007.
[2] Bimber, O., Stork, A., Branco, P. Projection-based augmented engineering. Proceedings of International Conference on Human-Computer Interaction (HCI'2001), vol. 1, p. 787-791.
[3] Bimber, O., Raskar, R. Spatial augmented reality: merging real and virtual worlds. A. K. Peters, Ltd., 2005.
[4] Faugeras, O. Three-dimensional computer vision: a geometric viewpoint. MIT Press, 1993.
[5] Foley, J., van Dam, A., Feiner, S., Hughes, J. Computer graphics: principles and practice. 2nd Ed. in C. Reading: Addison-Wesley, 1995.
[6] Gamma, E., Helm, R., Johnson, R., Vlissides, J. Design patterns: elements of reusable object-oriented software. Reading: Addison-Wesley, 1995.
[7] Grasset, R., Boissieux, L., Gascuel, J.-D., Schmalstieg, D. Interactive mediated reality. Proceedings of AUIC2005, 2005, p. 21-29.
[8] Jackson, M. Systems approaches to management. New York: Kluwer/Plenum, 2000, ISBN 0306465000.
[9] Kato, H., Billinghurst, M. Marker tracking and HMD calibration for a video-based augmented reality conferencing system. Proceedings of International Workshop on Augmented Reality (IWAR 99), 1999, p. 85-94.
[10] Kretschmer, U., Coors, V., Spierling, U., Grasbon, D., Schneider, K., Rojas, I., Malaka, R. Meeting the spirit of history. Proceedings of Conference on Virtual Reality, Archeology, and Cultural Heritage (VAST'01), 2001, p. 141-152.
[11] Ledermann, F., Barakonyi, I., Schmalstieg, D. Abstraction and implementation strategies for augmented reality authoring. Emerging Technologies of Augmented Reality: Interfaces and Design, Haller, Billinghurst & Thomas (eds.), 2006, p. 138-159.
[12] MacIntyre, B., Gandy, M., Dow, S., Bolter, J. D. DART: a toolkit for rapid design exploration of augmented reality experiences. Proceedings of UIST '04, 2004, p. 197-206.
[13] Milgram, P., Kishino, F. A taxonomy of mixed reality visual displays. IEICE Trans. on Information and Systems (Special Issue on Networked Reality), vol. E77-D, 1994, no. 12, p. 1321-1329.
[14] Nam, T-J., Lee, W. Integrating hardware and software: augmented reality based prototyping method for digital products. Proceedings of CHI'03, 2003, p. 956-957.
[15] Raskar, R., van Baar, J., Beardsley, P., Willwacher, T., Rao, S., Forlines, C. iLamps: geometrically aware and self-configuring projectors. ACM Trans. Graph. (SIGGRAPH) 22(3), 2003, p. 809-818.
[16] Ryokai, K., Marti, S., Ishii, H. I/O brush: drawing with everyday objects as ink. Proceedings of Conference on Human Factors in Computing Systems (CHI '04), 2004, p. 303-310.
[17] Schmalstieg, D., Fuhrmann, A., Hesina, G., Szalavari, Zs., Encarnacao, L.M., Gervautz, M., Purgathofer, W. The Studierstube augmented reality project. Presence - Teleoperators and Virtual Environments, vol. 11(1), 2002, p. 33-54.
[18] Tramberend, H. Avocado: a distributed virtual reality framework. Proceedings of IEEE Virtual Reality, 1999, p. 14-21.
[19] Underkoffler, J., Ishii, H. Urp: a luminous-tangible workbench for urban planning and design. Proceedings of the CHI'99 conference, 1999, p. 386-393.
[20] Verlinden, J.C., de Smit, A., Peeters, A.W.J., van Gelderen, M.H. Development of a flexible augmented prototyping system. Journal of WSCG, vol. 11(3), 2003, p. 496-503.
[21] Verlinden, J., Horvath, I. Framework for testing and validating interactive augmented prototyping as a design means in industrial practice. Proceedings of Virtual Concept 2006.
[22] Verlinden, J., Horvath, I., Edelenbos, E. Treatise of technologies for interactive augmented prototyping. Proc. of Tools and Methods of Competitive Engineering, 2006, p. 523-536.
[23] Verlinden, J., Nam, T-J., Aoyama, H., Kanai, S. Possibility of applying virtual reality and mixed reality to the human centered design and prototyping for information appliances. Research in Interactive Design, vol. 2, 2006.
[24] Verlinden, J., Horvath, I. A critical systems position on augmented prototyping systems for industrial design. Proceedings of ASME-CIE 2007, 2007, DETC2007-35642.
[25] Wagner, D., Schmalstieg, D. Handheld augmented reality displays. Proceedings of Virtual Reality Conference, 2006, p. 321-322.
[26] Wagner, D., Schmalstieg, D. Muddleware for prototyping mixed reality multiuser games. Proceedings of Virtual Reality Conference, 2007, p. 235-238.
[27] Wright, M., Freed, A., Momeni, A. OpenSound Control: state of the art 2003. Proceedings of 2003 Conf. on New Interfaces for Musical Expression (NIME '03), p. 153-160.
[28] Zauner, J., Haller, M., Brandl, A. Authoring of a mixed reality assembly instructor for hierarchical structures. Proceedings of ISMAR'03, 2003, p. 237-246.

APPENDIX: FUNCTIONS DERIVED FROM CASE STUDIES

The appendix tabulates, for each scenario (user studies, exploration, design review, and presentation), the derived functions together with their originating domain and the perspective(s) they cover: F = Functionalist, I = Interpretive, E = Emancipatory, P = Postmodern. The functions derived from the case studies are:

- Presentations to customers or higher management (IA)
- To simulate usage (augmenting interaction on a physical mockup) (IA)
- To record use and user reactions: keystrokes and performance, (non)verbal communication of users (IA)
- As a conversational piece: projecting/capturing contexts and challenging the user
- Combining (manual) modeling of physical shape with interaction design
- Inspiring by projecting alternative component layouts (also of older and competing products)
- Combining an existing physical model (chassis, engineering package) with virtual surface modeling (AD)
- Browsing through a selection of physical components and including some of these in a virtual global concept (AD)
- Freehand sketching on a physical surface (AD)
- Browsing through a collection of physical models and exploring their placement in a global concept, with the ability to record/bookmark alternatives (ID)
- Projecting/adjusting pedestrian flows interactively (ID)
- Facilities to add references to style elements in several information carriers, including verbal, textual, symbolic, properties, and so on (ID)
- Freehand sketching on a physical surface, integrated with modeling (ID)
- Combining existing physical models with textures/materials exploration (ID)
- Internal discussion of design alternatives, capturing interaction and reflections (annotation) (IA)
- Freehand sketching on a surface (captured with author and timestamp for later use)
- Presenting user studies: usage feedback, co-located events, and subjective evaluations
- Presentation of design alternatives, capturing interaction and reflections and possibly design decisions (annotation)
- Presentations of design exploration scenarios to support reasoning and to convince the client
- Archiving and retrieving reviews (replay, overviews, etc.), allowing shared access (AD)
- Ability to prepare the model for discussions, by fixing/filtering items and by setting a small number of configurations (ID)
- Interactive display of colors/materials in focused areas only (similar to the colored doll in the existing model) (ID)
- Using the physical model as an indexing tool for design details (ID)
- To present usage scenarios (pedestrian flows) (ID)
- Archival and retrieval of design reviews (replay, overviews, etc.), to be shared through the network (ID)
- Abilities to add coarse budgeting and design-requirements tools for interior design
- Presenting project status: design (alternatives), disciplines (design: industrial, interaction; engineering: electrical, mechanical, manufacturing)
- Presenting a summary of the most interesting user feedback
- Presenting a variety of designs as a portfolio overview, either interactive or self-running
- Presenting one particular product in its context with its specific (animated) features, in kiosk mode

*Domains: IA = Information Appliances, AD = Automotive Design, ID = Interior Design.
Paper received: 28.2.2008 Paper accepted: 15.5.2008

Understanding the Mechanical Properties of Self-Expandable Stents: A Key to Successful Product Development
Daisuke Yoshino* - Katsumi Inoue - Yukihito Narita
Tohoku University, Department of Mechanical Systems and Design, Japan

A medical device with a mesh-shaped tubular structure called a stent is frequently used to expand a constriction of a blood vessel. The stent normally has a structure of longitudinal repetition of wavy wire parts and strut parts, and its mechanical properties, such as bending flexibility and rigidity in the radial direction, mainly depend on the shape of the wavy wire and the construction of the strut. In the first stage of this paper, the mechanical properties of self-expandable stents are evaluated using a non-linear finite element method. The initial stent models are generated in a 3D-CAD system, and their expanded shapes are predicted first. They form the finite elements for the evaluation of the mechanical properties; then the influences of stent shape on the mechanical properties are computed and discussed. In the second stage, a basic method for the selection of a stent is proposed from the viewpoint of mechanics. This enables us to select useful stents that are well adapted to a patient's condition, though medical examination is necessary.
© 2008 Journal of Mechanical Engineering. All rights reserved.
Keywords: self-expandable stents, modeling, 3D-CAD systems, finite element method, biomechanics

0 INTRODUCTION

A stent is a tubular medical implement used for the treatment of stenoses developed in arteries. If the endothelial cells of a blood vessel are damaged because of stimulus by hypertension, diabetes mellitus, etc., fat accumulates thickly on the vessel walls, eventually causing arteriosclerosis. A stent is used as a measure for less invasive treatment of vasoconstriction lesions of such arteriosclerosis obliterans.
Stents are classified roughly into two types: the self-expandable stent, which expands by itself owing to the characteristics of shape-memory alloys when released from a sheath; and the balloon-expandable stent, which has no inflation capability itself but is instead expanded using a balloon catheter. The stent presently used in many procedures is referred to as second-generation; it consists of multiple wavy wire parts with a linear structure, made from a perforated circular pipe of raw material and folded up along the axial direction, with several strut parts connecting the wire parts. For most stents in use, the shape has been devised according to the experience of doctors and designers; the shape is subsequently revised based on the evaluation of mechanical properties and clinical trials using prototypes. However, such a procedure requires much time and presents difficulties for the quantitative evaluation of shape modifications. Among the many studies of stents undertaken from a clinical perspective (for example, [1]), only a few have evaluated stents based on their mechanical properties. Schmitz et al. [2] summarized the measurement of the most relevant mechanical and dimensional parameters for a given stent design; they measured longitudinal flexibility, radial stiffness, foreshortening due to expansion, and so on. Duda et al. [3] listed weight, radial rigidity, insertion behaviour into the body, and radiolucency as mechanical characteristics that must be evaluated for a stent. They further proposed an evaluation method and evaluated the characteristics of commercially available stents. Mori et al. [4] conducted four-point bending tests of stainless steel stents, obtained flexural rigidity data, and investigated buckling modes in various structures. Whitcher [5] computed the stress state in a stent and proposed a procedure for its use in fatigue fracture prediction.
Several studies of stent materials have also been reported (for example, [6]). Some recent studies have replaced prototype experiments with finite element analysis, although those studies were intended only for the evaluation of the mechanical properties of a stent (for example, [7], [8], [9] and [10]).

*Corr. Author's Address: Tohoku University, Department of Mechanical Systems and Design, 6-6-01, Aramaki-Aoba, Aoba, Sendai, Japan, yoshino@elm.mech.tohoku.ac.jp

Based on experience in medical use, stents are required to satisfy several clinical demands: a) a scaffolding property, namely sufficient radial force to ensure vessel patency; b) conforming smoothly to the anatomy without injuring the arterial wall; c) track-ability and push-ability to reach and cross target lesions; and d) a small risk of restenosis and occlusions in use. Demand a) relates to the rigidity in the radial direction after expansion, and demands b) and c) can be realized if the stent is flexible. The flexibility of a stent mounted in a catheter also influences demand c). Therefore, the stent should have just the right rigidity in the radial direction to maintain sufficient blood flow without damaging the vessel wall, while it has to be as flexible as possible to fit various anatomical bends. The mechanical properties such as rigidity and flexibility are mainly determined by the shapes of the wavy wire and the strut. Demand d) is rather complicated, and the microscopic condition of the blood stream may affect restenosis in use. Nashihara [11] has developed a new self-expandable stent. It is called the SENDAI stent after the name of the city where he was studying. He drew a sketch of the stent on paper and cut out the pattern to make a tubular paper model. It was bent or pressed to roughly estimate its mechanical performance, and then he modified the shape. Based on these trial-and-error experiments, he obtained a final shape.
The biological effects of the SENDAI stent were examined in experiments with animals at Tohoku University School of Medicine, and it showed good effects suitable for clinical use. Unfortunately, the rigidity in the radial direction and the flexibility of the SENDAI stent were not quantitatively estimated when Nashihara developed the stent. This means there is no clue for improving the mechanical properties. It is our ideal to develop and design such medical equipment adapted precisely to each patient's symptoms and condition.

Fig. 1. Two-dimensional drawing of self-expandable SENDAI stent

The authors have studied design support systems and the evaluation of the mechanical properties of a stent, with the shape design of a self-expandable stent adapted to each patient's condition as the ultimate goal. A frame of a computer-aided shape design support system using a 3D-CAD system and the non-linear finite element method was presented by Inoue et al. [12], and the evaluation of the mechanical properties of the SENDAI stent was attempted by Inoue et al. [13]. In this paper, the conception and functions of the design support system are reviewed first. A subsystem for the generation of the two-dimensional (2D) shape of the stent is developed to flexibly modify the stent shape, and it is implemented in the design support system. The shape of the strut part of the original SENDAI stent has been slightly modified to improve the push-ability of the stent.

Fig. 2. Photograph of SENDAI stents (initial stent, φ1.85; expanded, φ10.0)

Fig. 3. Design support system for self-expandable stents

The mechanical properties of both the original and the new SENDAI stents are evaluated in this paper to clarify the influence of stent shape on their mechanical properties; the estimation of this influence is very useful for the design modification of stents. Furthermore, based on the obtained mechanical properties, a method for the selection of a stent suitable to a patient's condition is proposed from the viewpoint of mechanics.
A force generated at the end of a stent in a blood vessel, which is caused by the straightening of the blood vessel, is also calculated from a simplified beam model, and the influence of the flexural rigidity of the stent on the straightening is discussed.

1 MANUFACTURE PROCESS OF SENDAI STENT

The 2D drawing of the SENDAI stent is illustrated in Figure 1. Every wire part consists of 12 pieces of wavy wire, and the strut part connects them by 3 bridge wires. The stent is made of a Nitinol tube 1.85 mm in diameter, 40 to 80 mm long and about 0.25 mm thick. Slits are processed on the tube by a laser beam, and then it is electrically polished. The stent finally has the structure of longitudinal repetition of wavy wire parts and strut parts, as shown in Figure 2. It is called the initial stent in this paper. The initial stent is expanded step by step by repeated insertion of a thick rod and annealing. The shape-memory effect is expressed in this process. Stents with various diameters are also shown in Figure 2.

2 DESIGN SUPPORT SYSTEM FOR SELF-EXPANDABLE STENTS

2.1 Shape Design Support System

The design support system for self-expandable stents is shown in Figure 3. The left-hand side of Figure 3 shows the manufacture and evaluation processes actually adopted in the production of the SENDAI stent. As described above, the development, namely the 2D drawing of the developed shape of the stent, is given first, and NC data are generated to cut a Nitinol tube by laser beam. The laser-processed tube is expanded step by step and heat treated. This is the shape-memory processing. Expanded stents are bent and pressed to evaluate their performance in the next step; then the self-expandable stents are completed. Of course, the evaluation of performance is usually omitted once the trial manufacture is finished and the production of stents is well under way.

Fig. 4. Main design variables for SENDAI stent

The right-hand side of Figure 3 presents a flow chart of the proposed shape design and optimization.
A three-dimensional (3D) stent shape corresponding to a laser-machined initial stent is formed first from the 2D development of the stent of initial shape using 3D-CAD. Next, the 3D-CAD model is divided into finite elements, expansion analysis by the large-deformation finite element method is conducted, and the expanded shape of the stent is predicted. Stiffness analysis is conducted using the finite element method based on the predicted expansion model. Then the mechanical properties of the stent are evaluated from the obtained results. These are respectively equivalent to the manufacturing process, expansion process, and evaluation process in actual fabrication. NC data for processing are output to complete the processes after optimization by shape modification. Although the processes subsequent to evaluation have not been completed at present, a 2D shape generation support subsystem that enables flexible alteration of a 2D development by changing the numerical values of design variables has been added to the original design support system [12]. This is also useful when creating the 2D development at the first stage. Details of the 2D shape generation support subsystem and the 3D modeling of initial stents are described later. By repeating the procedures described above, the evaluation of mechanical properties and the shape modification with the 2D shape generation support subsystem are considered to allow the design of a stent that is adapted to the precise conditions of each patient.

2.2 Design Variables of SENDAI Stent

Design variables considered to influence the mechanical properties are set up for the 2D development, as shown in Figure 4, where lw and lb respectively represent the axial lengths of the wavy wire and the bridge, θw and θb are their angles to the axial direction, tw and tb respectively denote the widths of the linear elements of the wire and bridge, and ri and ro respectively signify the inside and outside diameters of the wire end.
The wire part of the SENDAI stent has a cyclic form in which 12 loose S-shapes are put in order; it consists of arcs. In drawing, lw, θw, etc. are given; the arc shape is then determined so that the linear elements continue. The numbers of wires and bridges, represented respectively as nw and nb, and the wall thickness t of the raw-material tube are also used as design variables.

2.3 Subsystem for the Generation of the Developed Shape of Initial Stents

Previously, the conventional DXF file used for the 2D development of a stent had to be put into 3D-CAD (SolidWorks) for shape modification at great effort. Therefore, a 2D shape generation support subsystem has been developed as a prototype to modify the 2D development of a stent flexibly. First, an arc and a group of straight lines representing the stent shape are determined, with the end point of the wire as the origin, based on the numerical values of the given design variables. A DXF file, a CAD-format file, is output after drawing them. Next, the 2D development of a stent of initial shape is generated by importing this DXF file into 3D-CAD. The 2D shape generation support subsystem, constructed as a prototype, is implemented in the shape design support system, as shown in Figure 3, which has enhanced the efficiency of stent shape design. In contrast to the conventional procedure using 3D-CAD, which required several hours for the shape change of a 2D development, this 2D shape generation support subsystem enables rapid shape change.

2.4 3D Modeling of Initial Stents

The 3D model of the initial stent is formed from the 2D development of the stent. This modeling is a very important and complicated process; it also depends on the functions of the 3D-CAD system.

Fig. 5. Flow of modeling of initial stent in three-dimensional CAD system

The adopted flow of modeling is shown in Figure 5.
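The output step of the 2D shape generation subsystem described in Section 2.3 can be illustrated with a minimal sketch. This is a hypothetical simplification: the wavy wire is approximated here by straight zig-zag segments rather than the actual arcs, and the output is a bare entities-only DXF fragment, not the subsystem's real file layout.

```python
# Hypothetical sketch: generate one zig-zag wire row on the developed
# (flattened) tube from design variables, and emit it as LINE entities
# in a minimal entities-only DXF text block.
import math

def zigzag(lw, nw, tube_dia=1.85):
    """Vertices of a zig-zag row; x runs around the developed
    circumference, y along the stent axis. One S-shape per wire."""
    pitch = math.pi * tube_dia / nw
    return [(i * pitch / 2.0, (i % 2) * lw) for i in range(2 * nw + 1)]

def to_dxf(points):
    """Minimal DXF (R12-style, ENTITIES section only) with LINE segments;
    group codes 10/20 and 11/21 hold the segment end points."""
    out = ["0", "SECTION", "2", "ENTITIES"]
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        out += ["0", "LINE", "8", "0",
                "10", f"{x1:.4f}", "20", f"{y1:.4f}",
                "11", f"{x2:.4f}", "21", f"{y2:.4f}"]
    out += ["0", "ENDSEC", "0", "EOF"]
    return "\n".join(out)
```

With lw = 2.96 mm and nw = 12 (the SENDAI-L80 values from Table 1), `to_dxf(zigzag(2.96, 12))` yields 24 line segments spanning the developed circumference of the 1.85 mm tube.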
The shape of the stent is drawn on the surface of a tube using a wrapping function; then the initial stent model is generated on the tube using the radial extrusion function of the 3D-CAD system. SolidWorks is used for this modeling. The solid models displayed at each process step are also shown in the figure. The CAD data of the initial stent are output using the Parasolid data transformation format and sent to the pre-processor of the solver to generate the finite element mesh. In our trials this format proved better than the IGES format; we used MSC.Patran for this purpose. The model is then expanded to the stent diameter by the non-linear finite element method in MSC.Marc.

3 PREDICTION OF EXPANDED SHAPE OF STENT

3.1 Analysis for Expanding Stent

Migliavacca et al. [14] conducted expansion and stress analyses of a balloon-expandable stent. However, no reports have described the estimation of the shape of self-expandable stents and analysis using an expansion estimation model, as far as the authors know. When mounting a self-expandable stent in a thin sheath, or catheter, the stent is cooled on ice to shrink its diameter. It is inserted into the sheath in this condition; the stent then regains its initial shape by the shape-memory effect when it is released into a blood vessel. Therefore, the shrinkage and expansion in diameter should be reversible. The simulation described later estimates the mechanical properties at the expanded state and suggests shape modifications for improving them. Subsequently, shrinking the modified shape yields the final shape for laser processing. In addition, once the stent shape at the expanded state is obtained with sufficient precision by finite element analysis, the finite elements after deformation are also useful for the simulation of mechanical properties. For these reasons, the prediction of the stent shape at the expanded state is of great importance.
This study uses the updated Lagrange method for the formulation of the nonlinear equations, adopts the Newton-Raphson method for the solution, and carries out elasto-plasticity analysis using the finite element method to estimate the expanded shape of a stent. The actual mechanical properties of a shape-memory alloy are known to vary according to the heat treatment conditions. Because the heat treatment condition of the SENDAI stent has not been disclosed, the authors cannot measure the property. Therefore, the material properties are assumed to be those shown in Figure 6, referring to the stress-strain curve of a Ti-49.8Ni alloy processed at 500 °C and subjected to the tensile test by Miyazaki et al. [15]. Its Young's modulus is 28 GPa, the Poisson's ratio is 0.3, and the initial yield stress is 140 MPa.

Fig. 6. Assumed stress-strain relationship for stent material

Expansion of the stent is simulated as a forced displacement problem. An axial constraint is given at the wire ends as a boundary condition to avoid shortening of the stent length under the forced radial displacement. Expansion analysis was conducted starting from the initial shape with an external diameter of 1.85 mm, with the mesh regenerated when outside diameters of 4, 6, 8, and 10 mm were reached, both to improve precision and to reduce the time necessary for the analysis. In addition, the radial forced displacement was applied in 100 increments in one expansion-analysis session.

3.2 Prediction of Expanded Shape of SENDAI Stent

The two objects of analysis are the SENDAI-L80, the new model of the SENDAI stent, and the ST2621, the conventional model. Figure 7 illustrates their 2D developments. Table 1 shows their typical design variables, which clearly reveal that the cyclic S-shapes of both wires are almost identical, but that a great difference pertains to the bridge part connecting the wires.
The ends of two adjacent wires are connected with a gap of two wire diameters in the ST2621, whereas there is no gap in the SENDAI-L80. The initial stent model of the SENDAI-L80, with 6 wire parts and 5 strut parts, is shown in Figure 8; 65,340 10-node tetrahedral elements are used for this model. Figure 9 presents the estimated expanded shapes of the ST2621 with an external diameter of 4 mm and of the SENDAI-L80 with 8 mm, in comparison to the real stents. Table 2 shows the estimated errors of the expanded shapes.

Fig. 7. Difference of bridge part between SENDAI-L80 and ST2621

Table 1. Design variables of ST2621 and SENDAI-L80

  Design variable             ST2621   SENDAI-L80
  Number of wires, nw         12       12
  Number of bridges, nb       3        3
  Wire length, lw [mm]        3.37     2.96
  Bridge length, lb [mm]      0.88     0.85
  Wire angle, θw [deg.]       22.8     24.4
  Bridge angle, θb [deg.]     53.5     25.0
  Wire width, tw [mm]         0.14     0.14
  Bridge width, tb [mm]       0.11     0.17

Because the wire length lw and bridge length lb are constraints, no shape error is considered for these. However, a difference occurs in the circumferential gap of the wire ends in the ST2621, and the wire part is expanded more linearly than that of the real stent in the SENDAI-L80. Nevertheless, the shape error relative to the real stent does not exceed 8%, as shown in Table 2.

Fig. 8. Initial stent of SENDAI-L80 divided into 65,340 finite elements

Fig. 9. Estimated results of expanded shapes, with comparison to real SENDAI stents

Table 2. Error of simulated shapes

Fig. 10. Deformation of stents due to uniform radial pressure

Therefore, as described above, it is confirmed that this procedure can estimate the stent shape in the expanded state. The resultant expanded shape will be used as the model for the evaluation of mechanical properties.

4 EVALUATION OF MECHANICAL PROPERTIES OF STENTS

It is natural that a stent, as a product, has strain and stress states of its own depending on its manufacturing process. Nevertheless, it is not easy to measure them accurately.
Assuming neither initial strain nor stress, the mechanical properties of a stent are evaluated.

4.1 Radial Stiffness

The center section of the stent is fixed in the axial and θ directions, and radial pressure is applied to the outside of the stent elements; the compressive deformation of the stent is then analyzed. The state of deformation is shown in Figure 10. SENDAI-L80 maintains a round shape and deforms almost uniformly, whereas ST2621 deforms more at the center than at both ends and loses its circular section. Eight nodes at approximately equal intervals are chosen on each wire element, and the radius reductions at 96 points on the 12 center wires are averaged to obtain a representative radius reduction. The relation of the obtained radius reduction to pressure is presented in Figure 11. The ratio of the radial pressure p applied to the outside surface of the stent to the radius reduction of the stent Δr defines the radial stiffness of the stent Kp:

Kp = p / Δr    (1).

The relation between the radial stiffness obtained from Eq. (1) and the stent diameter is shown in Figure 12, which indicates that the radial stiffness falls as the pressure increases in both stents. This decrease is attributable to the nonlinearity of the radius reduction shown in Figure 11, and it appears clearly in the stiffness drop of ST2621. The radial stiffness of SENDAI-L80 is greater than that of ST2621. Radial pressure produces hoop stress in the stent. The stent diameter is considered to decrease because this hoop stress folds up the wire part and decreases the angle of the aperture at the wire end.

Fig. 11. Relation between radial pressure and radius reduction

Fig. 12. Relation between radial stiffness and stent diameter

In SENDAI-L80, with its short wire length lw, the force applied to a wire part decreases, so the effect that reduces the aperture angle at the wire ends also decreases. This is inferred to suppress the increase in radius reduction and to have contributed to the improved rigidity of SENDAI-L80. This consideration reveals that the wire length lw affects the radial stiffness of a stent.

4.2 Flexural Rigidity

The left end of the stent is fixed, and axial nodal forces equivalent to a bending moment are applied to 12 nodes at the wire ends at the right end of the stent. This distributed load is applied in 100 increments, and the bending deformation is analyzed. The deformed shape is shown in Figure 13. Both ST2621 and SENDAI-L80 bend mainly at the bridge part. The relation between the obtained bending moment and the curvature is presented in Figure 14. An unnatural increase of the bending moment is observed at 1/ρ = 0.025 for SENDAI-L80 with ds = 10 mm. It is caused by the contact of the wire ends of adjacent wire parts on the compression side of the stent. This is an unfavourable property for clinical use, and an improvement should be considered. Numerical analyses and experience suggest that the placement of adjacent wire parts, namely the angle of the bridge element θb, considerably influences the avoidance of wire-end contact.

Fig. 13. Deflection of stents due to bending moment

The flexural rigidity of a stent Kb is defined by:

Kb = M ρ    (2),

where M is the bending moment applied to the stent and ρ is the radius of curvature of the central axis of the stent. The relation between the flexural rigidity obtained from Eq. (2) and the stent diameter is shown in Figure 15, which suggests that the flexural rigidity changes as the curvature changes in both stents.
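Both stiffness measures, Eq. (1) and Eq. (2), can be evaluated directly from simulation output. A minimal sketch (all numerical values below are illustrative placeholders, not data from this study):

```python
# Radial stiffness (Eq. (1)) and flexural rigidity (Eq. (2)) evaluated
# from hypothetical simulation samples; all numbers are placeholders.

def radial_stiffness(p, delta_r):
    """Eq. (1): K_p = p / dr, radial pressure over radius reduction."""
    return p / delta_r

def flexural_rigidity(moment, curvature):
    """Eq. (2): K_b = M * rho, i.e. the bending moment divided by the
    curvature 1/rho read off the moment-curvature curve (Figure 14)."""
    return moment / curvature

k_p = radial_stiffness(p=0.03, delta_r=0.5)          # e.g. MPa/mm
k_b = flexural_rigidity(moment=2.0, curvature=0.02)  # e.g. N*mm^2
```

In practice the pressure, radius reduction, moment, and curvature samples would be extracted from the finite element results discussed above.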
This behaviour originates in the nonlinear relation between bending moment and curvature shown in Figure 14. The increase of flexural rigidity at ds = 10 mm and larger curvature is evidently caused by the contact of the wire ends, as discussed above. The flexural rigidity of SENDAI-L80 is greater than that of ST2621. Deformation of the bridge part is observed to be a factor influencing flexural rigidity. The angle between the end surfaces of adjacent wire parts of a stent subjected to a bending moment is defined as the bending angle φb; its relation to the stent diameter is shown in Figure 16, which indicates that, for a constant bending moment, the bending angle of ST2621 is always greater than that of SENDAI-L80 at every stent diameter. Reducing the angle of the bridge element θb decreases the element length; consequently, it improves the flexural rigidity of the stent. Furthermore, the width of the bridge part tb of SENDAI-L80 is greater than that of ST2621, which also enhances flexural rigidity. This consideration reveals that the bridge width tb and the bridge angle θb strongly affect the flexural rigidity of a stent.

Fig. 14. Relation between bending moment and curvature

Fig. 15. Relation between flexural rigidity and stent diameter

Fig. 16. Bending angle between end surfaces of adjacent wire parts of stents subjected to a bending moment

4.3 Influence of Shear Deformation on Flexural Rigidity

A load normal to the stent axis causes shear deformation as well as bending deflection in a stent. One end of the stent is fixed, and vertical nodal forces are applied to 12 nodes at the wire ends of the other end; the deflection, including the shear deflection, is evaluated under these conditions. This deflection is denoted by δ. Figure 17 presents the obtained deflection δ at the center of the loaded surface.

Fig. 17. Deflection at the loaded end of a stent fixed at the other end
The deflection δ consists of a bending deflection and a shear deflection. The deflection is estimated again with the additional constraint that the loaded surface cannot rotate, under otherwise the same boundary conditions as above. The deflection obtained in this way is regarded as the shear deflection and is denoted δshear. The deflection due to bending alone, δbending, is then obtained from the total deflection δ and the shear deflection δshear as:

δbending = δ − δshear    (3).

The relation of the obtained bending deflection and shear deflection to the stent diameter is shown in Figure 18. The deflections are normalized with the flexural rigidity EI, where E is the elastic modulus of the stent material and I is the moment of inertia of the cross-sectional area of the stent. The flexural rigidity is computed from Figure 14 using the obtained δbending. Figure 18 demonstrates that the shear deflection increases with the stent diameter and the applied load. The large ratio for the initial stent is caused by its small bending deflection. The shear deflection amounts to 23 to 45% of the bending deflection at every stent diameter, so the effect of shear is significant. When a stent is placed at a bend of a blood vessel, the vessel wall receives an intensive load from the stent end and, conversely, the stent receives an intensive load at its end. This is known as the straightening of the blood vessel and is discussed in detail later. Shear deformation of a stent therefore cannot be disregarded.

Fig. 18. Relation among bending deflection, shear deflection and stent diameter

5 SELECTION OF SUITABLE STENTS BASED ON MECHANICAL PROPERTIES

A stent should be stiff enough to expand the narrowed section, yet follow the vessel wall flexibly. Because neither patient conditions nor blood vessel shapes are identical, it is important to select a stent that is adequately adapted to each patient's condition.
The selection and/or design of a suitable stent currently depends largely on the judgment of the doctor and the medical engineer. In this paper, for the purpose of designing and manufacturing a stent suited to the patient's condition, a selection concept is proposed from the viewpoint of mechanics. The patient's conditions, namely the lesion of the stenosis, etc., as well as the blood vessel diameter required to secure the blood stream, are decided by a doctor. Referring to this information and to the analyzed mechanical properties, a suitable stent diameter for securing the blood stream is selected in the following sections.

5.1 Estimation of Stiffness of Blood Vessel

To select a stent that is precisely adapted to a patient's condition, it is necessary to estimate the stiffness of the blood vessel. Clinically, the stiffness is expressed by the following equation using the stiffness parameter β:

ln(Pi / Ps) = β (Do / Dv − 1)    (4),

where Pi is the internal pressure, Ps is an arbitrary reference internal pressure, Do is the outside diameter of the blood vessel under the loading Pi, and Dv is the blood vessel diameter at Pi = Ps. The above equation holds in the physiological blood pressure range. The stiffness parameter is easy to handle clinically because it is not influenced by the internal pressure. The value of β depends on the part of the blood vessel, as shown in Table 3 (from [17] to [20]).

Table 3. Stiffness parameters of various arteries

No.  Artery                     Stiffness parameter β
1    Femoral                    11.2
2    Internal carotid           11.15
3    Common carotid             5.25
4    Vertebral (intracranial)   15.82
5    Vertebral (intracranial)   13.75
6    Vertebral (extracranial)   7.58
7    RCA                        36.3
8    LAD                        50.8
9    LCCA                       38.3

RCA, LAD and LCCA: right coronary, left anterior descending coronary and left circumflex coronary arteries.

Fig. 19. Variation of diameter of human arteries due to internal pressure

Figure 19 indicates the relation between the pressure ratio Pi/Ps and the distension ratio Do/Dv obtained from Eq. (4).
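Eq. (4) is straightforward to evaluate numerically. The sketch below computes the pressure ratio Pi/Ps for a given distension ratio Do/Dv, using the femoral-artery β from Table 3; the distension value itself is an arbitrary illustrative input:

```python
import math

def pressure_ratio(beta, distension_ratio):
    """Eq. (4) solved for the pressure ratio:
    Pi/Ps = exp(beta * (Do/Dv - 1))."""
    return math.exp(beta * (distension_ratio - 1.0))

beta_femoral = 11.2                      # femoral artery, Table 3
r = pressure_ratio(beta_femoral, 1.05)   # 5 % distension (illustrative)
print(r)  # about 1.75
```

A distension ratio of 1 gives a pressure ratio of exactly 1, as required by Eq. (4).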
Although the stiffness of the vessel wall is presumed to increase with arteriosclerosis, negative results have also been reported. This study therefore assumes that the stiffness parameter is constant and uninfluenced by arteriosclerosis.

5.2 Determination of Stent Diameter and Rigidity in Radial Direction for Securing Blood Flow

Figure 20 presents the model of a narrowed section of a blood vessel. Dv is the outside diameter of the blood vessel in the normal state, tv is the wall thickness, Dc and Lc are the diameter and length of the narrowed section, and Dr is the inside diameter after stent placement.

Fig. 20. Model of expansion of stenosis in artery by stent

The relation between the internal pressure and the outside diameter of a femoral artery (β = 11.2) is obtained from Eq. (4) as shown in Figure 21, in which curves A to C indicate the characteristics of the blood vessel in the narrowed state. Here we assume that a vessel with wall thickness tv = 1 mm and narrowed diameter Dc = 2 mm is expanded to Dr = 3.8 mm, close to the normal state. The axis of the internal diameter of the blood vessel is easily obtained as shown in the figure, and the remedial diameter Dr is placed on this axis. The intersection of the line through Dr = 3.8 mm with curve C for Dc = 2 mm gives the internal pressure Pi required for the expansion of the blood vessel. The equation for this pressure is derived from Eq. (4) as follows:

Pi = Ps exp[β ((Dv + (Dr − Dc)) / Dv − 1)] = 75 mmHg = 0.01 MPa    (5).

Figure 22 indicates the variation of the outside diameter of the SENDAI stent due to the external pressure pe. Considering that the whole exterior of the stent supports the vessel wall, the external pressure pe applied to the stent must be equal to Pi.
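The required pressure of Eq. (5) can be computed as in the sketch below; β, Dc and Dr follow the example in the text, while Ps and Dv are placeholder assumptions, so the numerical result is not claimed to reproduce the 75 mmHg of the worked example:

```python
import math

def expansion_pressure(p_s, beta, d_v, d_c, d_r):
    """Eq. (5): internal pressure required to expand the narrowed
    section from Dc to Dr, for a vessel of normal outside diameter Dv:
    Pi = Ps * exp(beta * ((Dv + (Dr - Dc)) / Dv - 1))."""
    return p_s * math.exp(beta * ((d_v + (d_r - d_c)) / d_v - 1.0))

# beta, Dc, Dr from the text; Ps and Dv are assumed placeholders:
p_i = expansion_pressure(p_s=10.0, beta=11.2, d_v=5.8, d_c=2.0, d_r=3.8)
print(p_i)  # pressure in the same unit as Ps
```

Note that for Dr = Dc (no expansion) the exponent vanishes and Pi = Ps, which is a quick sanity check on the implementation.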
Therefore, marking the internal pressure obtained above on the vertical axis of Figure 22 and the stent diameter after insertion in the vessel on the horizontal axis, the stent diameter necessary for the medical treatment is determined from the intersection of the lines in the figure as ds = 4 mm. Although this selection of the stent diameter is fundamental, if the rigidity is insufficient to secure the blood stream, or if the blood vessel would be subjected to excessive stress, the stent shape must be redesigned to make the rigidity adequate. For example, the axial length of the wavy wire can be modified to increase or decrease the radial stiffness, and the angle of the bridge element can be modified to increase or decrease the flexural rigidity. A design approach that adjusts the axial variation of the radial stiffness of the stent is also conceivable. Ideally, a suitable stent shape should be obtained by a precise analysis of the interaction between the stent and the blood vessel, such as in [16], but such an analysis is very complicated. This section therefore proposes a simple method of selecting a suitable stent from the viewpoint of mechanics.

Fig. 21. Pressure-diameter relation of femoral artery

Fig. 22. Pressure-diameter relation of SENDAI stent

5.3 Estimation of Force due to Straightening of Blood Vessel

When a stent is inserted in a curved blood vessel, the originally straight stent is forced to bend a little. As a reaction, the blood vessel is subjected to concentrated forces at the ends of the stent, in addition to the internal pressure required for the expansion of the narrowed section, and as a result the vessel is somewhat straightened. This is called the straightening of the blood vessel. The force due to straightening evidently depends on the flexural rigidity of the stent: the higher the flexural rigidity, the larger the force.
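The estimate developed in the following paragraphs (a statically indeterminate combined beam, Eqs. (6) to (10)) can be sketched numerically as follows; all input values are made-up placeholders, not the paper's data:

```python
import math

def straightening_force(rho, l_s, k_v, k_b):
    """Force F at a stent end from the combined-beam model:
    d_i = rho * (1 - cos(Ls/(2*rho)))   (Eq. (6), vessel alone)
    d_v = F*(Ls/2)**3 / (3*k_v)         (Eq. (7), vessel under F)
    d_s = F*(Ls/2)**3 / (3*k_b)         (Eq. (8), stent under F)
    d_s = d_i - d_v                     (Eq. (9), compatibility)
    Solving Eq. (9) for F gives the expression below."""
    d_i = rho * (1.0 - math.cos(l_s / (2.0 * rho)))
    half_cubed = (l_s / 2.0) ** 3
    return d_i / (half_cubed / (3.0 * k_b) + half_cubed / (3.0 * k_v))

def rigidity_limit(f_limit, l_s, d_bending):
    """Eq. (10): upper bound on the stent flexural rigidity k_b."""
    return f_limit * (l_s / 2.0) ** 3 / (3.0 * d_bending)

# Placeholder inputs (lengths in mm, rigidities in N*mm^2):
F = straightening_force(rho=30.0, l_s=20.0, k_v=500.0, k_b=800.0)
k_b_max = rigidity_limit(f_limit=0.03, l_s=20.0, d_bending=4.5)
```

In use, F would be compared with the assumed force limit of the vessel wall, and k_b_max with the flexural rigidity of the candidate stent.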
Too large a force will damage the vessel wall; in this sense, the straightening is an important measure for evaluating the performance of stents. Although the straightening should be evaluated by precisely solving the deformation of the blood vessel combined with the stent inside it, this is a considerably complicated problem. The force applied to the blood vessel at a stent end is therefore estimated from a simplified problem in order to confirm that it does not exceed the force limit of the vessel wall. The blood vessel and the inserted stent are modeled by an initially curved beam and a straight beam, respectively, and approximated by a statically indeterminate beam, as shown in Figure 23. Considering the symmetry, the beams are fixed at their centers.

Fig. 23. Model of stent placement in curved blood vessel as a combined beam

The quantities required for this estimate are the radius of curvature of the blood vessel at the narrowed part ρ, the inserted stent length Ls, the flexural rigidity of the stent EsIs = kb, and the stiffness of the blood vessel EvIv = kv. The initial deflection of the blood vessel in the state where no stent is inserted, δi, and the deflections of the blood vessel and the stent in the state where the stent is inserted, δv and δs, are given by:

δi = ρ [1 − cos(Ls / 2ρ)]    (6),

δv = F (Ls/2)³ / (3 Ev Iv) = F (Ls/2)³ / (3 kv)    (7),

δs = F (Ls/2)³ / (3 Es Is) = F (Ls/2)³ / (3 kb)    (8).

The deflection of the stent when it is placed in the blood vessel is then given by:

δs = δi − δv    (9).

Figure 24 shows the relation between the concentrated load and the deflection of the SENDAI stent with ds = 4 mm selected in the preceding section, obtained from Figure 17; one end of the stent is fixed and the force is applied at the free end. Marking the deflection δ = 4.5 mm on the horizontal axis, the force acting on the vessel wall due to straightening is obtained from the intersection point as F = 0.024 N, as indicated in Figure 24. For an assumed force limit of the vessel wall Flimit = 0.03 N, the force due to straightening is smaller than the limit, and it is confirmed that the vessel wall is not damaged.

Fig. 24. Estimation of force acting on a vessel wall due to straightening

When a limit Flimit of the applied force that causes no damage to the blood vessel wall is assumed, the flexural rigidity of the stent should satisfy:

kb ≤ Flimit (Ls/2)³ / (3 δbending)    (10).

The flexural rigidity obtained from Eq. (10) may be used for a rough check of whether the selection of a stent is acceptable, by comparing it with the flexural rigidities of the existing stents evaluated in Chapter 4. If the limit force Flimit cannot be satisfied with the mechanical properties of existing stents, a new stent with reduced flexural rigidity has to be designed, applying the mechanical property assessment considered in this paper.

6 DISCUSSION ON DESIGN AND SELECTION OF SUITABLE STENTS

Conventional design approaches accumulate improvements through designs based on a designer's intuition and the evaluation of clinical trials until a practical application is achieved. This is trial-and-error design, which requires much time and labor, and it is therefore difficult to satisfy medical requirements. The correlation between medical requirements and mechanical properties must be clarified, and the effect of the design factors on the mechanical properties must be evaluated quantitatively, so that the medical requirements can be reflected in a novel stent shape. The design support system described in this paper can evaluate the effects of design factors on mechanical properties quantitatively as a routine workflow. Estimating the expanded shape and evaluating the mechanical properties with sufficient precision allows medical requirements to be reflected correctly in the stent shape. Furthermore, the efficient shape design facilitates rapid shape changes and improvements.
This is therefore expected to contribute to a reduction of working time and cost. This paper does not go beyond suggesting a selection guideline based on a strength-of-materials approximation. The establishment of a proper method of stent selection is anticipated, together with an understanding of the shape of the blood vessel containing the stent, of the loading conditions, and of the blood vessel itself, based on a more precise mechanical analysis. This would enable the design of a stent that fits the symptoms of each patient and is expected to be useful for the treatment of arteriosclerosis.

7 CONCLUSION

A shape design support system based on 3D CAD and CAE was presented in this study, with the shape design of a self-expanding stent adapted to a patient's condition as the final target. Using the functions of the system, a design method for the initial stent shape, the estimation of its expanded shape, the assessment of its mechanical properties, and a discussion of the influence of the stent shape on those mechanical properties were carried out. In addition, a method of selecting a stent suitable for a patient's condition was proposed from the viewpoint of mechanics. The results are summarized as follows.

1. A two-dimensional shape generation support subsystem was developed as a trial. The system sets up the design variables of a stent and can modify its 2D development flexibly. It was embedded in the shape design support system, and the efficiency of producing the initial stent model was enhanced.

2. Large-deformation elasto-plastic analysis with the finite element method can estimate the expanded shape of a stent with an error of about 8%.

3. Finite element analysis of a stent was conducted using the estimated expanded shape, and the mechanical properties were evaluated for two models of SENDAI stents: ST2621 and SENDAI-L80. The new model SENDAI-L80 demonstrated greater flexural rigidity and radial stiffness.

4. The factors influencing the mechanical properties were discussed on the basis of the differences between the shapes of ST2621 and SENDAI-L80. The results revealed that the width and the angle of the bridge element considerably affect the flexural rigidity, whereas the length of the wire part affects the radial stiffness.

5. The results showed that, when a stent is placed at a bend of a blood vessel, the flexural rigidity must account for the shear deflection in addition to the conventional bending deflection.

6. The characteristics of the blood vessel were estimated from the literature, and the stent diameter needed to secure the blood stream was selected on the basis of the analyzed mechanical properties of the stent.

7. The straightening of the blood vessel was analyzed with a simplified model of combined beams with the flexural rigidities of the blood vessel and the stent.

8. A method of selecting a suitable stent was proposed based on the mechanical properties. It enables the selection of a stent that is well adapted to a patient's condition. In addition, the concept of designing and manufacturing a stent suited to the patient's condition was discussed as future work.

8 REFERENCES

[1] Nobuyoshi, M. Frontiers in coronary stenting. Nankodo, 1999 (in Japanese).
[2] Schmitz, K.P., Behrend, D., Behrens, W., Schmidt, W. Comparative studies of different stent designs. Progress in Biomedical Research, 1999, p. 52-58.
[3] Duda, S.H., Wiskirchen, J., Tepe, G., Bitzer, M., Kaulich, T.W., Stoeckel, D., Claussen, C. Physical properties of endovascular stents: An experimental comparison. Journal of Vascular and Interventional Radiology, 2000, vol. 11, no. 5, p. 645-654.
[4] Mori, K., Mitsudou, K., Iwata, H., Ikeuchi, K. Study on bending stiffness of stents. Transactions of the Japan Society of Mechanical Engineers, Series C, 2001, vol. 67, no. 662, p. 3078-3085 (in Japanese).
[5] Whitcher, F.D. Simulation of in vivo loading conditions of nitinol vascular stent structures. Computers & Structures, 1997, vol. 64, no. 5/6, p. 1005-1011.
[6] Igaki, K., Iwamoto, M., Yamane, H., Saito, K. Development of the novel biodegradable coronary stent (1st report, polyglycolic acid as the stent material). Transactions of the Japan Society of Mechanical Engineers, Series A, 1999, vol. 65, no. 639, p. 2379-2384 (in Japanese).
[7] Takashima, K., Kitou, T., Mori, K., Ikeuchi, K. Simulation and experimental observation of contact conditions between stents and artery models. Medical Engineering & Physics, 2007, vol. 29, no. 3, p. 326-335.
[8] Gay, M., Zhang, L., Liu, W.K. Stent modeling using immersed finite element method. Computer Methods in Applied Mechanics and Engineering, 2006, vol. 195, no. 33-36, p. 4358-4370.
[9] Wang, W.-Q., Liang, D.-K., Yang, D.-Z., Qi, M. Analysis of the transient expansion behavior and design optimization of coronary stents by finite element method. Journal of Biomechanics, 2006, vol. 39, no. 1, p. 21-32.
[10] Lally, C., Dolan, F., Prendergast, P.J. Cardiovascular stent design and vessel stresses: a finite element analysis. Journal of Biomechanics, 2005, vol. 38, no. 8, p. 1574-1581.
[11] Nashihara, H., Ishibashi, T. A possible product design for minute medical parts (Development of self-expanding stent made of memory alloy). Journal of the Asian Design International Conference, 2003, vol. 1, CD-ROM.
[12] Inoue, K., Matsuoka, T., Masuyama, T., Ito, T. A shape design support system for self-expandable stents. Proceedings of the 15th International Conference on Engineering Design, 2005, vol. 1, CD-ROM.
[13] Inoue, K., Ito, T., Masuyama, T. Evaluation of mechanical properties of self-expandable stents. Proceedings of the 5th International Conference on Advanced Engineering Design, 2006, CD-ROM.
[14] Migliavacca, F., Petrini, L. A predictive study of the mechanical behavior of coronary stents by computer modeling. Medical Engineering & Physics, 2005, vol. 27, no. 1, p. 13-18.
[15] Miyazaki, S., Ohmi, Y., Otsuka, K., Suzuki, Y. Characteristics of deformation and transformation pseudoelasticity in Ti-Ni alloys. Journal de Physique, 1982, vol. 43, no. 12-Suppl., Colloque C4, p. 255-260.
[16] Paczelt, I., Baksa, A., Szabó, T. Product design using a contact-optimization technique. Journal of Mechanical Engineering - Strojniški vestnik, 2007, vol. 53, no. 7-8, p. 442-461.
[17] Hayashi, K., Handa, H., Nagasawa, S., Okumura, A., Moritake, K. Stiffness and elastic behavior of human intracranial and extracranial arteries. Journal of Biomechanics, 1980, vol. 13, no. 2, p. 175-184.
[18] Hayashi, K., Nagasawa, S., Naruo, Y., Okumura, A., Moritake, K., Handa, H. Mechanical properties of human cerebral arteries. Biorheology, 1980, vol. 17, no. 3, p. 211-218.
[19] Hayashi, K., Nagasawa, S., Naruo, Y., Moritake, K., Okumura, A., Handa, H. Parametric description of mechanical behavior of arterial walls. Proceedings of Japanese Society of Biorheology, 1980, vol. 3, p. 75-78.
[20] Igarashi, Y., Takamizawa, K., Hayashi, K., Ohnishi, K., Tanemoto, K. Mechanical properties of human coronary arteries. Proceedings of Japanese Society of Biorheology, 1983, vol. 6, p. 243-246.

Paper received: 28.2.2008
Paper accepted: 15.5.2008

Methodical Development of Innovative Robot Drives

Ralf Stetter1* - Andreas Paczynski1 - Michal Zajac2
1University Ravensburg-Weingarten, Germany
2University Zielona Góra, Poland

Strategies, methods, and tools that help design engineers in the development of complex mechatronic systems such as mobile robots are presented. The focus of this paper is a process for the interdisciplinary product development of such mechatronic systems. This process was developed and tested on the example of the product development of highly dynamic robot drives. The basis for this product development process is a streamlined (i.e. simplified) V-model, as it is known from the management of software and mechatronic projects in official organizations.
The advantages and disadvantages of a systematic procedure scheme are discussed, and concrete recommendations are derived from the experience gained during the development of highly dynamic robot drives. The contribution of the paper is the reflection on a case study in the growing field of mechatronic design. The developed drive systems aim at reducing the complexity of drives for mobile robots. Such robots can be used for numerous tasks; their application has already started with fully automatic lawn mowers and vacuum cleaners.
© 2008 Journal of Mechanical Engineering. All rights reserved.
Keywords: product development, mechatronics, methodical development, robot drives

1 INTRODUCTION

The main characteristic of mechatronic products is the functional and/or spatial integration of subsystems from the engineering disciplines of mechanical engineering, electrical engineering, and computer science. Innovative drive control systems for vehicles of all kinds have to combine the capabilities of subsystems from these different disciplines in order to achieve current performance objectives. However, even though the term mechatronics has been in use for some years and elaborate methodologies for structuring the development of mechatronic products have been generated, little support is given to the individual engineer or manager. In this paper a product development process is analyzed in detail, and concrete recommendations and hints for the content-oriented planning and control of development processes of mechatronic products are presented. These recommendations and hints are based on the V-model for mechatronic products, but their focus is on pragmatic answers and solutions for individual engineers and managers of small development teams. One type of vehicle with high market potential is the mobile robot, which has been developed and researched in academia for decades but has still not been able to achieve the expected market success.
Such mobile robots could potentially assist in nearly every area of human life, from household tasks to the support of physically impaired persons. It can be hypothesized that a main obstacle to the success of mobile robots is still their complexity and their susceptibility to external conditions. The subject of the product development process analyzed in the presented research work is the development of a highly dynamic robot drive. This drive aims at simplifying mobile robots and thereby enhancing their robustness.

2 BACKGROUND REVIEW

In this section the V-model, as the most prominent methodology for developing mechatronic products, is discussed in detail.

2.1 Introduction

A process model can be defined as a flow model used by professionals such as engineers and design managers as a tool assisting the management and organization of their systems or processes. In the field of engineering, for example, process models have been used extensively for product and system development in order to achieve more manageable and organized development processes. In a process model, the whole development process is decomposed into several single activities. Each of these activities has its own logical sequence and a responsible person or department. Hence, the development process becomes more transparent and controllable. Many types of process models are available for engineering product or system development; examples are VDI 2221 "Methodology for development and design of technical systems and products" and VDI 2422 "Design procedure for mechanical devices with microelectronics control". For mechatronic system development, a process model called the V-model is suitable and is generally the recommended one.

*Corr. Author's Address: University Ravensburg-Weingarten, Department of Mechanical Engineering, Postfach 1261, 88241 Weingarten, Germany, stetter@hs-weingarten.de
2.2 The V-Model as Process Model for Mechatronics System Engineering

The V-model is a graphical representation of the system development lifecycle. It was adopted by the German federal administration to regulate software development processes in 1997. After considerable adoption and modification, the V-model has been suggested by VDI Guideline 2206 as a "Design methodology for mechatronical systems" [34], [16] and [17]. Several researchers report current endeavors to apply and optimize this methodology for the product development of different mechatronic systems [1], [2], [6] to [9], [21] and [24]. Nowadays, the V-model has become a standard process model for mechatronic system development in many industrial companies. The V-model was chosen for mechatronic system and product development because of its structure. As stated above, mechatronics is an interdisciplinary engineering discipline that combines essential elements and knowledge of mechanical engineering, electrical engineering, and computer science. In the development of a mechatronic product or system, communication between the engineers is essential in order to avoid misunderstandings about the product or system being developed. With conventional process models for the respective engineering disciplines, problems may occur in the last stage of the development process, since there is no interconnection between the individual sections of the design. Unlike the conventional process models for mechanical engineering, electrical engineering, and computer science, which each have their own approach, the V-model organizes the development process by first working on the system level before splitting the system into the respective disciplines for further concretization. The developed product or system is then integrated level by level.
The validation and verification processes are carried out simultaneously with the integration process to make sure that the products or systems of the individual engineering disciplines are suitable and compatible with each other. Hence, the V-model helps each engineer involved in the development process to have a rough idea of the whole product or system being developed before the individual engineers start working on their domain-specific level.

2.3 The General Structure of a V-Model

Generally, the V-model can be divided into three main sections and is always depicted in a V shape: the System Design on the left side, the System Integration on the right side, and the Domain-specific Design at the tail of the V. Figure 1 shows the general structure of the V-model. The first step in designing with the V-model is to provide the requirements list of the system, shown at the top left. A requirements list provides the specification and information about the particular product or system being developed; it also forms the measurement basis against which the later product is assessed. Based on the requirements list, a cross-domain principle solution that describes the main physical and logical operating characteristics is established. This stage of development is called System Design. At this stage, the overall function of the system is divided into several chunks called sub-functions, and each sub-function is assigned a suitable operating principle or solution. On the basis of this jointly developed solution, further concretization takes place in the Domain-specific Design stage, which is generally carried out separately in the domains involved. Thorough calculations, drawings, analyses, or simulations are carried out at this stage, according to the respective domain. At the System Integration stage, the results from the individual domains are integrated. Relations between sub-functions are taken into account, as well as the verification and validation processes that assure product functionality, performance, quality, and economic value. The verification and validation processes are very important in order to make sure that the right product is being developed in the right way. The final result of the V-model is the mechatronic product of the developed system, shown at the top right.

Fig. 1. General structure of the V-model [34]

2.4 Development Methodology of Mechatronics Systems According to the Guideline VDI 2206

The development methodology for mechatronic systems according to the guideline VDI 2206 consists of two procedure schemes:
• the general problem-solving cycle on the micro-level, and
• the V-model on the macro-level.
In this regard, the micro-level refers to sequences of steps lasting from a few hours up to some months. These sequences do not reflect the complete design of a mechatronic product but address a specific problem within the product development. The notion of the macro-level refers to sequences of steps aimed at the complete development of a product, or at least of a major sub-system.

The Problem-Solving Cycle on the Micro-Level

The VDI 2206 provides a general procedure for process steps on the micro-level, known as the 'Problem-Solving Cycle'. It originates from systems engineering [13] as a guideline for a systems developer or engineer during the problem-solving processes along the development of a mechatronic system. This 'Problem-Solving Cycle' can be applied on the micro-level of the development process and is intended in particular to support the product developer in working on predictable, and consequently plannable, subtasks, but also in solving suddenly occurring, unforeseeable problems. Figure 2 shows the organization of the 'Problem-Solving Cycle' according to [13].
The 'Problem-Solving Cycle' contains several stages:
• The stages 'situation analysis' and 'adoption of a goal' are the initial stages of the cycle. The procedure to be chosen depends on whether an existing structure is taken as a basis (procedure based on the actual state) or an ideal concept is at the forefront (procedure based on the desired state).
• The aim of the 'analysis and synthesis' stage is to find several alternative solution variants. This is achieved by alternating between synthesis steps and analysis steps.
• In the 'analysis and assessment' stage, the properties of the individual variants of a partial solution or an overall solution are analyzed against the requirements imposed on them. Furthermore, the solution variants are assessed on the basis of the assessment criteria defined during the formulation of the goal and the search for a solution. The result of the assessment is a proposal or recommendation for one or more alternative solutions.
• In the 'decision' stage, a decision is made for the further development process as to whether the solutions are satisfactory. If they are not, prior stages have to be repeated.
• The stage 'planning the further procedure', or 'learning', is aimed at a continuous improvement cycle.

Fig. 2. The Problem-Solving Cycle [13] and [34]
Fig. 3. Running through a number of macro-cycles [34]

The V-Model as a Macro-Level

The VDI 2206 recommends the V-model as a generic procedure (macro-level) for designing mechatronic systems. Its general structure has already been discussed in Section 2.3. It is important to note that even on the macro-level the V-model does not necessarily represent the whole development process.
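The stages of the cycle can be pictured as an iterative loop that generates variants, assesses them, and either accepts a solution or repeats the prior stages. The following sketch illustrates this flow on a toy scoring problem; the stage names in the comments follow the guideline, while the function names, the numeric "variants", and the acceptance criterion are assumptions invented for this example only.

```python
# Illustrative sketch of the VDI 2206 'Problem-Solving Cycle' as a loop.
# The toy problem (matching a numeric goal) and all identifiers are
# assumptions for illustration, not part of the guideline.

def synthesise_variants(situation):
    # 'analysis and synthesis': generate alternative solution variants
    return [situation - 1, situation, situation + 1]

def assess(variant, goal):
    # 'analysis and assessment': score a variant against the requirements
    return -abs(variant - goal)

def problem_solving_cycle(situation, goal, threshold=0):
    # the inputs stand for 'situation analysis' / 'adoption of a goal'
    for _ in range(10):                       # bound the number of micro-cycles
        variants = synthesise_variants(situation)
        scored = [(assess(v, goal), v) for v in variants]
        score, best = max(scored)             # recommendation of one alternative
        if score >= threshold:                # 'decision': solution satisfactory?
            return best                       # then plan the further procedure
        situation = best                      # otherwise repeat prior stages
    return None

print(problem_solving_cycle(situation=0, goal=3))  # → 3
```

The loop structure mirrors the guideline's intent: unsatisfactory assessments feed back into a new situation analysis rather than terminating the process.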
On the contrary, a complete development process might consist of several re-runs of the V-model with increasing product maturity. This characteristic is highlighted in Figure 3.

3 DESIGN PROJECT

In the analyzed development of a mechatronic product, an innovative drive system for mobile robots was to be developed and built. Mobile robots and their drive systems have been successfully developed and built for some years [4], [5], [14], [18], [30] and [36]. The distinctive quality of this design project is the highly dynamic drive system. The innovative drive system, which is already registered as a patent, is based on the concept of using the torque of the drive motors (more exactly, the torque differences between the wheels) to steer the four independent axes of a robot. The principal design of a mobile robot with the developed drive system is shown in Figure 4. The example robot consists of four drive motors fastened on arms that may rotate freely. These arms have no drive or brake; only an angle encoder is attached at the end of each axle. These angle encoders measure the angle of the motor and the wheel with regard to the robot platform.

Fig. 4. Principal Design

The distinct characteristic of the innovative drive system is the absence of dedicated steering motors. By means of the angle encoders applied at the four steering axes and highly dynamic control algorithms, it is possible to steer the robot solely by means of the four drive motors (compare Fig. 5). Each of the wheels on the short axles can be directed into the desired position by means of the torque applied to the wheel. This can take place sequentially for each individual wheel, but also simultaneously if the control allows different torques on all wheels. This characteristic allows simpler and at the same time more robust mobile robot concepts. A further main advantage of this concept is that the resulting robot is able to drive directly in any direction without time- and space-consuming turning maneuvers.
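The idea of steering without dedicated steering motors can be sketched as a closed loop: the angle encoder reports the steering angle, and a controller commands drive torque until the wheel module swivels into the desired position. The one-dimensional dynamics, the PD gains, the inertia value, and all names below are illustrative assumptions, not the authors' actual control algorithm.

```python
# Hypothetical sketch: steering one freely rotating wheel module by drive
# torque alone. The simplified dynamics (torque directly swivels the arm)
# and the gains are assumptions made for this illustration.

def steer_by_torque(angle, target, steps=2000, dt=0.001,
                    kp=40.0, kd=10.0, inertia=0.05):
    """Drive the steering angle [rad] to `target` using torque as the only input."""
    omega = 0.0                               # angular velocity of the arm
    for _ in range(steps):
        error = target - angle                # measured by the angle encoder
        torque = kp * error - kd * omega      # PD law on the drive torque
        alpha = torque / inertia              # simplified arm dynamics
        omega += alpha * dt                   # explicit Euler integration
        angle += omega * dt
    return angle

final = steer_by_torque(angle=0.0, target=0.5)
print(abs(final - 0.5) < 0.01)                # → True
```

With four such loops running simultaneously (one per wheel), different torques on the wheels would orient all modules at once, which is the simultaneous adjustment mentioned above.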
Furthermore, a robot based on the dynamic drive system is able to turn around its own centre. This characteristic is very important if cameras or other equipment that can only be used in a certain orientation are mounted on the robot. The innovative dynamic drive system shares these advantages with omni-drive systems [5], but has reduced friction as well as easier controllability, and offers the possibility of determining an exact position and orientation from an analysis of the angles of the steering axes and the angles of the drive wheels. Another intended characteristic of the developed prototype is the exclusive use of standard, state-of-the-art components and interfaces, such as CANopen. An application example as a service robot is shown in Figure 6. The robot was realized in the university workshop and is currently being tested and improved (Fig. 7).

Fig. 5. Individual adjustment of the steering angle
Fig. 6. Dynamic drive robot (application example)

4 INSIGHTS

In this section, concrete recommendations in the form of strategies, tools, and rather mundane hints for the development of mechatronic products are derived from the experience gained in the development of the dynamic robot drive. The section is structured according to the V-model described in Section 2 (Fig. 1). The first subsection deals with the planning and control of the whole development process.

4.1 Planning and Control of the Process

Obviously, milestones and objectives on the system level can only be met if the development process of a mechatronic product is planned and controlled on an interdisciplinary system level. This interdisciplinary planning and controlling can be considered the main challenge in the development of mechatronic products.
Theoretically, one could argue that the content of the tasks is less important when those tasks are planned and controlled on the abstract system level, and that the difference between a conventional product and a mechatronic, interdisciplinary system is therefore not as significant. In the project, however, establishing a sensible sequence of the different tasks of the different subsystems proved to be a difficult and crucial endeavor. In any product, some subsystems influence many other subsystems (active subsystems), while other subsystems are mainly influenced by other subsystems (passive subsystems). This influence is not limited to the different disciplines. For instance, the decision for brushless drive motors required certain motor control systems. A well-known method aimed at identifying the degree of influence of certain subsystems is based on the question of which subsystems have an influence on which other subsystems.

Fig. 7. Prototype of the robot
[Figure residue: influence matrix with rows and columns for subsystems 1 to 4]
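The active/passive classification described above is commonly captured in a design structure matrix: row sums show how strongly a subsystem influences others (activity), column sums show how strongly it is influenced (passivity). The subsystem names and the influence entries below are invented for illustration; they are not the matrix from the project.

```python
# Sketch of an influence analysis as a design structure matrix (DSM).
# All names and entries are illustrative assumptions.

# influence[i][j] == 1 means subsystem i influences subsystem j
subsystems = ["drive motor", "motor control", "power supply", "chassis"]
influence = [
    [0, 1, 1, 1],   # drive motor choice constrains control, power, chassis
    [0, 0, 1, 1],   # motor control influences power supply and chassis layout
    [0, 0, 0, 1],   # power supply influences the chassis (weight, space)
    [0, 0, 0, 0],   # chassis is mainly influenced by the others
]

# Active subsystems influence many others (large row sum); passive ones
# are mainly influenced by others (large column sum).
activity = {name: sum(row) for name, row in zip(subsystems, influence)}
passivity = {name: sum(col) for name, col in zip(subsystems, zip(*influence))}

print(max(activity, key=activity.get))    # most active subsystem
print(max(passivity, key=passivity.get))  # most passive subsystem
```

Sequencing active subsystems (here, the drive motor) early in the development process reduces downstream rework, which is exactly the ordering problem the project found difficult.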