Informatica - An International Journal of Computing and Informatics
Volume 20, Number 2, June 1996, ISSN 0350-5596
Published by the Slovene Society Informatika, Ljubljana, Slovenia

Elements of Software Architectures - The GenSIF/GARM Model

Wilhelm Rossak and Vassilka Kirova
Systems Engineering Laboratory
Department of Computer and Information Science
New Jersey Institute of Technology
University Heights, Newark, NJ 07102, USA
Email: rossak@pluto.njit.edu
Phone: (201) 596-6597

Keywords: domain-specific software architecture, systems integration, software engineering

Edited by: Matjaž Gams
Received: August 24, 1995  Revised: January 10, 1996  Accepted: April 17, 1996

This paper presents the results of our work on defining a commonly acceptable reference model for the specification of domain-specific software architectures. GARM (a Generic Architecture Reference Model), as discussed here, has been developed within the framework of the IEEE TC on ECBS (Engineering of Computer Based Systems). GARM is also part of the ongoing GenSIF (Generic Systems Integration Framework) project at the New Jersey Institute of Technology. We start with a short introduction to the ECBS and GenSIF framework. We then proceed to discuss in detail the elements of our generic architecture reference model: concepts, rules, patterns and guidelines. Examples from existing architectures, used to derive the GARM model, are given to illustrate the architecture elements and to argue their applicability. Finally, we introduce our approach to develop an architecture specification tool, called d-ASPECT.

1 Introduction

As systems become more and more complex, a clean and comprehensive specification of overall system organization and behavior becomes a critical aspect. Such a specification is especially useful if it is targeting a complete "application domain" [19], thus promoting software reuse, systems interoperability, and standardization. By serving as a generic, domain-wide system design model, this specification becomes what we call a "domain architecture" [20], [18].

An architecture, in our framework, describes the organization of a system as a composition of interacting components. It achieves that by first specifying a system's static structure, i.e. the valid types of components in the system and their structural relationships. On the basis of that structural model, the architecture then prescribes the types of possible interaction and synchronization between component types. A global data management strategy completes a typical architectural model.

The idea is that a unified architectural standard, a software architecture, will support the development cycle of each system in the domain by providing for consistent design principles and application semantics.

In the IEEE ECBS framework (Engineering of Computer Based Systems), the system architecture is one of the major components that form a comprehensive view of the engineering environment [14]; see Figure 1. The framework is organized as a pyramid, with the process model and the system architecture forming the basis. Information describing the process and the architecture, and methods and tools, supporting all of the other elements, are regarded as derived elements.

Figure 1: The ECBS constituents

It is important to understand that the architecture model must be as well defined and widely known as is the process model in order to establish a healthy balance at the basis of the ECBS pyramid. This balance will guarantee that process- and architecture-oriented thinking co-exist and enable an integrated approach to systems development. We also stress the idea of domain-wide thinking, as opposed to solutions that cover a single project only. As a consequence, the specification of a generic, domain-wide architecture is seen as the most important tool for systems integration [17].
On the other hand, it is domain-oriented thinking that allows for the future specialization of the system architecture, necessary to guarantee its applicability and usefulness in an adapted development process [18].

The aim of our work is to identify architectures as an important means in the development of integrated, complex systems. We focus on the software portion of systems development and strive for a domain-specific solution that can be seen as an instance of a more generic, conceptual architectural pattern (see also section 3 of this paper). Our special interest is the development of a sound theory for domain architectures that can be used to develop methods, techniques, notations and tools for architectures on a widely accepted and formal basis. As architecture engineering is a young field without an established vocabulary or clear understanding, our first step was to develop a commonly applicable Generic Architecture Reference Model (GARM). GARM structures and relates the elements typically found in most domain architectures.

GARM is part of our work on a complete Generic Systems Integration Framework (GenSIF) [17], [18], [19] here at the New Jersey Institute of Technology (NJIT). As mentioned above, GARM is also related to the systems engineering framework adopted by the IEEE TC on ECBS [14]. GARM serves as a test-bed and is the basis of development for the architecture-focused research in our lab.

The rest of the paper is organized as follows: In section two we give a brief introduction to NJIT's GenSIF framework and its basic constituents. This will set the stage for a general discussion of the architecture concept in section three. Section four focuses on the issues related to architecture specification and discusses our suggestions for an architecture reference model (GARM). Section five illustrates our approach to develop an architecture specification tool, based on GARM. A summary concludes our presentation and outlines current activities and projects.

2 The GenSIF Framework

We are developing GenSIF [17], [18], [19] as a framework for studying the software development and integration issues in large-scale systems development efforts, and as a discipline for (business) domain analysis, domain architecture engineering, and infrastructure acquisition.

In the context of GenSIF, a business domain is defined as a part of the real world which is a self-contained space of human action with well defined functions and services offered to society [19]. A specific set of technologies is typically used to support the business in such a domain. Banking, avionics, and telecommunications are all examples of business domains. The richness of activities and the diversity of operations in the domain trigger the development of a number of computer-based systems, e.g., office systems, command and control systems, information services, image processing, automation, etc. The integrated functionality of these systems constitutes a so-called "system of systems" (S²) [7]. This concept is the basic architectural principle used in GenSIF, where a system of systems is defined as a distributed set of independently developed and functioning systems, so-called "building blocks", which interoperate within the context of the given business domain.
Our approach is focused on the delineation of new techniques and artifacts in the development process that will assure that a domain-specific collection of independently developed or already existing applications will interoperate in constituting an integrated system of systems. The software development process in GenSIF is based on three major elements of coordination and domain standardization [18]; see Figure 2:

- Domain model - a semi-formal description of the domain, providing a comprehensive knowledge base and influencing the outcome of all development and integration initiatives in the domain [19];

- Domain architecture - a domain-specific, unifying and coherent, multilevel software architecture, targeted at systems integration. It formalizes and systematizes the system developer's research and development experience in the particular domain, as well as customizes global engineering knowledge (established architectural styles) into a coherent domain-specific framework of concepts, design rules and implementation recipes. The architecture serves as a guiding plan in any development effort within the domain. It limits the design choices at lower design levels, but in return supports interoperability and reuse in the presence of separate development projects [9], [10];

- Infrastructure - a uniform development and operation support environment. It provides a set of generic services to the building blocks and implements a set of common functions defined by the architecture.

In principle, the purpose of architecture-driven development is to systematize the practical experience of designing complex systems into a comprehensive, well founded design discipline. The adoption of an architecture constrains the design space for each project in the domain, but in return supports its safe and rapid exploration. It serves as a frame of reference and conformity, and provides design templates for the building blocks of the targeted system of systems. This helps to meet the functional and extra-functional (quality-oriented) requirements from a domain-wide perspective. A deeper discussion of the architecture-driven GenSIF process model is outside the scope of this paper; please refer to [17], [18] for details. In the following sections, we will focus on the architectural element of the GenSIF framework.

Figure 2: The GenSIF framework

3 The Domain Architecture Concept

A domain architecture is a framework of concepts and analytical foundations. It supports the specification and design of software systems that belong to a certain application domain (see [19] for a definition of what we consider to be domain analysis). The architecture explicates a set of characteristics and system properties that qualify a system as a member of the targeted type of integrated solution. These characteristics lead to a body of architectural constraints to which a design must conform in order to stay within the architecture. The software architecture translates requirements, functional and extra-functional, into a framework of design concepts and constraints. Concepts define the elements from which a system model is constructed, while constraints are rules that restrict those elements and their compositions.
Consequently, a particular domain architecture can be defined as a system design model that captures overall system organization and behavior in terms of a collection of components, their interactions and the results of these interactions, i.e., static and dynamic configurations (also referred to as persistent and transient configurations) [5], [15], [24], [10]. Our proposed model of system organization and interaction, as used in GARM, can be specified in generic terms (conceptual level) and by using components derived from an actual application situation (application level). As a result, we distinguish between the following two levels of architecture:

- Conceptual architecture - a computational and engineering model (we use here the same terms as are proposed in [15]) that captures the organization and behavior of a class of systems with well-defined engineering properties, e.g., distributed systems with transparent message passing, object-oriented systems, etc. A conceptual architecture models a system of systems as a generic structure that follows its own "rules of existence" in terms of organization and behavior. Application- or problem-specific functions are not discussed in such an architecture. The focus is on generic properties and types of components, not on actual instances. Good examples of architectures on the conceptual level are found in standards for distributed systems [15], [1], [2], [4], most of them focusing on the aspect of component interaction. These standards describe the leading architectures (and related tool-sets) currently available in this field. Other examples of this more generic type of conceptual architecture are the OSCA¹ architecture used by Bellcore [16], the DORIS architecture used by British Aerospace [23], and the Application Machine approach proposed by Lawson [13]. In GenSIF/GARM, the conceptual architecture is the highest, and most generic, level of specification for the domain architecture.

- Application architecture - an application-specific instance of the chosen conceptual architecture, where problem-specific functions are made explicit.

¹ OSCA is a trademark of Bellcore - Bell Communications Research.

An application architecture corresponds to the view of a system (of systems) as a solution to a specific "customer problem" within the domain. In an application architecture, components, system organization, and interaction are specific instances of their generic counterparts in the conceptual architecture. The OSCA architecture [16], for example, specifies a generic "data-layer building block" as one of the component types on the conceptual level. In a derived application architecture, many instances of this building block type can then be derived to steward, e.g., all the billing information for long-distance calls in NY and NJ. Other examples of this type of architecture are DSSAs (Domain Specific Software Architectures), as proposed in [8], and classical compiler decomposition structures, as described in [21].

The main task on the conceptual level is to identify and specify the involved architectural concepts. On the application architecture level, the basic concern is to add the problem-specific element. This is done by instantiating the generic templates of the conceptual architecture into the application-specific structures of the application architecture, typically by adding problem-specific functions and data-structures.
4 Specifying Architectures

4.1 A User Profile

Architectures are used to coordinate many different people, playing many different roles in a possibly wide application domain. To be able to speak about architecture specification, it is, therefore, necessary to establish a user profile by (re-)evaluating the role an architecture plays during system development, deployment, and application. Based on the GenSIF development process (discussed in [18]) and the related concept of two levels of domain architecture, we can identify the following three major groups which have a vested interest in software architecture, each one of them highlighting a different aspect the architecture needs to exhibit to be useful and complete:

- The system of systems architect, working on the overall system composition. He/she is responsible for constructing, analyzing and maintaining the generic, conceptual architecture. His/her scope of work is the complete application domain, preparing an architectural standard for all projects in this domain.

- The system designer, working in the context of a specific application problem and within the limits of a specific single project. He/she is responsible for the appropriate instantiation of the conceptual model into a specific application architecture. He/she makes the step from the generic conceptual to the specific application-oriented level.

- The tool manufacturer, working to supply a development and execution environment, an infrastructure, that implements a certain conceptual architecture and supports the derived application instances.

To make a first step towards a consolidation of all the different aspects an architecture has to address, we have developed a Generic Architecture Reference Model (GARM) that describes the constituents and the organization of a conceptual architecture as we expect it to be applicable within the wider GenSIF framework and the system of systems approach. (Note that GARM is actually a sub-concept of the GenSIF framework - see section 1.)

4.2 GARM - An Architecture Reference Model

GARM defines the core of an architecture as a collection of four entity types:

- Concepts,
- Rules,
- Principles (design patterns), and
- Guidelines.

Concepts and rules are fundamental to each architectural model, while principles/patterns and guidelines are useful annotations. We will briefly discuss each of the architectural elements in the following sections.

4.2.1 Concepts

Concepts establish the basic architectural vocabulary. They specify the properties the architecture must exhibit and list the basic primitives the architecture provides as generic design templates. Architectural concepts in GARM include the following elements:

- Architectural properties,
- Components, and
- Connectors.

Components, connectors and their use in architecture configuration [6] specify the constructive elements the architecture provides, while properties focus on qualities and design goals. Properties are achieved by choosing a set of components and connectors which support them.

Properties - Architectural properties are goal statements that are derived from the characteristics of the problem domain. They initiate and direct the design process, constitute metrics for determining the quality of architectural designs, and form a grid of aspects that add logical structure to architectural specifications.
Conformance to the architecture, typically tested via a formal review, guarantees that the specified system (of systems) will exhibit the architectural properties, given that all further transformations of the conceptual specification are correct. Typical quality-related properties belong to the so-called extra-functional aspects of systems, like security, performance, reliability, etc. Other properties can define constructive or operational goals, related to the way the system of systems is organized and supposed to operate. Typical examples of this type of property are transparency of access, location, replication, concurrency and migration, or autonomy, composability and scalability. (Note that this is a discussion of a reference model; the question of notation of properties is not the focus of this presentation.) Properties lead directly to the specification of rules and component/connector types that support them. See the section on components/connectors below and section 4.2.2 on rules.

Components - Components are the basic building blocks of any computer-based system. They represent the nuclei of computation and allow the designer to aggregate functions and data into higher order units. Components also represent the sources of activity in the system and, as such, constitute the elements that form the designer's mental model of a system in operation. From a constructive, language-oriented point of view, architectural components are denotations that represent the encapsulation of processing, data, and state-based elements. In GARM/GenSIF, components are characterized by functional (service) abstraction, type, interface, and composition (aggregation/decomposition) relations.

Components embody an abstraction of the services and data structures that have been identified in the application domain. These services are usually independent of the technology and algorithms implementing them. During architecture specification a distinction can be made between so-called common and application-specific components [15]. Common components support the architecture's internal operation, ensuring certain properties, and are not really visible to the end-user. They are fully defined in the conceptual architecture (and are not just generic templates that need to be instantiated). The services offered by these common components are typically supported directly by the infrastructure as so-called architectural services or common functions [11], [15], [3]. A typical example of a directly mapped property is location transparency. It introduces the need for a directory or trading "common" component and is usually implemented in the infrastructure as a specialized service, e.g., as the "trader" in [15].

Application-specific components are end-user oriented and implement a set of functions and data structures that directly reflect processes and information found in the application domain's business world. From the end-user's perspective, they are less abstract than common components. Application-specific components can use common components and the infrastructure as a basis to implement their behavior. In the conceptual architecture, application-specific components are not fully specified; their specification is a part of the instantiation process that transforms a conceptual into an application architecture. However, application-specific components must be instances (specializations) of one of the component types provided in the conceptual architecture.
An example of an application-specific component would be a memo-organizer for the shipping department of company X. This component would serve the purpose to write, store, retrieve, and manipulate office memos within the department. As a component of the architecture, it must exhibit the architectural properties all components have in common, e.g., location transparency, and it uses infrastructure services and common components, in this case trading services and components, to implement these properties.

Component types are essential for a conceptual architecture's specification. They serve the purpose to specify interaction and structure in a system of systems on a conceptual level, i.e., without having to know all application-specific components that will be instantiated somewhere in the future in one of the many projects in the domain. They also help to organize common components according to similarities in their behavior and structure. Component types are generalizations of functional and structural patterns found in the domain. They represent generic classes that implement important properties, like location transparency and secured message passing, but still need to be further refined and specialized (instantiated) to be useful as components in an application architecture. The concept of component types is, of course, close to the class concept used in object-oriented analysis and design.

For example, Bellcore's OSCA architecture [16] divides every system (of systems) into only three possible types of components: data-layer, user-layer, and processing-layer building blocks. Data-layer building blocks manage the semantic integrity of data that is of importance to the whole company, so-called corporate data (see [16] for a detailed definition of "corporate data"). They are the stewards of globally relevant information and have the exclusive right to provide access and update functionality for corporate data. (All types of building blocks can also harbor private data, not accessible by other building blocks.) User-layer building blocks provide interaction with the system's environment, especially humans. They allow an end-user to access functions provided by the system of systems that is formed by all accessible building blocks. The processing layer, formed by the processing-layer building blocks, implements all functionalities that do not belong to either the data-layer or the user-layer. This typically includes functions to support the management and operation of business, like sorting, retrieval, etc.

Using the OSCA architecture, it would be possible to refine the memo-organizer example from above: A data-layer building block is specialized into a "document-storage-and-retrieval" component type; the application-specific memo-organizer component is then a further refinement of this type (and, therefore, also a refinement of the original data-storage type). The document-storage-and-retrieval component can be fully specified on the conceptual level, because it is common to ALL applications in the domain. The application-specific memo-organizer component is derived later, during a SPECIFIC application development (project). It does not exist on the conceptual level. (Please refer also to our discussion on the two levels that are exhibited by a GARM-conform architecture, as presented in section 3.)
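To make the two-level typology concrete, the refinement chain just described can be sketched in code. The following Python fragment is our own illustration: only the data-layer/user-layer/processing-layer typology comes from OSCA [16], while all class and method names are hypothetical.

class DataLayerBuildingBlock:
    """Conceptual level: generic steward of corporate data (OSCA data layer)."""
    def read(self, key): ...
    def update(self, key, value): ...

class DocumentStorageAndRetrieval(DataLayerBuildingBlock):
    """Conceptual level: a common component, fully specified for the
    whole domain because all applications need document storage."""
    def store_document(self, doc_id, doc): ...
    def retrieve_document(self, doc_id): ...

class MemoOrganizer(DocumentStorageAndRetrieval):
    """Application level: derived only within a specific project
    (the shipping department's memo-organizer)."""
    def file_memo(self, memo_id, memo):
        # Inherits the data-layer obligations (e.g. location transparency)
        # and reuses the common component's services to meet them.
        self.store_document(memo_id, memo)

The IS-A chain mirrors the text: the first two classes belong to the conceptual architecture, while the third appears only after the project-specific instantiation step.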
However, once a system of systems is deployed (has gone through the project-specific instantiation process and programming), both components, "document-storage-and-retrieval" and "memo-organizer", exist side by side. They share the data-layer classification and, as a consequence, the basic functionalities, structures and properties the type specifies and supports.

Component type and service abstraction dominate the job the architecture's designer has to tackle. Once a useful classification (typology) has been established, the focus shifts to the problem of specifying each type and each identified common component properly. As in classical software engineering, the task of specifying each single component that is used in the typology can be achieved by specifying the component's interface, listing externally visible behavior (functions, services), data-structures, and the guaranteed quality of service (performance, quantities, level of security, etc.). In other words, a component interface in GARM provides a set of so-called ports (function and data types), at which interaction between the component and its environment can occur. A discussion of interface notation, checking, etc., goes beyond the scope of this paper. However, we would like to mention that some of the well known formalisms that apply to lower level system design have been successfully re-applied on the architectural level during our case-studies. One of the major problems is, however, the appropriate specification of quality of service in the interface.

If desired by the architecture's designer, architectural components may also have internal structure and are then regarded as composite entities. This imposes the requirement to support composition-decomposition abstraction in architectural designs, a fact that is reflected in GARM by providing "PART-OF" relationships between component types.

What makes the component discussion on the level of conceptual architectures different from normal system design is the focus on identifying generic types of components, instead of plunging directly into the specification of application-specific components. In GenSIF/GARM, system of systems interaction, structure, and behavior are specified in generic terms via the architecture. Domain-wide solutions are developed and a supporting infrastructure is provided before application-specific solutions are implemented.

Connectors - GARM sees a system of systems (at any level of decomposition) as a collection of interacting components. The connectors are the glue that binds the components of the system in operation together. Interactions are facilitated by connectors. Therefore, to complement the specification of components on the conceptual level, associated (types of) connectors must be specified explicitly in the conceptual architecture. In GARM, we characterize architectural connectors by domain/service abstraction, type, external specification (interface), and composition. Basically, all characteristics we have discussed for components hold for connectors: the necessity to specify the quality of interaction (and the problem of finding a good notation to do that), the possible nesting of connectors to form higher-level composite connector types, and the distinction between common connectors and application-specific refinements of generic connector types. It would even be possible to regard connectors simply as an always existing, special component type.
However, while components model the functionality and information contained in a domain, connectors are typically the embodiment of communication paths and protocols. Furthermore, in a system of systems, and that is what we are dealing with in GARM/GenSIF, interaction, communication, interoperability and security are the essential concerns. As a consequence, GARM treats connectors as a separate (but similar) species, so to say as "first class citizens". See the paper by M. Shaw for a deeper discussion of this topic [22].

As with components, the identification of types of connectors is the major issue during conceptual architecture design. By developing connector types that cover all wanted (not necessarily all possible!) interactions in the domain and interoperate smoothly with the established component types, the major portion of the creative part of architecture specification is covered.

Connector types are usually abstractions of communication protocols in the domain. In this case, typical for rather narrow domains, connectors are highly specialized and well defined elements of the architecture, similar to the common components discussed above. An example of such a common connector, if we return to the domain of office documents and work-flow systems, is a document sending and receiving mechanism that guarantees timing of delivery. This connector also must incorporate knowledge about document structures to provide services like the adding of time stamps to the document (sent, received), the re-formatting of documents (depending on the receiver), and the automatic updating of transmission logs.

Most existing conceptual architectures specify connectors based on the generalization of technical characteristics of communication and are applicable to a much wider variety of domains, by listing high level connector types. A good example of a connector typology is given in the DORIS architecture used by British Aerospace [23]. DORIS targets architectures in the real-time and, especially, aerospace oriented domain. The DORIS principle is that intermediate data, representing connections, should be specified explicitly. Furthermore, the only way to handle that data is to destructively or non-destructively write or read it (see [23] for a detailed specification). This leads to four major types of connectors in DORIS: signals, pools, channels, and constants. Each one of them is seen as an available implementation of a communication protocol, where a pool, for example, is characterized by destructive writing and non-destructive reading (reference data).

The decision which forms of connector should be included in a specific conceptual architecture depends, of course, on the domain the architecture is servicing and on the level of control the architecture engineer wants to exert over the projects in the domain. The more specific and limited the given set of connectors is, the more control is exerted via the conceptual architecture, because less variation is possible in derived application architectures. In general, it holds that the more specialized the connector and component types are, the more precise the specification of what correct and incorrect configurations are will be, and the more rules will have to be used to govern the use of components and connectors.
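The pool semantics just described is compact enough to state directly. The following toy Python rendering assumes nothing about DORIS beyond what the text states (destructive write, non-destructive read); the class and method names are our own, not DORIS notation.

class Pool:
    """DORIS-style pool connector: reference data."""
    def __init__(self, initial=None):
        self._value = initial

    def write(self, value):
        # Destructive write: the previous datum is overwritten.
        self._value = value

    def read(self):
        # Non-destructive read: the datum remains available.
        return self._value

pool = Pool()
pool.write("position fix 1")
pool.write("position fix 2")             # the first fix is destroyed
assert pool.read() == "position fix 2"
assert pool.read() == "position fix 2"   # reading does not consume the datum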
4.2.2 Rules

While concepts specify the basic primitives that are available in an architecture, rules capture and make explicit the semantics that govern the use of architectural concepts and their possible combination into higher order structures. Thus, rules are generalized design decisions that constrain the constructive architectural concepts and their relationships (static or dynamic) in order to support architectural properties. In GARM the following types of rules for conceptual architectures have been identified:

- Structural Constraint Rules (SCR) - determine the correct composition of component and connector types on a generic level ("one-level-configuration-rules") and also precisely define the application of IS-A and PART-OF relationships ("layered-structure-formation-rules", "instantiation-rules").

- Operational Constraint Rules (OCR) - provide for the specification of pre- and postconditions of an operation to ensure that it performs correctly. An operation might include the execution of functions in different components and the firing of associated connectors, and is roughly comparable to a thread of execution in a distributed system.

- Stimulus-Response Rules (SRR) - constrain behavior by specifying when and under what condition an operation will be triggered.

A typical example of a structural constraint rule would be to limit the interaction between specific types of components. A variation we have used in our case-studies was to add to our flavor of the OSCA architecture [16] a set of rules that allows for message passing between user-layer building blocks and processing-layer building blocks, and between processing-layer building blocks and data-layer building blocks. However, we ruled out direct communication (message passing) between user-layer building blocks and data-layer building blocks. An example of a simple stimulus-response rule would be a situation where each message that is received by a component must go through sender authorization checking before any functions of the receiving component are activated.

Rules are explicit and strictly enforced. They form a "system law" that constrains the freedom a designer has to make design choices, but at the same time supports the reuse of high-level design concepts and facilitates tool support via a standardized infrastructure. The major tool to enforce rules, as for most architectural concepts, is a formal review by a team of experts. If rules have been specified using a formal language, a machine supported process might be possible. (We have been using a fairly informal, BNF-oriented language so far and have combined this approach with reviews.) A system that violates an architectural rule may still operate as required and meet its obligations, but would still be considered incorrect because of its violation of the architectural model. Explicitly specified rules build a sound analytical foundation and ease the analysis and accreditation process. Together with architectural concepts they form the minimal necessary, mandatory part of a conceptual architecture specification.
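Stated executably, the two case-study rules above look roughly as follows. This Python sketch is our illustration only - GARM itself uses a BNF-oriented notation (see section 4.2.5) and formal reviews for enforcement, and all names below are hypothetical.

# SCR: user <-> processing and processing <-> data messages are legal;
# direct user <-> data message passing is ruled out.
ALLOWED_PATHS = {
    ("user", "processing"), ("processing", "user"),
    ("processing", "data"), ("data", "processing"),
}

def check_scr(sender_layer, receiver_layer):
    """Structural constraint rule: is this message path permitted?"""
    return (sender_layer, receiver_layer) in ALLOWED_PATHS

def deliver(message, sender_layer, receiver, is_authorized):
    """SRR: sender authorization checking must succeed before any
    function of the receiving component is activated."""
    if not check_scr(sender_layer, receiver.layer):
        raise ValueError(f"rule violation: {sender_layer} -> {receiver.layer}")
    if not is_authorized(message):
        raise PermissionError("sender not authorized")
    receiver.handle(message)  # only now may the receiver act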
4.2.3 Design Patterns

In order to ease the use of concepts and rules in daily application, GARM also supports non-mandatory "annotations" of architectures, in the form of design patterns and guidelines.

Any system of systems and any composite component in an architectural specification is a configuration of components and connectors. Consequently, support for configuration specification, going beyond sometimes rather abstract and fragmented rules, is one aspect that should be dealt with in every comprehensive architecture reference model. A good way to do this is to pre-define examples of valid and useful configurations, so-called patterns. Patterns are compiled design experience. They suggest useful structures for meeting particular needs, while staying within the boundaries of architectural concepts and rules. The use of predefined patterns provides for high level re-use and simply makes the life of system designers easier. A pattern may be static, in terms of a topology that is invariant at execution time, or may be defined in terms of dynamic mechanisms which change the configuration, such as object binding and unbinding in a distributed system [15]. An example of a (static) design pattern is the popular Model-View-Controller (MVC) structure used in so many object-oriented designs [6]. This example nicely reflects the educational character of design patterns, which preserve and prepare practical experience for the system designer.

A pattern specification on the conceptual level is a template, using component and connector types, from which configuration instances can be derived during the transformation step from conceptual to application architecture. The representation of a pattern depends on the representation chosen for components and connectors. We have been using mostly object-oriented notations in our case-studies. By subscribing to a specific set of patterns and avoiding others, styles can develop within the boundaries of a single conceptual architecture, a fact that further improves the chances for reuse, standardization, and tool support.

4.2.4 Guidelines

Guidelines are also captured design experience, but expressed in an even less restricted way than patterns. Guidelines address problems that are typically encountered with a given architectural style, pattern, or set of rules, and provide hints how these problems can be avoided. They help to "stay out of trouble" when using a specific conceptual architecture and constitute the accumulated "folklore" regarding the use of a conceptual architecture over time.

An example of an architecture related problem situation could be that during times of high system load messages between components get lost or delayed. If we have a rule that enforces sender authorization-checking in each component receiving a message, that rule might very well be the reason that a bottleneck is created in most implementations of the message handling mechanism. Possible hints at a solution (guidelines) would be to increase the size of the queue, to execute authorization in parallel with other component functions, etc. (Remark: It is of course not possible to drop authorization, if it has been stated as a rule and is, therefore, mandatory.)

4.2.5 Another Look at GARM

The following set of extended BNF statements illustrates our generic architecture reference model - GARM - as we have discussed it so far. A more detailed specification of the architecture model, based on a refined set of so-called architectural elements, is published in [12].

A conceptual architecture CA is a four-tuple. While concepts and rules are mandatory, patterns and guidelines remain optional. (We use the meta-symbol {...} to indicate ≥ 1 repetitions.)

CA = (Concepts, Rules, [{Pattern}], [{Guideline}])

Basic architectural concepts are properties, components (Comp) and connectors (Conn). Components and connectors are described by the level of abstraction their services provide for the domain (common or application-specific), their place in a lattice of component types (typical for the domain), an interface, and their internal structure (possible decomposition):

Concepts = ([{Property}], {Comp}, {Conn})
Property = [Quality-Related / Constructive]
Comp = (SAbs, Type, Interface, [Structure])
Conn = (SAbs, Type, Interface, [Structure])
SAbs = [Common / Application-Specific]
Type = "domain-dependent"
Interface = ({Function}, [{Data-Structure}], [Quality])
Structure = {Part-of-Relation}

Rules are divided into structural constraint rules (SCR), operational constraint rules (OCR), and stimulus-response rules (SRR); see section 4.2.2:

Rules = ([{SCR}], [{OCR}], [{SRR}])
SCR = [ConfigurationR / StructureR / InstantiationR]

Patterns are structures of interconnected component and connector types. Guidelines list a set of possible answers for an architecture related question:

Pattern = CONFIGURE({Comp}, {Conn})
Guideline = (Question, {Answer})
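Read as a type definition, the BNF above transcribes almost directly into data structures. The Python dataclasses below are our own near-literal transcription (field names follow the BNF; plain strings stand in for the types the BNF leaves open), intended only to show that the model is concrete enough to drive a tool such as d-ASPECT.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Interface:        # Interface = ({Function}, [{Data-Structure}], [Quality])
    functions: List[str]
    data_structures: List[str] = field(default_factory=list)
    quality: Optional[str] = None      # guaranteed quality of service, if stated

@dataclass
class Component:        # Comp = (SAbs, Type, Interface, [Structure])
    sabs: str                          # "common" / "application-specific"
    ctype: str                         # domain-dependent type name
    interface: Interface
    structure: List[str] = field(default_factory=list)   # PART-OF relations

Connector = Component   # Conn = (SAbs, Type, Interface, [Structure])

@dataclass
class ConceptualArchitecture:   # CA = (Concepts, Rules, [{Pattern}], [{Guideline}])
    properties: List[str]
    components: List[Component]
    connectors: List[Connector]
    rules: List[str]                   # SCR / OCR / SRR, kept as text here
    patterns: List[str] = field(default_factory=list)    # optional
    guidelines: List[str] = field(default_factory=list)  # optional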
5 d-ASPECT - The Tool Prototype

GARM supports a multitude of software design tasks by establishing an architectural vocabulary in terms of architectural properties and concepts, and by capturing architectural semantics in a collection of rules which constrain the allowed configurations and their operational meanings. The practical application of our architecture-based framework is supported by our prototype of a domain architecture specification tool, called d-ASPECT.

d-ASPECT is a test-bench for our research on systems integration and domain architecture specification and is, like GARM, a part of the GenSIF project [11]. It is under development as an open distributed system using the development and operational environment provided by ANSAware 4.0 [3], an infrastructure that supports the ISO ODP architectural standards [15].

Figure 3 presents a simplified block-and-line diagram of the tool's architecture. The tool's architectural design is based on client-server principles and uses a central data repository for data exchange. Access to the central data repository is possible only through the servers. They act like the public services of a data-capsule, isolating the internal design of the data repository from the client processes. In this way, they also protect the repository from side-effects and unauthorized access.

Figure 3: d-ASPECT - The tool's architecture

There are four servers in d-ASPECT currently under development or in their initial test phase. They reflect the major participants in an architecture specification (see section 4.1):

- a conceptual architecture server,
- an application architecture server,
- a code generator, and
- a verification server.

The conceptual architecture server allows for the specification of the high-level, conceptual part of the domain architecture. This server is the tool of the system architect who targets a generic architecture specification that will be compatible with the application domain. It is responsible for maintaining the type and decomposition-aggregation hierarchies of architecture component types.

The application architecture server takes a conceptual architecture as input and allows the individual system designer, working on a specific project within the domain, to instantiate his/her
application architecture from the chosen conceptual architecture. Thus, it helps mostly to manipulate the domain-specific instantiation of component types.

The code generator maps the combined architectural specification (conceptual and application architecture) to the selected infrastructure. Depending on how well the architecture and the infrastructure match, source code can be generated. We expect to be at least able to generate code templates. This is what we currently do, using the ANSAware "stubc" precompiler [3].

The verification server, currently under development, will perform different analysis steps, e.g., composability analysis for a hybrid-style system or schedulability analysis for a real-time system. Research to support these tasks is still ongoing.

Two more servers are planned for the future: One will support the mapping of the domain model to a set of business rules that can be used during architecture specification. Another one will help to check compliance with architectural rules and allow rules to be traced through all levels of component and connector specifications. However, both of the planned servers will require extensive research before they can be implemented.

End-users interact with the data repository by using client processes that connect to the appropriate server. A system designer, for example, has his/her own client system that manipulates the tool interface and translates end-user requests into messages to the application architecture server. Using a distributed client-server model also provides the capability to have multiple clients active concurrently.

6 Summary

Our paper has examined the use and specification of software architectures for the development of complex, integrated, domain-specific software solutions, as outlined in the GenSIF framework currently under development at NJIT. The elements and ideas of our Generic Architecture Reference Model, GARM, have been discussed. In the GARM model, concepts and rules form the backbone of an architecture's specification and can be enriched by using design patterns and guidelines. GARM's architectural elements are used to represent and manipulate the architecture model in a user-friendly, construction-oriented way and serve as the interface primitives in our prototype implementation of d-ASPECT, an architecture specification support tool.

We are using GARM in our day-to-day research to compare existing architectures, to build our understanding of what should be contained in an architecture specification, and to reverse- and re-engineer existing architectures with the goal of obtaining a cleaner specification. We also think about issues of notation and representation and run our case-studies by specifying our own conceptual architectures, most of them variations of the OSCA architecture [16], which has served as a starting point and test-bench for our ideas. GARM is also our main contribution to the ongoing discussion on architectures in the IEEE TC on ECBS (a simplified version of GARM was accepted as a working model at the Stockholm meeting in May 1994).
A proof-of-concept project on an industrial scale, with one of our external partners, is currently in the planning stage.

Acknowledgements

We would like to express our gratitude to Harold (Bud) Lawson for the guidance and encouragement he has provided over the last years. The authors also wish to thank Maarten Boasson from Hollandse Signaalapparaten, Henk Obbink from Philips Research, John Mills from Bellcore, and Hugo Simpson from British Aerospace for the many hours of discussion and their valuable remarks on our architecture reference model.

References

[1] ANSA Reference Manual, Vol. A, Architecture Projects Management Limited, UK, 1989.

[2] ANSA Architecture Report - The ANSA Computational Model, Architecture Projects Management Limited, UK, 1991.

[3] ANSAware 4.0 Documentation and Tool, Architecture Projects Management Limited, UK, 1992.

[4] Newman, D., "Strategic Planning for Distributed Object Management", Object Magazine, June 1994, pp. 74-77.

[5] Garlan, D., Shaw, M., "An Introduction to Software Architectures", in Tutorial Notes: Architectures for Software Systems, ICSE-15, Baltimore, Maryland, May 17-21, 1993, pp. 61-99.

[6] Garlan, D., Allen, R., Ockerbloom, J., "Exploiting Style in Architectural Design Environments", Software Engineering Notes, Vol. 19, No. 5, ACM Press, December 1994, pp. 175-187.

[7] Eisner, H., Marciniak, J., McMillan, R., "Computer Aided System of Systems (S2) Engineering", in Proceedings of the 1991 IEEE/SMC International Conference on Systems, Man, and Cybernetics, Charlottesville VA, IEEE Computer Society Press, October 1991, pp. 531-537.

[8] Graham, M., "The Domain Specific Software Architecture Program", in Proceedings of the DARPA Software Technology Conference, Los Angeles CA, 1992, pp. 204-210.

[9] Kirova, V., Rossak, W., "Some Thoughts on Architecture Engineering", in Proceedings of the Computers in Engineering Symposium, ETCE'94, New Orleans LA, USA, January 1994, pp. 59-68.

[10] Kirova, V., Rossak, W., and Jololian, L., "Software Architectures for Mega-System Development - Basic Concepts and Possible Specification", in Proceedings of the IEEE Third International Conference on Systems Integration, Sao Paulo City, Brazil, August 1994, pp. 38-45.

[11] Kirova, V., Rossak, W., Lawson, H., "Software Architecture: An Analysis of Characteristics, Structure, and Application", in Proceedings of the First International Workshop on Architectures for Software Systems, in cooperation with ICSE-17, Seattle WA, April 1995, pp. 166-176.

[12] Kirova, V., Rossak, W., "ASPECT - An Architecture SPECification Technique: A Report on Work in Progress", in Proceedings of the Workshop on Systems Engineering of Computer Based Systems, IEEE ECBS TSC, IEEE Computer Society Press, Los Alamitos CA, March 1996, pp. 220-227.

[13] Lawson, H.W., "Application Machines: An Approach to Realizing Understandable Systems", The Euromicro Journal, Vol. 35, No. 1-5, September 1992, pp. 5-10.

[14] Lawson, H.W., "Introducing the Engineering of Computer-Based Systems", in Proceedings of the 1994 Tutorial and Workshop on Systems Engineering of Computer-Based Systems, IEEE Computer Society Press, Los Alamitos CA, May 1994, pp. 2-8.

[15] Basic Reference Model of ODP - Part 2: Descriptive Model, ISO/IEC JTC 1/SC 21, interim revised CD text, 1992.

[16] The Bellcore OSCA Architecture, Bellcore - Bell Communications Research, Technical Advisory TA-STS-000915, Issue 3, March 1992.

[17] Rossak, W., Ng, P.A., "Some Thoughts on Systems Integration - A Conceptual Framework", International Journal of Systems Integration, Vol. 1, No.
1, Kluwer Academic Publ., Dordrecht, The Netherlands, 1991, pp. 97-114.

[18] Rossak, W., Zemel, T., Lawson, H., "A Meta-Process Model for the Planned Development of Integrated Systems", International Journal of Systems Integration, Vol. 3, No. 3, Kluwer Academic Publ., Dordrecht, The Netherlands, 1993, pp. 225-249.

[19] Rossak, W., Zemel, T., "Integrative Domain Analysis via Multiple Perceptions", Informatica, Vol. 17, No. 2, 1993, pp. 117-136.

[20] Rossak, W., Kirova, V., Jololian, L., Lawson, H., Zemel, T., "A Generic Model for the Use and Specification of Software Architectures", IEEE Software, to appear.

[21] Perry, D., Wolf, A., "Foundations for the Study of Software Architecture", Software Engineering Notes, October 1992, pp. 40-52.

[22] Shaw, M., "Procedure Calls Are the Assembly Language of Software Interconnection: Connectors Deserve First-Class Status", in Proceedings of the Workshop on Studies of Software Design, May 1993.

[23] Simpson, H.R., "Architecture for Computer Based Systems", in Proceedings of the 1994 Tutorial and Workshop on Systems Engineering of Computer-Based Systems, IEEE Computer Society Press, Los Alamitos CA, May 1994, pp. 70-82.

[24] Wiederhold, G., Wegner, P., and Ceri, S., "Toward Megaprogramming", Communications of the ACM, Vol. 35, No. 11, November 1992, pp. 89-99.


Neuro-Quantum Parallelism in Brain-Mind and Computers

Mitja Perus
National Institute of Chemistry, Lab. for Molecular Modeling and NMR
Hajdrihova 19 (POB 3430), SLO-1001 Ljubljana, Slovenia
Phone: ++386-61-1760-275 or ++386-61-1760-314
Fax: ++386-61-1259-244 or ++386-61-1257-069
E-mail: mitja.perus@uni-lj.si

Keywords: neural network, attractor, quantum, consciousness, mind, coherence, wave-function collapse, pattern-correlation, Green function

Edited by: Jin Šlechta
Received: January 5, 1996  Revised: May 16, 1996  Accepted: May 22, 1996

Specific characteristics of mind and computers (synergetic computers and neurocomputers, especially) are reviewed. The characteristics that future computers would have to possess in order to be treated as mind-like are presented. It is argued that for human consciousness a coherence of neural, quantum and virtual (attractor) levels is necessary. Mathematical analogies of neural-network theory and quantum mechanics are discussed. These analogies may be a mathematical framework for research on multi-level cognitive isomorphisms involving complex systems of neurons/synapses, subcellular structures, quantum "elements" (spins etc.), and attractors, because the principles of their collective dynamics are level-invariant.

1 Mind-Computer: A Comparison

Mind is a sort of computer: it is an information processing system. However, it is also much more than that. It is easy to cite many specific characteristics of mind discussed in philosophy and psychology, but neglected by artificial intelligence, e.g.:

- phenomenal qualia: experiences of vision, audition, olfaction, etc. [6];
- consciousness, including self-awareness [31];
- subjectivity and self-concepts (the "I");
- associative contextuality par excellence [27, 30];
- meaning; evaluation; judgments [6];
- high flexibility and fuzziness [34];
- evolution of virtual structures and corresponding biological "media" [1, 21, 7], including thermodynamical constraints [37].

Here we will also be interested in any possibility of creating computers with mind-like qualities using artificial neuronal and quantum devices. In addition to
- the high computational power already reached by computer technology,
- logic-based computer reasoning using semantic networks and specific rules,
- the rigid, mechanistic architecture of neurocomputers [29], and
- simple categorization and pattern-recognition abilities, etc. [15]

of contemporary computers, the mind-like computers of the future will need:

- a complex-system architecture with self-organization and adaptation abilities [15];
- a very much higher degree of indirect interaction with the environment;
- very high complexity overriding some critical values of the parameters controlling the phase transitions of the system's structure [16];
- a well-differentiated and large hierarchical organization with strong intra-layer couplings and symmetric inter-layer structures;
- specific responses to environmental stimuli with the formation of reversible isomorphic states;
- the formation of attractors-gestalts of high order [17].

The first steps in this direction are:

- neurocomputers based on neural network models [12, 22], especially attractor neural networks [1];
- Haken's synergetic computers based on models of complex physical systems [15];
- artificial life simulations; etc.

Neural and synergetic computers already provide a good basis for:

- perceptual categorization and feature extraction;
- pattern-recognition as specific pattern-formation or selective pattern-reconstruction (invariant under translations, rotations, scaling, deformations, etc.);
- content-addressable memorizing, including specific and selective memory-recall (or forgetting);
- complex interaction-patterns providing strong context-dependence;
- the multiple-knowledge concept: integrating various views;
- reflective and recursive processing, also constituting fractal properties (scale invariance).

Associative neural networks of the Hopfield type [20] are the simplest model of a symmetrical complex system. This model includes the system of basic elements (formal neurons) and the system of connections (formal synapses), which represent the strengths of interactions between two connected neurons, or the correlation between them. Symmetrical means that the same signal-summation process is going on in every neuron, and that every synapse changes its strength according to the same Hebbian "learning rule" (i.e., proportionally to the correlations of neuronal activities) [14]. In such a network, parallel-distributed neuronal patterns which act as attractors are formed [1]. Attractors represent categories or gestalts which are isomorphic to some environmental objects. Gestalts are qualitative unities arising from the collective system-dynamics which cannot be reduced to the sum of the activities of the constituting basic elements alone. Patterns-qua-attractors thus represent mind-like representations. They are not only collective neuronal states, but also encode specific information.

Whenever a given object occurs in the environment, the reconstruction of a specific neuronal pattern-qua-attractor is triggered. Actually, a superposition of the sensory stimulus-pattern and the most similar memory-patterns (coded in the matrix of synapses) is formed in the system of neurons. The system of neurons is a carrier of that information which is currently being processed (i.e., which is the most important in that specific context or those circumstances, and which is most correlated to the state of the environment).
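The storage and recall dynamics just described can be stated in a few lines. The following NumPy sketch implements the textbook Hopfield/Hebbian scheme the text refers to - it is our illustration, not code from the paper: synapses store pattern correlations, and iterated signal summation drives a noisy stimulus into the nearest attractor.

import numpy as np

def train(patterns):
    """Hebbian rule: synapse J_ij is proportional to the correlation
    of the activities of neurons i and j over the stored patterns."""
    n = patterns.shape[1]
    J = patterns.T @ patterns / n
    np.fill_diagonal(J, 0.0)           # no self-connections
    return J

def recall(J, stimulus, steps=10):
    """Signal summation: each neuron fires +/-1 on its weighted input."""
    s = stimulus.copy()
    for _ in range(steps):
        s = np.where(J @ s >= 0, 1.0, -1.0)
    return s

rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(3, 100))   # three stored patterns
J = train(patterns)
noisy = patterns[0] * rng.choice([1.0, -1.0], size=100, p=[0.8, 0.2])
restored = recall(J, noisy)
print("overlap with stored pattern:", (restored == patterns[0]).mean())

The corrupted stimulus typically relaxes back onto the stored pattern (overlap near 1): the superposition of stimulus and memory settles on the most similar pattern-qua-attractor.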
So, the pattern of neuronal activities represents the object of consciousness. The neurons, and whole patterns, are constantly interacting, and their activities reflect each other; but in spite of feedback-based interaction-paths, this recursive dynamics is, it seems, not enough for "real" consciousness as a unified self-referential system-process par excellence.

2 Consciousness: Need for the Quantum

It seems that even a very large and complex neural network would not be sufficient for consciousness. There are indications that consciousness is connected with quantum phenomena [24, 28, 36, 39, 40]. The main reasons for this hypothesis are the following:

- Quantum systems are the microscopical basis of all physical processes, and of biological or psychophysical processes also: the whole classical world arises from the overall quantum background.
- Quantum systems transcend even the division into particles and waves, or interactions, or fields [9, 23]. Quantum systems, especially sub-quantum systems, are holistic in nature [5]: they cannot be satisfactorily analyzed into interacting fundamental elements, but act synthetically as indivisible parallel-distributed processes. As such, they are good candidates for correlates of the unity of consciousness.
- Neural networks with their rigid neurons and synapses, in spite of their very subtle virtual processes (often not well enough understood) [33], seem to be too mechanistic, too discrete and too deterministic for consciousness and phenomenal qualia, i.e., qualitative perceptual experiences.

On the other hand, all thought-processes, including consciousness, seem to arise from complex-system dynamics. Objects of consciousness and the stream of conscious thought seem to be represented in some physical, or at least informational (virtual), "medium". That "medium" has to be a complex system, which alone is flexible, fuzzy and adaptive enough, and has good self-organizing and recursive abilities. Because the mathematical formalism of neural network theory is confined to the collective system-dynamics, it remains to a large extent valid also for complex systems of other basic elements [32]. So, our formal neurons and formal synaptic connections are not necessarily biological neurons and synapses (or, in models, artificial neurons, i.e., processors). There are various candidates in biological systems which may be modeled in a neural-network-like way and have, hypothetically, a relevant role in micro-cognition processes:

- dendritic trees, where a dendrite (which is a part of a neuron) has a summation-task similar to that of a neuron (K. Pribram [17], B. MacLennan [25]);
- subcellular structures: the cytoskeleton, especially microtubules (located, for example, in neurons' axons), which may have the role of an interface between the neural and the quantum level (S.R. Hameroff [18], R. Penrose [28]);
- the presynaptic vesicular grid, a paracrystalline hexagonal lattice in the pyramidal cells (J.C. Eccles);
- webs of quantum particles (e.g., electrons) with their spins (A. Stern [41]);
- a "continuum" of sub-quantum "beables" (J.S. Bell, B.J. Hiley) or "hidden variables" (D. Bohm); etc. [4, 19]

There are several compact alternative theories which have been proposed as models of various cognitive levels using mathematical language:

- holographic and holonomic brain theory (K. Pribram) [17];
- quantum neuro-holography (W. Schempp [35]);
- synergetic computers by H. Haken, extended to the quantum domain;
- the generalized-laser theory of the brain as a "hot" (sub)cellular automaton (J. Šlechta) [37, 38];
- the Bohm-Hiley ontological interpretation of quantum mechanics, including the notions of implicate order and sub-quantum holomovement ("vacuum-processes" in the old terminology) [9];
- Everett's "many-worlds" interpretation of quantum mechanics [11] and the quantum computer by D. Deutsch [10].

3 Coherence of Neural, Quantum and Virtual Processes

Unintentional consciousness ("consciousness-in-itself", "pure" consciousness), especially transcendental (mystical) consciousness, may be associated with the quantum field, or better, with the "overall sub-quantum sea" [13], the holomovement of Bohm/Hiley. On the other hand, intentional consciousness (consciousness about some object of consciousness) cannot be associated merely with a specific quantum-informational state. If a specific mental representation is processed under the control of consciousness, this specific representation, which is associated with a pattern of neural activity, is coupled or correlated with a specific quantum eigenstate which was explicated by the "wave-function collapse". The "wave-function collapse" is a transition of the quantum state from a state described by a linear combination of many quantum eigenstates to a "pure" state which is one eigenstate (one eigen-wave-function) only. Simply said, a superposition of many "quantum patterns" is transformed into a single "quantum pattern". Collapse of the wave-function means a selective projection from subconscious memory to the conscious representation which was explicated from the memory. There are two possible versions of memory and memory-recall: the quantum one (just mentioned), or the classical neural one. From another point of view, subconsciousness-consciousness transitions may also be related to laser-like phase transitions from individualized to collective dynamics [37, 38] when a certain threshold is exceeded. Memory may be a parallel-distributed pattern in the system of synaptic connections, but it may also be more subtle ("parallel worlds" of the many-worlds interpretation of quantum theory by Everett [11], the implicate order of Bohm/Hiley, etc.).

Mind is necessarily a multi-level phenomenon, because we cannot totally divide intentional consciousness from the object of consciousness, which may be an internal virtual image or a real external object. If unintentional consciousness is of quantum nature, virtual representations are associated with neuronal patterns, and external objects are of classical nature, then intentional consciousness is necessarily a combined QUANTUM-NEURAL COHERENCE which is furthermore correlated with a classical state in the (external) environment.

There is still the question of the relationship between quantum-informational states and virtual representations. Virtual representations are usually based on neuronal patterns which act as attractors. However, an attractor is a contextual gestalt-structure which cannot be reduced to the neuronal pattern (which represents the attractor's kernel) alone. So, virtual structures are attractors which over-build their constitutive material basis. They represent complex webs of relations and ratios; a neuronal pattern is an attractor only if it is more stable and more dominant in the system-dynamics than the neighboring neuronal configurations in the configuration-space [34]. Quantum mechanics governed by the Schrödinger equation does not exhibit attractors, but they are formed in the case of the "wave-function collapse".
In that case, because of the interaction of a classical macroscopical system (either a measurement apparatus, or the environment, or our sensory apparatus) with the quantum system, the wave-function "collapses" and a specific quantum eigenstate (a "quantum pattern") occurs as an attractor. So, there are quantum virtual structures also, and they cannot be reduced to a quantum eigenstate alone, because they occur only as a result of interaction with a classical system. Thus quantum virtual structures are (re)constructed as a result of the so-called quantum measurement, where the "measurement apparatus" may be our sensory and associative neural system directly, or a machine which is then observed by that neural system (an indirect way). In both alternatives the "wave-function collapse" occurs as a result of a specific interaction with a classical system. The probability of the "collapse" is much higher if the interaction is knowledge-based (as in the case of a radio: if we know the right frequency, we are able to receive the associated information).

So, again, virtual structures cannot be reduced to the corresponding state of a neural or quantum "medium", although they are closely related to it! Virtual states are always non-local, or parallel-distributed, respectively. They cannot be measured, or can be measured only indirectly, over the states of their corresponding neural or quantum "ground". For the sake of modeling and analysis we indeed have to distinguish the neural, quantum and virtual levels, and the environmental influence also. In an "organic synthesis" they are, of course, involved in a united process, including the environment. That united process is called (intentional) consciousness. Our hypothesis is that intentional consciousness (we are conscious of some object of consciousness) emerges from collective system-dynamics which is triggered from the environment (see also [37, 38]). It is a result of evolution that neurobiological systems are capable of such a specific self-referential response to stimuli from the external and internal environment. On the other hand, having in mind mystical and meditational experiences [34], we can argue that the deeper processual background of these specific information-processes is unintentional (pre)consciousness, which is as fundamental as various material processes [4].

Šlechta's work [36] is pioneering in proposing detailed "mechanisms" for quantum micro-cognitive processing in a mathematically well-formalized way. As a complement to Šlechta's original approach, this author proposes a more system-theoretical view of these "mechanisms".

4 The Mathematical Formalism of Quantum Theory is Analogous to That Describing Associative Neural Networks

We have presented some reasons to investigate parallels between quantum processes and neural-network processes. Several mathematical analogies will now be discussed.

4.1 Neuronal-State-Vector <—> Quantum Wave-Function

In neural network theory the state of the system of neurons is described by q(\vec r, t), which denotes the activity of an individual neuron (located at \vec r) at time t. Neuronal patterns v_k are special neuronal configurations q which represent some information. In quantum theory the state of the quantum system at location \vec r and time t is described by the wave-function \Psi(\vec r, t) [23].

4.2 A Neuronal State Is a Superposition of Neuronal Patterns <—> The Quantum Wave-Function Is a Superposition of Quantum Eigen-Wave-Functions

A neuronal configuration q may be described as a linear combination of neuronal patterns v_k (k = 1, ..., p, where p is the number of patterns represented simultaneously in the combination). Similarly, a wave-function \Psi can be described as a linear combination of eigenstates \psi_k ("quantum patterns"):

$q(\vec r, t) = \sum_{k=1}^{p} c_k(t)\, v_k(\vec r)$   (1)

$\Psi(\vec r, t) = \sum_{k=1}^{p} C_k(t)\, \psi_k(\vec r)$   (2)

4.3 Orthogonality and Normality

Both sets of vectors, v_k and \psi_k, have the properties of ORTHOGONALITY and NORMALITY.
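Explicitly, in terms of the scalar product employed below in Eqs. (3) and (4) (a standard restatement added here for completeness, not printed in the original):

$(v_k, v_l) = \iint v_k(\vec r, t)^{*}\, v_l(\vec r, t)\, d\vec r\, dt = \delta_{kl}, \qquad (\psi_k, \psi_l) = \delta_{kl},$

where $\delta_{kl}$ is the Kronecker delta. It is this orthonormality which makes the coefficients of the series recoverable by simple projection, as used in Subsection 4.4.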
4.4 Coefficients of the Series: Synergetic Order Parameters <—> Quantum Probability Coefficients

C_k are the quantum probability coefficients and c_k are the neural order parameters. In the linear combination each pattern is represented by a corresponding coefficient. The coefficients describe how strongly a specific pattern is represented in the actual state of the system, or how probable it is that the corresponding pattern will be recalled (reconstructed, explicated from the superposition of many patterns). Thus, the time-dependent coefficients encode quantitatively the meaning of their patterns: they describe how strong a role a given pattern plays in the contextual system-dynamics. Mathematically, the coefficients describe projections (in terms of scalar products, in Hilbert space, for example):

$c_k(t) = (v_k, q) = \iint v_k(\vec r)^{*}\, q(\vec r, t)\, d\vec r\, dt,$   (3)

$C_k(t) = (\psi_k, \Psi) = \iint \psi_k(\vec r)^{*}\, \Psi(\vec r, t)\, d\vec r\, dt.$   (4)

The asterisk denotes the operation of complex conjugation. If the variables v_k or \psi_k are real, we may omit the asterisk.

4.5 Spatio-temporal Integration of Neuronal Signals <—> Feynman's Version of the Schrödinger Equation

The dynamical equation for one neuron is essentially a spatio-temporal integration of the signals from all other neurons which are connected to that neuron. So, the state of a neuron at position \vec r_2 and time t_2 is given by a J-weighted summation of all signals and of the whole history of signals:

$q(\vec r_2, t_2) = \iint J(\vec r_1, t_1, \vec r_2, t_2)\, q(\vec r_1, t_1)\, d\vec r_1\, dt_1$   (5)

where J(\vec r_1, t_1, \vec r_2, t_2) is the strength of an individual synaptic connection. The J's may be the transmissions of real synaptic connections between two neurons (spatially separated, but realized at the same time) or correlations between the states of different neurons at different times, represented at least virtually. In non-relativistic quantum mechanics the dynamical equation is the Schrödinger equation, which is here written, analogously to the neural equation (5), in Feynman's form [19]:

$\Psi(\vec r_2, t_2) = \iint G(\vec r_1, t_1, \vec r_2, t_2)\, \Psi(\vec r_1, t_1)\, d\vec r_1\, dt_1$   (6)

where G(\vec r_1, t_1, \vec r_2, t_2) constitutes the Green function or propagator of the quantum system [2]. The propagator G is a matrix which describes a parallel-distributed transformation of the whole system from an initial state to the final state. The system transforms itself into a new state by exhibiting numerous internal interactions between its constitutive "quantum points" (some mathematical "basic elements" of the system). Informationally, such a transformation ((5) or (6)) is an association.

4.6 Encodings of Pattern-Correlations: Synaptic Transmissions <—> Green Functions

The kernels of the dynamical equations (5) and (6) are given as correlation functions. The transmission of an individual synapse is determined by the Hebb learning rule as a correlation between its two neurons participating in various patterns v_k:

$J(\vec r_1, t_1, \vec r_2, t_2) = \sum_{k=1}^{p} v_k(\vec r_1, t_1)\, v_k(\vec r_2, t_2)$ or $J(\vec r_1, \vec r_2) = \sum_{k=1}^{p} v_k(\vec r_1)\, v_k(\vec r_2).$   (7)

Similarly, the Green function [2] is given as a sum of auto-correlations of the individual "quantum patterns" \psi_k:

$G(\vec r_1, t_1, \vec r_2, t_2) = i \sum_{k=1}^{p} \psi_k^{*}(\vec r_1, t_1)\, \psi_k(\vec r_2, t_2)$ or $G(\vec r_1, \vec r_2) = i \sum_{k=1}^{p} \psi_k^{*}(\vec r_1)\, \psi_k(\vec r_2).$   (8)

A small numerical sketch of the projection formula (3) and the Hebbian kernel (7) follows.
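The following sketch (our illustration under the simplifying assumption of discrete neurons and stationary real patterns, not code from the paper) computes the order parameters (3) as scalar-product projections, builds the Hebbian kernel (7) as a sum of outer products, and applies one association step (5):

import numpy as np

rng = np.random.default_rng(1)

# p = 3 stored patterns v_k over n = 32 discrete "positions" (neurons),
# orthonormalized so that (v_k, v_l) = delta_kl (Subsection 4.3).
n, p = 32, 3
V, _ = np.linalg.qr(rng.standard_normal((n, p)))   # columns are v_k

# A network state q as a superposition with known coefficients (Eq. 1).
c_true = np.array([0.8, 0.5, -0.2])
q = V @ c_true

# Eq. (3): coefficients recovered as projections c_k = (v_k, q).
c = V.T @ q
print(np.allclose(c, c_true))   # True, thanks to orthonormality

# Eq. (7): Hebbian kernel as a sum of outer products of the patterns.
J = sum(np.outer(V[:, k], V[:, k]) for k in range(p))

# Eq. (5): one J-weighted summation. J projects q onto the pattern
# subspace, so the association step reproduces the stored superposition.
print(np.allclose(J @ q, q))    # True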
4.7 Relativistic Analogy

The analogy of Subsection 4.6 can be extended to the relativistic domain as well. In the relativistic case the role of G is realized by the S-matrix [2]. Its elements are:

$S(\vec r_1, t_1, \vec r_2, t_2) = -i \sum_{k=1}^{p} \sum_{l=1}^{2} \psi_k^{l}(\vec r_1, t_1)\, \psi_k^{l*}(\vec r_2, t_2)$ if $t_2 > t_1$;

$S(\vec r_1, t_1, \vec r_2, t_2) = +i \sum_{k=1}^{p} \sum_{l=3}^{4} \psi_k^{l}(\vec r_1, t_1)\, \psi_k^{l*}(\vec r_2, t_2)$ if $t_2 < t_1$.   (9)

In the neural case (5), t_2 is always greater than t_1, because of the "arrow of time".

There is, of course, an essential difference between equations (7) and (8): the latter includes the imaginary unit i. That prevents us from claiming that a quantum system is a very complex sort of "neural network". Here we shall not enter into a discussion of the role, backgrounds and origins of the imaginary nature of the quantum equations. We shall also not discuss possible proposals for a replacement of the complex-valued quantum equations with a system of real-valued equations, which would support the search for further analogies with neural network theory. In spite of this we are able to argue that there are very interesting functional and mathematical analogies between the quantum and neural levels, although they are not perfect.

In neural networks the correlations between patterns are important for memory. In quantum mechanics the phase differences between different parts of the wave-function are important [3]. They control the time-evolution of the probability distribution, involving interference of the contributions of different stationary eigen-wave-functions. Thus, changing the phase relations between eigen-wave-functions is analogous to the learning process in neural networks, where the synaptic correlation-matrix "absorbs" new pattern-correlations. This is also similar to holography [35].

4.8 Neuronal-Pattern-Reconstruction <—> "Wave-Function Collapse"

The most important neuro-quantum analogy is the following. Pattern-reconstruction in a neural network,

$q(\vec r, t) = \sum_{k=1}^{p} c_k(t)\, v_k(\vec r) \;\longrightarrow\; q(\vec r, t_0) = v_{k_0}(\vec r),$   (10)

is very similar to the "collapse of the wave-function" in a quantum system,

$\Psi(\vec r, t) = \sum_{k=1}^{p} C_k(t)\, \psi_k(\vec r) \;\longrightarrow\; \Psi(\vec r, t_0) = \psi_{k_0}(\vec r).$   (11)

For reasons presented in Section 3 this is a very important feature of cognitive processes. Processes (10) and (11) are both a result of the influence of the system's environment: the environment selects that neural/quantum pattern which is the most similar (or most strongly correlated) to the state of the environment. Neural-pattern-reconstruction and wave-function collapse are results of a transition from the implicate order¹ (with latent, implicit, inactive, potential information only) to the explicate order (carrying manifest, active, realized information) [5]. The implicate order represents a combination of very many possible states or processes. It is analogous to the set of so-called "parallel worlds" or parallel sub-branches of the general wave-function proposed by Everett [11]. The explicate order, on the other hand, represents a state or process which is at the moment physically actualized: it is "chosen" from a set of potential (implicate) states, or is the result of their optimal "compromise".

¹ Bohm's term.

4.9 Fourier-like Analysis

A short calculation, presenting the correspondence rules between the above equations, can be performed first for the neural case (12), and then for the quantum case (13):

$q(\vec r_2, t_2) = \iint J(\vec r_1, t_1, \vec r_2, t_2)\, q(\vec r_1, t_1)\, d\vec r_1\, dt_1 = \iint \Big( \sum_{k=1}^{p} v_k(\vec r_1, t_1)\, v_k(\vec r_2, t_2) \Big)\, q(\vec r_1, t_1)\, d\vec r_1\, dt_1$
$= \sum_{k=1}^{p} v_k(\vec r_2, t_2) \Big( \iint v_k(\vec r_1, t_1)\, q(\vec r_1, t_1)\, d\vec r_1\, dt_1 \Big) = \sum_{k=1}^{p} c_k(t)\, v_k(\vec r, t) \quad\text{or}\quad q(\vec r, t) = \sum_{k=1}^{p} c_k(t)\, v_k(\vec r).$   (12)

In the last step we have used definition (3) for time-dependent patterns (Eq. (12), last row, left). For the stationary case we delete the time integral and get Eq. (12), last row, right. On the other hand,
$\Psi(\vec r_2, t_2) = \iint G(\vec r_1, t_1, \vec r_2, t_2)\, \Psi(\vec r_1, t_1)\, d\vec r_1\, dt_1 = \iint \Big( i \sum_{k=1}^{p} \psi_k^{*}(\vec r_1, t_1)\, \psi_k(\vec r_2, t_2) \Big)\, \Psi(\vec r_1, t_1)\, d\vec r_1\, dt_1$
$= \sum_{k=1}^{p} \psi_k(\vec r_2, t_2) \Big( i \iint \psi_k^{*}(\vec r_1, t_1)\, \Psi(\vec r_1, t_1)\, d\vec r_1\, dt_1 \Big) = \sum_{k=1}^{p} C_k(t)\, \psi_k(\vec r, t) \quad\text{or}\quad \Psi(\vec r, t) = \sum_{k=1}^{p} C_k(t)\, \psi_k(\vec r).$   (13)

4.10 Dynamics of the State Functions

The following version of the dynamic equations can be derived from the above neuro-synergetic equations (here somewhat generalized):

$q(\vec r_2, t_2) = \iint \Big( \sum_{k=1}^{p} \lambda_k\, v_k(\vec r_1, t_1)\, v_k(\vec r_2, t_2) \Big)\, q(\vec r_1, t_1)\, d\vec r_1\, dt_1$
$= \sum_{k=1}^{p} \lambda_k\, v_k(\vec r_2, t_2) \Big( \iint v_k(\vec r_1, t_1)\, q(\vec r_1, t_1)\, d\vec r_1\, dt_1 \Big) = \sum_{k=1}^{p} \lambda_k c_k\, v_k(\vec r, t) \quad\text{or}\quad = \sum_{k=1}^{p} \lambda_k c_k\, v_k(\vec r).$   (14)

We have inserted a generalized version of the Hebb rule (7) into the time-dependent equation (5). \lambda_k is the so-called attention parameter of the pattern v_k, and is an eigenvalue of the matrix J with eigenvectors v_k. In the second step we have used the definition (3) for time-dependent patterns. For the stationary case we could delete the time integral. On the other hand, we can make the following quantum calculation, starting from the time-dependent Schrödinger equation with the Hamiltonian operator H:

$H \Big( \sum_{k} C_k\, \psi_k(\vec r) \Big) = \sum_{k} C_k\, H \psi_k(\vec r) = \sum_{k} C_k E_k\, \psi_k(\vec r).$   (15)

In the last step we have used the stationary Schrödinger equation $H \psi_k(\vec r) = E_k \psi_k(\vec r)$, where E_k is the energy-eigenvalue. We thus obtain

$\frac{\partial \Psi}{\partial t} = -\frac{i}{\hbar} \sum_{k=1}^{p} E_k C_k\, \psi_k(\vec r) \quad\text{or}\quad \frac{\partial \Psi}{\partial t} = -\frac{i}{\hbar} \sum_{k=1}^{p} E_k C_k\, \psi_k(\vec r, t),$   (16)

where E_k has a similar role to \lambda_k. (A numerical contrast of (14) and (16) is sketched at the end of this section.)

4.11 Uncertainty Principles and Duality

In neural network theory there are uncertainty principles (Daugman [8], Gabor [26]) which are similar to the famous Heisenberg uncertainty principles of quantum mechanics. An interesting neural analogy of the Heisenberg uncertainty principle is the inability to determine simultaneously the patterns in the system of neurons q and the patterns in the system of interactions (or connections) J. We are unable to explicate a pattern in the system of neurons and a pattern in the system of connections at the same time. Only the one pattern which is temporarily realized in the system of neurons is explicated; all the others are present only implicately (implicitly) in the system of interactions, or in the dynamics itself, respectively. This is similar to the cognitive situation: we can be aware of only one pattern, which has been extracted from memory; other memory patterns remain unconscious and implicit. Thus, the so-called position (x-) representation of quantum theory can be approximated by the system of neurons q. The so-called impulse (p-) representation can, on the other hand, be associated with the system of interactions J, which regulates all transformations of the network-state.

Although only some basic mathematical analogies were presented here, numerous other parallels can be found between neural and quantum processing. They show that there may be a subtle "division of labor" and an isomorphism-like relation and cooperation between the neural and the quantum levels. These levels may be in a sort of fractal relationship. The first main difference between the physical and the psychophysical processes is that the quantum system itself is not intentional (does not carry any mental information), but the mind-brain is intentional (carries specific mental contents). The second difference is that the quantum system itself does not have a relatively independent environment, but the brain does. Therefore the brain models its macroscopical environment in a specific and flexible manner, using the biological neural network as a macro-micro-interface and a (subconscious) preprocessor for a unified conscious experience which involves neuro-quantum coherence.
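To contrast (14) and (16) numerically (an interpretive sketch under discretized assumptions, with hbar = 1; not code from the paper): iterating the synergetic dynamics amplifies the order parameter with the largest attention parameter, while the unitary Schrödinger evolution only rotates the phases of the C_k, so the selection of a single \psi_k requires the externally induced "collapse" of Section 3:

import numpy as np

# Three patterns with equal initial coefficients.
c = np.ones(3) / np.sqrt(3)                  # neural order parameters c_k
C = np.ones(3, dtype=complex) / np.sqrt(3)   # quantum coefficients C_k

lam = np.array([0.5, 1.0, 1.5])   # attention parameters lambda_k in (14)
E = np.array([0.5, 1.0, 1.5])     # energy eigenvalues E_k in (16)

# Eq. (14): each association step multiplies c_k by lambda_k, so the
# pattern with the largest attention parameter comes to dominate.
for _ in range(20):
    c = lam * c
    c = c / np.linalg.norm(c)

# Eq. (16) integrates to C_k(t) = C_k(0) exp(-i E_k t / hbar): a pure
# phase rotation, with all probabilities |C_k|^2 left unchanged.
C = C * np.exp(-1j * E * 1.0)

print(np.round(c, 3))                # ~ [0, 0, 1]: one attractor "wins"
print(np.round(np.abs(C) ** 2, 3))   # ~ [1/3, 1/3, 1/3]: no selection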
5 Super-Computers Exhibiting Multi-Level Coherence

We have argued that mind-like processing needs neural and/or quantum virtual structures (attractors). There is no reason why complex computers would be unable to form virtual structures. Indeed, every collective state of a complex system may constitute a specific gestalt (a specific virtual unity) which cannot be reduced to the states of the constitutive elements of the system alone. The problem is the formation of a specific isomorphic (e.g., fractal) multi-level coherence.

The practice of computer simulations of neural nets shows that we can govern the artificial neuronal level implicitly by regulating the artificial virtual level explicitly. If our dynamical equations for neurons and synapses regulate the patterns only, the attractors always accompany the neural dynamics implicitly! Neural dynamical equations (represented in the computer program) are differential equations (with a local range of validity), but attractor-structures may be mathematically described by variational calculus (with a global range of validity). We do not necessarily need both mathematical descriptions; one is sufficient. Thus, we may organize one level and the others will automatically follow. This is the reason why the reductionist approach usually works for all practical purposes, and this is a chance for advanced system-based artificial intelligence.

Conventional computers and the human analytic mind are limited by the Gödel theorem. On the other hand, the human mind is able to synthesize different levels, which refer to each other, into a coherent whole. The mind is also capable of synthesizing a thesis and an antithesis simply by "not taking the difference too seriously", by "making a compromise", by merging incompatible informational subspaces into a higher-dimensional space, by transcending the hierarchy of well-defined levels using a process of restoration of symmetry (as in spectroscopy after the magnetic field is turned off). Actually, all sufficiently complex systems are capable of symmetry-breaking (formation of differentiated structures) or symmetry-restoration (integration of different aspects into a coherent unity or gestalt); they can also be artificial.

It is true that the human mind is also not able to solve the liar paradox. It is not able to integrate the concepts of the liar and his self-referential sentence into a static (stable) solution, but it is able to "think about it". So, the reaction of the human mind to such paradoxes is not a decay of the model-structure (except in schizophrenic minds), but the formation of a dynamic coherence. The resulting coherent state is quasi-stable and only temporary, but useful for ever-changing life-situations, because it is adaptive.

The artificial consciousness of hypothetical quantum computers of the future will, of course, never be identical to human consciousness. It will be only a model of human consciousness, but it may be a good replica, identical for all practical purposes in the sense of the Turing test. The hypothetical artificial consciousness will (at least for a very long time) be only an extension of human consciousness, always coupled to it. Humans will still project their conscious interpretations into the performance of their replica-machines. The physical states of these hypothetical machines will be informational only due to the coding-rules given by humans.
So, computer-consciousness will not be autonomous, because its contents will always be interpreted by humans, until these futuristic computer-robots become self-maintaining. Until then, we cannot talk about "real" computer-consciousness in any case, because the involvement of human informational interpretations necessarily means a "projection of human consciousness" into computers. So, as long as computers are our tools they will merely be a sub-branch of human consciousness, an isomorphic co-processor.

6 Conclusion

It was shown that associative or attractor neural networks and quantum networks realize similar collective dynamics. Our computer simulations [34] of neural networks realize efficient information processing. Using the presented mathematical analogies with quantum networks, we can try to establish similar information processing in quantum systems also. Neuro-quantum parallelism may thus represent a crucial element of natural and artificial informatics.

Acknowledgments

A TEMPUS grant (JEP-4310-93) for my neural-network and cognitive-science studies in London, and discussions with Dr. B.J. Hiley (Physics Dept., Birkbeck College, London), M. Škarja, and Professors B. Golli, A. Kodre, L. Pičman and P. Prelovšek (Faculty of Mathematics and Physics, Ljubljana) are gratefully acknowledged.

References

[1] D. Amit: Modeling Brain Function (The World of Attractor Neural Networks); Cambridge Univ. Press, Cambridge, 1989.
[2] J.D. Bjorken, S.D. Drell: Relativistic Quantum Fields; Chapter 6: Propagator Theory; McGraw-Hill, New York etc.
[3] D. Bohm: Quantum Theory; Constable and Co., London, 1954.
[4] D. Bohm: Wholeness and the Implicate Order; Routledge and Kegan Paul, London, 1980.
[5] D. Bohm, B. Hiley: The Undivided Universe (An Ontological Interpretation of Quantum Theory); Routledge, London, 1993.
[6] F. Brentano: Psychology from an Empirical Standpoint; Routledge and Kegan Paul, London, 1973.
[7] Y. Burnod: An Adaptive Neural Network: the Cerebral Cortex; Prentice Hall, London, 1990.
[8] J.G. Daugman: Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by 2-D visual cortical filters; Journal of the Optical Society of America A 2 (7) (1985) 1160.
[9] P.C.W. Davies, J.R. Brown (Eds.): The Ghost in the Atom; Cambridge Univ. Press, Cambridge, 1986.
[10] D. Deutsch: Quantum computation; Physics World (June 1992) 57-61.
[11] B.S. DeWitt, N. Graham (Eds.): The Many-Worlds Interpretation of Quantum Mechanics; Princeton Univ. Press, Princeton, 1973.
[12] E. Domany, J.L. van Hemmen, K. Schulten (Eds.): Models of Neural Networks (Series "Physics of Neural Networks"); Springer, Berlin etc.
[13] A. Goswami: Consciousness in Quantum Physics and the Mind-Body Problem; Journal of Mind and Behavior 11 (1) (1990) 75.
[14] H. Gutfreund: From Statistical Mechanics to Neural Networks and Back; Physica A 163 (1990) 373-385.
[15] H. Haken: Synergetic Computers and Cognition (A Top-Down Approach to Neural Nets); Springer, Berlin etc., 1991.
[16] H. Haken: Advanced Synergetics (Instability Hierarchies of Self-Organizing Systems and Devices); Springer, Berlin etc., 1987.
[17] H. Haken, M. Stadler (Eds.): Synergetics of Cognition; Springer, Berlin etc., 1989.
[18] S.R. Hameroff: Quantum coherence in microtubules: a neural basis for emergent consciousness?; Journal of Consciousness Studies 1 (1) (1994) 91-118.
[19] B.J. Hiley, F.D. Peat (Eds.): Quantum Implications (Essays in Honour of David Bohm); Routledge, London, 1987.
[20] J.J. Hopfield: Neural networks and physical systems with emergent collective computational abilities; Proc. Natl. Acad. Sci. USA 79 (1982) 2554-2558.
[21] T. Kohonen: Self-Organization and Associative Memory; Springer, Berlin etc., 1984.
[22] A. Krogh, R.G. Palmer, J. Hertz: Introduction to the Theory of Neural Computation; Addison-Wesley, Redwood City etc., 1991.
[23] L.D. Landau, E.M. Lifschitz (V.B. Berestetskii, L.P. Pitaevskii): Lehrbuch der theoretischen Physik, Band III: Quantenmechanik / Band IV: Relativistische Quantenmechanik; Akademie-Verlag, Berlin, 1967, 1970 (English ed. also).
[24] M. Lockwood: Mind, Brain and the Quantum; Blackwell, Oxford, 1989.
[25] B. MacLennan: Information Processing in the Dendritic Net; Report, 2nd Ann. Behav. and Comp. Neuroscience Workshop, Washington, 1992.
[26] B. MacLennan: Gabor Representations of Spatiotemporal Visual Images; Tech. Report CS-91-144, 1991.
[27] J.L. McClelland, D.E. Rumelhart, PDP Research Group: Parallel Distributed Processing (Explorations in the Microstructure of Cognition), Vol. 1: Foundations, Vol. 2: Psychological and Biological Models; MIT Press, Cambridge (MA), 1986.
[28] R. Penrose: 1) The Emperor's New Mind (Concerning Computers, Minds, and the Laws of Physics); Oxford Univ. Press, London, 1989. 2) Shadows of the Mind (A Search for the Missing Science of Consciousness); Oxford Univ. Press, Oxford, 1994.
[29] P. Peretto: An Introduction to the Modeling of Neural Networks; Cambridge Univ. Press, Cambridge, 1992.
[30] M. Peruš: Neural Networks as a Model of Brain Processes; Anthropos 5-6 (1993) 146-177 (in Slovene).
[31] M. Peruš: Concepts of Holistic "Models" of Consciousness (Bohm's Quantum Implications, Holograms and Neural Networks); J. Critics of Science 174 (1995) 11-22 (in Slovene).
[32] M. Peruš: Analogies of neural and quantum processing—Consequences for cognitive science; in: P. Pylkkänen, P. Pylkkö (Eds.): New Directions in Cognitive Science, Proceed. Int. Conf., Lapland/Finland, 1995 (pp. 115-123).
[33] M. Peruš: Synergetic Approach to Cognition-Modeling with Neural Networks; in: K. Sachs-Hombach (Ed.): Bilder im Geiste (pp. 183-194); Rodopi, Amsterdam/Atlanta, 1995.
[34] M. Peruš: All in One, One in All (Brain and Mind in Analysis and Synthesis); DZS, Ljubljana, 1995 (in Slovene).
[35] W. Schempp: Bohr's Indeterminacy Principle in Quantum Holography, Self-adaptive Neural Network Architectures, Cortical Self-Organization, Molecular Computers, Magnetic Resonance Imaging and Solitonic Nanotechnology; Nanobiology 2 (1993) 109.
[36] J. Šlechta: On a Quantum-Statistical Theory of Pair Interaction Between Memory Traces in the Brain; Informatica 17 (1993) 109-115 (and other refs. within).
[37] J. Šlechta: 1) The Brain as a "Hot" Cellular Automaton; 2) On Molecular Foundations of the Thermodynamics of the Thought Processes in the Brain; Proc. 12th Int. Congress on Cybernetics, Namur, 1989 (pp. 862-869; pp. 870-877).
[38] J. Šlechta: On a Quantum-Statistical Theory of the Brain—the Brain as a Generalized Laser; in: J. Rose (Ed.): Cybernetics and Systems (pp. 919-924); Thales, Lytham St. Annes, 1987.
[39] E.J. Squires: Quantum theory and the relation between the conscious mind and the physical world; Synthese 97 (1993) 109.
[40] H.P. Stapp: Quantum Propensities and the Brain-Mind Connection; Foundations of Physics 21 (12) (1991) 1451.
[41] A. Stern: Matrix Logic and Mind (A Probe into the Unified Theory of Mind and Matter); North-Holland/Elsevier, Amsterdam etc., 1992.
[42] A.P. Železnikar: On the Way to Information; The Slovene Society Informatika, Ljubljana, 1990.

Techniques for Algorithm Design and Analysis: Case Study of a Greedy Algorithm

William F. Klostermeyer and Maria Muslea
Dept. of Statistics and Computer Science
P.O. Box 6330, West Virginia University, Morgantown, WV 26506-6330
E-mail: wfk@cs.wvu.edu

Keywords: algorithms, NP-completeness, dominating set

Edited by: Rudi Murn
Received: November 14, 1995 Revised: January 26, 1996 Accepted: May 14, 1996

Six different implementations of a greedy dominating set algorithm are presented and analyzed. The implementations and analysis illustrate many of the important techniques in the design and analysis of algorithms, as well as some interesting graph theory.

1 Introduction

One of the basic tenets in the design and analysis of algorithms is that various implementations of the same high-level algorithm can have different performance characteristics. In this paper we present six implementations of a greedy dominating set algorithm. The "big Oh" running time of each is analyzed. The implementations and analysis illustrate many of the important techniques in the design and analysis of algorithms.

Recall that a dominating set of a graph G = (V, E) is a set D ⊆ V such that each vertex in V is either a member of D or adjacent to at least one member of D. The objective of the dominating set problem is to find a minimum cardinality dominating set. However, this is a well-known NP-complete problem [1], which means it is unlikely that an efficient (i.e., polynomial time) algorithm can always succeed in finding a minimum cardinality dominating set. Thus, we would like to be able to devise a polynomial time algorithm that guarantees a dominating set that is always "close" to the minimum size. A generally accepted definition of "close" is that the ratio of the size of this approximate dominating set to the size of a minimum sized dominating set is at most some fixed constant, for all input graphs. It was shown in [2] that no such polynomial-time approximation algorithm can exist for the dominating set problem unless P = NP, which is thought highly unlikely. Thus, in practice, we often resort to "heuristic" algorithms for problems such as dominating set. Heuristics often yield good solutions in practice, though pathologically bad solutions can also result.

2 The Heuristic

The greedy method is useful in designing algorithms. Greedy algorithms are iterative and are characterized by a locally optimal, or "greedy", choice at each iteration. For many problems, such as shortest path and minimum spanning tree, greedy methods can yield the optimal solution. A simple greedy algorithm for the dominating set problem does the following at each iteration: add the largest degree vertex in the graph to the dominating set and remove it and its neighbors. A formal statement of the heuristic is as follows. Note, however, that no mention is made of implementation details such as data structures.

input G=(V, E);
D := empty set;
while V not empty do
    let v be a vertex in V of max. degree
    D := D + {v}
    V := V - {v} - {w | w adjacent to v}
    E := E - {(w, u) | w adjacent to v}
end;

In other words, the algorithm chooses at each step the vertex that has the largest degree, which thus "dominates" the most other vertices. This vertex and its neighbors are then removed from the graph and the process continues until no more vertices remain. A direct executable translation is sketched below.
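For concreteness, here is a minimal runnable Python translation of the heuristic (our sketch, not from the paper); the graph is assumed to be a dict mapping each vertex to the set of its neighbors:

def greedy_dominating_set(adj):
    """Greedy heuristic: repeatedly take a max-degree vertex, then
    delete it and its neighbors (and their incident edges)."""
    # Work on a copy so the caller's graph is not destroyed.
    g = {v: set(nbrs) for v, nbrs in adj.items()}
    dominating = set()
    while g:
        v = max(g, key=lambda u: len(g[u]))   # vertex of maximum degree
        dominating.add(v)
        removed = {v} | g[v]                  # v and its neighbors
        for u in removed:                     # delete the vertices ...
            for w in g.pop(u):
                g.get(w, set()).discard(u)    # ... and incident edges
    return dominating

# Example: a star K_{1,4} with a pendant edge attached to one leaf.
adj = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0, 5}, 5: {4}}
print(greedy_dominating_set(adj))             # e.g. {0, 5}

The linear scan hidden inside max() is exactly the naive choice analyzed in the next section.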
3 The Naive Implementation

A first attempt at implementing many algorithms is the naive one. In this case, we begin by computing the vertex degrees and storing that information with each vertex. In terms of data structures, an array or list could be used to store the vertex degrees. Each step chooses the largest degree vertex by scanning the entire list of vertices and proceeds to remove it and its neighbors from the graph. The details of the algorithm are described below. Following the algorithm, we shall discuss the running time of each step. We use V to denote the number of vertices and E the number of edges.

Algorithm 1. Naive Solution

1. Compute vertex degrees
2. repeat
3.     remove vertex with max degree
4.     for each v adjacent to max do
5.         remove v from G
6.         for each w adjacent to v do
7.             degree(w) = degree(w) - 1
8.             remove edge (v, w)
           end for
       end for
   until G is empty

We can see that line 1 takes O(E) time; line 3 takes O(V) time per iteration; line 7 takes O(E) time in total (i.e., over the life of the algorithm); and line 8 takes O(E) time in total. Summing the costs shows the running time to be O(V²) in the worst case.

4 Choosing a Data Structure

The naive implementation does not really use any data structures other than those that come with the graph and an array to store the vertex degrees. Choosing an appropriate data structure is a key step in designing efficient algorithms. In this section, we use a heap [3] to store the vertices according to vertex degree, with the maximum vertex degree being at the root. A heap is a type of priority queue, a generic type of data structure that is used to maintain a set of items by priority so that the item with highest priority can be easily accessed. When the maximum degree vertex and its neighbors are removed from the graph, they must also be removed from the heap, at a cost of O(log V) per deletion, in the worst case. Furthermore, the degrees of the vertices adjacent to the maximum's neighbors must be updated, which affects the heap. A simple analysis indicates that roughly one update (called a decrease-key [3]) may need to be performed for each edge in the graph. This indicates a total running time of O(V log V + E log V). The algorithm is formally stated below.

Algorithm 2. Using Heaps

1. Compute all vertex degrees
2. Build heap based on vertex degrees
   /* keep a pointer from each vertex to the associated node in the heap */
3. repeat
4.     remove max element from heap
5.     heapify;
6.     for each v adjacent to max do
7.         remove v from heap
8.         remove v from G
9.         for each w adjacent to v do
10.            degree(w) = degree(w) - 1
11.            update heap accordingly
12.            remove edge (v, w)
13.        end for
14.    end for
15.    remove max from G
   until G is empty

Step 5 of course takes O(log V) time each time that step is performed. Step 7 requires O(V log V) time in total, and Step 11 can be seen to take O(E log V) time over the course of the algorithm.

5 Combining Two Algorithms

The careful reader will notice that for sparse graphs, O(V log V + E log V) is faster than O(V²). But when V²/log V ∈ o(E), i.e., when the graph is dense, the naive algorithm is actually faster than the heap algorithm. We can have the best of both algorithms by simply counting the number of edges and making the economical choice: if E < V²/log V, we run the heap algorithm; otherwise we run the naive algorithm. This "combination" algorithm has a running time of O(min(V log V + E log V, V²)). Combining two distinct algorithms that behave differently is a powerful design technique. A heap-based sketch in this spirit appears below.
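Python's heapq module provides a min-heap without a decrease-key operation, so the following sketch (ours, not the paper's) substitutes the standard "lazy deletion" trick: stale heap entries are simply skipped when popped, which preserves the O(V log V + E log V) flavor of Algorithm 2 (negated degrees simulate a max-heap):

import heapq

def greedy_dominating_set_heap(adj):
    """Heap-based greedy dominating set, with lazy deletion standing
    in for the decrease-key operation of Algorithm 2."""
    g = {v: set(nbrs) for v, nbrs in adj.items()}
    degree = {v: len(nbrs) for v, nbrs in g.items()}
    heap = [(-d, v) for v, d in degree.items()]
    heapq.heapify(heap)
    dominating, alive = set(), set(g)
    while alive:
        d, v = heapq.heappop(heap)
        if v not in alive or -d != degree[v]:
            continue                          # stale entry: skip it
        dominating.add(v)
        for u in ({v} | g[v]) & alive:        # remove v and its neighbors
            alive.discard(u)
            for w in g[u]:
                if w in alive:
                    degree[w] -= 1            # lazy "decrease-key": push
                    heapq.heappush(heap, (-degree[w], w))
    return dominating

adj = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0, 5}, 5: {4}}
print(greedy_dominating_set_heap(adj))        # e.g. {0, 5}

The economical choice of this section then amounts to one comparison up front: run this version only when E < V²/log V, and the naive scan otherwise.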
6 High Powered Data Structures

The most costly part of the heap algorithm is the fact that O(E) decrease_key operations had to be performed in the worst case, each costing O(log V). The advanced student of algorithms will be aware that a data structure called the Fibonacci heap [4] is able to perform the same operations as ordinary binary heaps, with the added bonus that decrease_key operations take O(1) unit time each. Technically, the decrease_key operations take O(1) unit time on average over a sequence of heap operations [4]. This form of analysis is known as amortized analysis. Amortized analysis essentially averages the cost of operations over a long sequence of operations. This method allows us to offset a costly individual operation with an inexpensive operation somewhere else in the sequence. For example, suppose we perform one decrease_key of cost log V and log V - 1 decrease_key operations of cost one each. Typical "worst-case" analysis may lead us to conclude that the sequence runs in time O(log² V). Whereas amortized analysis will charge one additional unit of cost to each of the log V - 1 inexpensive operations and deduct a total of log V - 1 units of cost from the amount charged to the expensive operation. Thus, in our accounting, each operation costs O(1), leading us to conclude that the sequence has a running time of O(log V). Chapter 18 of [3] contains a very readable introduction to amortized analysis.

Replacing binary heaps with Fibonacci heaps as our data structure gives a running time of O(V log V + E). However, Fibonacci heaps are quite complex and tend to hide a large constant in the O() notation, and as a result are generally not very practical [3].

7 Combinatorial Tricks

The impracticality of Fibonacci heaps leads us to attempt to design a more clever algorithm using binary heaps. Two design and analysis techniques come into play. First, a careful observation of the heap algorithm's behavior reveals that a vertex may have a decrease-key operation applied to it more than once in a given iteration. This seems quite wasteful and is a viable place where we could optimize our algorithm. This is done by keeping a count variable for each vertex that counts the number of its neighbors that are deleted during a given iteration. A single decrease-key operation will then be applied to a vertex (if necessary), decreasing its degree by the appropriate amount. The code follows, annotated with running times per operation. The input is assumed to come in the form of an adjacency list, G.

0. G' := G;  /* create a copy of G */
1. compute vertex degrees
2. build heap
3. build adjacency list for G' with the following fields:

   type list_element is
       count   : integer := 0;
       deleted : boolean := false;
       list    : ptr to linked list of vertices;
   end;
   adj_list : array[1..V] of list_element;
   Q : queue;  /* Q stores vertices deleted during a given iteration of line 4 */

4. while G not empty do
5.     remove maximum element from heap; also remove it from G;
6.     for each v adjacent to max in G do
7.         adj_list[v].deleted := true;
8.         remove v from heap;
9.         remove v and its incident edges from G;
10.        add v to Q;
11.    end for;
12.    for each v in Q do
13.        for each w in adj_list[v].list do
14.            if not adj_list[w].deleted then
15.                adj_list[w].count++;
16.        end for;
17.    end for;
18.    for each v in Q do
19.        for each w in adj_list[v].list do
20.            if (not adj_list[w].deleted) and
(*)                (adj_list[w].count > 0) then
21.                decrease_key(w, adj_list[w].count);
22.                adj_list[w].count := 0;
23.        end for;
24.        remove v from Q;
       end for;
   end;
In terms of the individual steps, line 4 iterates O(V) times; line 5 requires O(log V) time for each remove; lines 6-7 require O(V) time in total; line 8 takes O(log V) per remove; line 9 takes O(E) time in total; and line 10 takes O(1) time for each "add". Likewise, lines 12-17 require O(V + E) time in total. Line 18 iterates O(V) times and line 19 iterates O(E) times in total. Finally, line (*) requires O(log V) time per decrease_key.

We shall now make use of the second technique to show that this algorithm runs in O(V log V + E) time. The idea is to use combinatorial analysis to show that the optimization trick employed actually sped up the algorithm. In particular, we claim that only O(V + E/log V) of the "then" statements indicated by (*) above are satisfied in total. This implies that only O(V + E/log V) decrease_key operations are performed, thereby giving the desired running time. The proof of this claim follows.

Proof. Assume without loss of generality that G contains no isolated vertices and E ≥ V - 1; otherwise the O(V log V) time bound clearly holds. Suppose more than 4(V + E/log V) decrease-keys are performed; we shall call each a "decrease". It is clear that a vertex is "decreased" at most once during each iteration of the main "while" loop, by our use of the count variable. Let X ⊆ V be the set of vertices decreased at least once in the course of the algorithm. There are two cases.

Case 1) V²/log² V > E. Summing the number of decreases over all the vertices in X gives more than 4V + 4E/log V. Compute the average number of decreases per vertex of X, and partition X into two subsets X₁ and X₂, where X₂ contains the vertices of X which have at least the average. The sum of the decreases of the vertices in X₂ is at least as large as the sum of the decreases of the vertices in X₁; thus the sum of the decreases in X₂ is at least 2V + 2E/log V.

We claim no vertex in X₂ has more than 2V/log V decreases. Suppose otherwise, i.e., some w ∈ X₂ has k > 2V/log V decreases. Then at least k vertices were chosen as "max" before w. Furthermore, in each iteration of the "while" (one for each "max") only one decrease can be applied to w. In addition, note that no edge in G can cause more than one decrease. This implies w must have degree at least 2V/log V. After the first decrease, w must still have degree at least 2V/log V - 1, since it still has that many decreases left to be performed, and each must come from a different edge. Then after i decreases have been performed on w, it must have degree at least 2V/log V - i. At each iteration of the "while", the vertex chosen as "max" has degree at least as large as that of w in the current graph. Then the number of edges in G is at least

$E \;\geq\; \sum_{i=1}^{2V/\log V} \left( \frac{2V}{\log V} - i \right) \;>\; \frac{V^2}{\log^2 V},$   (1)

which is a contradiction, since Case 1 assumed E < V²/log² V. Thus each vertex in X₂ was decreased at most 2V/log V times. Dividing the total number of decreases in X₂, 2V + 2E/log V, by the maximum number of decreases of any vertex in X₂ shows that X₂ contains at least (V log V + E)/V vertices.

Examining summation (1) more closely reveals that at least V/log V vertices in G have initial degree at least V/log V. Each of these "max" vertices has degree at least V/log V, and thus each removes V/log V vertices (plus itself) from G when it is chosen as the "max". Vertices so removed from G cannot be members of X₂, since they could not have been decreased at least 2V/log V times (i.e., they were removed in the first V/log V iterations of the "while"). Hence

$V \;\geq\; \frac{V}{\log V} \cdot \frac{V}{\log V} + |X_2| + |X_1|.$

But this implies V²/log² V < V, which is not true for V > 16.
Case 2) V²/log² V ≤ E. Arguing in a similar fashion as in Case 1) shows that no vertex in X₂ has more than 2 + 2E/(V log V) decreases, and that there are at least 1 + E/(V log V) vertices in G - X having degree at least 1 + E/(V log V); else a vertex in X₂ would have been chosen as "max" before accumulating all its decreases. When each of these "max" vertices was removed from G, it took itself and at least 1 + E/(V log V) vertices from G, none of which can be in X₂. Thus

$V \;\geq\; \left( 1 + \frac{E}{V \log V} \right) \cdot \left( 2 + \frac{E}{V \log V} \right) + |X_2| + |X_1|.$

But this implies E² < V³ log² V, which is not true for sufficiently large V, since we assumed V²/log² V ≤ E, which means E² ≥ V⁴/log⁴ V; and V⁴/log⁴ V asymptotically dominates V³ log² V. Hence the algorithm runs in time O(V log V + E).

8 Elegant Data Structure

Perhaps the best, and most elusive, algorithm design technique of all is sheer cleverness. This was evidenced when the second author conceived the following approach to the problem in an analysis of algorithms course taught by the first author. The data structure designed suits the problem so elegantly that proving the O(V + E) running time becomes trivial.

The idea is to initially sort the vertices by degree. Since the maximum degree of a vertex is bounded by the number of vertices, the sorting can be done quickly as follows. All vertices having the same vertex degree are linked together in a linked list. The first vertex in the linked list of vertices having degree k can be accessed by indexing into an array, i.e., A[k]. That is, the array element A[k] contains a pointer to the list of degree-k vertices. Furthermore, each vertex in G has a pointer to its associated vertex in the linked list.

Computing the vertex degrees initially takes O(V + E) time. Sorting the degrees can be done in O(V) time by placing each vertex into the appropriate linked list; this takes O(1) time per vertex, by placing the vertex at the head of the A[k] linked list. At each step of the greedy algorithm, a maximum degree vertex is chosen and removed from the graph (and from the vertex degree data structure) along with its neighbors. A vertex of degree k whose neighbor is deleted from the graph can be moved in O(1) time from linked list A[k] to linked list A[k-1]. In this manner the "greedy" steps of the algorithm can be performed in O(V + E) time, since only O(1) work is done for each vertex and each edge of G. This running time is optimal, since any dominating set algorithm must examine every vertex and edge of the graph.

Algorithm 6.

1. Compute vertex degrees
2. Sort vertices by vertex degree
3. Build the following data structure:
   A[V-1..1] : array of pointers to nodes in graph
   A[i] has a linked list of all nodes with degree i
   A[i] points to the first element in that list
   Keep an index to the first non-empty A[i]; call this "head"
   Keep a pointer from each node in G to its associated node in A.
4. while A contains a pointer to at least one node do
5.     remove first element from A[head]; call this max;
6.     for each neighbor v of max do
7.         remove v from A;
8.         for each neighbor w of v do
9.             let w be in list A[j];
10.            move w to list A[j-1];
11.        end for;
12.    end for;
13.    while A[head] empty do
14.        head := head - 1;
   end;

In this final algorithm, steps 1-2 take O(V + E) time. The "for" loop at line 6 iterates O(V) times in total, and the "remove" at line 7 takes O(1) time per remove. The "for" loop at line 8 iterates O(E) times in total, and steps 9-10 require O(1) time per iteration. The "while" loop on line 13 takes O(V) time in total. A compact executable rendering follows.
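A runnable Python rendering of this bucket structure (our sketch; Python sets stand in for the doubly linked lists, which preserves the O(1) moves and the overall O(V + E) behavior):

def greedy_dominating_set_buckets(adj):
    """O(V + E)-style greedy dominating set: bucket the vertices by
    degree, take from the highest non-empty bucket, and move a vertex
    one bucket down each time it loses a neighbor."""
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    n = len(adj)
    bucket = [set() for _ in range(n)]        # bucket[d]: vertices of degree d
    for v, d in degree.items():
        bucket[d].add(v)
    head = n - 1                              # highest possibly non-empty bucket
    alive = set(adj)
    dominating = set()
    while alive:
        while not bucket[head]:               # O(V) in total: head only decreases
            head -= 1
        v = bucket[head].pop()                # a maximum-degree vertex
        dominating.add(v)
        batch = ({v} | adj[v]) & alive
        for u in batch:                       # remove v and its neighbors
            alive.discard(u)
            bucket[degree[u]].discard(u)
        for u in batch:                       # each surviving neighbor of a
            for w in adj[u]:                  # removed vertex drops one bucket
                if w in alive:
                    bucket[degree[w]].discard(w)
                    degree[w] -= 1
                    bucket[degree[w]].add(w)
    return dominating

adj = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0, 5}, 5: {4}}
print(greedy_dominating_set_buckets(adj))     # e.g. {0, 5}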
9 Conclusions

Six different implementations of a simple greedy algorithm for the dominating set problem were presented and analyzed. Different design and analysis techniques, such as data structure choice and code optimization, were emphasized.

References

[1] Garey, M. and Johnson, D. (1979) Computers and Intractability: A Guide to the Theory of NP-completeness, W.H. Freeman, San Francisco.
[2] Lund, C. and Yannakakis, M. (1994) On the Hardness of Approximating Minimization Problems. J. ACM, 41, 5, pp. 960-981.
[3] Cormen, T., Leiserson, C. and Rivest, R. (1992) Introduction to Algorithms, McGraw-Hill, New York.
[4] Fredman, M. and Tarjan, R. (1985) Fibonacci Heaps and Their Uses in Improved Network Optimization Algorithms. J. ACM, 34, 3, pp. 596-615.

Postmodernistic Information Concerning Autopoiesis out of Chaos

Ante Lauc
Faculty of Economy, University of Osijek
Gajev trg 7, Osijek, Croatia
e-mail: alauc@public.srce.hr

Keywords: human development, organizational development, economic development, cultural development, autopoiesis, self-organization

Edited by: Anton P. Železnikar
Received: November 30, 1995 Revised: May 14, 1996 Accepted: May 22, 1996

Autopoiesis as a new concept of interpretation of reality requires understanding of the four Aristotelian causes. The next step is understanding of four types of development: human, organizational, economic and cultural. The key category is the understanding of freedom, which for the author means high efficiency and high democracy with minimal pollution of nature. All questions and answers must be "closed"; this can be done through permanent restructuring and reprocessing.

1 Introduction

This short article presents a summary of a theory and methodology of postmodern organization (PMO¹), including a sketch of the technology. It seems that the most important problem in the world is the lack of development: there is an enormous demand for it, but the supply is too short. Four questions in the best philosophical tradition of Aristoteles would be helpful to overcome the problem of demand:

1. What is causa prima finalis? (What is the objective function of the subject of development?)
2. What is causa prima efficiens? (What are the causes of growth and development of the subject of development?)
3. What is causa prima materialis? (What are the means of growth and development?)
4. What is causa prima formalis? (What are the rules of the configuration and the organization of the subject of development?)

¹ A dictionary of abbreviations is given at the end of the paper.

The next phase is to find answers to the questions of supply. Through theoretical and empirical research it is suggested to look for the answers in the following fields:

1. Human development (HD),
2. Organizational development (OD),
3. Economic development (ED), and
4. Cultural development (CD).

The purpose of this article is to apply the theory of self-organization and autopoietic theory by combining the four questions and the four answers. In the narrow sense, PMO is the problem of causa formalis and falls inside organizational development. But in the broader sense, with a postmodern approach, we can analyze all questions and all possible answers, using synergistic force (H. Haken [19]).

The author of the theory of self-organization, based on the paradigm of non-linearity, irreversibility, and dependence, is Ilya Prigogine, winner of the 1977 Nobel Prize for Chemistry. Before him, man was seen in opposition to nature (Monod and Jacob).
Such philosophy is the result of Newton's paradigm, based on the mathematical assumptions of linearity, reversibility, and independence of relations. It started with Galileo, and with F. Bacon and Descartes as the spiritual leaders of the Modern Age. While Aristoteles' view was based on perception, the modern age was based on cognition. The postmodern age includes senses, feeling, thinking and doing, with permanent restructuring and reprocessing within and between subjects. With deeper understanding, beneath this antagonistic relationship (Galileo as negation of Aristoteles) there is a harmonic relation: Galileo as negation of negation, a renegation, of Aristoteles. The best modern scientists were inspired by the Cartesian tradition, with only a few who tried to interpret reality according to the philosophy of Leibniz and Hegel, the opponents of the Newtonian paradigm.

Due to Prigogine's research [31] we now have a new model of nature that could be in harmony with the human being and society. The key idea is freedom. Hegel's interpretation [20] of nature is a challenge, because it helps in explaining non-linearity, irreversibility and interdependence of relations. Aristoteles' philosophy, enriched with the discoveries of Heisenberg's uncertainty, Bohr's complementarity and Prigogine's non-linearity and irreversibility, demonstrates the spiral movement of postmodern science. The author is not competent to interpret Aristoteles' and Hegel's philosophy of nature, but is only a believer, owing to intuition in the Bergson tradition. It is not pure chance that scientists within the Prigogine paradigm quote Hegel, while scientists within the Newtonian paradigm ignore him. It would require special research to discover the relationship between religion, philosophy and science. The hypothesis behind this article is that autopoietic theory requires a better knowledge of religion, philosophy and the best scientific discoveries. Ignoring this, we fall into the Newtonian paradigm, although we need not be conscious of that.

The modern paradigm is allopoietic (defined from outside) and defines its border in artificial nature (buildings, machines, economic products, and political relationships). With the postmodern paradigm, owing to the autopoietic approach (defined from inside), we have the capability to design products and relations that will be in greater harmony with man and nature. Economy and sociology are limited by the idea of equilibrium. Prigogine, with the discovery of the behavior of nonequilibrium and nonlinear systems, is a predecessor of the postmodern age. Nature is rarely in equilibrium. Life is now interpreted as a self-organized phenomenon that is far from equilibrium. The social sciences ignore this new paradigm, resulting in low quality of products, low productivity, stagflation, wars, and environmental problems.

2 Development as Autopoiesis and Self-Organization

From the author's point of view, autopoiesis and self-organization are the bridge between causality and finality, necessity and freedom, nature and man. Without them we are forced to live in less efficient and less democratic organizations. High efficiency and high democracy with minimal pollution of nature could be the way to freedom. With deeper understanding of these three goals we could overcome the problems of sustained development, stagflation, and the environment, because knowledge of the purpose should eliminate much entropy. The problem is how to design decision making so as to reach at least two goals (efficiency and humanness). This is the problem related to causa prima finalis.
Before the postmodern age the humanistic and natural sciences were separated, and the objective function was split between developing efficiency and developing humanness. Human and cultural development prefer the second objective, while organizational and economic development prefer the first one. There is very little research on the correlation of these two variables; probably the reason is the lack of a postmodern paradigm in contemporary social sciences. The author's hope is to help reinterpret the viewpoint far from equilibrium: from the modern paradigm toward the postmodern paradigm, from chaos and necessity as opposites to freedom as a renegation of chaos and necessity. This prevalent, traditional reasoning is the result of the division of labor (ignoring Durkheim [25]), the lack of understanding of wholeness (ignoring Hegel, Capra, Wilber, Prigogine, Jantsch), the lack of emotional involvement (ignoring Freud, Jung, Reich, Plutchik, Kelley), the lack of lateral thinking, the lack of mutual understanding, of being in the other's shoes (Vanek [40]), and, as a synthesis, the lack of knowledge and understanding of development in general. Autopoietic theory enables us to develop decision making that will harmonize these two either opposite or complementary approaches (technical versus humanistic, economic versus political, male versus female, man versus nature, discursive versus intuitive).

3 Development and Causality

Causa prima efficiens has its roots in Darwin's law of survival of the fittest. It is a necessary condition; a sufficient one could be the will to power (Nietzsche). How to achieve a greater rate of return, or how to survive? Behind this behavior is either a natural or a social law of necessity. Behavior determined only by these laws is near equilibrium. It is based, in some way, on the second law of thermodynamics, and the result will be according to the prognosis of maximal entropy and the elimination of life. In the optimistic view, life is the only chance in the universe, a pure gamble of roulette (Monod [31]). Owing to the postmodern paradigm, this is far from truth and freedom. Such behavior "near equilibrium" is demonstrated in the literature of the 20th century, and every sensitive human being is conscious of it in everyday life. Contemporary scientists are very often afraid to look for the telos, and for many of them teleology is too similar to theology. The result is that we are too near the equilibrium, defined by chaos and necessity, without an understanding of freedom as telos. If we are defined only by causality, we behave almost as spiritual animals in Hegel's sense. With a better understanding of the history of philosophy and of the development of the concept, we could reach a better understanding of the concept of development. Before Galileo there was a dominance of telos with poor understanding of causality. With Galileo, instead of why, the dominant question became how. It was a negation of finality (not a re-negation) and an affirmation of the understanding of causality. Descartes and Bacon developed the ontological frame for the industrial and scientific revolutions. Before Newton, as the "new Moses", nature and natural laws lay hidden in night, and God said: "Let Newton be". The world was changed and all was light (Pope), but we have lost the way. As Einstein said, we do not know where we are going, but we will reach there soon [31].

4 Human Development

In human development it is suggested to analyze human needs, as A. Maslow has proposed for the understanding of human nature [27].
Motivation depends upon the probability of success and the amount of benefit. The background is the conative aspect of personality. It is this writer's experience that Plutchik's theory of emotion [30] is helpful as an interface between nature and man, biology and psychology. The key problem is to overcome the need for security and the need for prestige (fight or flight) through self-actualization, which is in fact a need for freedom. In Plutchik's model, self-actualization is more probable with orientation and exploration above 50%. Maslow's basic position is that when things are seen as a whole, the prospects are far more cheering than they look at first sight. Pessimism is the outcome of "partialism". Whoever is nearer to the fifth level of needs is more capable of autopoiesis. Self-actualisers, says Maslow, are capable of more love than others, as well as of deeper relationships. They have a clear and pragmatic sense of the difference between good and evil, although they are more capable of tolerance of other people's lack of it. They are an example of behavior far from equilibrium in the Prigoginian sense. Their "peak" experiences—the oceanic feeling, the sense of limitless horizons opening up to the vision—are very similar to the Chinese experience of Tao or the Indian experience of samadhi. Maslow saw human nature as naturally self-transcending, as permanent re-negation, re-structuring and reprocessing. Freud remains within the Newtonian paradigm, because he is unable to see far beyond ego-satisfaction, which is probably nearer to equilibrium. Rogers' research [33], especially the coefficients of correlation before and after his psychotherapy, demonstrates that the healthier a person is, the better self-organized that person is. Maslow and Rogers did not use the terms self-organization or autopoiesis, but they are implicit in their theories of personality. Some psychologists use these terms, but their approach is either a negation or an ignorance of the postmodern paradigm. The criterion for differentiation is related to causa prima finalis. Skinner's experiments with operant conditioning helped very much in the partial understanding of human nature, but when he tried to extrapolate his empirical work he reached the wrong conclusion—that there is no need for freedom and dignity. The discussion of his book "Beyond Freedom and Dignity", especially with C. Rogers, is well known. The author's experience is that the best criticism comes from within one's own paradigm. The author asked Skinner: "Is freedom the best reinforcer?" His answer (it could be, if somebody is locked in a room) proved that his understanding of freedom was very narrow, almost near equilibrium; with such constraints in his cognitive map, his conclusion is logical. The general hypothesis for the understanding of human development is that a free and creative person thinks as s/he feels, communicates as s/he thinks, and does as s/he says. These transformations of feeling into thinking, of thinking into communication, and of communication into deeds are ignored in modern psychological research. Without understanding and progress in 3C (from "connectivity" to "compatibility" and "coordination") in accordance with the new paradigm, we will remain closer to equilibrium. Chuck Kelley's research [23] on armor and the flow of the radix is promising. For a more precise understanding of dialectical thinking through renegations, at least in decision-making—which should be more efficient, effective and democratic—we have to know that the greatest constraint is the lack of knowledge.
The triadic classification of knowledge (technical, economic, and psychosociological) is necessary but not sufficient for designing professional development. The author's research [25] has discovered very low correlations between knowledge levels and the intensity of problems. Most people do not know that they do not know, especially about their own emotions. How does one overcome fear, anger, sorrow? We propose to start learning with the technical disciplines (about material, energy, equipment, process, product), to transform technical categories into economic ones (cost-benefit analysis, marginal productivity, the law of fair exchange), and economic ones into humanistic ones (the need for work and love; how to achieve self-actualization and peak experiences). The greatest problems are in group dynamics, from family development and teamwork to self-organization in any organization. The author's theoretical and practical work on the theory of interaction can be summarized in three basic laws: X, the law of the transfer of information; Y, the law of relative deprivation; and Z, the law of reciprocal behavior. By the first law we have to define the elements of the sets of information on feelings, thoughts, problems, know-how, and solutions, ranging from an individual to the group, with all possibilities. Physical units (a human being or a group) have arithmetic speed (n + 1) in matching and ordering, while the images of these, owing to interactions, have geometric speed (n² + 1). The only rational solution is through re-negation, which minimizes the entropy, with on-line feedback whenever possible. Thinking aloud is a negative feedback which minimizes entropy too. Anonymous questionnaires are the second best solution, with negative feedback and off-line regulation. They are very often necessary for self-learning as the first stage of socialization in the sense of the postmodern age. The law of relative deprivation explains the emotional background of problems within and between human beings. An individual mainly compares himself with better achievers in the relevant economic and political values (money and/or status). The result is relative deprivation—either envy or a sense of injustice, depending on the relations of productivity and consumption. Self-actualisers are free of this law. The law of reciprocal behavior explains actual behavior by relating expectation and realization. There are three categories on the qualitative level (help—"+", indifference—"0", and obstruction—"−") and enormous quantitative differences. On the qualitative level there are nine combinations, of which the three symmetrical ones are the most frequent. The anticipation and satisfaction of help is called familiarity, because it is initiated in the family, but in life outside the family it has negative connotations. It is an informal and infantile level of autopoiesis. Owing to the first and second laws (X and Y), the next phase is an exchange of negative (−) instead of positive (+). Such a case is called revenge (an eye for an eye, a tooth for a tooth, as in the Old Testament). The final stage in the natural order is nearer to equilibrium (neither exchanges of "+" nor of "−", but an exchange of zeros) and returns to anomie. Such spiritual death sooner or later leads to physical illness and death, as a cumulation of errors in work and interactions. The postmodern paradigm opens enormous possibilities for human development (HD).
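A small numerical sketch may make the bookkeeping behind laws X and Z concrete (Python; the pair-counting rule and the labels are only an illustration of the idea above, not the author's formalism). It contrasts the arithmetic growth of physical units with the roughly geometric growth of their mutual images, and enumerates the nine qualitative combinations of expectation and realization:

    from itertools import product

    # Law X: n physical units grow arithmetically (the article's n + 1),
    # while their mutual images grow roughly geometrically; counting one
    # image per ordered pair of distinct units is an assumption made here.
    for n in (2, 5, 10, 20):
        print(f"n = {n:2d}: units ~ {n + 1:3d}, mutual images ~ {n * (n - 1):4d}")

    # Law Z: nine qualitative combinations of expected and realized behavior.
    symbols = ("+", "0", "-")   # help, indifference, obstruction
    names = {"++": "familiarity", "--": "revenge", "00": "anomie"}
    for expected, realized in product(symbols, repeat=2):
        kind = "symmetric" if expected == realized else "asymmetric"
        print(expected, realized, kind, names.get(expected + realized, ""))

The three symmetric combinations are exactly the familiarity, revenge and anomie stages described above.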
The author suggests that with a better understanding of religion, philosophy and the modern discoveries of the natural sciences, as a stereoscopic view, we could understand the problems of HD much better and define more creative questions and answers. The difference between Skinner and Maslow can be explained as behavior near and far from equilibrium. The first points out causality; the second emphasizes finality. With OD, ED and CD we will synthesize them. Without an autopoietic interpretation this seems impossible.

5 Organizational Development

Beneath the area of organizational development there are natural laws, discovered in physics, chemistry and biology, that find their synthesis in technological development. In interaction with the achievements of human development we could design postmodern organizations that could have a higher rate of growth owing to the alliance of man and nature. Ergonomics is an attempt to understand the interactions of man and machine, but without much insight into the enormous potential of the human being and nature. Shigeo Shingo [35, 36] has proved that "improvement engineers" (in Maslow's terminology, self-actualisers) demonstrate the essence of his theory: zero quality control as the production system, manufacturing with no defects. To achieve this ideal in many Japanese firms, two things were necessary:

1. Source inspection—looking at errors before they become defects, and either automatically adjusting the error condition (autopoiesis) to prevent defect production or stopping the system, because that is cheaper than producing defects.

2. Applying poka-yoke devices, which give immediate feedback where we can get at the root cause of the problem and prevent the production of a defect.

Companies like Toyota Motors have eliminated the need for statistical quality control and, like many others, have almost zero-defect production. It is really difficult to comprehend the idea of "zero defects". When the author asked Shingo how much time is needed to teach workers zero-defect production, his answer was—three days. The world is abundant with teachers; without the postmodern paradigm, there is a scarcity of pupils. Total quality management points to the learner instead of the teacher, the free worker instead of the manager. Shingo's next enormous contribution is the Single Minute Exchange of Die (SMED) system, which enabled a revolution in manufacturing and can be generalized, as can his discovery in quality policy. Again Shingo teaches us that the real problems are our mental barriers. By separating the inside exchange of die (IED) from the outside exchange of die (OED) he reduced setups from hours to minutes and seconds. With Just in Time (JIT) and SMED, he and his collaborators at Toyota Motors sped up the production flow by 1000% in comparison with American and European firms. Shingo is the best example of the autopoietic paradigm in quality and productivity.
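Shingo's SMED arithmetic can be illustrated with a minimal sketch (Python; the operation names and times are invented for illustration and are not Shingo's data). Separating OED operations, which can be performed while the machine runs, from IED operations, which require a stopped machine, shrinks the downtime to the IED total alone:

    # Each setup operation: (name, minutes, can it be done while running?).
    operations = [
        ("fetch next die",             15, True),    # OED: do while machine runs
        ("preheat next die",           20, True),    # OED
        ("unbolt and swap die",         6, False),   # IED: machine must be stopped
        ("adjust and try first piece",  4, False),   # IED
    ]

    total_minutes = sum(minutes for _, minutes, _ in operations)
    ied_minutes   = sum(minutes for _, minutes, runs in operations if not runs)

    print(f"naive setup (machine stopped for everything): {total_minutes} min")
    print(f"SMED setup (machine stopped only for IED):    {ied_minutes} min")

In this invented example the stoppage falls from 45 to 10 minutes; Shingo's point is that most of the reduction comes from reclassifying operations, not from working faster.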
Goethe said in the last century that we can do nothing but love those better than ourselves. Owing to the first and second laws (X and Y), the results are an order of magnitude worse. It seems that we like to be near equilibrium and behave according to the natural law of least resistance. If this is true, then the experiences of R. Fritz [16] could be a challenge. Contemporary organizations have failed to incorporate the human factor, and the result is the lack of OD. P. H. Mirvis, M. Beer and T. Mills [24] had the chance to promote OD, but without SO some of them abandoned it. Without the power of vision we stay near equilibrium. R. Merton, P. Blau, and W. Bennis with their criticism of bureaucracy; R. Likert with the model of a System 5 organization and the linking-pin model; C. Argyris with the integration of individual and organizational needs; Blake and Mouton with the managerial grid; M. Maccoby with the gamesman; M. Porter with competitive advantage; A. Toffler with the third wave; Ouchi with Theory Z; J. Naisbitt with megatrends—all are verified in Peters and Waterman's studies of excellent companies and in K. Blanchard and N. V. Peale's studies [7] of the power of ethical management. In theory and practice there is a gap between efficiency and humanness, and without the autopoietic approach it cannot be overcome. M. Zeleny, as author and editor of Human Management Systems, opens the sky with the autopoietic approach. All these and many other authors, with their propositions, should promote a vision of harmony and ordering between man and nature, emotional and intellectual aspects, human and physical capital, means and ends in organizations. Human development and organizational development cannot be resolved without an understanding of economic development.

6 Economic Development

The economy probably has a duty to keep us as far as possible from equilibrium. In economic development the function of the law of value in the Smith-Ricardo-Marx-Debreu tradition is the central point. Within the controversy of labor versus capital we will not be far from equilibrium. A. Marshall was the first to recognize that knowledge is the most powerful factor of production (as quoted by T. W. Schultz [34]), but it is still ignored in contemporary theories and practice of investment. In theory, economists try to reach a Pareto optimum without the necessary understanding of nature, the human being, and technology. Without better knowledge of the potential of resources (the rates of return of human and physical capital) it is impossible to achieve optimal allocation. Arrow was near the autopoietic paradigm when he concluded: "the use of knowledge in productive activities obeys the law of increasing returns" (Arrow [2], p. 164). With his remark that neither the demand for nor the supply of knowledge satisfies the conditions of a competitive economy (p. 192), he remained within the Newtonian paradigm. A. Lewis was taught by his mother to believe that anything that the developed can do, the developing can do too. In 1955 he wrote a thirty-five page chapter on "Knowledge" [29], and he did not agree with the enormous literature initiated by G. Becker about human capital, which did not differentiate the private and social rates of return on manpower. T. W. Schultz even published the book Investing in People, but his theoretical background is too narrow to overcome the Newtonian paradigm of equilibrium. The economics of human capital is very near equilibrium, and it could be explained by Skinner's operant conditioning. The author has had a very sad experience, over 25 years, related to private versus social rates of return and to investment in human capital versus investment in physical capital. Owing to autopoietic theory there is a possibility of opening a new approach to the relationship between production and consumption. If we look back at the history of the theory of OD we can recognize the attempts to understand the organization as a coalition of individuals, where some of them are self-organized into subcoalitions (Barnard, 1938, Simon, 1947, Cyert and March, 1959 [24]). This was research into bargaining, neglecting the differentiation between above and beneath average in production, distribution, and consumption.
It was easier for this author to recognize the importance of this because in former Yugoslavia the positive correlations between production and distribution are lower, and they can even be negative. Here we have four basic categories:

1. Above-average workers and managers who receive above-average moral and/or material rewards (type A);

2. Persons whose output is above average and who receive below-average rewards for it (type B);

3. Persons whose output is below average and who receive above-average rewards for it (type C);

4. Workers who are below average in productivity and whose rewards are below average too (type D).

The hypothesis is that types A and D are in accordance with the law of value, while B and C diminish the correlation. A perfect correlation is impossible, because production is a complex problem, and only in the long run is there a perfect mapping and ordering between production, distribution, and consumption. If the probabilities (p) of all four categories are the same, then the correlation is zero. If p(B) + p(C) > p(A) + p(D), we have a negative correlation. It may seem impossible in a market economy, but in the postsocialist economies it is the cause of their troubles. The root is the ideology of egalitarianism, in fact a law of envy, an almost perfect self-organization of type C individuals. All human beings, on such a logistic curve as the best simulation of real behavior, can be differentiated as beneath or above it. The main proposition of this article is that type C, and the human beings who are above the curve, are much better self-organized. This is the differentia specifica of self-organization theory for social systems in comparison with natural systems. It is the law of necessity (causa prima efficiens) that forces type C to self-organize; without it they cannot survive. Type D is the most similar to Prigogine's dissipative structures in inorganic nature, where the input energy and the state far from equilibrium define the behavior of hypnons. They reach the so-called bifurcation point, where they choose either the B or the C type of behavior. If they work and live according to social norms they could reach the A phase, which could be in some way autopoiesis in the Maturana-Luhmann-Zeleny sense. Although there is autopoiesis on the biological level, on the social level it is very important to define precisely the necessary and sufficient conditions. It could be the opportunity for a reconciliation of causality and finality, materialism and idealism, science and philosophy, work and love, man and nature. It would be a short cut if the economy were our final point. Its role is to force us to go far from equilibrium, to go into conflict or competition on the market, fighting for consumers. Without an understanding of CD we will postpone our future. There is no synthesis between efficiency and humanness, ends and means, without understanding that the exchange of material goods and services is much easier than the fair exchange of nonmaterial products and services. To be healthy, happy, and free requires further effort. A cultural renegation is required to resolve the recognized problem, because it cannot be resolved within the economy.
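The correlation claim can be checked with a small numerical sketch (Python; the probabilities are invented). Coding production and reward as binary (above or below average), the phi coefficient over the four categories is zero when all categories are equally probable and negative when p(B) + p(C) > p(A) + p(D):

    import math

    def reward_correlation(pA, pB, pC, pD):
        """Phi coefficient between production (above average for A and B)
        and reward (above average for A and C) in the 2x2 table."""
        assert abs(pA + pB + pC + pD - 1.0) < 1e-9
        p_prod = pA + pB                  # P(production above average)
        p_rew = pA + pC                   # P(reward above average)
        return (pA * pD - pB * pC) / math.sqrt(
            p_prod * (1 - p_prod) * p_rew * (1 - p_rew))

    print(reward_correlation(0.25, 0.25, 0.25, 0.25))   # 0.0: equal probabilities
    print(reward_correlation(0.15, 0.35, 0.35, 0.15))   # -0.4: B and C dominate
    print(reward_correlation(0.35, 0.15, 0.15, 0.35))   # +0.4: law of value holds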
7 Cultural Development

Unless we invest much more in developing motivation (toward self-actualization), knowledge (in the technical, economic and humanistic fields), and self-organization within and between organizations, we will perpetuate a low quality of life. In cultural development we have to recognize moral problems (from Kant to Kohlberg and Rawls), legitimacy (from informal social norms to international law), political science (capitalist versus communist, left and right wings in political parties), ideology (from religion and philosophy to the science of science) and art as the final expression of the human being (the sense of life, work, love, and death as the greatest challenge). The bridge between the economy, as an eternal law of causality, and religion, philosophy, science, and art, as an eternal law of finality, could be built through G. Homans' theory of social exchange [22]. Although the theory is conditioned by Skinner's theory of operant behavior, his research and his interpretations of the social experiments of others are very important for the development of postmodernism in social science. It is not pure chance that even in sociobiology Wilson tried to synthesize Maslow and Homans. This author will try to do it inspired by the autopoietic paradigm. Homans' propositions are:

1. For all actions taken by persons, the more often a particular action of a person is rewarded, the more likely the person is to perform that action.

2. If in the past the occurrence of a particular stimulus, or set of stimuli, has been the occasion on which a person's action has been rewarded, then the more similar the present stimuli are to the past ones, the more likely the person is to perform the action, or some similar action, now.

3. The more valuable to a person is the result of his action, the more likely the action will be performed.

4. The more often in the recent past a person has received a particular reward, the less valuable any further unit of that reward becomes for him.

His final synthesis is: profit = reward − cost, and his best conclusion is that servus servorum has the greatest profit. From this conclusion, and from sociobiological research on altruistic behavior, we can deduce the relationship between efficiency (economy) and democracy (politics). Without an understanding of autopoiesis there is small probability of reaching it. There are enormous barriers in the behavior of human beings above and beneath the logistic curve. Familiarity is more probable above the curve and revenge beneath it. In the short run causality is stronger than finality. With a deeper understanding of the transformations within the human being (between emotion, cognition, communication, and doing) and between human beings (Homans' and the author's propositions), combined with technological and economic knowledge, we become more and more capable of creative and free (efficient and fair) solutions.

8 Theories of Self-Organization and Autopoiesis as a Bridge between Causality and Finality

Theories of self-organization and autopoiesis help to harmonize all elements and their relations through their mapping and ordering with the help of the suggested criteria. There is no reason for low efficiency and fairness at this level of human, organizational, economic, and cultural development. The roots are either a lack of morality (too much selfishness) or a lack of knowledge (too much ignorance). The important question is the degree of correlation between these variables. The author's research [25] suggests a high positive relationship between them, which could be the framework for a better policy. Causa prima materialis enables the analysis of all the relevant components (statistical units) and variables explaining the objective function, whether simple or complex.
In human development, this is the primary level of motivation, transformed into a level of knowledge relevant for decision making and decision implementing. We can classify all sorts of knowledge into three branches: technical, economic and psychosocial knowledge. There is an inverse relationship between the intensity of knowledge thus classified and the intensity of problems so classified in any organizational unit. Because of that, the postmodern paradigm has an enormous chance of development and application. Besides that, it is of enormous importance to apply social psychology, especially group dynamics (G. Homans, propositions 1 through 4) and the rules of social exchange (laws X, Y, Z). In the domain of organization there is the problem of transforming material, energy, and information into a high-quality product, with zero defects and recycled waste, in the most efficient way (for example Just in Time, SMED, Kanban, CIM, etc.). On the economic level the most important problem is the optimal allocation of resources. Moreover, if G. Becker and his followers' conclusion [6] is right, we should invest 50% in human capital. There is no organization where such an allocation is applied, although the most developed nations and organizations invest more and more in human capital. In the sphere of culture there is the problem of contents (the need for good versus evil, the need for truth versus fiction, the need for beauty versus ugliness and kitsch). In a constricted sense it is saturated with feeling, thinking and free interaction and (non)verbal communication. In a broader view it is the collection of all the relevant components and relations that are the "raw materials" for autopoiesis. The final question is about causa prima formalis. It means precisely defining the answers to the following questions: who, why, to whom, what, how, when, where, how much, at which cost, by whom, with what, and at which price. These twelve questions define different combinations, and with an objective function, constraints, a database, and a deterministic or heuristic algorithm, it is possible to optimize a solution that can be combined with self-empowerment in the short and/or long run. In the field of human development, in the final sense, Heraclitus stated that men are mortal gods and gods are immortal men. Charismatic persons (they think as they feel, talk as they think, do as they say) are near self-organization and autopoiesis, but too strong for their social environment. With the sincere feedback of the members of their groups they are more capable of modifying behavior than the others in the group. For self-organization, everybody is mature who is ready to overcome the need for power and prestige in Maslow's sense, who has normal and average capabilities for learning, and who is ready to confront, in an efficient and fair way, the challenges of the environment, especially the market requirements of quality and productivity and the political requirement of universal human rights. In the domain of organization the chance for self-organization and autopoiesis is greatest in conditions of high technology, but it is possible anywhere. The flow of material, layout, the optimization of design through CAD, the optimization of processes through CAM and complete optimization through CIM are well known, but without knowledge about autopoiesis, as is obvious, they are expensive and less efficient and humane. We must not forget that only the operation adds surplus value, while transport, control, and inventory reduce profits. The postmodern paradigm has the greatest possibilities in the economy.
If we agree that the economy is the optimal allocation of resources, the main problem is the rate of return from man and machine. The Chicago school has developed a theory of similar rates of return, while our research has shown that the return on investment in the human being is 15-25 times greater than on investment in physical assets. Although there are firms (Intel, Motorola, IBM, etc.) and states (Japan) where investments in human power are greater than elsewhere, nowhere are they even close to the theory of the Chicago school. When the author asked M. Friedman where his theory of freedom is best applied, his answer was—Hong Kong. Will Hong Kong fight mainland China; will freedom be stronger than power? If the citizens of Hong Kong fully apply autopoiesis and the postmodern paradigm, they could not only survive but introduce freedom into communist China. The author cannot know whether this will happen, but it is a challenge for all of us. In an allopoietic society there is a lack of self-organization. Human beings are not subjects. The postsocialist countries are the best demonstration of the lack of human development. It is not important what the cause is—a lack of motivation for development or a lack of knowledge, capital or market. Postmodern theory opens new perspectives. Research has shown that more capable persons and firms impute their failures to themselves, while less capable ones accuse others. The postmodern economy enables a relationship between producer and consumer in a new way (Toffler's conproducers). On the cultural level (the policy of a firm or state) there are constraints on one side and limitless opportunities on the other. With more ethics (Blanchard, Peale), knowledge and beauty embodied in products and services there is more profit, democracy and freedom. Causa formalis enables the greatest surplus value, because every detail of the human, technical, economic and social process is under control. Whatever pollutes efficiency and fairness is recycled as soon as possible. Self-organization and autopoiesis will be developed and applied more easily and better with higher levels of motivation, knowledge, group dynamics, technology and organizational culture. If these factors are at a lower level, behavior will be determined more by the law of causality and less by the law of finality, more by the need for survival and less by the need for freedom. Self-organization is a stochastic process, and it is necessary to monitor it, from emotion to quality control, but not in a Big-Brother, Orwellian way (1984). We have to know that the Peter principle and Murphy's law are as unavoidable as the second law of thermodynamics. Owing to the described discoveries we can fight them through the design of irreversibility. As there are managers who do not know that they know, there are managers who develop and apply the postmodern approach unaware of it. The theory and methodology of autopoiesis will help them to multiply the effects. Through self-organization we will achieve, in a more natural way, the third wave of A. Toffler [38] and all the other great ideas of great thinkers. Without an understanding of the postmodern age we will have a gap between theory and practice, ends and means, causality and finality. It depends upon us.

References

[1] Ackoff, R. (1962), Scientific Method: Optimizing Applied Research Decisions, J. Wiley.
[2] Arrow, K. J. (1985), Production and Capital, vol. 5, Collected Papers of K. J. Arrow, The Belknap Press of Harvard University, Cambridge.
[3] Arrow, K. J. (1974), The Limits of Organization, W. W. Norton, New York.
[4] Zeleny, M. (Ed.) (1981), Autopoiesis: A Theory of Living Organization, North Holland, New York, Oxford.
[5] Teubner, G. (Ed.) (1988), Autopoietic Law: A New Approach to Law and Society, Walter de Gruyter, Berlin, New York.
[6] Becker, G. (1971), Economic Theory, A. Knopf, New York.
[7] Blanchard, K., Peale, N. (1988), The Power of Ethical Management, W. Morrow and Comp., New York.
[8] Boulding, K. (1956), General Systems Theory—The Skeleton of Science, Management Science, April 1956.
[9] Cyert, R., March, J. G. (1963), A Behavioral Theory of the Firm, Prentice-Hall, Englewood Cliffs, N.J.
[10] De Bono, E. (1971), Lateral Thinking for Management, McGraw-Hill.
[11] Debreu, G. (1959), Theory of Value, J. Wiley, New York.
[12] Drucker, P. (1989), The New Realities, Mandarin Paperbacks, London.
[13] Faber, M., Proops, J. L. R. (1985), Interdisciplinary Research Between Economists and Physical Scientists: Retrospect and Prospect, KYKLOS, Vol. 38, p. 599-616.
[14] Forrester, J. (1971), World Dynamics, Wright-Allen Press, Cambridge.
[15] Friedman, M. (1969), Capitalism and Freedom, Chicago.
[16] Fritz, R. (1984), The Path of Least Resistance, Stillpoint Publishing, Salem.
[17] Fromm, E. (1962), The Art of Loving, Harper and Row Publ., New York.
[18] Habermas, J. (1988), Filozofski diskurs moderne, Globus, Zagreb.
[19] Haken, H. (1983), Synergetics, an Introduction, Springer.
[20] Hegel, G. W. F. (1965), Encyclopedia of Philosophical Sciences (in Croatian), Sarajevo.
[21] Hickman, S., Silva, M. (1988), The Future 500, Unwin Hyman Ltd., London.
[22] Homans, G. (1961), Social Behavior: Its Elementary Forms, New York.
[23] Kelley, C. (1974), Education in Feeling and Purpose, Radix Inst., Ojai, CA.
[24] Lauc, A. (1992), Development of Croatia from Chaos to Autopoiesis (in Croatian), Ekonomski vjesnik, Osijek.
[25] Legradic, R., Lauc, A. (1977), Theory of Dialectics and Social Practice (in Croatian), Osijek.
[26] Luhmann, N. (1984), Systems Theory (in Croatian), Globus, Zagreb.
[27] Maslow, A. (1982), Motivation and Personality (in Croatian), Prosveta, Beograd.
[28] Maturana, H., Varela, F. (1980), Autopoiesis and Cognition, D. Reidel Publ. Co., Boston.
[29] Meier, G., Seers, D. (Eds.) (1984), Pioneers in Development, Oxford University Press, Oxford.
[30] Plutchik, R. (1962), The Emotions: Facts, Theories and a New Model.
[31] Prigogine, I., Stengers, I. (1984), Order Out of Chaos, New Science Library, London.
[32] Pusic, E. (1989), Social Regulation (in Croatian), Globus, Zagreb.
[33] Rogers, C. (1972), On Becoming a Person.
[34] Schultz, T. (1961), Investment in Human Capital, American Economic Review, 51, March 1961, p. 1-16.
[35] Shingo, S. (1986), Zero Quality Control: Source Inspection and the Poka-yoke System, Productivity Press, Cambridge, Mass.
[36] Shingo, S. (1986), A Revolution in Manufacturing: The SMED System, Productivity Press, Cambridge, MA.
[37] Simon, H. A. (1965), The New Science of Management Decision, Harper & Row, New York.
[38] Toffler, A. (1980), The Third Wave (in Croatian), Beograd.
[39] Toffler, A. (1985), The Adaptive Corporation, Pan Books, London.
[40] Vanek, J. (1970), The General Theory of Labor-Managed Market Economies, Cornell University Press.
[41] Ng, Yew-Kwang (1988), Economic Efficiency Versus Egalitarian Rights, KYKLOS, Vol. 41, Fasc. 2, p. 215-237.
[42] Železnikar, A. P. (1990), On the Way to Information, The Slovene Society Informatika, Ljubljana.
A Dictionary of Abbreviations

A     Above-average workers and managers who receive above-average moral and/or material rewards
B     Persons whose output is above average and who receive below-average rewards for it
CD    Cultural development
C     Persons whose output is below average and who receive above-average rewards for it
D     Workers who are below average in productivity and whose rewards are below average too
ED    Economic development
HD    Human development
IED   Inside exchange of die
JIT   Just in time
OD    Organizational development
OED   Outside exchange of die
PMO   Postmodern organization
SMED  Single minute exchange of die
X     The law of the transfer of information
Y     The law of relative deprivation
Z     The law of reciprocal behavior

Development of Expert Systems for Nuclear Power Plants in Korea

Se Woo Cheon
Korea Atomic Energy Research Institute
Yusong P.O. Box 105, Taejon 305-600, Korea
Phone: +82 42 868 2941, Fax: +82 42 868 8357
E-mail: swcheon@nanum.kaeri.re.kr

Keywords: survey, expert systems, nuclear power plants, Korea

Edited by: Matjaž Gams
Received: March 17, 1996  Revised: April 12, 1996  Accepted: May 14, 1996

This paper presents a survey of expert system applications for nuclear power plants in Korea. Expert systems have been applied to various fields of nuclear power plants since the mid-1980s. The fields of application include accident diagnosis, failure diagnosis, alarm processing, fuel shuffling, and so forth. Most of the expert systems are rule-based and implemented using artificial intelligence languages or expert system tools. Some applications use advanced techniques, such as numerical simulation models, model-based reasoning, artificial neural networks, and fuzzy logic.

1 Introduction

Expert systems are an emerging technology for organizing knowledge and applying inference that may achieve performance comparable to human experts in well-defined narrow domains. Nuclear power plants are so large in scale and complex in function that the information from the various local fields may be excessive. Thus, plant operators may not be able to process it properly without computerized support. To help operators effectively maintain plant safety and enhance plant availability, considerable studies have been performed to develop operator support systems. The Three Mile Island (TMI) Unit-2 accident in 1979 played a significant role in the development and implementation of such systems. Expert system techniques have the potential to make a significant contribution to the reliable and safe operation of nuclear power plants. The capabilities and benefits of expert systems and their potential for operator support systems were demonstrated by the development of accident diagnosis expert systems such as REACTOR (Nelson 1982) and DISKET (Yokobayashi 1986). The objective of REACTOR is to monitor a nuclear reactor facility, detect deviations from normal operating conditions, determine the significance of the situation, and recommend an appropriate response. DISKET can identify the cause and type of a plant disturbance using its knowledge base and the on-line data from a plant. In Korea, various expert systems have been developed since the mid-1980s. This paper surveys the research activities undertaken in Korea on expert systems for nuclear power plants.
These activities (up to eighteen cases) can be classified into:
- accident and failure diagnoses (four cases),
- procedure guidance (three cases),
- alarm processing (two cases),
- design aid (two cases),
- fuel shuffling (two cases),
- material flaw inspection (two cases),
- integrated operator aid (one case), and
- others (two cases).

The techniques employed in these systems include traditional rule-based deduction, model-based reasoning, algorithms for search and optimization, numerical simulation models, fuzzy logic, and embedded neural networks. Most of the activities have been carried out at the Korea Atomic Energy Research Institute (KAERI), the Korea Advanced Institute of Science and Technology (KAIST) and the Korea Electric Power Corporation (KEPCO).

2 Expert System Applications to Nuclear Power Plants in Korea

2.1 Accident and Failure Diagnoses

2.1.1 Expert systems for accident diagnosis

A prototype expert system (Choi et al. 1988) was developed for overall accident diagnosis in a plant. For better performance under a dynamic plant state, the system can modify accident identification rules dynamically while plant parameters and conditions are varying. Also, the system uses calculated performance indices for each accident. The performance indices were obtained from the results of transient-analysis numerical packages and from data logs of actual plant parameters. A hybrid knowledge-based expert system, HYPOSS (Yang & Chang 1989), was developed based on the above work. This system uses four types of knowledge, i.e., structural, functional, behavioural and heuristic knowledge, according to the diagnostic stages. The structural primitives include components, pipe lines and equipment. The functional knowledge has five types: power control, pressure control, inventory control, flow rate control and temperature control. HYPOSS uses governing equations, such as mass and energy balances, to represent the behavioural knowledge. This knowledge is used for validating the diagnostic results. The above two systems were implemented on SUN workstations using Prolog.

2.1.2 NSSS-DS: an expert system for failure diagnosis of primary side systems

A prototype expert system (NSSS-DS) (Cheon et al. 1992a) was developed for the diagnosis of primary side failures in the Kori Unit 2 nuclear power plant. The system was implemented on an IBM-compatible personal computer (PC) using the Prolog and C languages. The target systems include a rod control system, reactor coolant pumps (RCPs) and a pressurizer system. In NSSS-DS, the diagnostic strategies are broadly classified into electrical (i.e., the rod control system) and mechanical (i.e., the RCPs and the pressurizer) system diagnoses. The diagnostic symptoms fall into two groups: obvious and non-obvious symptoms. The obvious symptoms, i.e., fired alarms, are used for finding a primary causal alarm, automatic actions, and probable failure modes. The non-obvious symptoms are instrument readings, parameter trends, plant computer signals, and valve lineup. These symptoms are required to diagnose probable causes and to provide operators with emergency actions and follow-up treatments. The system employs primarily rule-based deduction. In the diagnosis of the rod control system, model-based reasoning techniques are used for testing the performance of logic cards. The system can find a faulty card by simulating the Boolean equations for the card.
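The card-testing idea can be sketched as follows (Python; the Boolean equations and test vectors are invented and are not those of NSSS-DS). The expected outputs are computed by simulating the card's Boolean equations; a card whose observed outputs deviate from the simulation is flagged as faulty:

    # Hypothetical logic card: two Boolean equations over three inputs.
    def card_model(a, b, c):
        out1 = (a and b) or not c        # assumed equation for output 1
        out2 = a != c                    # assumed equation for output 2 (XOR)
        return out1, out2

    test_vectors = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

    def diagnose(observed_outputs):
        """Compare observed outputs against the simulated card model."""
        for inputs, observed in zip(test_vectors, observed_outputs):
            expected = card_model(*map(bool, inputs))
            if tuple(observed) != expected:
                return f"faulty card: inputs {inputs} gave {observed}, expected {expected}"
        return "card behaves according to its Boolean model"

    healthy = [card_model(*map(bool, v)) for v in test_vectors]
    print(diagnose(healthy))
    broken = healthy[:]
    broken[0] = (not healthy[0][0], healthy[0][1])   # inject a stuck output
    print(diagnose(broken))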
The knowledge base consists of interlock action rules, functional logic rules, alarm processing frames, cause inference rules, and follow-up treatment rules.

2.1.3 An expert system for the rod consolidation process

Rod consolidation is an advanced technique to reduce the storage space occupied by spent fuels. The process separates bearing components and non-fuel material from the spent fuel assemblies. A prototype expert system was developed using the Nexpert/Object expert system tool to aid operators in the rod consolidation process (Kim et al. 1993a). The objectives of the system are:
- to report the status and history of the fuel material,
- to diagnose functional and equipment failures, and
- to recommend corrective actions to operators.

In the system, the knowledge base consists of three database groups and sixty production rules, and it employs object-oriented techniques.

2.2 Procedure Guidance

A typical nuclear power plant uses thousands of operating procedures, which are intended to cover all likely plant scenarios. The structure of traditional written emergency operating procedures (EOPs) is normally step and instruction based. That is, a procedure consists of a set of steps, each step consisting of one or more instructions to the operator about checks to do or actions to take. EOPs provide diagnostic treatments based on written logic steps, which are voluminous and complicated for effective reference under high-stress situations. Therefore, the written form of procedures imposes evident methodological difficulties on representing and using information.

2.2.1 ESEOP: an expert system for emergency operating procedures

An expert system for emergency operating procedures, ESEOP (Cheon et al. 1992b), was developed for the tracking of EOPs. The objectives of the system are to help operators verify plant status and take appropriate actions in emergencies. The system was implemented on an IBM-compatible PC using the Prolog and C languages. ESEOP was designed to preserve the form and structure of the actual written procedures as much as possible. The written procedures already include many logical connectives, such as IF, THEN, AND, OR, and NOT. Thus, the knowledge representation for rules meets the requirement to maintain the form of the actual written procedures.

2.2.2 ERPES: an emergency response procedure expert system

An emergency response procedure expert system, ERPES (Lee et al. 1991), was developed for the tracking of EOPs. This system was implemented on a SUN workstation using the LASER expert system tool (supplied as C libraries) and X libraries. The system can generate real-time messages to operators using the on-line signals from a compact nuclear simulator through a data acquisition system.

2.2.3 COSMOS: a computerized success path monitoring system

A computerized success path monitoring system, COSMOS (Jeong et al. 1992), was developed for the tracking of EOPs. The system was implemented on a SUN workstation using Prolog and X libraries. The feature of COSMOS is the incorporation of critical safety functions (CSFs) and success paths. CSFs are a set of functions that ensure the integrity of the physical barriers against the release of radioactive material. A success path is a combination of actions, associated with automatic or manual controls of components and/or systems, required for restoring or controlling CSFs. Operators are required to restore the challenged CSFs and the success paths in a failed system. COSMOS can provide the operators with emergency actions, challenged CSFs and success paths.
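The preservation of the written IF/THEN form, common to the systems above, can be sketched as follows (Python; the step texts, parameter names and setpoints are invented and are not from any actual EOP). Each step keeps its written form next to an executable condition, and a tracker walks the steps against live plant data:

    # Each EOP step: (written text, executable condition, instruction).
    steps = [
        ("IF pressurizer pressure < 130 bar THEN start safety injection",
         lambda p: p["pzr_pressure"] < 130, "start safety injection"),
        ("IF steam generator level < 10 % THEN start auxiliary feedwater",
         lambda p: p["sg_level"] < 10, "start auxiliary feedwater"),
    ]

    def track(plant):
        """Walk the procedure and report the instructions that apply."""
        for text, condition, instruction in steps:
            action = instruction if condition(plant) else "check satisfied, continue"
            print(f"{text}\n  -> {action}")

    track({"pzr_pressure": 120, "sg_level": 40})   # only the first step fires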
2.3 Alarm Processing

Alarm processing refers to filtering and suppressing unnecessary and potentially misleading alarms among the fired alarms. A typical nuclear power plant may have around two thousand alarms in the main control room, besides the display of analogue data. During plant transients, mode changes and component trips, hundreds of alarms may be activated in a short time. For example, in one simulated loss-of-coolant accident (LOCA), 500 lights went on or off within the first minute, and 800 in the second. The objectives of alarm processing systems are:
- to reduce the number of alarms presented,
- to organize the alarms so that they can be grouped in relation to a single root cause,
- to order the alarms within a group, and
- to display suitable alarm messages.

2.3.1 ESAPD: an expert system for alarm processing and diagnosis

An expert system for alarm processing and diagnosis, ESAPD (Cheon et al. 1993), was developed for multiple alarm processing and diagnosis in the Kori Unit 2 nuclear power plant. The objectives of ESAPD are to help operators identify a primary causal alarm and diagnose the plant malfunction quickly. The system was implemented on an IBM-compatible PC using the Prolog and C languages. ESAPD employs the following two-level diagnostic flow:
- the alarm processing stage for overall plant-wide diagnosis, where the system identifies a primary causal alarm and diagnoses possible failure modes, possible failed systems, and automatic interlock actions;
- the alarm diagnosis stage for specific root cause diagnosis, where the system provides operators with the possible specific causes of the primary causal alarm, emergency actions, and follow-up treatments.

The diagnostic method used in the system is a "hypothesize and test" paradigm. The knowledge base is represented with object-oriented concepts.

2.3.2 AFDS: an alarm filtering and diagnostic system

Based on the experience of implementing ESAPD, an alarm filtering and diagnostic system, AFDS (Choi & Chang 1994), was developed for the Young-gwang Unit 1 nuclear power plant. The system was implemented on a SUN workstation using the Prolog and C languages. The system uses alarm filtering meta rules, based on alarm relationship networks. For alarm diagnosis, the system applies fuzzy relations between an alarm and possible plant states to the knowledge base. In addition, the system uses compositional inference rules. For alarm prognosis, the system can predict the trends of critical plant parameters using signal prediction methods such as linear prediction, data smoothing, and fuzzy memberships.
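The filtering idea shared by ESAPD and AFDS can be sketched as follows (Python; the alarm names and the relationship network are invented). Alarms that are known consequences of another fired alarm are suppressed, leaving the candidates for the primary causal alarm:

    # Alarm relationship network: alarm -> alarms it is known to cause.
    causes = {
        "RCP trip":     ["low RCS flow", "low pump current"],
        "low RCS flow": ["reactor trip"],
    }

    def filter_alarms(fired):
        """Suppress alarms explained as consequences of other fired alarms."""
        fired = set(fired)
        explained = {c for a in fired for c in causes.get(a, []) if c in fired}
        return fired - explained       # candidates for the primary causal alarm

    print(filter_alarms(["RCP trip", "low RCS flow", "reactor trip"]))
    # Only "RCP trip" remains: the other two alarms are its consequences.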
The system consists of two parts: a synthesis part and a reliability analysis part. For a given input of design requirements, the system performs design synthesis. From the output of the design synthesis, the system runs the reliability analysis codes to identify design deficiencies. If the assessment of the reliability analysis does not satisfy the design requirements, the system can generate improved design changes.

2.4.2 SACOM: a task simulation analyzer with a cognitive operator model

Computer modelling of an operator's cognitive behaviour is a promising approach for human factors studies and for the assessment of man-machine interfaces (MMIs) in the main control rooms of nuclear power plants. A task simulation analyzer, SACOM (Cheon et al. 1995), is being developed to assess the conceptual design of MMIs and the task performance of operators (including both the physical and cognitive workloads). Using the system, a control room designer can assess the layout of control panels and the arrangement of MMI objects (i.e., displays, instrument readings and controllers) by simulating various task scenarios in the main control room. The system is implemented on an HP 9000 workstation using the G2 expert system tool. SACOM consists of three main modules: an operator model, an interaction analyzer, and a situation generator. The operator model can simulate the cognitive behaviour and responses of operators. The interaction analyzer supervises overall program control and assesses the results of the simulation. The situation generator provides the operator model with event scenarios and dynamic parameter data in a plant. A blackboard framework is employed as the inferential architecture of the model. The blackboard is a global database containing input data, partial solutions, and other data in various problem-solving states. It is composed of three parts, namely the blackboard itself with its internal structure, the agents working on the blackboard, and the controller governing the actions of the agents. An agent, which consists of a set of meta rules, is an independent module that contains the knowledge needed to solve the problem. Agents can be diverse in representation and inference techniques and respond opportunistically to changes on the blackboard. Finally, the controller monitors the changes on the blackboard and decides what actions to take next. SACOM can evaluate the performance of operators with several measurement criteria. The criteria include the number of tasks allocated to each operator, the features of communication among operators, and the features of task allocation. In SACOM, the knowledge base consists of meta knowledge, control knowledge, procedural knowledge and diagnostic knowledge. Meta knowledge is mainly considered as a way to control and filter the knowledge base, in order to initiate reasoning and strategies using expert knowledge. For the implementation of meta knowledge, SACOM uses about fifty strategic meta rules to collect evidence, generate/refine/group hypotheses, generate active hypotheses, and differentiate active hypotheses.
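The blackboard control cycle described above can be reduced to a minimal sketch (Python; the agents, hypotheses and data are invented and far simpler than SACOM's). Agents inspect the shared blackboard, and the controller keeps firing any agent that can contribute until no agent changes the board:

    class Blackboard:
        def __init__(self):
            self.data = {"evidence": ["high vibration"], "hypotheses": []}

    # Agents: independent modules that react to the state of the blackboard.
    def hypothesize(bb):               # generate a hypothesis from evidence
        if bb.data["evidence"] and not bb.data["hypotheses"]:
            bb.data["hypotheses"].append("bearing wear")
            return True
        return False

    def refine(bb):                    # refine an existing hypothesis
        if "bearing wear" in bb.data["hypotheses"]:
            bb.data["hypotheses"] = ["bearing wear (pump 2)"]
            return True
        return False

    def controller(bb, agents):
        """Fire any agent that can act until the blackboard is stable."""
        while any(agent(bb) for agent in agents):
            pass
        return bb.data

    print(controller(Blackboard(), [hypothesize, refine]))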
2.5 Fuel Shuffling

2.5.1 KHURES: an expert system for the selection of an optimized fuel reloading model

At the refueling stage of pressurized-water-reactor (PWR) type plants, optimizing the fuel reloading pattern is very important to maximize the fuel burnup rate in the reactor core. An expert system for the selection of a fuel reloading model, KHURES (Sin & Kim 1992), was developed to search for an optimized fuel reloading pattern in the reactor core of the Kori Unit 4 nuclear power plant. The system was implemented on an IBM-compatible PC using Prolog. The optimization searches for the fuel reloading pattern that maximizes the average fuel burnup rate and also satisfies the limit on the relative power density. A core analysis numerical package, SCOUPE, was linked to the system. The search strategy for finding the optimal pattern consists of three stages:

1. Generate initial reloading patterns and apply the five basic reloading rules.
2. Exchange the arrangement of fuel assemblies using exchange rules and the SCOUPE results.
3. Select an optimized reloading pattern at the end of the fuel cycle (EOC), i.e., the pattern that maximizes the average fuel burnup rate in the core.

The above strategies were implemented in the knowledge base as production rules.

2.5.2 OFSS: an optimal fuel shuffling system

An optimal fuel shuffling system, OFSS (Kim et al. 1993b), was developed for the Kori Unit 1 nuclear power plant. The system was implemented on a SUN workstation using Prolog. The system is a hybrid approach that incorporates rule-based deduction, fuzzy logic and a neural network. To classify the fuel reloading patterns, the system uses several heuristic rules and a fuzzy rule. A back-propagation neural network predicts core parameters for the patterns generated by the system using the heuristic rules.

2.6 Material Flaw Inspection

2.6.1 RViES: a reactor vessel integrity expert system

A prototype expert system, RViES (Shon et al. 1992), was developed to assess the material integrity of a reactor vessel. This system was implemented on an IBM-compatible PC using Prolog and C graphics libraries. The objectives of the system are:
- to estimate the whole operating life of the vessel, and
- to provide the trends of the pressure-temperature (P-T) operating graphs for the brittleness, fatigue and pressurized thermal stress of the vessel.

Reactor vessels should be designed according to criteria such as upper shelf energy (USE), pressurized thermal shock (PTS) and P-T operating graphs. The knowledge base consists of three knowledge units, one each for USE, PTS and P-T. The system is linked with two numerical program codes: the elastic-plastic integrity evaluation system EPIES and a fatigue analysis program, FATiLiFe.

2.6.2 An expert system for the evaluation of eddy current signals

An expert system for the evaluation of eddy current (EC) signals (Kim et al. 1992b) was developed to identify material flaws in steam generator tubes. The system inspects for material flaws by evaluating the EC signals from steam generator tubes using artificial neural networks. The system was implemented on an HP 9000 workstation using C and X libraries. The system consists of three modules: a syntactic pattern recognition module, a neural network module and a rule-based production module. The syntactic pattern recognition module processes and filters the voluminous EC signal data and detects event signals, such as defect and structural signals. From the noisy signals, the neural network module classifies the flaw category.
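The division of labour in the EC system can be imitated in a few lines (Python; the two features, the thresholds and the training data are invented, and a single perceptron stands in for the back-propagation network of the real system). A filtering step rejects low-amplitude noise, and the learned classifier separates defect signals from structural ones:

    # Invented training data: (peak amplitude, phase) -> 1 defect / 0 structural.
    samples = [((0.9, 0.2), 1), ((0.8, 0.3), 1), ((0.7, 0.9), 0), ((0.6, 0.8), 0)]

    w1, w2, b, rate = 0.0, 0.0, 0.0, 0.1
    for _ in range(50):                          # simple perceptron training
        for (x1, x2), label in samples:
            error = label - int(w1 * x1 + w2 * x2 + b > 0)
            w1, w2, b = w1 + rate * error * x1, w2 + rate * error * x2, b + rate * error

    def classify(amplitude, phase):
        if amplitude < 0.1:                      # stand-in for the syntactic filter
            return "noise (filtered out)"
        return "defect" if w1 * amplitude + w2 * phase + b > 0 else "structural"

    print(classify(0.85, 0.25), classify(0.65, 0.85))   # defect structural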
2.7 Integrated Operator Aid

An on-line operator aid system, OASYS (Chang et al. 1995), was developed to support the operators' decision-making process and to ensure the safety of a plant. OASYS integrates the following four task modules:
- a signal validation and management (SVM) module,
- a plant monitoring (PM) module,
- an alarm filtering and diagnostic (AFD) module, and
- a dynamic emergency procedure tracking (DEPT) module.

Both the SVM and PM modules help the operators maintain the plant in a safe condition during normal operation. In the SVM module, major signals are validated using simple signal validation techniques. In the PM module, major signals for performance are monitored to maintain normal plant conditions, and CSFs are monitored to identify threats to the safety functions of the plant. The DEPT module is the central part of OASYS. This module manages the step-by-step tracking of EOPs in real time. The DEPT module aids operators with proper guidelines to shut down the reactor safely. The AFD module uses priority grading and the structured hierarchy relationships among alarms for alarm filtering. Also, this module uses an "establish-refine" strategy for the diagnosis of abnormal events. This strategy matches a set of symptoms with a specific malfunction hypothesis in a predetermined structure of possible hypotheses. OASYS employs rule-based deduction and fuzzy logic. Using rule-based deduction, the system classifies the pre-defined events and tracks EOPs. The system also uses fuzzy logic to generate conceptual high-level alarms for prognostic diagnosis and to evaluate the qualitative fuzzy criteria used in the tracking of EOPs. The system was implemented on a SUN workstation using the Prolog and C languages, and installed in a real-time full-scope simulator for validation.

2.8 Other Applications

2.8.1 An advisory expert system for Technical Specifications

The Technical Specifications document describes the LCO (Limiting Conditions for Operation), AOT (Allowed Outage Time) and operator actions for a reactor and safety-related equipment. Based on the Technical Specifications, operators can decide whether the operating conditions of equipment violate the LCO. To computerize the Technical Specifications, an advisory expert system (Kim et al. 1992a) was developed using the Nexpert/Object development tool and the C language. The scope of the system was limited to a reactor coolant system (RCS) and an emergency core cooling system (ECCS). The knowledge base consists of four knowledge units: start-up, mode-change, LCO-check and action-display parts.
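The LCO-checking logic lends itself to a small sketch (Python; the limiting conditions, equipment states and AOT values are invented and are not those of any actual Technical Specifications). Given the operating mode and the equipment status, the rules report each violated LCO together with its allowed outage time:

    # Invented LCO rules: (operating mode, required condition, AOT in hours).
    lco_rules = [
        ("power operation", "ECCS train A operable",      72),
        ("power operation", "ECCS train B operable",      72),
        ("power operation", "RCS loops 1-3 in service",    6),
    ]

    def check_lco(mode, plant_status):
        """Report violated LCOs and the allowed outage time for each."""
        for rule_mode, requirement, aot in lco_rules:
            if mode == rule_mode and not plant_status.get(requirement, False):
                print(f"LCO violated: {requirement}; action required within {aot} h")

    check_lco("power operation", {
        "ECCS train A operable":    True,
        "ECCS train B operable":    False,   # triggers the 72 h allowed outage time
        "RCS loops 1-3 in service": True,
    })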
2.8.2 ESPED: an expert system for performance evaluation and diagnosis

An expert system for performance evaluation and diagnosis, ESPED (Kang et al. 1992), was developed using the M.1 expert system shell. The domain was a secondary system in the Kori Unit 3 nuclear power plant. In ESPED, the performance evaluation module can process major plant parameters through meta facts and evaluate the performance-effective factors using performance evaluation rules.

3 Conclusions

This paper has surveyed representative expert systems for nuclear power plants in Korea. Over the past decade, many attempts have been made to develop expert systems, especially in fields such as accident diagnosis, failure diagnosis, procedure guidance, and alarm processing. Early systems were mainly implemented on IBM-compatible PCs, based on simple rule-based systems, and developed using artificial intelligence languages such as Prolog and LISP. More recent expert systems, by contrast, are implemented on workstation-level platforms, developed using advanced expert system tools and the C language, and combined with other innovative techniques such as numerical simulation models, neural networks, fuzzy logic and genetic algorithms. In the future, feasible application areas in nuclear power plants may include operator tutoring systems for training in various operating procedures, full-scale real-time plant monitoring and diagnostic systems that may integrate all the features of operator aiding systems, and maintenance scheduling systems. Also, emerging technologies, such as multimedia/hypermedia expert systems and distributed expert systems, will be introduced to these applications.

Acknowledgements

This survey was conducted under the financial support of the Korea Ministry of Science and Technology. I would like to acknowledge the efforts and achievements of the many developers and researchers of expert systems in the Korean Nuclear Society.

References

[1] Chang S. H., Choi S. S., Kim H. G., Jeong H. K. & Yi C. U. (1995) Development of the On-Line Operator Aid System OASYS Using a Rule-Based Expert System and Fuzzy Logic for Nuclear Power Plants. Nuclear Technology, 112, 2, p. 266-94.
[2] Cheon S. W., Kim H. G., Kim W. J., Min B. K., Chang S. H. & Chung H. Y. (1992a) Development of an Expert System for Failure Diagnosis of Primary Side Systems. Nuclear Technology, 97, p. 1-15.
[3] Cheon S. W., Kang K. S., Kim W. J., Kim H. G., Park J. W. & Chang S. H. (1992b) Computerization of Emergency Operating Procedures Management in Nuclear Power Plants. Proceedings of the Korean Nuclear Society Spring Meeting, Kori, Korea.
[4] Cheon S. W., Chang S. H. & Chung H. Y. (1993) Development Strategies of an Expert System for Multiple Alarm Processing and Diagnosis in Nuclear Power Plants. IEEE Transactions on Nuclear Science, 40, 1, p. 21-30.
[5] Cheon S. W., Sur S. M., Lee Y. H., Lee J. W., Park Y. T. & Lee K. R. (1995) Development of a Cognitive Task Simulation Analyzer of Operators in Nuclear Power Plants Using Blackboard Techniques. Proceedings of the Pacific Asian Conference on Expert Systems, Huangshan, China.
[6] Choi K. Y., Yang J. O. & Chang S. H. (1988) The Manipulation of Time-varying Dynamic Variables Using Rule Modification Method and Performance-Index in Nuclear Power Plant Accident Diagnostic Expert Systems. IEEE Transactions on Nuclear Science, 35, 5, p. 1121-25.
[7] Choi S. S. & Chang S. H. (1994) Development of a Fuzzy Expert System for Integrated Alarm Processing in Nuclear Power Plants. Proceedings of the Korean Nuclear Society Spring Meeting, Pohang, Korea.
[8] Jeong K. S., Yang J. O. & Park C. K. (1992) Identification of Accident Recovery Actions During an SGTR Accident Using an Expert System. Proceedings of the ANS'92 International Conference on Design and Safety of Advanced Nuclear Power Plants, Tokyo, Japan.
[9] Kang K. S., Cheon S. W. & Chang S. H. (1992) Expert System for Nuclear Power Plant Performance Evaluation and Diagnosis. Proceedings of the IEEE 5th Conference on Human Factors and Power Plants, Monterey, USA.
[10] Kim C. T., Lee E. S. & Suh Y. S. (1992a) Development of a Computer Program for the Evaluation of Technical Specifications. Proceedings of the Korean Nuclear Society Spring Meeting, Kori, Korea, p. 405-10.
[11] Kim K. Y., Hur Y. H., Woo H. G. & Choi S. S. (1992b) A Hybrid Expert System for Eddy Current Test of Steam Generator Tubes.
Proceedings of the Second International Forum on Expert Systems and Computer Simulation in Energy Engineering, Erlangen, Germany.
[12] Kim H. D., Kim K. J. & Yoon W. K. (1993a) Development of a Prototype Expert System for Intelligent Operation Aids in Rod Consolidation Process. Journal of the Korean Nuclear Society, 25, 1, p. 1-7.
[13] Kim H. G., Chang S. H. & Lee B. H. (1993b) Optimal Fuel Loading Pattern Design Using an Artificial Neural Network and a Fuzzy Rule-Based System. Nuclear Science and Engineering, 115, p. 152-63.
[14] Lee J. S. et al. (1991) Emergency Response Procedure Expert System (ERPES): An Expert System for Emergency Operation in the Nuclear Power Plant. Third Symposium on Expert Systems Application to Power Systems, Tokyo-Kobe, Japan.
[15] Min B. K. & Chang S. H. (1988) Development of an Expert System for the Design Synthesis and Reliability Assessment. Nuclear Science and Engineering, 100, p. 426-34.
[16] Nelson W. R. (1982) REACTOR: An Expert System for Diagnosis and Treatment of Nuclear Reactor Accidents. Proceedings of the National Conference on Artificial Intelligence, Pittsburgh, USA.
[17] Sin H. C. & Kim M. H. (1992) Development of an Expert System for the Selection of a Fuel Shuffling Model (III). Proceedings of the Korean Nuclear Society Spring Meeting, Kori, Korea.
[18] Shon J. H., Kim Y. J., Jeong H. D. & Kim J. G. (1992) Development of an Expert System for the Evaluation of Reactor Vessel Integrity. Proceedings of the Korean Nuclear Society Autumn Meeting, Seoul, Korea.
[19] Yang J. O. & Chang S. H. (1989) A Diagnostic Expert System for Nuclear Power Plant Based on the Hybrid Knowledge Approach. IEEE Transactions on Nuclear Science, 36, 6, p. 2450-58.
[20] Yokobayashi M., Yoshida K., Kohsaka A. & Yamamoto M. (1986) Development of Reactor Accident Diagnostic System DISKET Using Knowledge Engineering Technique. Journal of Nuclear Science & Technology, 23, 4, p. 300-14.

Optimization of One Dimensional Cutting in Clothing Industry

Miro Gradišar and Jože Jesenko
University of Maribor, Faculty of Organizational Sciences, 64000 Kranj, Prešernova ulica 11, Slovenia
Phone: +386 64 222 804, Fax: +386 64 221 424

Keywords: cutting, optimisation, heuristics, clothing industry

Edited by: Rudi Murn
Received: September 21, 1995  Revised: December 20, 1995  Accepted: April 11, 1996

The article examines a procedure for the optimisation of roll cutting in the clothing industry. The roll-cutting problem can be defined as a bicriterial multidimensional knapsack problem with side constraints. In general one may not find an optimal solution within reasonable time limits. For this reason we have found a solution through a combination of approximation and heuristics. On the basis of the resulting algorithm the computer program COLA has been developed. The article presents an example of the implementation of the program.

1 Introduction

The first phase of the technological process in a number of industrial branches is the cutting of stock obtained from warehouses. This often incurs trim loss, which is generally unwanted. The most straightforward type of cutting is one-dimensional, which can be defined as guillotine cutting of a small number of long pieces into a large number of short pieces. The problem of reducing trim loss in one-dimensional stock cutting arises in various branches of industry (Ferreira et al. 1990, Stadtler 1990). In the clothing industry it appears as the problem of optimal utilisation of stock when cutting rolls into pattern shapes.
A pattern shape consists of the pattern parts which make up a piece of clothing. It is rectangular and usually drawn on paper. The width of the rectangle equals the width of the fabric rolls; the width of a fabric roll is standardized and generally remains the same within one customer order. The length of the rectangle depends on the number and size of the pattern parts. Pattern parts such as a trouser leg, a sleeve or a pocket are generally of irregular shape. They are placed on the rectangular surface in such a way that as little of the surface as possible remains unutilized and the pattern shape is as short as possible.

A particular item of clothing can be produced in large series consisting of a number of groups of the same product. Groups differ according to size numbers, colours and fabric. As there is a great number of different types of pattern parts of various sizes, a single pattern shape would be too long. It is therefore necessary to make several pattern shapes for a particular item of clothing within the framework of one order; the number of different pattern shapes generally ranges from three to eight.

The resulting pattern shapes of various lengths (hereinafter referred to as order lengths) must be cut out of fabric rolls in the required numbers of pieces by guillotine cuts, as illustrated in Figure 1. The cutting of order lengths is determined by two features: first, the width of a stock roll equals the width of the order length; second, the length of the cut piece is the same as the order length. Roll lengths usually range between 10 and 100 metres, whereas order lengths are mostly up to 5 metres long. Most commonly, 10 to 50 pieces of a particular order length are required.

Our objective is to create a roll-cutting plan by finding, for each roll, such a combination of individual order lengths in certain numbers of pieces that the overall trim loss is minimised. In practice, however, fabric is often cut without a plan.

[Figure 1: Stock cutting. The figure contrasts a stock of three different lengths and orders for four different order lengths in the necessary numbers of pieces: cutting without a plan produces trim loss and an unfulfilled order, whereas cutting according to a plan fully meets the order requirements.]

Creating a plan for optimal one-dimensional roll cutting with respect to trim loss belongs to the group of so-called NP-complete problems, which cannot be solved exactly in real life because of their great complexity in terms of space and time. However, it is possible to find a sub-optimal heuristic solution by designing a computer program which generally surpasses human capabilities and reformulates a poorly structured cutting problem into a completely structured one. In this case, full computerisation is feasible and justified.
The human role is restricted to the input of data and the setting of parameters. This article describes the mathematical model of the cutting problem, the development of the solution in the form of a computer program, and an example of the practical implementation of the program.

2 Problem Description and a Formal Model

The problem of optimizing roll cutting in the clothing industry exists within the framework of a particular customer order, for which there is a certain number of rolls differing in length, fabric and colour. Each customer order is planned individually. The following notation is used:

$m$ : number of rolls,
$d$ : number of different colours,
$e$ : number of types of fabric,
$d_{jkf}$ : roll lengths; $j = 1,\dots,m$; $k \in \{1,\dots,d\}$; $f \in \{1,\dots,e\}$.

We need a certain number of order lengths, each in a particular number of pieces; the fabrics and colours are the same as in the rolls. We denote:

$n$ : number of different order lengths,
$s_i$ : order lengths; $i = 1,\dots,n$,
$b_{ikf}$ : required number of pieces of order length $i$, colour $k$ and fabric $f$; $i = 1,\dots,n$; $k = 1,\dots,d$; $f = 1,\dots,e$.

Every $kf$ combination represents an independent cutting problem, so the indexes $k$ and $f$ can be omitted in the further description. We want to obtain

$x_{ij}$ : the number of pieces of order length $i$ cut from roll $j$,

under the following conditions:

(1) $\min \Delta$ : minimize the number of uncut order lengths and distribute the shortage evenly;

(2) $\min \sum_{j=1}^{m} t_j$ : minimize the trim loss which is smaller than UB;

subject to

(3) $\sum_{i=1}^{n} s_i x_{ij} + \delta_j = d_j \quad \forall j$ : knapsack constraints;

(4) $\sum_{j=1}^{m} x_{ij} \ge b_i - \Delta \quad \forall i$ : demand constraints;

(5) $\sum_{i=1}^{n} z_{ij} \le \eta \quad \forall j$ : maximum number of different order lengths for a roll;

(6) $x_{ij} \ge 0$ integer $\forall i,j$; $\delta_j \ge 0 \;\forall j$; $t_j \ge 0 \;\forall j$; $\Delta \ge 0$; $y_j \in \{0,1\} \;\forall j$; $z_{ij} \in \{0,1\} \;\forall i,j$.

For the above model the following functions are used:

$y_j = \begin{cases} 0 & \text{if } x_{ij} = 0 \;\forall i \\ 1 & \text{otherwise} \end{cases}$

to indicate whether roll $j$ is used in the cutting plan,

$z_{ij} = \begin{cases} 0 & \text{if } x_{ij} = 0 \\ 1 & \text{otherwise} \end{cases}$

to indicate whether order length $i$ is cut from roll $j$, and

$t_j = \begin{cases} \delta_j & \text{if } y_j = 1 \wedge \delta_j < \mathrm{UB} \\ 0 & \text{otherwise,} \end{cases}$

where $t_j$ indicates the extent of the trim loss relating to roll $j$ and $\eta$ is the maximum number of different order lengths that may be cut from one roll. Data: UB is an upper bound for the trim loss; it can be set, e.g., to the shortest or the longest order length.

In roll cutting it is necessary to pay attention to two limitations arising from practical conditions in the clothing industry. The first is the requirement that, in the case of material shortage, the lack of fabric should be evenly distributed among the order lengths. If the shortage were accumulated in one order length, this would cause a deficit in clothes of one or two size groups, which would in turn bring about difficulties in sales. The second limitation is that the number of different order lengths cut out of one roll can be at most four.

The model distinguishes two groups of unutilized parts of rolls: parts long enough to be used again once the requirements of the order have been met, and parts too short to be used again. An unutilized roll length which can be used further is not considered trim loss. The question is therefore which unutilized lengths are so short that they count as trim loss; the answer depends on the quantity of available fabric. Overall trim loss can be defined in two ways:

1. Overall trim loss is the sum of those trim losses in particular rolls which are shorter than the shortest order length.
2. Overall trim loss is the sum of those trim losses in particular rolls which are shorter than the longest order length.

When enough fabric is available we use definition 1; in other cases we use definition 2.
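To make the model concrete, the following sketch encodes a simplified variant in Python using the open-source PuLP modelling library. This is an illustration, not the authors' implementation (which is written in FORTRAN): the data values, the single surrogate objective of opening as few rolls as possible, and the big-M linking of $z_{ij}$ to $x_{ij}$ are all assumptions of the sketch; the two-phase lexicographic objective (1)-(2) and the bound UB are omitted.

```python
# Simplified, single-objective variant of the cutting model, assuming
# the open-source PuLP library (pip install pulp). Data is illustrative.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpInteger, LpBinary

s = [239, 188, 134]        # order lengths s_i (cm)
b = [5, 3, 2]              # required pieces b_i
d = [1028, 1022, 300]      # roll lengths d_j (cm)
eta = 4                    # condition (5): max different lengths per roll
n, m = len(s), len(d)

prob = LpProblem("roll_cutting", LpMinimize)
x = LpVariable.dicts("x", [(i, j) for i in range(n) for j in range(m)],
                     lowBound=0, cat=LpInteger)
z = LpVariable.dicts("z", [(i, j) for i in range(n) for j in range(m)],
                     cat=LpBinary)
y = LpVariable.dicts("y", range(m), cat=LpBinary)      # roll j opened?

# Surrogate objective (an assumption of this sketch): open as few rolls
# as possible, so residuals concentrate in unopened, still-usable rolls.
prob += lpSum(y[j] for j in range(m))
for j in range(m):
    prob += lpSum(s[i] * x[i, j] for i in range(n)) <= d[j]   # knapsack (3)
    prob += lpSum(z[i, j] for i in range(n)) <= eta           # condition (5)
    for i in range(n):
        cap = d[j] // s[i]                  # at most cap pieces of i fit
        prob += x[i, j] <= cap * z[i, j]    # z_ij = 1 whenever x_ij > 0
        prob += x[i, j] <= cap * y[j]       # y_j  = 1 whenever roll j is cut
for i in range(n):
    prob += lpSum(x[i, j] for j in range(m)) == b[i]  # demand met (Delta = 0)

prob.solve()
print([(i, j, int(x[i, j].varValue)) for i in range(n) for j in range(m)
       if x[i, j].varValue])
```

Opening as few rolls as possible pushes residual fabric into unopened rolls, where it remains usable stock rather than trim loss; the paper's actual two-phase objective treats the residuals more precisely.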
If definition 1 were used in both cases, the following problem would arise whenever fabric is short. As the aim of the algorithm is minimization of the overall trim loss, the requirements for the longest order lengths could remain unfulfilled even though the overall trim loss would be small, so that the aim would be achieved according to the logic of the algorithm: trim losses longer than the shortest order length but shorter than the longest order length could remain unutilized.

A residual length larger than a certain lower bound (e.g. the longest order length) is not considered trim loss. If sufficient rolls are available, there will then be cutting plans with "no trim loss" but ever-growing stocks. To prevent this, an additional condition must be set: only one residual length may be longer than the longest order length.

3 Solution Development

The authors have not come across an algorithm in the literature which would be directly applicable to the described situation. Similar problems are solved by heuristic algorithms. A related problem is the classical "bin packing problem", which can be solved by the "First Fit Decreasing" (FFD) heuristic procedure (Coffman et al. 1984), sketched below; the basic feature of that problem, however, is that all stock lengths are the same. The literature abounds in solutions to problems of a similar kind based on the hybrid algorithm developed by Gilmore and Gomory (1961, 1963). This algorithm would also be applicable in our case if all rolls were of the same length, as described in Stadtler 1990, or of a few standard lengths, as described in Gilmore and Gomory 1961. An extensive review of the relevant literature up to 1992 is given in Sweeney and Paternoster 1992, while a review of the different types of problems and solution methods related to reducing trim loss is given in Dyckhoff 1990.

The cutting model described above can be termed a bicriterial multidimensional knapsack problem with side constraints. To handle the bicriterial objective function, a lexicographic approach is proposed. In general one may not find an optimal solution within reasonable time limits: an algorithm that checked all possible cutting patterns over all rolls of generally different lengths would provide a solution without unnecessary trim loss, but would have exponential time complexity. The basic simplification used to transform this exact algorithm into an approximate one with polynomial complexity is to process one roll at a time.

The algorithm was developed on a step-by-step basis. The number of basic steps equals the number of rolls necessary for the fulfilment of an order. At the beginning, all rolls belong to the set of unprocessed rolls and the set of processed rolls is empty. At each step, the set of unprocessed rolls is reduced by one roll and the set of processed rolls increases by one; the numbers of cut pieces of particular order lengths change, as does the length of the processed roll, which becomes equal to its trim loss. While developing the algorithm, we had to address two basic questions: first, which roll is to be chosen from the set of not yet processed rolls, and second, which order lengths are to be cut out of it.
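For comparison, the First Fit Decreasing procedure mentioned above can be stated in a few lines. This is the textbook heuristic under its classical assumption of identical stock lengths, shown here only for contrast with the roll-cutting setting; it is not part of the COLA algorithm:

```python
def first_fit_decreasing(pieces, bin_size):
    """Textbook FFD bin packing: sort pieces in decreasing order and put
    each piece into the first open bin with enough remaining capacity."""
    remaining = []                 # remaining capacity of each open bin
    packing = []                   # pieces assigned to each bin
    for p in sorted(pieces, reverse=True):
        for k, cap in enumerate(remaining):
            if p <= cap:
                remaining[k] -= p
                packing[k].append(p)
                break
        else:                      # no open bin fits: open a new one
            remaining.append(bin_size - p)
            packing.append([p])
    return packing

print(first_fit_decreasing([7, 5, 4, 3, 2, 2], bin_size=10))
# -> [[7, 3], [5, 4], [2, 2]]
```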
The algorithm is based on the following assumptions:

1. It is easier to find a good solution for one roll if the roll is selected from the largest possible set of different rolls.
2. It is easier to find a good solution for the selected roll if it is selected from the largest possible set of feasible solutions. This happens when:
a) the roll is as long as possible,
b) the order lengths are as short as possible,
c) the number of different order lengths and the number of pieces are as large as possible.

All the assumptions are statistically supported by the analysis described in Section 5. However, we cannot be completely sure that the assumptions hold in each individual case.

In accordance with assumption 1, the question of roll selection is tackled by calculating the solutions for all unprocessed rolls and selecting the one with the lowest trim loss. As several rolls may share the lowest trim loss, we apply assumption 2.a and choose the shortest among them. This is achieved by initially arranging the rolls according to increasing length; the longer rolls are thereby processed later, when the conditions become more difficult due to the first assumption.

The remaining question is the selection of order lengths. According to condition (5), the number of order lengths cut from the same roll is limited by a parameter whose value can be 1, 2, 3 or 4. Consistent with assumption 2.c and condition (1), which stipulates that a possible shortage should be distributed among the order lengths as evenly as possible, we select those order lengths which consist of the largest numbers of pieces. This is achieved by arranging the order lengths according to the decreasing number of pieces. This selection does not take assumption 2.b into consideration; nevertheless, due to condition (1) there is no other choice. For this reason this selection alone is used only when there is not enough fabric.

When there is an abundance of fabric, we face a dilemma in selecting the order lengths: should priority be given to assumption 2.b or to 2.c? The dilemma is resolved by developing three algorithmic variants and selecting the best result. The first variant is the same as the one applied when there is not enough fabric. The second variant is based on assumption 2.b: we select the longest order lengths, by arranging the order lengths according to decreasing length. Shorter order lengths are then processed later, when there is less room for manoeuvre and, according to assumption 1, the cutting problem becomes more complicated. The third variant is a compromise between assumptions 2.b and 2.c: order lengths are chosen according to the decreasing value of the product of the order length and the number of necessary pieces. A schematic rendering of the whole procedure is given below.

The decision to treat the cases of shortage and abundance of fabric separately stems from the definition of the overall trim loss within the framework of a customer order, which differs in the two cases. Due to the nature of fabric, the order lengths are defined precisely to 1 cm and the roll lengths to 1 dm; all lengths can therefore be expressed in centimetres, and we obtain a fully integer problem. The loss of fabric incurred by a certain cut width can be compensated by lengthening all order lengths by 2 cm. The algorithm for the optimisation of roll cutting for the needs of the clothing industry is shown in Figure 2 by a flowchart of its basic steps; the creation of a cutting plan is repeated for each combination of k and f.
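The schematic rendering promised above follows. It is an illustrative Python reconstruction, not the COLA code itself (which is written in FORTRAN): the exhaustive per-roll search over all combinations of the selected order lengths is replaced by a simple greedy cut, and the three sorting keys correspond to the three variants (assumptions 2.c, 2.b, and the compromise).

```python
# Illustrative reconstruction of the one-roll-at-a-time heuristic.
# Names and the greedy inner step are simplifications of COLA's
# exhaustive per-roll combination search.

VARIANTS = {
    "pieces":  lambda s, b, i: -b[i],          # 2.c: most pieces first
    "length":  lambda s, b, i: -s[i],          # 2.b: longest lengths first
    "product": lambda s, b, i: -s[i] * b[i],   # compromise variant
}

def cut_order(s, b, rolls, eta=4, variant="pieces"):
    """Process rolls in increasing length; from each roll cut up to
    `eta` different order lengths, chosen by the variant's sorting key,
    as many pieces as fit."""
    b = list(b)                                # remaining demand per length
    rank = VARIANTS[variant]
    plan = []                                  # (roll index, cuts, residual)
    for j, length in sorted(enumerate(rolls), key=lambda r: r[1]):
        chosen = sorted((i for i in range(len(s)) if b[i] > 0),
                        key=lambda i: rank(s, b, i))[:eta]
        cuts, rest = [], length
        for i in chosen:                       # greedy inner step
            k = min(b[i], rest // s[i])
            if k:
                cuts.append((i, k))
                b[i] -= k
                rest -= k * s[i]
        if cuts:
            plan.append((j, cuts, rest))       # rest becomes trim/residual
        if not any(b):
            break
    return plan, b                             # b now holds unmet demand

plan, unmet = cut_order([239, 188, 134], [5, 3, 2], [1028, 1022, 300])
```

In COLA all three variants are run and the plan with the lowest overall trim loss is kept; the sketch runs a single variant.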
[Figure 2: Flowchart of the cutting algorithm. For each combination k and f, the rolls and order lengths are filtered and sorted; the order lengths are then selected according to each of the three variants in turn, the roll with the lowest trim loss over all possible combinations of the selected order lengths is cut, the roll length and the numbers of cut pieces are updated, and the cutting plan with the lowest overall trim loss is retained.]

4 Program COLA

The proposed algorithm was implemented in the FORTRAN programming language, as it is relatively fast at carrying out numerical operations; the program consists of 2,000 lines. The data input and the printout of the results were written in a 4GL, which is the most suitable for this purpose. The program can be run on a personal computer and achieves an average planned trim loss of 0.1% per order. It was called COLA (Computerized LAying out). The COLA program has been used in industry for about 10 years; the most recent significant changes were made in 1994. The program is used by several clothing manufacturers and has been used the longest by Modena. An order is processed in the following three stages:
- data input,
- creation of a cutting plan,
- printout of results.

4.1 Data input

Data input is carried out in three stages: first, setting the parameters; second, input of details about the order lengths; and third, input of details about the rolls of fabric. The parameters of an order are as follows:
- reference number of the order,
- short description of the order,
- programmed trim loss (fixed part and variable part),
- maximum number of different order lengths for cutting a roll of fabric.

The fixed part of the programmed trim loss is entered in centimetres and means that at least that much fabric will remain in each roll. The common value of this parameter is 0. It has been observed that rolls of some manufacturers tend to contain somewhat less fabric than declared; this discrepancy can be taken into account by setting the programmed trim loss. The parameter can also be used when a certain trim loss is wanted in each roll for later use. When each roll should have a trim loss proportional to its length, the required percentage is entered as the variable part of the programmed trim loss. The fixed and variable parts are not mutually exclusive (a small sketch of how they combine is given at the end of this subsection).

A parameter of great importance is the number of different order lengths which are cut out of a particular roll. A higher number means, on the one hand, more possible combinations and less trim loss; on the other hand, the cutting procedure becomes more complex and more time-consuming. Practice has shown that the highest reasonable number of different order lengths combined in a roll is four: at 4 the trim loss becomes so low that any further reduction would not outweigh the extra effort in cutting. Possible values of this parameter range from 1 to 4; usually 3 or 4 is selected.

A customer order normally consists of a few order lengths. An individual order length can be of different colours and fabrics. It is determined by the following details:
- the reference number of the order length,
- the order length given in centimetres,
- the size numbers of the ready-made clothes which are accounted for in the order length.
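The combination of the fixed and variable parts of the programmed trim loss can be pictured as reducing the roll length available to the planner. The exact arithmetic below is an assumption made for illustration; the article states only that the two parts are entered separately and are not mutually exclusive:

```python
def usable_length(roll_cm, fixed_cm=0, variable_pct=0.0):
    """Roll length available for cutting after reserving the programmed
    trim loss: a fixed amount plus a percentage of the roll length.
    The combination rule is an illustrative assumption."""
    reserved = fixed_cm + int(roll_cm * variable_pct / 100)
    return max(0, roll_cm - reserved)

# A roll of 1000 cm, reserving 5 cm plus 1% of its length:
print(usable_length(1000, fixed_cm=5, variable_pct=1.0))   # -> 985
```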
Figure 3 shows an example of the COLA program printout for a real customer order (order 635, model: jacket, date 05-24-94). The reconstructed main sections follow; the "correction" column of the results table, which is empty in this example, is omitted.

OPTIMIZATION OF CUTTING

DETAILS ABOUT ORDER LENGTHS

No.  o.l.  length  size numbers  pieces  colour  fabric
1    1     239     74 80 86 92   140     6449    BL-72
2    1     239     74 80 86 92   145     7209    BL-100
3    2     188     80 86 92      55      6449    BL-72
4    2     188     80 86 92      35      7209    BL-100
5    3     134     86 92         25      6449    BL-72
6    3     134     86 92         30      7209    BL-100

DETAILS ABOUT ROLLS

No.  roll  length  colour  fabric
1    15    10280   6449    BL-72
2    16    10220   6449    BL-72
3    17    10160   6449    BL-72
4    19    10180   6449    BL-72
5    20    10100   6449    BL-72
6    9     10200   7209    BL-100
7    10    10790   7209    BL-100
8    11    10100   7209    BL-100
9    13    10070   7209    BL-100
10   14    900     7209    BL-100
11   12    3000    7209    BL-100

RESULTS ARRANGED ACCORDING TO ORDER LENGTHS

o.l.  pieces  roll  colour  fabric  foll.o.l.  size numbers
1     32      15    6449    BL-72   2          74 80 86 92
1     34      16    6449    BL-72   2          74 80 86 92
1     36      17    6449    BL-72   2          74 80 86 92
1     28      19    6449    BL-72   2          74 80 86 92
1     10      20    6449    BL-72   2          74 80 86 92
1     30      9     7209    BL-100  2          74 80 86 92
1     42      10    7209    BL-100  2          74 80 86 92
1     28      11    7209    BL-100  2          74 80 86 92
1     40      13    7209    BL-100  2          74 80 86 92
1     4       12    7209    BL-100  2          74 80 86 92
2     14      15    6449    BL-72   0          80 86 92
2     9       16    6449    BL-72   3          80 86 92
2     4       17    6449    BL-72   3          80 86 92
2     10      19    6449    BL-72   3          80 86 92
2     18      20    6449    BL-72   3          80 86 92
2     4       9     7209    BL-100  3          80 86 92
2     4       10    7209    BL-100  0          80 86 92
2     11      11    7209    BL-100  3          80 86 92
2     2       13    7209    BL-100  3          80 86 92
2     4       14    7209    BL-100  3          80 86 92
2     10      12    7209    BL-100  3          80 86 92
3     3       16    6449    BL-72   0          86 92
3     6       17    6449    BL-72   0          86 92
3     12      19    6449    BL-72   0          86 92
3     4       20    6449    BL-72   0          86 92
3     17      9     7209    BL-100  0          86 92
3     10      11    7209    BL-100  0          86 92
3     1       13    7209    BL-100  0          86 92
3     1       14    7209    BL-100  0          86 92
3     1       12    7209    BL-100  0          86 92
TRIM LOSS

UTILIZED ROLLS

roll  trim loss  colour  fabric
15    0          6449    BL-72
16    0          6449    BL-72
17    0          6449    BL-72
19    0          6449    BL-72
20    3790       6449    BL-72
9     0          7209    BL-100
10    0          7209    BL-100
11    0          7209    BL-100
13    0          7209    BL-100
14    14         7209    BL-100
12    30         7209    BL-100

UNUTILIZED ROLLS

roll  length  colour  fabric
(none)

OVERALL TRIM LOSS: 44 cm (0.0512%)
PROGRAMMED TRIM LOSS, fixed: 0 cm
PROGRAMMED TRIM LOSS, variable: 0%
OVERALL TRIM LOSS WITHOUT OPTIMIZATION: 1074 cm (1.2508%)
SAVING OF STOCK: 1030 cm (1.1996%)

Figure 3: Example of the COLA program printout

For each combination of colour and fabric it is necessary to enter three more items of information:
- colour reference number,
- fabric reference number,
- the number of necessary pieces.

In addition, information about the rolls is needed. Each roll is determined by the following details:
- the roll reference number,
- the length of the roll in centimetres,
- colour reference number,
- fabric reference number.

Information about the width of rolls and order lengths is not required, because the widths of order lengths and rolls are assumed to match.

4.2 Creation of a cutting plan and calculation of statistics

The program operates at great speed. Creating a cutting plan and calculating the statistics for an extensive order consisting of 50 rolls and 1000 pattern pieces takes less than 10 seconds on a personal computer (486DX4, 100 MHz). The time spent on creating the plan is negligible, especially when compared to manual laying out, which can take about four to eight hours for an order of average size. The speed enables the users to carry out a "what-if" analysis by changing the parameters of the order. For instance, one could ask to what extent the result would be worse if only two different order lengths were cut out of one roll.
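Such a what-if analysis amounts to re-planning the same order under different parameter settings and comparing the resulting trim loss. Using the illustrative cut_order sketch from Section 3 (not the actual COLA program, which performs the equivalent study within seconds), the experiment just described might look like this:

```python
# Hypothetical what-if experiment: how much worse is the plan if only
# two different order lengths may be combined in a roll? Assumes the
# cut_order sketch from Section 3 is in scope; data is illustrative.
s, b, rolls = [239, 188, 134], [5, 3, 2], [1028, 1022, 300]
for eta in (4, 2):
    plan, unmet = cut_order(s, b, rolls, eta=eta)
    residual = sum(rest for _, _, rest in plan)
    print(f"eta={eta}: total residual {residual} cm, "
          f"unmet pieces {sum(unmet)}")
```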
4.3 Printout of results

After the final calculation the results are displayed on the screen, where they can first be checked and then printed out on paper. They can be printed out completely or partially, depending on the parameter which sets the printout variant; the longest variant contains 10 to 20 pages. The printout consists of the following four basic headings:
- details about order lengths and rolls,
- plan of roll cutting,
- trim loss,
- statistics.

The plan of roll cutting is presented in table form. The three most important columns are the order length reference number (order length), the number of pieces (pieces) and the roll reference number (roll). Each line therefore provides the information as to which order length is to be cut and out of which roll.

The trim loss is generally the smallest when the rolls contain more fabric than necessary. In such a case some rolls are not utilized at all, whereas others are not cut up to the end. If the remaining part is at least as long as the shortest order length, it is not considered trim loss. The trim loss is higher when all rolls are used and there is approximately as much fabric as needed, or less. In the case of a fabric shortage, the program operates in accordance with condition (1): it distributes the loss over approximately the same number of pieces in all order lengths.

The statistics which follow are very straightforward. The program calculates the differences between the required and the obtained values and sums them up according to order lengths, colours and size numbers. Finally, it calculates the overall trim loss as well as the average overall trim loss which would be incurred without laying out; the difference is a saving. Further details are shown in the example of results given in Figure 3 (the statistics have been left out of the figure).

The example shown in Figure 3 contains real data; however, the customer order concerned is rather small, containing only three different order lengths and only 11 rolls in two colours and two fabrics. The devised cutting plan prescribes the complete cutting of 10 rolls and the partial cutting of 1 roll. The anticipated trim loss is only 44 cm, which makes 0.05% of the total length; the saving of fabric in this order is 10 metres.

To clarify the printout of results, some further explanations are needed. The column "the following order length" (foll.o.l.) contains the reference number of the order length which is to be cut next from the roll given in the column "roll"; this item of information is helpful to the worker handling the rolls at the placing machine. The column "size numbers" refers to the size numbers of the ready-made clothes whose pieces make up a particular order length; size numbers do not affect the creation of a cutting plan and are added only for records and statistics. The "corrections" column contains possible deviations between the planned and the actually cut numbers of pieces.

5 Factors Affecting Utilization of Stock

Computerized creation of a cutting plan, or computerized laying out, leads to less than one percent of trim loss; without laying out, the trim loss accounts for about three percent. In industry, manual laying out is also used; its results largely depend on the individual skills and experience of a particular worker. The average amount of trim loss depending on the manner of laying out is shown in Figure 4.

[Figure 4: Dependence of trim loss on the type of laying out: without laying out, with manual laying out, and with computerised laying out.]

The factors affecting the amount of stock savings are as follows (a simulation sketch of such a factor analysis is given after this list):
- the average roll length,
- the average order length,
- the average difference between the longest and the shortest order length in a particular customer order,
- the number of different order lengths cut out of the same roll.
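The influence of these factors can be studied by simulation: generate many fictitious orders in which one factor is varied and average the resulting trim loss, which is essentially the analysis reported below. The sketch shows the shape of such an experiment using the illustrative cut_order function from Section 3; all ranges are assumptions, and because the sketch cuts greedily rather than searching all per-roll combinations as COLA does, its losses will be higher than those reported in the article.

```python
import random

def random_order(n_lengths=4, seed=None):
    """Generate a fictitious customer order; all ranges are assumptions
    loosely following the values the article calls typical."""
    rng = random.Random(seed)
    s = sorted(rng.randint(100, 500) for _ in range(n_lengths))   # cm
    b = [rng.randint(10, 50) for _ in range(n_lengths)]           # pieces
    rolls = [rng.randint(1000, 10000) for _ in range(12)]         # cm
    return s, b, rolls

losses = []
for trial in range(100):
    s, b, rolls = random_order(seed=trial)
    plan, unmet = cut_order(s, b, rolls, eta=4)   # sketch from Section 3
    if any(unmet):
        continue                                  # keep fully met orders only
    cut = sum(k * s[i] for _, cuts, _ in plan for i, k in cuts)
    resid = sum(rest for _, _, rest in plan)
    trim = sum(rest for _, _, rest in plan if rest < min(s))  # definition 1
    losses.append(100.0 * trim / (cut + resid))

if losses:
    print(f"average trim loss over {len(losses)} orders: "
          f"{sum(losses) / len(losses):.2f}%")
```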
[Figure 5: Dependence of the trim loss on the average number of pieces cut out of a roll; trim loss (%) from 0 to 0.5 against 0 to 50 pieces.]

The influence of the above factors has been analysed by calculating a large number of fictitious orders with different values of the particular factors and averaging the resulting trim loss. The value of each factor has been tested in 50 to 250 cases. Although the orders were generated randomly, the values remained within the limits typical of the clothing industry.

The average roll length and the average order length determine the average number of pieces cut out of each roll. The higher this number, the more room there is for manoeuvre when seeking the optimal combination, and the better the result and the utilization of the fabric. Very good results can be achieved with more than 20 pieces; Figure 5 shows the relationship.

Figure 6 shows the dependence of the trim loss on the average difference between the longest and the shortest order length of a customer order, expressed as a percentage. It can be observed that a difference of 10 to 20% can suffice for the optimal utilization of fabric.

[Figure 6: Dependence of trim loss on differences in order lengths (%).]

A crucial factor, set as a parameter of the program, is the number of different order lengths cut out of a roll. A higher number means more possible combinations and therefore also lower trim loss. The analysis shows that with a combination of two order lengths in a roll the average trim loss accounts for 0.165%; with combinations of three it is reduced to 0.101%, and with combinations of four to 0.082%. It is noteworthy that even when cutting only one order length out of a roll, as in the traditional procedure without laying out, the COLA program can reduce the trim loss by up to 70%. This can be achieved merely by cutting the rolls in the sequential arrangement proposed by the program, as shown in Figure 7.

[Figure 7: Dependence of trim loss on the number of different order lengths cut out of the same roll.]

We have also carried out an analysis of 200 randomly generated samples whose factor values were favourable, common in real life and relevant to trim loss. The average number of pieces cut from a roll was approximately 30, the order lengths differed by approximately 40%, and the number of different order lengths cut out of a roll was 4. In 100 cases there was about a 5% shortage of fabric; in the other 100 cases there was about a 5% surplus. The results were as follows. In the first 100 cases the average trim loss was 0.03%, and 0.17% in the worst case. In the other 100 cases the average trim loss was 0.01%, and 0.06% in the worst case. In 98 of the first 100 cases the sum of all trim losses was lower than the shortest order length, which means that an optimal solution was certainly found; in the remaining 100 cases all solutions were optimal.

6 Conclusion

The issue of roll cutting in the clothing industry is essentially a bicriterial multidimensional knapsack problem with side constraints. In general one may not find an optimal solution within reasonable time limits.
For this reason we have found a solution through a combination of approximation and heuristics. The result is the COLA program, which reduces the trim loss to about 0.1% on average. Processing an average customer order takes less than 10 seconds with the COLA program. This speed is of the utmost importance when the lapse of time between receiving the fabric in the warehouse and processing the order must be as short as possible.

References

[1] Coffman Jr., E.G., Garey, M.R., and Johnson, D.S. (1984): Approximation algorithms for bin packing: An updated survey, in: Ausiello G., Lucertini M., and Serafini P. (eds.), Algorithm Design for Computer Systems Design, Springer-Verlag, New York, 49-106.
[2] Dyckhoff, H. (1990): A typology of cutting and packing problems, European Journal of Operational Research 44/2, 145-159.
[3] Ferreira, J.S., Neves, M.A., and Fonseca, P. (1990): A two-phase roll cutting problem, European Journal of Operational Research 44/2, 185-196.
[4] Gilmore, P.C., and Gomory, R.E. (1961): A linear programming approach to the cutting stock problem, Operations Research 9, 849-859.
[5] Gilmore, P.C., and Gomory, R.E. (1963): A linear programming approach to the cutting stock problem, Part II, Operations Research 11, 863-888.
[6] Stadtler, H. (1990): A one-dimensional cutting stock problem in the aluminium industry and its solution, European Journal of Operational Research 44/2, 209-223.
[7] Sweeney, P.E., and Paternoster, E.R. (1992): Cutting and packing problems: A Categorised, Application-Orientated Research Bibliography, Journal of the Operational Research Society 43/7, 691-706.

AI in Eastern and Central Europe

Matjaž Gams and Ivan Bratko
Jožef Stefan Institute, Jamova 39, 61000 Ljubljana, Slovenia
Phone: +386 61 17-73-644, Fax: +386 61 161-029
E-mail: matjaz.gams@ijs.si, WWW: http://www2.ijs.si/~mezi/matjaz.html
University of Ljubljana, Slovenia
E-mail: ivan.bratko@fer.uni-lj.si

1 Introduction

In 1995 the General Assembly of ECCAI (the European Coordinating Committee for Artificial Intelligence) decided to publish a critical assessment of the state of Artificial Intelligence in Europe and recommendations for its future. The ECCAI action is coordinated by its president Nicolaas Mars [1]. During this action, the authors of this paper were invited to prepare the report for Eastern Europe, Switzerland and Austria. Representatives from the countries concerned were invited to submit reports. The summarised report will be published as part of the official ECCAI report; the extended version appears here. In addition, reports from specific countries follow this paper.

This report was partially influenced by reports in the popular press concerning the "death" of artificial intelligence (AI). Basically, the press described the saturation of the traditional, old AI. At the same time, two new directions of AI are emerging: the new, weak AI and the industrial, invisible AI. The current status of AI is not well defined and may change rapidly. In the USA, various reports have appeared describing the AI situation and recommending a basic AI research agenda. The ECCAI effort is intended to provide a description of academic and industrial research, as well as applications. The report is not intended to be another commercial for AI; rather, it aims at highlighting AI's strengths and weaknesses.
Each author was invited to structure his/her report according to the following items; the form, however, was not intended to tame the subject:
- Research: major organisations (companies, research institutions, universities, etc.), areas of research.
- Development: AI applications.
- Use: AI techniques, systems.
- Education: institutions, degrees.
- Funding: national, multi-national, EU stimulation programs for AI.
- Impediments to AI.

2 Group Report

The state of the art in the reported countries follows. Three groups can be distinguished:

- Switzerland and Austria, with their long-established AI activities, healthy economies and neutral political positions. Austrian AI in particular, with, e.g., the Applied Artificial Intelligence journal, is highly active at the European and world-wide level. Nevertheless, funds for basic research in general, funds for AI, and interest in AI are not growing as fast as they were a couple of years ago, and in some areas they are decreasing. In some circles a slight disillusionment with vendors is present: some remember big promises, e.g. of the 5th generation, that were not fulfilled. Owing to a respected tradition, AI was introduced relatively slowly and has stayed at about the same level in recent years.

- Countries in transition (ECCAI members: Bulgaria, Czech Republic, Hungary, Slovakia, Slovenia, Ukraine; and non-ECCAI members: Croatia, Macedonia, Poland, Romania) generally have a long tradition in AI. Government-sponsored basic research in general, and AI research proportionally, have severely declined due to political and market changes, e.g. shock therapies; any state-of-the-art report is therefore bound to change rapidly. Several of these countries have their own AI and/or computer-science journals and good ties with the EU, resulting in many joint research projects. Due to these changes, the problems with AI saturation emerging in some western countries have not played a significant role. After the shock and decline in economic terms and research funds, many AI communities have oriented themselves more towards real-life applications. The growing optimism is related to the constant growth of information-technology (IT) products, the combination of IT products with AI techniques, and expected economic growth.

- Ex-Yugoslav countries hit by war (Bosnia and Herzegovina, Yugoslavia). While AI activities in some ex-Yugoslav countries (especially Slovenia, with the highest GNP in Eastern Europe, and partially Croatia and Macedonia) are comparable in tradition with other countries, AI activities decreased or even ceased in Bosnia, which was hit heaviest by the war, and in Yugoslavia (Serbia, Montenegro) due to international isolation. Yugoslavia was once a reasonably prosperous AI country with several important AI applications, even in the military area.

Overall, funding for basic AI research, and AI activities in general, are in a slight decline. Currently, research prototypes made for the purpose of producing interesting new publications still dominate. Many scientific events such as conferences still have a distinct scent of academic isolation, thus distancing themselves from real life and real problems. A reasonable optimism in AI is present in Central and Eastern Europe, orienting more towards real-life applications and integrating AI with modern information technology. AI is changing from a science unsoundly claiming to change the world into a modest and reasonably successful intelligent-systems IT discipline.
New bold ideas are rare, yet there is a feeling that there are many fundamental questions which only AI integrated with related disciplines can investigate.

3 Summary Report

3.1 Research

Basic and applied research in the field of AI is done mainly at universities, governmental institutes and national academies of sciences. The proportion of AI research activities between different institutions varies from country to country; in several Eastern European countries, national academies coordinate the research activities of several individual institutions. The largest national AI groups usually have around 10 researchers, exceptionally over 20. A typical AI group has only a couple of researchers; AI groups are often organised as laboratories or even informal groups within larger computer departments of several tens of people. The number of people involved in AI research in a country is typically a few tens, but fewer than 100. The exceptions are around 200 researchers in Austria and fewer than 20 active AI researchers in Croatia and Macedonia.

Major research institutions:

- Austria: 43 university and research institutes and departments of companies work in the area of AI, the largest among them being the Austrian Research Institute for AI (OFAI) and the Christian Doppler Institute for Expert Systems, both located in Vienna, and the Research Institute for Symbolic Computation (RISC) in Hagenberg near Linz.
- Bulgaria: research institutions in the Bulgarian Academy of Sciences: the Institute of Information Technologies (IIT) and the Institute of Mathematics and Informatics (IMI).
- Czech Republic: universities in major cities, especially Prague: the Czech Technical University, Charles University, and the Institute of Computer Science of the Academy of Sciences of the Czech Republic.
- Hungary: the Computer and Automation Institute of the Hungarian Academy of Sciences (SZTAKI), the Central Research Institute for Physics of the Hungarian Academy of Sciences (KFKI), the Computer Research and Innovation Centre (SZKI) and the Computing Applications and Service C. (SZAMALK).
- Slovakia: universities in major cities, especially Bratislava: the Slovak Technical University, Comenius University, and government research institutes, including those of the Slovak Academy of Sciences.
- Slovenia: the Jožef Stefan Institute and Ljubljana University.
- Switzerland: the Universities of Geneva, Neuchatel and Zurich and the Swiss Federal Institutes of Technology in Lausanne and Zurich, private research institutions funded by the Dalle Molle Foundation, and a couple of companies such as the Union Bank of Switzerland or Swissair.
- Ukraine: the Association of Developers and Users of Intelligent Systems (ADUIS, Kiev), Taras Shevchenko National University (Kiev), and the V.M. Glushkov Institute of Cybernetics.
- Croatia and Macedonia: smaller AI groups exist in Croatia (University of Zagreb, Rudjer Boskovic Institute), and a couple of AI researchers work at the Macedonian university.
- Poland: universities (among others: the Krakow University of Mining and Metallurgy, Poznan University of Technology and Wroclaw University of Technology) and research institutes (e.g. the Institute of Computer Science of the Polish Academy of Sciences in Warsaw, IPI PAN).
- Romania: the Center for Advanced Research in Machine Learning, Natural Language Processing and Conceptual Modelling of the Romanian Academy (RACAI) and the Research Institute for Informatics in Bucharest (ICI).
3.2 Areas of interest

- Austria: medical expert systems, machine learning, intelligent agents, knowledge-based systems, computer vision, robotics, neural networks, natural language, speech understanding, knowledge representation, knowledge acquisition, intelligent tutoring systems, cognitive science, AI and society, connectionism, user interfaces, deduction systems, AI software technology, nonmonotonic reasoning, qualitative reasoning, planning and configuration, diagnosis and classification, knowledge engineering, dialogue models, philosophical foundations of AI, image recognition, cognitive modeling.
- Bulgaria: knowledge representation and reasoning, knowledge-based systems, intelligent tutoring systems, intelligent computer algebra systems, natural language processing, logic programming, machine learning, case-based reasoning, neural networks.
- Czech Republic: distributed AI, evolutionary computing, expert systems (medical, probabilistic), fuzzy systems, ILP, image processing, industrial applications, knowledge engineering, logic programming, machine learning, machine vision, qualitative reasoning, robotics, natural language, philosophical foundations of AI, neural computing, probabilistic methods, knowledge discovery in databases, pattern recognition, speech recognition, artificial life, multi-agent systems.
- Hungary: expert systems, cognitive aspects of AI, intelligent decision support systems, neural networks, image processing, knowledge-based systems, speech generation/synthesis, robotics, databases, simulation, Prolog, logic programming.
- Slovakia: intelligent support in software construction, knowledge-based programming, software reuse, knowledge-based software system configuration management, intelligent processing in distributed and parallel databases, neural networks, speech recognition and pattern recognition, robotics, constraint logic programming, configuration design, genetic algorithms, computational logic, declarative programming, expert systems, knowledge representation, knowledge-based systems, uncertainty in nonstandard logics, approximate reasoning, application of graph theory, Prolog compilers, modelling of discrete event dynamic systems and synthesis of their intelligent control.
- Slovenia: machine learning, ILP, expert systems, multistrategy learning, AI in medicine, decision systems, knowledge engineering, logic programming, qualitative reasoning, natural language, philosophical foundations of AI, neural computing, probabilistic methods, data mining, pattern recognition, speech synthesis, genetic algorithms, intelligent agents.
- Switzerland: expert systems, knowledge-based systems, computer vision, robotics, neural networks, natural language, machine learning, AI and perception, speech understanding.
- Ukraine: decision making and planning systems, expert systems, knowledge discovery systems and their application in chemistry and material science, systems for classification, diagnostics and forecasting, systems for processing natural language texts, processing of encyclopaedical knowledge, neural networks, theorem proving, dialogue natural language systems, visual image processing, medical diagnostics, situational management systems, application of AI methods in CAD/CAM.
- Croatia: logic programming, ILP, philosophical foundations of AI.
- Macedonia: machine learning, AI in robotics, intelligent information systems.
- Poland: intelligent control systems, image recognition, automatic learning, decision analysis, expert systems, distributed intelligent multiagent systems, intelligent information systems.
- Romania: multistrategy learning, knowledge acquisition, intelligent adaptive agents, inductive logic programming, natural language processing, conceptual modelling, knowledge representation, intelligent tutoring systems, cognitive science, automatic programming, genetic algorithms, intelligent manufacturing, simulation, deductive databases, expert systems, neural networks, constraint processing, knowledge-based modelling and simulation, computational logic, artificial life, fuzzy theory and systems.

3.3 Development

Typically, the institutions active in basic and applied research occasionally develop mostly experimental prototypes. However, there are some very interesting applications in specific countries. Only a couple of specialized AI companies exist, mainly in the Czech Republic and Switzerland:

- Czech Republic: Rockwell Automation, Ltd., Research Center Prague (distributed AI, knowledge acquisition and KB systems), and a private company, HEM.
- Romania: a small company (VEDA).
- Switzerland: development groups in many large companies. Each of the three large Swiss banks (Union Bank, Credit Suisse, Swiss Bank Corporation) has at least a small group working on AI and neural networks. Several small companies sell AI services, the most active being Synlogic AG in Binningen, close to Basel. Swissair has a large (15-person) group developing AI software for various airline applications; these products are also sold to some 40 other airlines worldwide, and Swissair uses about 200 Lisp machines worldwide.

3.4 Use

The current use of AI techniques is typically restricted to experimental prototypes. Such systems are normally tested by domain experts and occasionally by end-users; very few of them reach the stage of daily use. However, there are several very interesting systems used in many practical applications, some of them in practical use for a longer period.

- Austria: many systems have been developed; however, no clear description is available.
- Bulgaria: experimental prototypes of decision support and expert systems are developed, typically rule-based, case-based or based on neural networks. Examples: a rule-based expert system for classification of historical-geographical texts, two expert systems for diagnosis and treatment of plant diseases, a system for classification and evaluation of musical structures, and a rule-based system for diagnosis of glaucoma eye diseases. An expert system for urgent toxicology, called "Medicotox Concillium", is used in a Medical University and in about 10 hospitals throughout Bulgaria. A multi-agent based system for computer-simulated military exercises is used in the Bulgarian army. A rule-based expert system for classification of coins and coin treasures is used in several historical museums in Bulgaria.
- Czech Republic: several industrial AI systems are in routine daily use: the production planning system TEPRO (running in TESLA Kolin since 1988 and in CKD-Tatra since 1989); the ES IZOLEX for technical diagnostics in the domain of the electric net (Czech electric power distributor CEZ); and an ES for evaluation of the actual state of air pollution and its impact on living organisms (Czech Institute of Hydro-Meteorology, Usti nad Labem).
TEPRO is a production planning system offering the creation of a correct and errorless production plan, using an information base consisting of manufacturing data relevant to the scope of the considered production type and knowledge about local production constraints. The system is based on heuristic search in a state space. The financial effect of the TEPRO system was evaluated at 6 million Kc per year in each of the two factories using it. Prototype ES were developed for genetic counselling, as well as a KB system supporting decision making in pathology. Several original systems have been developed and are currently being tested, e.g. GUHA for data mining and the linguistic system TIBAQ. GUHA is an elaborate system searching for propositional dependencies in data (characterized through a set of unary attributes). TIBAQ (text-and-inference based answering of questions) is a model of the build-up of an automatic "encyclopaedia"; its algorithms analyze input texts (in Czech) and perform natural language inferencing.

- Hungary: RECOGNITA PLUS, an OCR product with 14,000 installations in 24 countries; 1,500 installations of MProlog in 25 countries all over the world; CS-Prolog is now in use in 13 countries. Application areas and numbers of projects: building industry (7 projects), chemistry (10), computing (6), energetics (7), medicine and health service (16), other industrial projects (11). Products and developers: ALLEX PLUS (ALL), GENESYS (SZAMALK), MProlog Shell (IQSOFT), KASNES (SZTAKI), METABOLEXPERT (CompuDrug Ltd.), PANGEA (RMF), REALEX (BME), OPSQL (KFKI). An expert medical system for young children has been in use for several years.

- Slovakia: a prototype for the prediction of technological parameters of selected devices at a nuclear power plant. An AI application for IIASA, Laxenburg, Austria: knowledge-based checking of the correctness of a huge amount of data for the RAINS model, and knowledge-based wizards supporting users of the RAINS model. The expert system shell Codex has been in routine use in more than 15 applications, not only in Slovakia but also in the Czech Republic.

- Slovenia: around 10 systems are used occasionally or regularly for real-life applications, e.g. machine learning systems used for medical diagnoses. These systems are: DEX, a decision support system; the machine learning systems Assistant, Ginesys and Golding; expert systems developed for signal processing; genetic and CLP algorithms for scheduling tasks in industrial process optimisation; and Golding, a system for constructing equations from data. The systems were regularly or occasionally applied at several institutions in Slovenia and also sold in several other European countries. Over 60 prototype or real applications were carried out for various users and contractors. The major Slovenian applications were implemented at the Jesenice steel mill: the expert system controlling the quality of the rolling emulsion in the Sendzimir rolling mill has been in regular everyday use for over 5 years.

- Switzerland: expert systems and other applications using AI technology are in use in many companies and organizations, e.g. in the electronics, chemical and insurance industries. Swissair is a major user of AI technology, with measurable and very substantial benefits.

- Ukraine: there are 15 intelligent systems in occasional or regular practical use. CONFOR is intended for knowledge discovery, object classification, diagnosis and prediction. CONFOR operates with examples and non-examples of the investigated class of objects (processes, situations, phenomena, etc.)
that are represented as sets of attribute values. An original method of inductive CONcept FORmation based on a growing pyramidal network underlies the CONFOR system. CONFOR has passed successful long-term examination in Ukraine, Russia and the USA in such fields as material science, chemistry, medicine, geology, technology, astronomy, economy, sociology, etc. Examples of application tasks: prediction of the existence of predefined chemical compounds; search for new materials having desired properties (electro-optical, ferro-electric, superconducting, semiconducting); classification of economic situations; discovery of regularities that characterize diseases; disease diagnostics; prediction of solar activity; diagnostics of failures in technical units; and so on. The MANAGER complex makes it possible to design systems for situation analysis and decision making in various spheres of administrative and economic activity. Application domains: management in offices, enterprises and regions, and business activity support; it may also be used as a manager training system.

- Poland: applications of prototype systems in histological image recognition (diagnosis of brain and liver cancer), evolutionary techniques for task scheduling, multiagent systems for resource allocation and a multiagent system for task assignment.

- Romania: natural language interfaces and expert systems have actually been used in the past 10 years. Examples: DISCIPOL, an expert system able to learn from examples and intelligently assist a technologist in choosing the best technological solution in loudspeaker manufacturing (used at the "Electronica" Factory in Bucharest); DEXTY, an expert system for designing industrial halls using typified elements, which integrates frames (structured objects) with rule-based knowledge and with existing programs for stress verification written in conventional languages (used at the Civil Engineering Design Institute in Bucharest); and IURES, a natural language interface building system based on semantic grammars and conceptual graph modelling, which has been used to develop Romanian language interfaces to an information retrieval system (Institute for Metallurgy Research in Bucharest) and to a DATATRIEVE personnel database ("FLARO" Factory in Sibiu).

3.5 Education

AI education is available at graduate and PhD level as part of computer and information sciences in practically all countries. There are university programmes at the institutions mentioned in the research section.

3.6 Funding

In practically all the countries concerned, the major part of AI research funding comes from the state. Educational activities are usually funded through ministries of education; research activities are supported by research ministries, national science foundations, grant agencies or committees. A substantial source of funding for AI research and AI-related activities are the EU programs and bilateral cooperation.

- In Austria, the Austrian Research Institute for AI alone is at present a partner in six EU projects and a member of 4 Networks of Excellence, and many other Austrian institutions are involved in EU projects. The Austrian report for 1994 lists 149 cooperating partners, 87 in Austria and 62 abroad.
- In Bulgaria, there are two TEMPUS programs in the field of AI education, as well as PECO and Copernicus research projects and European Networks in AI-related areas.
- In the Czech Republic, 4 TEMPUS projects have already been finished, 1 is still running, and 8 PECO AI-related projects are currently running. - Slovakia: 2 TEMPUS projects have already been finished; cooperation in the COPERNICUS programme. - Slovenia: 7 active EU projects (TEMPUS and COPERNICUS projects, Networks of Excellence). - Poland: TEMPUS and COPERNICUS projects. - Romania: European concerted actions (the Network of Excellence in Language and Speech, ELSNET-Goes-East; the Network on Inductive Logic Programming, ILPNET; and the Trans-European Language Resource Infrastructure, TELRI) and a European joint project (Multilingual Text Processing, MULTEXT). 3.7 Impediments Among the countries in transition, the major impediment to the usage of AI results is the present state of their economies. There are not yet major private enterprises or major investors which could invest in AI. In general, lack of interest from industry and declining funding for basic research are seen as the two negative factors for further AI progress. Another problem is that some AI product and service providers did not meet the expectations they themselves created, and this is still remembered by managers. Acknowledgements We would like to thank Nicolaas Mars for initiating and coordinating the ECCAI action. References [1] Nicolaas Mars (1995), Instructions for the ECCAI Report "The Future of Intelligent Systems in Europe", p. 3. AI in Bulgaria Zdravko Markov Institute of Information Technologies Bulgarian Academy of Sciences Acad. G. Bonchev St., Block 29A 1113 Sofia, Bulgaria Fax: (+359 2) 720497, Phone: (+359 2) 707586 markov@iinf.bg 1 Research Basic and applied research in the field of AI is done mainly in the universities and in the Bulgarian Academy of Sciences (BAS). Research in the universities is restricted, as they are mainly concerned with education and research activities are not sufficiently stimulated. The main part of AI research is done in BAS. BAS consists of a number of research institutes. Four of them have AI or AI-related departments, with a total of about 30 researchers and PhD students. The larger AI departments are in the Institute of Information Technologies (IIT) and in the Institute of Mathematics and Informatics (IMI). In the universities no separate AI departments exist. However, within the Computer Science or Information Technology departments there are groups doing AI research. The major groups are in the Faculty of Informatics of Sofia University (the Department of Information Technology and the Department of Computer Science) and in the New Bulgarian University (the Computer Science and Cognitive Science Departments). There are also individuals in other universities working in AI. The total number of academic people involved in AI can be estimated at about 20. The main AI subareas where these organizations are strong are Knowledge Representation and Reasoning, Knowledge-Based Systems, Intelligent Tutoring Systems, Intelligent Computer Algebra Systems, Natural Language Processing, Logic Programming, Machine Learning, Case-Based Reasoning and Neural Networks. 2 Development AI applications are realized in most of the organizations mentioned in Section 1. No other organizations (e.g. companies) are involved. Since currently very few AI applications exist, and they are mostly experimental prototypes, it is difficult to estimate the number of people involved in this area. 3 Use The current use of AI techniques is very restricted. Experimental prototypes of AI systems are developed basically as a final result of basic research projects.
Such systems are normally tested by domain experts and occasionally by end-users, and very few of them reach the stage of daily use. Mainly experimental prototypes of Decision Support and Expert Systems are developed. Such systems typically use Rule-Based, Case-Based or Neural Network technology. The following instances of such systems can be listed: A rule-based expert system for classification of historical-geographical texts is under testing in the Faculty of History at Sofia University. Two expert systems for diagnosis and treatment of plant diseases are used in the Institute of Plant Protection in Sofia. An AI system for analysis, classification and evaluation of musical structures is used in the Sofia Musical Academy. A rule-based system for diagnosis of glaucoma eye diseases is under development and experimental testing in the Medical Academy of Sofia. Generally, real routine daily use of AI systems is difficult to find. There are a few systems which are used occasionally, mainly in medical, educational and military institutions. Here are some examples. An expert system for urgent toxicology, called "Medicotox Concillium", is used in a Medical University and in about 10 hospitals throughout Bulgaria. This system employs Case-Based technology and is implemented in Prolog. A newspaper (Armia, No. 13854, 1995) reported that a multi-agent based system for computer-simulated military exercises is used in the Bulgarian Army. A rule-based expert system for classification of coins and coin treasures is used in several historical museums in Bulgaria. All these applications run on PCs. Since in all of the above cases the AI applications are used in non-commercial organizations, it is difficult to make any cost-benefit analysis. 4 Education AI education in Bulgaria is available at graduate and PhD level. Currently there are two university programmes for AI education at graduate level. These are the MS programme in AI in the Faculty of Informatics and Mathematics of Sofia University and the programme in Cognitive Science at the New Bulgarian University. The first one is a joint activity of several university and BAS departments and currently has 18 students. Almost all of the key players in Bulgarian AI research deliver courses within the framework of this programme. The second AI programme is at both MS and PhD level and also has 18 students. The teachers in both programmes are research scientists from BAS and assistant professors from the universities, mainly with Computer Science and Mathematics backgrounds. PhD study is available at all institutions involved in AI research or education. The number of PhD degrees in AI granted annually can be estimated at about 5. 5 Funding The major part of AI research funding comes from the state. For the state universities and BAS it takes the form of an annual budget. Another way of providing state funding for AI research is through grants from the National Science Fund (NSF). Normally all groups working in AI have at least one grant from the NSF. These grants provide limited funds for equipment, books and periodicals, and conference travel. However, they are insufficient for hiring additional staff. The universities (especially the private ones, such as the New Bulgarian University) raise funds through paid education. These funds, however, are used only for educational purposes, not for AI research. A substantial source of funding for AI research in Bulgaria are the EU programs.
Currently there are two TEMPUS programs in the field of AI education. There are also PECO and Copernicus research projects and European Networks in AI-related areas. Unfortunately, AI research is not a central topic in any of these projects. The newly announced INCO-Copernicus program also provides no direct funding for AI research. There are two problems with the EU programmes concerning AI. The first is the unbalanced ratio between the money spent on educational programmes (e.g. TEMPUS) and that spent on research. Thus AI education, which is relatively well developed and does not require much funding, receives much more money than AI research, which is struggling to survive. Second, the EU programmes always require application- or industry-oriented research. However, there is currently no demand for such research from Bulgarian industry, which makes it difficult to set up an EU-funded project in the field of AI. 6 Impediments I think that the major obstacle to introducing AI systems in practice is the limited use of advanced Information Technology (IT) by Bulgarian industry. This is actually the reason for the lack of interest in AI technology from the Bulgarian IT companies, which still have a large market for conventional IT products. AI is currently a purely academic area. Even the applications, as far as they exist, are motivated by the desire of the AI researchers to prove their ideas in practice rather than by a real practical need. In conclusion, applied AI is almost dead (there are a limited number of non-commercial applications of AI systems), though AI research and education are still alive. AI in the Czech Republic Vladimír Mařík and Olga Štěpánková step@lab.felk.cvut.cz 1 Research Most of the basic AI research is carried out at the universities and institutes of the Academy of Sciences of the Czech Republic.
The major relevant institutions and their research topics are listed below; the number in brackets offers a rough estimate of the full-time equivalent of persons involved, and the topics are presented in alphabetical order: - Czech Technical University, Prague (10): distributed AI, evolutionary computing, expert systems, fuzzy systems, ILP, image processing, industrial applications, knowledge engineering, logic programming, machine learning, machine vision, qualitative reasoning, robotics - Charles University, Prague (10): linguistics, logic programming, philosophical foundations of AI - University of Economics, Prague (5): knowledge engineering, machine learning, uncertainty processing - Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague (8): fuzzy logic, knowledge discovery, logical foundations of AI, neural computing, probabilistic methods, uncertainty processing - Masaryk University, Brno (4): fuzzy logic, ILP, knowledge discovery in databases, logic programming, machine learning, medical ES - Technical University, Brno (4): evolutionary computing, fuzzy systems, industrial applications, machine learning - Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Prague (4): fuzzy systems, image processing, pattern recognition, probabilistic expert systems - West-Bohemian University, Pilsen (4): pattern recognition, speech recognition - VSB, Technical University, Ostrava (2): fuzzy logic, linguistics - Silesian University, Opava (1): artificial life, multi-agent systems 2 Development Most AI applications are developed by organizations active in fundamental research, on the basis of contracts with the end users. Rockwell Automation, Ltd., Research Center Prague was the first applied research institution oriented towards industrial applications of basic research results. It cooperates closely with the Czech Technical University (especially in the areas of distributed AI, knowledge acquisition and KB systems). Recently, a private company, HEM, dedicated to neurocomputing was established, exploiting some results achieved at the Institute of Computer Science of the Academy of Sciences of the Czech Republic. 3 Use Several industrial AI systems are in routine daily use: - the production planning system TEPRO (running at TESLA Kolin since 1988 and at CKD-Tatra since 1989) - the ES IZOLEX for technical diagnostics in the domain of the electric network (Czech electric power distributor CEZ) - an ES for evaluation of the actual state of air pollution and its impact on living organisms (Czech Institute of Hydro-Meteorology, Usti nad Labem) TEPRO is a production planning system offering the creation of a correct, error-free production plan using an information base consisting of manufacturing data relevant to the scope of the considered production type and knowledge about local production constraints comparable to that of a highly skilled human production planner. The system is based on heuristic search in a state space (a minimal sketch of this kind of search is given below). The financial effect of the TEPRO system was estimated at 6 million Kč per year across the two factories using it. As all over the world, medical institutions are among the first users of developed prototypes of AI systems; this is the case with several ES for genetic counselling and a KB system supporting decision making in pathology. Several other original systems have been developed and are currently being tested.
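The report gives no implementation details for TEPRO, so the following is only a minimal, hypothetical sketch of heuristic search in a state space, applied to a toy planning problem: jobs of known duration are assigned one by one to machines, and a priority queue always expands the partial plan with the lowest optimistic estimate of the final makespan. The job data, machine names and the estimate function are illustrative assumptions, not TEPRO's actual design.

import heapq
from itertools import count

# Toy instance (hypothetical data): job -> processing time.
JOBS = {"J1": 4, "J2": 2, "J3": 3, "J4": 1}
MACHINES = ["M1", "M2"]

def estimate(loads, remaining):
    # Optimistic estimate of the final makespan: the current maximum
    # machine load, or an ideal even split of all work, whichever is larger.
    total = sum(loads.values()) + sum(JOBS[j] for j in remaining)
    return max(max(loads.values()), total / len(loads))

def plan():
    # A state is (remaining jobs, machine loads, partial assignment).
    start = (frozenset(JOBS), tuple((m, 0) for m in MACHINES), ())
    tie = count()  # tie-breaker so heapq never has to compare states
    frontier = [(0.0, next(tie), start)]
    while frontier:
        _, _, (remaining, loads_t, assignment) = heapq.heappop(frontier)
        loads = dict(loads_t)
        if not remaining:                   # a complete plan was reached
            return assignment, max(loads.values())
        job = max(remaining, key=JOBS.get)  # branch on the longest job left
        for m in MACHINES:
            new_loads = dict(loads)
            new_loads[m] += JOBS[job]
            rest = remaining - {job}
            state = (rest, tuple(sorted(new_loads.items())),
                     assignment + ((job, m),))
            heapq.heappush(frontier,
                           (estimate(new_loads, rest), next(tie), state))

if __name__ == "__main__":
    assignment, makespan = plan()
    print("plan:", assignment, "makespan:", makespan)

Because the estimate never exceeds the true makespan of any completed extension of a partial plan, the first complete plan taken off the queue is optimal for this toy instance; a real planner such as TEPRO would add domain-specific constraints and knowledge on top of such a search core.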
Let us mention the system GUHA for data mining and the linguistic system TIBAQ: - GUHA is an elaborate system searching for propositional dependencies in data (characterized through a set of unary attributes). It provides strong statistical support for its conclusions, which are efficiently presented to the user thanks to an elaborate use of logic. - TIBAQ (text-and-inference based answering of questions) is a model of the build-up of an automatic "encyclopedia". Its algorithms analyze input texts (in Czech) and perform natural language inferencing. 4 Education All Czech universities oriented towards computer science and/or technical topics offer a basic course in AI for undergraduate students as well as graduate courses in AI. A PhD degree in AI can be obtained from the major universities. In 1992, the Vilem Mathesius Centre was established at the Charles University. The centre is named after the founder of the Prague Linguistic Circle. It organizes regular international intensive two-week courses in linguistics and semiotics twice a year. 5 Funding There are two main sources of AI research funding, namely the host institutions and the research grants provided by several Czech grant agencies, in particular the Grant Agency of the Czech Republic and the Grant Agency of the Academy of Sciences of the Czech Republic. Several EC projects helped to create an up-to-date infrastructure for several of the research teams involved (4 TEMPUS projects have already been finished, 1 is still running, and 8 PECO AI-related projects are currently running). A strong Czech-Austrian cooperation is growing in the AI domain; it is supported by the Joint Research Center of FAW, J. Kepler University, Linz, and the Czech Technical University. AI in Hungary Edit Santane-Toth and A.E. Eiben santa@iqsoft.hu, vancza@sztaki.hu 1 Introduction This short overview is an updated version of the paper A.E. Eiben and E. Santane-Toth, "Artificial Intelligence in Hungary I-II", NVKI-Nieuwsbrief, October 1992, pp. 155-157, and February 1993, pp. 27-28. The main objective of this overview is to give a short account of Hungarian AI and thereby to encourage and enable international scientific cooperation. We try to provide readers with information on different aspects of AI in Hungary. As everyone knows, Hungarian economic life is under reconstruction now. Research labs, staff and fields of activity have undergone major changes during the last six years, which makes it difficult to get an up-to-date picture. We attach an Appendix where the exact addresses of the institutes mentioned in this overview are listed. 2 Prologue Hungarian Artificial Intelligence has its roots in the mid-50s, when Professor Laszlo Kalmar of Szeged University (Hungary) designed a machine which could be programmed in a mathematical formula language. With this, and his logic machine built in the USSR, he was a forerunner of AI's aim: an information technology revolution based on non-von Neumann architectures. Thereafter the country came into the limelight again in 1975, when the development of the Hungarian Prolog system (MProlog: Modular Prolog) and its applications started; see [Szeredi, 1975], [Santane-Toth and Szeredi, 1982]. This was noteworthy also from an economic point of view: by 1988 more than 1500 MProlog systems had been installed in 25 countries all over the world. By the early 80's there were a number of AI labs in Hungary doing research and/or applications in knowledge-based technology.
The most important centers were the Computer and Automation Institute of the Hungarian Academy of Sciences (MTA SZTAKI), the Central Research Institute for Physics of the Hungarian Academy of Sciences (MTA KFKI), the Computer Research and Innovation Center (SZKI Theoretical Lab, later IQSOFT Ltd.) and the Computing Applications and Service Co. (SZAMALK); for the exact addresses see the Appendix. 3 AI Association In Hungary it is the John von Neumann Computer Society (NJSZT), founded in 1968, that primarily takes care of distributing leading scientific results in computing, gathering technical experience and disseminating computer culture. Its role in propagating AI was, and still is, crucial. In 1979 the seminar series "Theoretical and practical problems in programming" was started under the auspices of the NJSZT, SZKI and SZAMALK. For six years it provided a high-level weekly forum for those who were interested. For example, the lectures that methodically surveyed the Japanese 5th generation project were regularly attended by 100 people from about 70 national institutes for six months. Experts and researchers working in different fields of AI ran more and more forums, among others monthly and quarterly seminars within the NJSZT and outside it. The Hungarian AI community was officially formed by creating an Artificial Intelligence and Pattern Recognition Section (AI&PR Section) within the NJSZT in 1976. Among the nearly 2000 NJSZT members there were about 80 taking part in the work of the AI&PR Section. After several years of coexistence it turned out that the groups interested in the topic of pattern recognition and in the other topics of AI were almost disjoint. As a result, in 1990 the Artificial Intelligence Group (AIG) and an Image Processing Group (IPG) were founded as two separate sections, the legal successors of the AI&PR Section, within the NJSZT. There has also been considerable AI activity within the framework of the Section of Medical Informatics, founded in 1968. In 1996 the number of persons interested in AI is about 300 (though in the AI Membership Directory organized by ECCAI the number of Hungarian colleagues is only 160). The AI Group organizes regular activities, such as the quarterly AI-day featuring lectures on selected topics. In autumn 1991 the topic was "Applications of Expert Systems", whilst in 1992 "Cognitive Aspects of AI", "Intelligent Decision Support Systems" and "Neural Networks" were addressed in separate meetings. In addition to these thematic meetings there are occasional lecture-afternoons with invited (foreign) speakers in both the AI and the IP Groups. The first separate interest group in a field close to AI was the Hungarian Robotics Association (HRA). The HRA was founded in 1985, with 40 affiliated institutions and more than 300 individual members. Their objective now is to turn the association into a consortium formed by industrial companies, R&D institutions, individuals, universities and ministries having interests or duties regarding the reconstruction of the Hungarian economy. The NJSZT is a member of IFIP (since 1969), IAPR (since 1982), ECCAI (since 1983), and IMIA and EFMI (since 1986 and 1988, respectively). 4 AI Newsletters There is no AI periodical in Hungary. Nevertheless, there are journals regularly reporting on AI developments; for instance, Informacio-Elektronika published a thematic series on the 5th generation project in 1983/84 (based on the above-mentioned seminar series).
After publishing the first national volume of studies on Expert Systems in 1988 [Gabor, 1988], this journal devoted a special issue to the state of the art of the expert system field in Hungary, see [Koch and Santane-Toth, 1990]. (Unfortunately, this journal and others have since ceased publication.) From 1992 the national monthly computer magazine ALAPLAP has published a popularizing AI series. Other Hungarian journals often publish AI-related papers, too. 5 AI Meetings The first regular meetings in the field concerned image processing. The triennial Hungarian Workshop on Image Analysis was first held in 1985, and the last one (with about 70 participants) in 1991 [Chetverikov, 1991]. The first two-day seminar devoted specifically to Artificial Intelligence was held in Visegrad in 1989, yielding proceedings in English containing 15 contributions, see [Fekete, 1989]. There were also 20 other lectures not included in the proceedings. The Second Conference on Artificial Intelligence was held in 1991 in Budapest. This time there were about 300 participants from almost all institutions having anything to do with AI. During the three-day conference 46 lectures were delivered and compiled in English proceedings, see [Fekete and Koch, 1991]. Since Hungarian was the official conference language, most foreign participants came from neighbouring countries. The topics addressed by the speakers, without giving a complete list, were: theory and applications of expert systems, knowledge representation and acquisition, reasoning techniques, natural language understanding, pattern recognition, decision support systems, logic programming (Prolog), learning, cognitive psychology and AI education. The 3rd nationwide AI conference with the same topics was held in Budapest in 1993, see [Koch, 1993]. Besides national AI gatherings there are regular Austrian-Hungarian Conferences, most of which touch on AI-related topics, for example Intelligent Systems in 1991, see [Gyorgy, 1991]; this conference was held in cooperation with the Slovak Cybernetic Society as well. The main international events in Hungary (mainly in Budapest) during the last four years were the 5th International Conference on Computer Analysis of Images and Patterns (1992), the 10th International Conference on Logic Programming (ICLP'93), the 5th International Conference on Computer Analysis of Images and Patterns (CAIP'93), the IFIP/IFAC International Working Conference on Knowledge Based Hybrid Systems in Engineering and Manufacturing (KNOWHSEM'93), the 8th Symposium on Microcomputer and Microprocessor Application (1994), the 1st and 2nd EURASIP International Workshops on Image and Signal Processing (1994, 1995) and the 12th European Conference on AI (ECAI'96). 6 Research The Hungarian Academy of Sciences, the National Committee for Technological Development and other sources provide financial support for R&D activities. The research centers of today are more or less the offspring of those active in the early period of Hungarian AI, see e.g. the surveys [Santane-Toth, 1991a,b]. Some of them, however, became victims of the rationalization/privatization wave and ceased to exist, or operate with a reduced staff. Many of them are being reorganized, sometimes split into a number of new companies, and new researchers are entering the scene, too. Research labs, staff and fields of activity are currently undergoing major changes, which makes it difficult to get, and maintain, an up-to-date picture.
AI research in Hungary is confined to small teams at academic institutions, universities and smaller firms. The following illustrative list contains some better-known ongoing projects (institution, identification of the project, reference). Most of them are associated with international efforts. - Applications of AI in engineering: BME: AI in measurement science, see [Dobrowieczki, 1994]; ME: AI in material testing machines, logistics, see [Cselenyi, 1995]; ME: ES application in the optimum design of engineering structures, see [Jarmai, 1995]; MTA SZTAKI: process planning with reasoning and optimization, see [Markus and Vancza, 1994] - Artificial life: MTA SZTAKI: see [Csuhaj-Varju et al., 1994] - Cognitive science: ELTE, Dept. of Experimental Psychology: see [Mero, 1990] - Epistemological foundations: MTA SZTAKI: see [Vamos, 1991] - Reasoning with uncertainty: BME: fuzzy reasoning and control, see [Koczy, 1995]; ME: fuzzy reasoning in a vague environment, see [Kovacs and Koczy, 1995] - Genetic algorithms: KFKI MSZKI: continuous restoration ES, see [Kadar, 1996] - Knowledge-based systems: IQSOFT: ZEXPERT, see [Arnold et al., 1992]; MTA ITA: KBS development environment (PECADS project in the Copernicus program, CP: 759) - Knowledge representation: BME: modelling measurement problems with constraints, see [Tilly, 1994]; MTA SZTAKI: pattern representation of knowledge, see [Vamos et al., 1995] - Multi-agent systems: KFKI MSZKI: integrating intelligent systems into a cooperating community for electricity management, see [Varga et al., 1994] - Natural language processing: MorphoLogic: Glosser, Gramlex, Multext-East, Elsnet Goes East (EC projects), see [Proszeky, 1994] - Neural networks: BME: neural filters, application of neural networks for dynamic system modelling, see [Duray, 1993]; ME: use of ANNs for prediction of roll forces in hot rolling of steels, see [Farkas, 1996]; MTA SZTAKI: the analogic paradigm (the integration of neural network and logic-based architectures), see [Roska and Chua, 1993]; MTA SZTAKI: a neuro-fuzzy approach to pattern recognition and control, see [Monostori, 1995] - Parallel logic programming: IQSOFT: CUBIQ, parallel logic programming, a graphical tool set and knowledge base tools, see [Umann et al., 1996]; KFKI MSZKI: LOGFLOW, a massively parallel Prolog machine, see [Kacsuk, 1994]; ML Ltd.: distributed real-time Prolog, see [Futo, 1993] - Plausible reasoning: ALL: plausible reasoning as a foundation of machine discovery, see [Finn et al., 1995] - Speech understanding: BME: speech recognition systems, see [Gordos, 1992] - Vision, signal understanding and image processing: MTA SZTAKI: texture analysis applications using interaction maps, see [Chetverikov, 1995] By the time of ECAI-96 three documents will be ready: a Hungarian AI Address List, a Hungarian AI Bibliography and a Hungarian AI Software Catalogue. It is hoped that these documents will give a much more complete picture of Hungarian AI than this present attempt. The main tendency of the latest AI research in Hungary is to develop theories and methodology supporting the solution of complex, real-life problems. Hungarian AI people may turn to two institutions maintaining national R&D database services, so that their work and/or products get registered. The National Technical Information Centre and Library (OMIKK) serves current data from its R&D databases: Hungarian R&D Information Systems, -Institutes, -Projects and -Experts (in Hungarian and in English, too).
The databases of the Institute of International Technology (NETI) contain information about new products, R&D results and free R&D capacities; these are accessible through NETI, foreign hosts and the missions of the Republic of Hungary (embassies and commercial offices). The services of both institutes are going to be made available via the Internet. 7 Development In Hungary there are significant results in the field of KBS. Up until 1991 the national KBS shells and applications were logic- and rule-based, mostly on PCs. After the weakening of the COCOM restrictions and the appearance of multinational computer companies, shells for workstations and mainframes supporting multiple paradigms became available. Since then, projects on intelligent, integrated applications have been running and application-oriented shells are being developed. Significant research is being done in natural (Hungarian) language processing, speech generation/synthesis, image processing and robotics at several national institutes, with results that are useful in practice; several of them are internationally acknowledged. In the 80's work on KBS development tools was carried out in parallel with work on methodology, as well as with application development. During the last three years new projects have been started to develop customizable application-oriented shells. In the meantime hybrid shells like ART, KEE, Level5 Object and G2 have been introduced and are now in use at a number of sites. Below we list some better-known Hungarian KBS projects between 1985 and 1991, taken from Santane-Toth (1991a), indicating the application areas and the number of projects: building industry: 7, chemistry: 10, computing: 6, energetics: 7, medicine and health service: 16, and other industrial projects: 11. In 1991 about 30 systems were at product level. More than 40 national institutes were engaged in KBS development, and even more experts from a further 40 institutions were working on system building (Santane-Toth (1991a,b) gives references, too). We hope that the above-mentioned Hungarian AI Software Catalogue will give an up-to-date list of the results of this development activity. AI language systems developed in Hungary (in alphabetical order, together with the developing institution): CS-Prolog, Communicating Sequential Prolog Professional (ML Ltd. and ALL); MProlog, Modular Prolog (IQSOFT, formerly the Theoretical Lab of SZKI). (MProlog was the first Hungarian-made AI tool on the international AI market. By 1988 more than 1500 MProlog systems were in use in 25 countries.) The first Hungarian general-purpose shells that were commonly used are the following: ALLEX PLUS, a hybrid shell (ALL and ML Ltd.); GENESYS, a rule-based shell (SZAMALK); MProlog Shell, a Prolog-based shell (IQSOFT). Other, problem-oriented tools have been developed and are in use: CAPE, supporting protocol analysis (KFKI MSZKI); KASNES, a case-driven shell-like tool based on statistical pattern-recognition methods (MTA SZTAKI); METABOLEXPERT, an in-house shell for building chemical and medical-biological predictive KBSs (CompuDrug Ltd.); OPSSQL, an intelligent (knowledge-based) data dictionary coupling OPS5 and the ORACLE DBMS (KFKI MSZKI); PANGEA, a simple rule-based shell for handling engineering tasks (BME); and REALEX, a real-time shell for industrial applications (BME). (The forward-chaining core that such simple rule-based shells provide is sketched below.) Applications can provide challenges to drive the research forward, but results cannot be realized as mere side-effects of application projects.
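None of the shells' internals are described in this report; the following is only an illustrative, minimal sketch of the forward-chaining inference core that a simple rule-based shell provides. The facts and rules are hypothetical examples, and the code is not taken from GENESYS, PANGEA or any other product named above.

# A minimal forward-chaining rule engine (illustrative sketch only).

def forward_chain(facts, rules):
    """Fire every rule whose conditions all hold, until a fixed point.

    facts: a set of strings; rules: a list of (conditions, conclusion)
    pairs, where conditions is a set of strings and conclusion is a string.
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)  # the rule fires: assert its conclusion
                changed = True
    return derived

if __name__ == "__main__":
    rules = [
        ({"order_late", "machine_free"}, "reschedule"),
        ({"reschedule"}, "notify_planner"),
    ]
    # Prints the two given facts plus the derived 'reschedule' and 'notify_planner'.
    print(forward_chain({"order_late", "machine_free"}, rules))

A real shell adds a rule language and parser, variables and pattern matching, conflict-resolution strategies, explanation facilities and a user interface on top of such a loop.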
During the last few years activity in both the development and application areas has diminished. 8 Use The use of AI techniques is not yet widespread in Hungary. There are internationally well-known Hungarian-made tools, and some AI applications are in use, but most of them are in prototype or experimental phases. Some Hungarian-made AI products are on the market: CS-Prolog Professional: by 1996 more than 150 CS-Prolog systems were in use in 14 countries. MorphoLogic tools, i.e. spelling and grammar checkers, hyphenators and thesauri: from 1993 these tools have been licensed by Microsoft, Lotus and other international software companies, and they are used in some 100,000 applications in Hungary. After the Hungarian market, the Polish, Romanian and Bulgarian markets are planned. RECOGNITA PLUS, the Hungarian-made OCR product (Recognita Ltd.): by 1996 there were 140,000 installations all over the world, with distributors in 28 countries. MorphoLogic is bridging gaps between academia and industry in Hungary. They are developing tools which support intelligent text analysis, free-text search, database indexing and bilingual "morphological dictionaries" which may be used in machine-aided translation. 9 Education Although some topics of AI appeared in Hungarian higher educational programs as part of already established subjects at the beginning of the 80's, it was not until the middle of that decade that AI began to be taught as a stand-alone subject in its own right, first at the Budapest University of Economic Sciences (BKE) and at the Kando Polytechnic of Technology (KKMF). Presently a basic one-semester course, mostly called Introduction to Artificial Intelligence, is typically a compulsory part of the first three years for students of all Technical Informatics and Computer Science programs taught at universities and polytechnics in Hungary. Three examples of educational AI programs follow. AI education originated during the mid-80s at the Eotvos Lorand University (ELTE), Department of General Computer Science. The AI module, as a separate part of the informatics education, was established in 1988. The compulsory basic course, which takes two semesters, gives an introduction to the main techniques of knowledge representation and the searching and inference methods involved. After that the students can progress by obtaining credits from the following subjects: knowledge-based technology and expert systems, AI languages (especially Prolog and its applications), robotics, cognitive science, etc. New subjects are continuously added to the present list. A new course on pattern recognition and neural networks is under development. From 1992/93 a three-year postgraduate program has been running. The second example is the Technical University of Budapest (BME), which has a long-standing tradition in teaching applied Artificial Intelligence and related subjects. The Dept. of Measurement and Instrument Engineering (and occasionally the Dept. of Process Control and the Dept. of Telematics) has taught undergraduate, graduate and post-graduate courses in AI for more than 20 years. Obligatory MSc courses: intelligent measurement systems (AI methods in the design of complex engineering systems), application of AI (an AI survey with emphasis on how AI methods should be interpreted and evaluated from the point of view of their technical usage) and robotics and vision. Optional MSc courses: AI state of the art, symbolic signal processing, artificial neural networks, KBSs, ESs, soft computing methods.
There is laboratory practice associated with almost every subject. PhD courses: advances in AI (intelligent real-time systems, 2nd generation ESs, knowledge-level analysis, agent systems, etc.), fuzzy logic, artificial neural networks. The third example is the Kando Polytechnic of Technology, Institute of Informatics (KKMF MSZI). A new model for teaching AI has been designed at the institute, in which emphasis is placed on the following aspects of AI: any intelligent action requires at least a minimal degree of sensing capability, information processing capacity, knowledge, learning and reaction (signalling, moving ability). According to this model the compulsory subject titled Introduction to AI covers the following major areas: sensory systems, incremental intelligence, classic and new search methods, genetic algorithms, classic and fuzzy logic and their applications, artificial neural networks, pattern recognition and image processing, machine learning and robotics. The elective specialization module contains three main courses: theoretical basis of AI, digital image processing and artificial neural networks. Other supplementary elective courses: robotics, decision support systems, ESs, knowledge-based technology and intelligent computer applications. The other universities and polytechnics offer a number of courses for 3- and 5-year students to broaden and/or deepen AI knowledge. It can be stated that AI education is now one of the most dynamically evolving fields in Hungarian higher education. 10 Funding The Hungarian Academy of Sciences (MTA), the National Committee for Technological Development (OMFB), the National Research Foundation of Hungary (OTKA) and other sources have given financial support to R&D activities related to AI, too. In recent years the European Community programs for cooperation with the countries of Central Europe have provided funding for several AI-related R&D projects in Hungary. 11 Impediments To characterize the present situation it can be said that, although potential users are beginning to recognize the advantages of AI technology, a lot of projects are delayed by lack of financial support or interest, or because of the reorganization of Hungarian economic, financial and scientific life. Several systems are used in-house, where they were developed, never making it to the market. Fortunately, in many fields, e.g. in the engineering and financial spheres, extensive work is done in the area of information and (intelligent) consulting systems. Implementing such systems on a wide scale forms a good base for the development of usable intelligent (AI) systems in the near future. 12 Summary The modernizing wave of the Hungarian economic system raises special demands on information technology today. The emphasis lies on the construction of the basic information infrastructure (a National Information Strategy is under elaboration). In the 90's conventional technologies are the focus of interest in Hungary (re-engineering, reorganizing early applications, building integrated information services). In parallel with this massive computerization, interest in AI techniques is growing. The incorporation of AI techniques into everyday software technology has already begun in Hungary, too. There are a number of talented AI researchers, experts and system builders with notable experience. AI education is being set up, and thereby an institutional way of knowledge transfer is being formed.
The latest developments indicate that AI in Hungary has reached maturity in theoretical work as well as in applications. References [1] Arnold, D., Farkas, Zs., Gerlei, G., Molnar, K. and Umann, G., "ZEXPERT - a Prolog-based expert system shell", Proceedings of the International Conference and Exhibition at The Institution of Civil Engineers, London, 1992, 14 pages. [2] Cselenyi, J. and Toth, A., "Anwendung der Genetischen Algorithmen in der Beschaffungslogistik", Proc. of the 113th Pannonian Applied Mathematical Meeting, Bardejovske Kupele, Slovak Republic, 1995, pp. 38-45. [3] Chetverikov, D., "Pattern orientation and texture symmetry", Lecture Notes in Computer Science, Vol. 970, Springer, 1995, pp. 222-229. [4] Chetverikov, D. and Allo, G. (eds.), Proceedings of the Third Hungarian Workshop on Image Analysis, Budapest, 1991, p. 155. [5] Chetverikov, D. and Kropatsch, W. G. (eds.), "Computer Analysis of Images and Patterns", Proceedings of the 5th International Conference CAIP'93, Budapest, 1993, in Lecture Notes in Computer Science, Vol. 719, Springer-Verlag, 1993, p. 857. [6] Csuhaj-Varju, E., Dassow, J., Kelemen, J. and Paun, G., "Grammar systems: a grammatical approach to distribution and cooperation", Gordon and Breach, Amsterdam, 1994, VIII, p. 246. [7] Dobrowieczki, T. P., Louage, F. and Pataki, B., "The contribution of Artificial Intelligence to Measurement Science", Baltic Electronics Conference, Proc. of the Biennial Conf., Tallinn, 1994, pp. 105-110. [8] Duray, R., Pataki, B. and Horvath, G., "Some further possibilities of the application of neural networks for nonlinear dynamic system modelling", Proc. of the First European Congress on Fuzzy and Intelligent Technologies, EUFIT'93, Aachen, Germany, 1993, Vol. II, pp. 930-936. [9] Farkas, K., Hwu, Y. J. and Lenard, J. G., "Use of ANN for prediction of roll forces in hot rolling of steels", Nemzetkozi Szamitastechnikai Konferencia, MicroCad'96, Miskolc, Hungary, 1996. [10] Fekete, I. (ed.), Proceedings of the First Seminar on AI, Visegrad, Hungary, 1989, p. 170. [11] Fekete, I. and Koch, P. (eds.), Proceedings of the Second Conference on AI, Budapest, Hungary, 1991, p. 230. [12] Finn, V. K., Gergely, T. and Kuznyecov, S. O., "Plausible reasoning as a foundation for machine discovery", Technical Report Series 1995/6, ALL. [13] Futo, I., "Prolog with Communicating Processes: from T-Prolog to CSR-Prolog", Proc. of the Tenth International Conference on Logic Programming, ed. Warren, D. S., MIT Press, 1993, pp. 3-17 (invited paper). [14] Gabor, A. (ed.), "Expert Systems'88 - Knowledge Based Information Management in Hungary", SZAMALK, Budapest, 1988, p. 500. (In Hungarian) [15] Gordos, G. (guest ed.), "Special Issue on Speech Processing", Journal on Communication, Vol. XLIII, July-Sept. 1992, p. 72. [16] Gyorgy, A. (ed.), Proceedings of CIS'91, Austrian-Hungarian Conference on Intelligent Systems, Veszprem, Hungary, 1991, p. 328. [17] Jarmai, K., Farkas, J. and Meszaros, L., "Optimum Design of Main Girders of Overhead Travelling Cranes Using an Expert System", The First World Congress of Structural and Multidisciplinary Optimization, 1995, Extended Abstracts - Lectures, pp. 84-. [18] Kacsuk, P., "A massively parallel Prolog machine", invited paper at the 6th Int. Conf. on AI and Control Systems of Robots, Smolenice, 1994, pp. 71-80. [19] Kadar, P. and Mergl, A. K., "CORES - a continuous restoration Expert System", Proc. of ISAP'96, Orlando, FL (USA), 1996. [20] Koch, P. and Santane-Toth, E. (eds.),
"Expert Systems in Hungary - Theory, Methods, Applications", Special Issue of Informacio-Elektronika, Vol. XXV No. 90/12, p. 143. (In Hungarian) [21] Kovacs, Sz. And Koczy, L. T. "Fuzzy rule interpolation in vague environment" Proc. of the 3rd European Congress on Intelligent Techniques and Soft Computing, Aachen, Germany, 1995. Pp. 95-98. [22] Koczy L. T., "Algorithmic aspects of fuzzy control" International Journal of Approximate Reasoning, 1995. No. 12., 159-219. [23] Mero L., "Ways of Thinking. The Limits of Relational Thought and Artificial Intelhgen-ce" World Scientific Singapore, New Yersey, London, Hong Kong, 1990. [24] Monostori, L. "Connectionist and neuro-fuzzy techniques in intelligent manufacturing", Proc. of the First World Congress on Intelligent Manufacturing Processes & Systems, 1995. Mayaguez, P. Rico, Vol. II. Pp. 940-949. [25] Proszeky, G. "Industrial Applications of Unification Morphology" Proceedings of the 4th Conference on Applied Natural Language Processing, Stuttgart, 1994. pp, 213-224. [26] Roska, T. and Chua, L.O., "The CNN Universal Machine: An Analogic Array Computer" , IEEE Transaction on Circuits and Systems - II. Analog and Digital Signal Processing, Vol. 40. No. 3. pp. 163-173, March 1993. [27] Santane-Toth, E., "Survey of Hungarian KBS tools and apphcations in engineering field, Special Issue on Apphcations of Artificial Intelligence in Hungary", Engineering Applications of Artificial Intelligence Vol.4 No.6, 1991, pp.409-417. [28] E. Santane-Toth and P. Szeredi, "Prolog applications in Hungary", in K.L. Clark and S.-A. Tarnlund (eds.), Logic Programming, Academic Press, London, 1982, pp.19-31. [29] P. Szeredi, "PROLOG - a very high-level language based on predicate logic", in Preprints of the 2nd Hungarian Comp. Sci. Conf., Budapest, 1975, pp. 853-866. [30] Tilly, K., Dobrowieczki, T. P., Varkonyne-Koczy, A., Vadasz. B. and Kiss, I. "Modelling fault-tolerance in distributed systems", Proc. of the 8th Symp. on Computer and Microprocessor Application Environment" mP'94, Budapest, 1994. [31] Umann, G., Scott, R. B., Dodson, D. C., Parkas, Zs., Molnar, K., Peter, L. and Szeredi, P., "Using graphical tools in the CU-BIQ expert system tool-set", excepted for the Conference on The Practical Applications of Prolog, PAP'96, London, 1996. [32] Varga, L. Zs., Jennings, N. R. and Cockburn, D., "Integrating intelligent systems into a cooperating community for electricity management", Expert Systems with Applications, Vol. 7, No. 4, pp. 563-579, Oct-Dec. 1994. Elsevier Science Ltd. [33] Vamos, T., "Computer epistemology" World Scientific Singapore, New Yersey, London, Hong Kong, 1991. [34] Vamos, T., Kock, P. and Katona, F. "A strategy of knowledge representation for uncertain problems: experiments and relations to similar concepts", IEEE Transactions on Systems, Man and Cyberbetics, Vol 25, No. 10, October 1995. Pp. 1371-1383. 13 Appendix Referred institutions with addresses: ALL: Applied Logic Laboratory Hankoczy J. u.. 7. H-1535 Budapest HUNGARY BKE: Budapest University of Economy Fovam ter 8. H-1056 Budapest HUNGARY BME: Technical University of Budapest Mueg-yetem rkp. 3. H-1111 Budapest HUNGARY CompuDrug: COMPUDRUG Ltd. Hollan E. u. 5. H-1136 Budapest HUNGARY ELTE: Eotvos Lorand University Muzeum krt. 6-8. H-1088 Budapest HUNGARY IQSOFT: IQSOFT Intelligent Software Co. Ltd. Teleki B. u. 15-17. H-1425 Budapest HUNGARY KKMF MSZI: Kando Polytechnic of Technology, Institute of Informatics P.O.B. 112. 
H-1431 Budapest, HUNGARY KFKI MSZKI: KFKI Research Institute for Measurement and Computing Techniques, P.O.B. 49, H-1525 Budapest, HUNGARY ML Ltd.: ML Consulting and Computing Ltd., Gyorskocsi u. 5-7, H-1011 Budapest, HUNGARY MorphoLogic: MorphoLogic, Nemetvolgyi ut 25, H-1126 Budapest, HUNGARY ME: University of Miskolc, Egyetemvaros, H-3515 Miskolc, HUNGARY MTA ITA: Information Technology Foundation of the Hungarian Academy of Sciences, P.O.B. 49, H-1525 Budapest, HUNGARY NETI: Institute of International Technology, P.O.B. 570, H-1398 Budapest, HUNGARY NJSZT: John von Neumann Computer Society, Bathori u. 16, H-1054 Budapest, HUNGARY OMIKK: National Technical Information Centre and Library, Muzeum u. 17, H-1088 Budapest, HUNGARY SZTAKI: Computer and Automation Research Institute of the Hungarian Acad. of Sci., Kende u. 13-17, H-1111 Budapest, HUNGARY SZAMALK: Computing Applications and Service Co., Etele u. 68, H-1115 Budapest, HUNGARY AI in Poland Joanna Jozefowska Institute of Computing Science Poznan University of Technology Ul. Piotrowo 3a 60-965 Poznan, Poland 1 Research The main research centers for AI are universities (among others: the Krakow University of Mining and Metallurgy, the Poznan University of Technology and the Wroclaw University of Technology) and research institutes (e.g. the Institute of the Basics of Computer Science of the Polish Academy of Sciences in Warsaw, IPI PAN). The background of the researchers is twofold: Computer Science or Automatic Control. Thus the main research areas are the following: intelligent control systems, image recognition, automatic learning and multiagent systems. The number of people involved in research activities varies from 1 to 12 persons, and in teaching activities from 3 to 12 persons, in each institute or chair. As a rough estimate, the total number of people involved in AI research and teaching is about 70. Applied research is always strongly connected with the basic research, and usually both involve the same persons. The results of the research are presented at conferences. In recent years the following conferences on AI have been organized in Poland: - Knowledge Engineering and Expert Systems, Wroclaw 1993 - Decision Analysis, Expert Systems and Applications of Computer Systems, Warszawa 1994 - Distributed Intelligent Multiagent Systems'95, Krakow 1995 (international) - Fourth International Workshop on Intelligent Information Systems, Augustow 1995 (international) 2 Development There are no non-research institutions realizing AI projects. 3 Use The main areas of AI applications are the following: - histological image recognition (diagnosis of brain and liver cancer) - application of evolutionary techniques to task scheduling - decentralized, multiagent systems for resource allocation - a multiagent system for task assignment. All of the above applications are prototype systems. 4 Education The main institutions providing advanced education in AI are the universities of technology. Postgraduate students are offered the following specializations at the Krakow University of Mining and Metallurgy (AGH): - Image Analysis and Recognition - Vision Systems - Neural Networks and Pattern Recognition - Processing and Recognition of Spoken Polish - Analysis of Biomedical Signals. The other universities offer a wide range of monographic lectures, from neural networks and evolutionary algorithms through expert systems and knowledge engineering to intelligent control systems and multiagent systems. The background of the teachers is mainly computer science, automatic control and biocybernetics.
5 Funding Research in AI is mainly sponsored by the universities and by the Polish Committee for Scientific Research (KBN) in the form of individual grants. There are several TEMPUS and COPERNICUS projects realized in Polish universities in the field of AI. One of the universities is negotiating a contract with an industrial partner for an AI application. 6 Impediments The main impediment to the further introduction of AI-based systems in practice is the very limited interest of Polish companies in implementing such systems. Other obstacles mentioned by the respondents are common to many disciplines of Polish science and include scarce funding, limited availability of recent publications in the area, etc. AI in Romania Dan Tufis Research Institute for Informatics RACAI Bucharest Romania tufis@valhalla.racai.ro 1 Introduction Artificial Intelligence in Romania is present mainly in basic and applied research and in education. Several applications have been developed: expert systems, natural language processing applications, computer vision, speech recognition, learning agents and robotics. 2 Research The major organisations where basic AI research is done are: Center for Advanced Research in Machine Learning, Natural Language Processing and Conceptual Modelling of the Romanian Academy (RACAI), in Bucharest. RACAI is included in three European concerted actions funded by the European Commission (the Network of Excellence in Language and Speech, ELSNET-Goes-East; the Network on Inductive Logic Programming, ILPNET; and the Trans-European Language Resource Infrastructure, TELRI), in a European joint project (Multilingual Text Processing, MULTEXT) and in several bilateral research and educational programs (with the US National Research Council, The Twinning Program; with the Andrew W. Mellon Foundation of New York and George Mason University, The Internet Lab program; with LIMSI/Orsay, Conceptual Models for Natural Language Processing). Some other bilateral cooperation programs have recently been concluded (with the Istituto Dalle Molle di Intelligenza Artificiale, Lugano; the Institute for Studies in Semantics and Cognition, Geneva; and the Computer Science Department of George Mason University). RACAI has 8 full-time researchers and 7 affiliates, plus several associated students (working on their MS or PhD theses). There are three research groups pursuing basic and applied research: - The Machine Learning Group, led by Dr. Gheorghe Tecuci (director of the Center, also professor at George Mason University, USA). This group is doing research in multistrategy learning, knowledge acquisition, intelligent adaptive agents and inductive logic programming. - Natural Language Processing, led by Dr. Dan Tufis (also acting director of the Center and head of the Natural Language Group at the Research Institute for Informatics). This group is mainly concerned with linguistic (unification-based) formalisms and models for natural language processing (morphology, syntax and semantics). - Conceptual Modelling, led by Dr. Stefan Trausan-Matu (also lecturer at the Computer Science Department of the "Politehnica" University of Bucharest). This group is working in knowledge representation, knowledge acquisition, intelligent tutoring systems, cognitive science and automatic programming. Research Institute for Informatics in Bucharest (ICI). ICI is the biggest research institute in theoretical and applied Computer Science in Romania.
It is involved in 6 COPERNICUS AI projects (topics: knowledge acquisition, genetic algorithms, intelligent manufacturing, simulation, deductive databases and natural language processing). ICI has over 25 full-time researchers working in AI. Most of these researchers are concentrated in the three laboratories indicated below (nevertheless, there are other smaller research groups working in expert systems, neural networks, constraint processing, and knowledge-based modelling and simulation): - The Natural Language Processing laboratory, with 9 full-time researchers. This laboratory, led by Dr. Dan Tufis, is specialised in lingware development and natural language applications. It has been involved in several bilateral cooperation projects (with partners in Russia; in semantic processing based on conceptual graphs with LIMSI/Orsay, France; and in intelligent tutoring systems with George Mason University, USA). It is currently involved in the ELSNET ESPRIT project. - The Artificial Intelligence laboratory, with 9 full-time researchers working in knowledge representation (description logics), computational logic, expert systems, knowledge acquisition, neural networks and genetic algorithms. This laboratory, headed by Dr. Liviu Badea, is involved in two European funded projects (the Renegade COST project and the PEKADS COPERNICUS project). - The Intelligent Data-Bases laboratory, with 7 full-time researchers (out of 21) involved with AI technology. This laboratory, led by Mr. Radu Bercaru, conducts basic research in deductive databases and neural networks and applied research in computer vision, OCR and natural language interfaces. This work has been successfully applied in a series of commercial products: VEDA NeurOCR (an optical character recognition system dedicated to printed text); VEDA CAR (a video-camera-based recognition system for alphanumerical scripts, which has been successfully employed for automatic car license plate recognition); and VEDA PRIMS (an all-purpose 2-D image processing and pattern recognition system whose structure combines traditional techniques with neural algorithms at both the early image-processing and the recognition stages). "Politehnica" University of Bucharest (PUB). PUB is the biggest technical university in Romania. The majority of AI researchers at RACAI, ICI and some other AI sites graduated from the Computer Science Department of PUB. PUB has research groups in artificial intelligence in the following departments: - The Computer Science (CS) Department, with 6 professors doing research in AI (3 of them also working at RACAI). They work in knowledge representation, expert systems, intelligent tutoring systems, learning and automatic programming. Prof. Cristian Giumale is involved in an ESPRIT project with Manchester University (expert systems for integrated circuit design). Prof. Stefan Trausan-Matu leads the Romanian side of the COPERNICUS project "PEKADS" (knowledge acquisition using the KADS methodology). - The Electronics Department, with 3 professors and researchers working in speech recognition (Professor Corneliu Burileanu) and advanced hardware architectures for AI (Professor Gheorghe Stefan). - The Automatic Control Department, with about 4 professors having research activities in neural networks, genetic algorithms and fuzzy systems.
University of Bucharest, Department of Mathematics. The Department of Mathematics of Bucharest University, with at least 4 professors and usually 6-8 MS students, is doing research in expert systems, artificial life and computational linguistics. The main groups are led by Professor Luminita State (expert systems) and Professors Solomon Marcus and Gheorghe Paun (mathematical linguistics, artificial life). Recently, two new groups, led by Professors Adrian Atanasiu and Ion Vaduva respectively, started applied research in the area of natural language processing (machine translation). University Al. I. Cuza of Iasi, Faculty of Mathematics, Department of Informatics (AIC). This group in Iasi, led by Dr. Dan Cristea, is mainly concerned with the development of natural language technology (specialised programming languages for NL system development, prosody tools, educational systems), but also has interesting results in multivalued logics and constraint-propagation problem solving. The group consists of three professors and several students preparing their MS theses. Romanian Academy Institute for Theoretical Informatics in Iasi (ITI). This institute is more oriented towards theoretical computer science, but there are two groups with relevant activities in artificial intelligence. The first one, led by Professor Horia Teodorescu, is the most representative group in Romania as far as fuzzy theory and systems are concerned. The second group, made up of 7 full-time researchers, works in logic programming and natural language research. 3 Development The major players in developing AI applications are the above-mentioned RACAI, ICI, PUB, AIC and ITI. Currently there are no private AI companies. However, there is a small company (VEDA) selling the VEDA-series products, based on neural technology. 4 Use AI has not yet penetrated Romanian industry. Several AI applications (including applications developed in Romania) are currently in use in universities. Other applications (e.g. natural language interfaces, expert systems) have been used in the past 10 years. Among such applications, which have been experimentally used, one could mention: - DISCIPOL (developed by a team led by Dr. Gheorghe Tecuci), an expert and learning system that assists a technologist in designing manufacturing technologies for loudspeakers (experimentally used at the Industrial Electronic Factory in Bucharest); - DEXTY (developed by a team led by Prof. Stefan Trausan-Matu), an expert system for designing industrial halls using typified elements. It integrated frames (structured objects) with rule-based knowledge and also with some existing programs written in conventional languages (for stress verification). This expert system was used at the Civil Engineering Design Institute in Bucharest. - IURES (developed by Dr. Dan Tufis and Dr. Dan Cristea), a natural language interface building system. The system (based on semantic grammars and conceptual graph modelling) has been used to develop Romanian language interfaces to an information retrieval system (Institute for Metallurgy Research in Bucharest) and to a DATATRIEVE personnel database ("FLARO" Factory in Sibiu). 5 Education Advanced education in AI is provided in the majority of the Computer Science, Electronics and Automatic Control departments of the technical universities (and at some mathematics departments of universities), at least in Bucharest, Iasi, Cluj, Timisoara and Craiova. Higher (technical) education in Romania includes the master of science and doctoral degrees.
There is also a new degree between them, the so-called advanced studies degree. All these degrees include optional AI courses. AI is one of the four specialisations offered by the Computer Science Department of the Polytechnic University of Bucharest, and students may choose it; students from other specialisations may also take AI courses. The AI courses at the CS Dept. of PUB are: Introduction to AI, Logical Basis of AI, Functional Programming (LISP, Scheme, and ML), and Expert Systems. Advanced studies students also have two AI courses: Natural Language Processing and Advanced Functional Programming. The professors for these courses (at PUB's CS Department) are either doctors or enrolled in a doctoral programme. Students may also choose an AI subject for the PhD degree; several doctoral degrees have already been obtained on AI subjects.

6 Funding
Traditionally, AI research funding has been ensured mainly from governmental sources. Lately, international cooperation projects have become the main source. The basic research at RACAI, for instance, relies mostly on such funds (85%). As far as basic research is concerned, the general practice is that governmental financial support covers only salary expenses. All other research expenses (equipment, documentation, conference participation, visits to different research sites, etc.) have to be covered from other sources. European cooperation under different projects (COPERNICUS, TEMPUS, ERASMUS, etc.) and bilateral cooperation agreements with research institutions or universities from Western Europe, the USA, Canada and Asia are in most cases the only framework within which Romanian specialists can attend important scientific events or acquire modern equipment and programs.

7 Impediments
One major impediment for the usage of AI results in Romania today is the present state of the economy. Romania, like the other Central and Eastern European countries, is in a transition phase to a market economy. There are not yet major private enterprises or major investors which could invest in AI. Nevertheless, the economic situation in Romania is improving. For example, this year more than 4000 companies will be privatised. We hope that this will also bring new (including international) investments from investors interested in applying AI technology.

AI in Slovakia
Pavol Navrat, member, Slovak Society for Informatics, member, American Association for Artificial Intelligence, Slovak Technical University, Department of Computer Science and Engineering, Ilkovičova 3, SK-812 19 Bratislava, Slovakia
Phone: +42 7 791 395, Fax: +42 7 720 415
E-mail: navrat@elf.stuba.sk

1 Introduction
Artificial intelligence (AI) has gone through approximately three decades of development in Slovakia. The present state reflects this development in both a positive and a negative sense. On the positive side, there is a great amount of tradition and a relatively large pool of highly qualified people with a lot of experience in both research and education in various branches of artificial intelligence.
On the negative side, the recent development outside the field itself, most notably the process of changing almost completely the ownership within the whole economy (privatisation), along with the similarly far-reaching restructuring of most businesses, results, at least momentarily, in a decrease of interest in artificial intelligence applications. Enterprises struggling for survival cannot afford to invest in developing AI-based technologies. The government lacks money to support basic research on any but a very limited scale. This report concentrates on the current state of AI in Slovakia. While the history of AI in Slovakia is not the topic of this report, it should be stressed that there definitely was a long development before the current state was reached. Slovakia does not start from zero now. On the contrary, its record of AI related activities contains several noteworthy items:
- Since 1982, the international journal Computers and Artificial Intelligence, published in Bratislava, has functioned as a double bridge. On the one hand, the journal provides publishing space where papers written mostly by East-European and West-European authors meet. On the other hand, Computers and Artificial Intelligence is a medium of communication between two important sciences, computer science and artificial intelligence, enabling mutual enrichment as well as development and integration of ideas between them.
- Since the mid-eighties, international scientific conferences on Artificial Intelligence and Information-Control Systems of Robots have been organized biennially. The proceedings are published by a major international publishing house.
- Long before the iron curtain fell, the Laboratory of Artificial Intelligence in Bratislava was a meeting place for scientists from East and West. They could meet and work there in times when this was, unfortunately, not considered normal at all.
Obviously, such activities could not have been possible without broad collaboration on a national level of scientists, educators, and other people working in AI. The lion's share of the credit for organizing these activities goes to the Institute of Technical Cybernetics in Bratislava. We shall discuss some of the reasons for and consequences of the present state later. First, we describe the present situation in research. Next, development of AI applications is mentioned. Education at university level is described. Sources and methods of funding AI research in Slovakia are discussed. Finally, some impediments to improving the state of AI are indicated. In the conclusions, several possible ways to overcome the present difficulties are proposed.

2 Research
Basic research in AI has been conducted at various institutions throughout the country, most notably at universities and government research institutes, including those of the Slovak Academy of Sciences. Presently, basic research is essentially concentrated at a few universities. Besides them, some research is still being done at the Institute of Control Theory and Robotics of the Slovak Academy of Sciences. At the Slovak Technical University in Bratislava, the Department of Computer Science and Engineering is the major player in AI related basic research.
Building on their previous experience and tradition, the following projects are currently being investigated:
- intelligent support in software construction, with emphasis on knowledge based programming, software reuse, and knowledge based software system configuration management,
- intelligent processing in distributed and parallel databases,
- neural networks.
Approximately 15 people are involved in the research. Some topics are investigated in international cooperation (a project within the COPERNICUS programme) with institutions from Great Britain and France. Research in the area of neural computing is funded by a three-year grant awarded by the US-Slovak Science and Technology Program for the project entitled "Theoretical Investigation of Neocortical Plasticity". It is a joint project with partners from Vanderbilt and Brown Universities in the USA. At the same university, a few other departments are engaged in AI related research oriented in the following directions:
- neural network applications,
- speech recognition and pattern recognition,
- robotics.
At the Technical University in Košice, the Department of Cybernetics and Artificial Intelligence is active in basic AI related research. There are 6 researchers and 4 PhD students currently involved. Their main areas of interest are:
- Prolog, constraint logic programming,
- constraint satisfaction, configuration design,
- neural networks, fuzzy logic,
- genetic algorithms.
At the same department, other groups participate within international programmes in the following projects of applied research:
- configuration design - supporting knowledge-based tools for engineering design (2 persons),
- classification procedures based on neural networks for satellite images (2 persons),
and within national programmes:
- constraint solving problems (scheduling, layout) (2 persons).
At the Comenius University in Bratislava, research is done at the Institute of Informatics (after the former Department of AI ceased to exist). Their research themes are currently computational logic, declarative programming, expert systems, knowledge representation, and integration of the subsymbolic and symbolic levels. There are 8 persons involved. At the University of Economics, the Department of Applied Informatics conducts AI related basic research. Three persons are involved in investigating decentralized architectures of expert and knowledge-based systems. The Institute of Control Theory and Robotics of the Slovak Academy of Sciences in Bratislava succeeds in continuing basic research in AI, even if at a substantially reduced volume. The institute builds on the tradition of its organizational predecessor, the Institute of Technical Cybernetics. Currently, several research projects are investigated at the institute:
- Representation of uncertain knowledge for automated reasoning in nonstandard knowledge-based systems oriented towards robotics, a project with the following themes: uncertainty in nonstandard logics (fuzzy, probabilistic, modal, ...), approximate reasoning, uncertainty inference, knowledge-based systems with uncertainty, possibility, necessity, belief functions, etc. (2 people).
- Prolog logic programming, application of graph theory, a Prolog compiler, optimal compiling algorithms (1 person).
- Pattern recognition.
- Speech recognition.
- Modelling of discrete event dynamic systems and synthesis of their intelligent control.
3 Development
Generally speaking, development of AI applications is in an even worse position at the moment. There are only very few, if any, potential contractors interested in the results of such development. At the Slovak Technical University in Bratislava, a small group at the Department of Computer Science and Engineering is currently working on a project aiming to develop a system for prediction of technological parameters of selected devices at a nuclear power plant. Probabilistic safety assessment is a very complex analysis; they attempt to develop an artificial neural network to solve the problem. There are three people involved in the project, in cooperation with the nuclear power plant at Jaslovske Bohunice. At the Comenius University in Bratislava, there has been an AI application for IIASA, Laxenburg, Austria:
- knowledge based checking of the correctness of a (huge) amount of data for the RAINS model.
Currently, they are developing for the same contractor:
- knowledge based wizards supporting a user of the RAINS model.
Another application is being developed at the same university for the Research Institute of Nuclear Energy, Trnava: they are developing a specification of an expert system. The National Centre for Health Promotion in Bratislava developed a few years ago a very successful expert system shell, Codex, which has been in routine use in more than 15 applications, not only throughout Slovakia but also in the Czech Republic. A few applications are still in use. However, the institute stopped maintenance and further development of the system. The Institute of Control Theory and Robotics of the Slovak Academy of Sciences in Bratislava aims to advance its endeavours in pattern recognition and speech recognition into a development stage.

4 Education
Artificial intelligence is a discipline which is part of university study programmes in various major fields of study in Slovakia at undergraduate, graduate and postgraduate levels. At the Slovak Technical University, a basic course in AI is included in at least two undergraduate major fields of study: Computer Science and Engineering (approx. 120 students) and Control Automation (approx. 20 students). At the graduate level, a similar course is given for Mechanical Engineering majors (approx. 25 students). For Computer Science and Engineering majors, several courses are offered at the graduate level, mainly related to Knowledge Based Systems and Neural Networks, effectively allowing them to specialize more deeply in AI (approx. 25 students). There are regularly several PhD students specializing in AI. The teaching staff involved in AI includes 3 professors (full or associate), 2 lecturers with PhD degrees, and several more assistants. At the Technical University in Košice, a master degree is offered that covers the following background: control engineering, cybernetics, computer science, AI. At the postgraduate level, there are currently 4 PhD students involved in AI. At the University of Economics, Bratislava (Faculty of Economic Informatics), specialized two-semester lectures on expert systems and an introduction to AI techniques and technologies are included. The lectures are focused mainly on those future specialists interested in the application of expert systems in management, business, etc. One associate professor and one assistant professor are involved in teaching and diploma thesis supervision.
At the Comenius University in Bratislava, studies are offered towards both a master degree and a PhD degree in Computer Science (with an option of an AI specialization track). There are about 6-10 master graduates a year, and about 5 PhD students altogether. The teaching staff includes 3 teachers with PhD degrees, assisted by PhD students. Currently, there is no professor involved in the AI activities. Several courses are taught by external lecturers with good scientific backgrounds coming from research institutes.

5 Funding
The situation with funding research in AI is not any better than the general situation in Slovakia today. The industrial sector (private or public) is still economically not strong enough to be able to increase its competitiveness by investing in research to an extent that would make it a principal provider of research funding on the national scale. Due to the lack of resources, the government spends an insufficient amount of money on research. Moreover, it seems that the distribution of the overall sum is not favourable to the universities, where the greatest potential of basic research in AI is to be found, and, to a perhaps lesser extent, also to the Academy of Sciences. Currently, more support goes to state owned research institutes, with more emphasis on practical applications, but often with questionable returns; as far as AI is concerned, no AI related applications from these institutes could be reported. The government money flows into research through various channels, the most transparent being research grants awarded by the Slovak Grant Agency. In fact, most of the projects mentioned above are supported by such grants. The grants are not only low in total amount, but insufficient by design, as they cover only a fraction of overall project costs. In such a situation, universities and other research institutions seek additional support elsewhere. It should be stated that they often show a great deal of their own initiative in seeking partners, not only within local industry but also abroad. With the local industry still having more basic problems than funding AI-based innovations, foreign companies and research agencies are more likely to become partners in cooperation. Here, universities make use of their contacts with their counterparts to join projects in cases of common interest and potential benefit. Recently, several projects have been funded by the European COPERNICUS programme.

6 Impediments
Some of the problems have already been mentioned above. For AI, the situation is even worse. The AI related potential in state owned research institutes has been reduced in the last few years (cf. the reported reduction in the Institute of Control Theory and Robotics of the Slovak Academy of Sciences, which was approximately halved with respect to AI, or the complete abandonment of AI related research at the National Centre for Health Promotion). The universities were hit equally hard by the brain-drain. However, they have at least been able to prevent a complete discontinuation of AI related research. This is an expression of their appreciation of the role AI has in their study programmes. Apart from the current financial situation, potential consumers of AI products rarely, if at all, have an interest in investing in this direction, due both to a low level of awareness of the potential of AI-based technology and to their aiming for short-term investment returns.
There is still a potential danger of discontinuing a long tradition of AI education and/or research at some places (as was the case with the dissolution of the AI department at the Comenius University a few years ago as a consequence of the massive brain-drain).

7 Conclusions
It is to be believed that the local industry will recover from the restructuring and adaptation phase in a few years' time, if not sooner, as some indications already show. Therefore, the situation is not to be seen as entirely pessimistic. On the contrary, there is a reasonable chance that AI will survive at least in some places, most notably at the universities. Funding of AI research and development will in the future come also from local industry (as is already the case with the nuclear power plant Jaslovske Bohunice), although it may take a longer period of time before the total amount of such funding becomes relevant. What is needed now is to have doors open for international cooperation. We need sufficient opportunity to participate in European research programmes as "normal" partners. We need a chance to compete for research contracts, perhaps in cooperation with our European partners, from companies and funding agencies abroad. The contracts should cover personnel costs, too, so that the success of the projects becomes less vulnerable to the brain-drain. Such gradual incorporation of partners from different parts of Europe in joint projects shall eventually turn into a benefit for all parties involved.

Acknowledgements
Information provided by Lubica Benuskova, Julius Csontó, Karol Dobrovodsky, Jozef Kelemen, Ludovít Molnar, Mikulaš Popper, Jozef Šajda, Jan Šefranek and Ondrej Sykora is highly appreciated.

AI in Switzerland
Boi Faltings, Swiss Federal Institute of Technology
faltings@lia.di.epfl.ch

1 Research
There is an active AI research community. Full-fledged (5-10 researchers) Artificial Intelligence research groups exist at the Universities of Geneva, Neuchatel and Zurich and the Swiss Federal Institute of Technology in Lausanne. Smaller groups exist at the Universities of Bern, Fribourg and Lausanne. There are very active computer vision groups at the Universities of Geneva and Bern, the Swiss Federal Institute of Technology in Zurich, as well as some smaller engineering schools. Groups working in robotics are active at the Swiss Federal Institutes of Technology in Zurich and Lausanne, and at the University of Neuchatel. Neural networks are being studied at most universities, with major centers at the Swiss Federal Institutes of Technology in Zurich (Zentrum fuer Neuroinformatik) and Lausanne (Centre Mantra). Three private research institutions funded by the Dalle Molle Foundation are entirely devoted to Artificial Intelligence research. These are ISSCO in Geneva (natural language), IDSIA in Lugano (neural networks and robotics), and IDIAP in Martigny (AI and perception, mostly speech understanding). Each of these institutes has between 10 and 20 researchers. However, due to a drastic reduction in support from the foundation, all three institutes are facing major difficulties. The Union Bank of Switzerland has a research laboratory (UBILAB) which used to be active in AI research but has since refocused mostly on other topics.

2 Development
There are development groups in many large companies. Each of the three large Swiss banks (Union Bank, Credit Suisse, Swiss Bank Corporation) has at least a small group working on AI and neural networks, possibly outsourced (for example Systor for Swiss Bank Corporation).
Activities also exist at the Rentenanstalt, Zurich. Several small companies are selling AI services, the most active one being Synlogic AG in Binningen close to Basel. Swissair has a large (15 people) group developing AI software for various airline applications. These applications are also sold to some 40 other airlines worldwide. Swissair is using about 200 Lisp machines worldwide.

3 Use
Swiss industry is very conservative and has been very hesitant to adopt AI technology. Expert systems and other applications using AI technology are in use in many companies and organizations, e.g. in the electronics, chemical and insurance industries. Swissair is a major user of AI technology with measurable and very substantial benefits.

4 Education
All universities with AI research activities offer courses in AI. However, at no university does AI occupy a very prominent place in the curriculum. There is a strong conservatism among students, and new or unusual techniques have difficulty getting a large share of the curriculum. There are a substantial number of doctorates each year in AI and related topics.

5 Funding
The Swiss National Science Foundation (Schweizerischer Nationalfonds) has been a major supporter of AI activity. The national research program 23, entitled "Artificial Intelligence and Robotics" and running from 1989 to 1994, funded about 20 projects with a budget of 12 million Swiss Francs (about 7 million ECU). The majority of the funding, however, was used to fund vision, robotics and neural network projects. More recently, the priority program for computer science research, running from 1992 to 1996, comprised a module on "knowledge based systems" which funded mostly AI projects with a budget of roughly 4 million Swiss Francs (2.5 million ECU) (the exact number is not clear as budgets have been adjusted many times). Other research funding is available through the regular channels of the Swiss National Science Foundation as well as other funding agencies. Switzerland is not a member of the EU, but Swiss partners are allowed to participate in Esprit and other community research projects. The Swiss government has set aside funding for financing any Swiss participation in community research actions. Negotiations for a full participation of Switzerland in community research activities are under way but might take some time to conclude. Switzerland is a full member of Eureka, COST and the IMS initiatives, which fund some AI research.

6 Impediments
The major obstacle to large scale introduction of AI technology is that most companies and organizations have extremely conservative data processing departments which are reluctant to launch new technology. Another problem is that some AI product and service providers did not meet the expectations they themselves created, and this is still remembered by managers. Most large Swiss companies are doing very well and are under no pressure to introduce more efficient software development.
Swiss banks, for example, employ thousands of programmers but still require outside consultants just to keep their systems running, and they have the money to pay for this overhead. Some small companies would like to be more innovative but often lack the means. The same conservatism can be observed with university students, who are very strongly driven to subjects which currently have industrial relevance. A major exception is Swissair, where AI technology has been introduced with much success by a very determined and clever manager. An additional factor in this success was that some of the first applications already delivered large and clearly measurable benefits which by themselves would suffice to pay a substantial AI group to this day. Swissair is under a lot of pressure to reduce costs, an exception among large companies. It is evident that most Swiss industry stands to benefit from using AI technology, and it appears certain that this will eventually be recognized. At the moment, AI is definitely not in fashion, but my feeling is that more and more people are again willing to examine AI techniques. The restructuring of certain industries seems to support the introduction of new approaches and technologies, and amongst them also AI.

AI in Ukraine
Viktor P. Gladun and Alexander Javorsky
Association of Developers and Users of Intelligent Systems
Krasnoarmeiskaja 23B Str., 252004 Kiev, Ukraine
glad@rozrada.FreeNet.Kiev.UA

1 Research
Basic research on Artificial Intelligence is done in the following organizations:
Association of Developers and Users of Intelligent Systems (Kiev):
- decision making and planning systems;
- expert systems;
- knowledge discovery systems and their application in chemistry and material science;
- systems for classification, diagnostics, forecasting;
- systems for processing Natural Language texts.
Taras Shevchenko National University (Kiev):
- processing of encyclopaedic knowledge;
- computer linguistics.
V.M. Glushkov Institute of Cybernetics NAS of Ukraine (Kiev):
- mathematical models of text and context;
- neural networks;
- theorem proving;
- machine intelligence.
Institute of Radio-Electronics (Kharkov):
- dialogue natural language systems.
Institute of Artificial Intelligence Problems (Donetsk):
- visual image processing;
- medical diagnostics.
Institute of Market and Economic-ecologic Researches (Odessa):
- situational management systems.
State Mining Academy of Ukraine (Dnepropetrovsk):
- application of AI methods in CAD-CAM.
Usually every organization has one or a few departments working in the sphere of artificial intelligence, with 5-10 employees. The association ADUIS unites specialists from various organizations and includes more than 50 members.

2 Development
The major players in realising AI applications are:
- ADUIS (Kiev);
- Institute of Artificial Intelligence Problems (Donetsk).
In Ukraine there are about 15 intelligent systems in practical application.

3 Use
In Ukraine, AI methods are used in solving such problems as analysis of situations, action planning and automation of scientific experiments. As examples of application systems, we give information about the systems CONFOR and MANAGER, developed at ADUIS. CONFOR is intended for knowledge discovery, object classification, diagnosis and prediction. The basic functions of CONFOR are:
- discovery of regularities, in particular discovery of dependencies (knowledge) inherent in data stored in a database;
- using the regularities for object classification, diagnosis and prediction.
CONFOR operates with examples and non-examples of an investigated class of objects (processes, situations, phenomena, etc.) that are represented as sets of attribute values. CONFOR is able to discover a regularity however complex it may be. A regularity is represented as a logical expression in terms of the attribute values that describe the initial objects, so it is clear to a user and readily interpretable.
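To make the examples/non-examples setting concrete, the following minimal Python sketch finds a conjunction of attribute-value conditions that holds for all examples and fails for all non-examples. It is only an illustration with hypothetical data and names; it is not CONFOR's actual growing-pyramidal-network method, which is described next.

# Minimal sketch of attribute-value regularity discovery in the spirit
# of CONFOR's examples/non-examples setting. This is NOT the growing
# pyramidal network algorithm itself; data and attribute names are
# hypothetical.

def discover_regularity(examples, non_examples):
    """Return a set of (attribute, value) conditions holding for every
    example and failing for every non-example, or None if none exists."""
    # Candidate conjunction: attribute-value pairs shared by all examples.
    shared = {(a, v) for a, v in examples[0].items()
              if all(e.get(a) == v for e in examples)}
    # Accept the conjunction only if each non-example violates it.
    if all(any(n.get(a) != v for a, v in shared) for n in non_examples):
        return shared
    return None

# Hypothetical materials-science style objects as attribute-value sets.
positives = [{"lattice": "cubic", "bond": "ionic", "dopant": "yes"},
             {"lattice": "cubic", "bond": "ionic", "dopant": "no"}]
negatives = [{"lattice": "hexagonal", "bond": "ionic", "dopant": "yes"}]

print(discover_regularity(positives, negatives))
# -> {('lattice', 'cubic'), ('bond', 'ionic')} (set order may vary)

A real knowledge discovery system must of course search a far larger space of expressions than a single shared conjunction; that is what CONFOR's own method addresses.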
An original method of inductive CONcept FORmation based on a growing pyramidal network is taken as the principle of the CONFOR system. This approach has a set of advantages when dealing with huge amounts of data and has passed successful long-term examination in Ukraine, Russia and the USA in such fields as material science, chemistry, medicine, geology, technology, astronomy, economics, sociology, etc. Examples of application tasks: prediction of the existence of predefined chemical compounds, search for new materials having desired properties (electro-optical, ferro-electric, superconducting, semiconducting), classification of economic situations, discovery of regularities that characterize diseases, disease diagnostics, prediction of solar activity, diagnostics of failures in technical units, and so on. The MANAGER complex makes it possible to design systems intended for situation analysis and decision making in various spheres of administrative and economic activity. Its main functions are:
- storage of the information characterizing the situation at every decision-making step;
- current situation analysis and warning about deviations of its characteristics from the norm;
- formation of recommendations for deviation elimination;
- evaluation of recommended alternative decisions by modelling their multi-step consequences.
The management sphere where the complex is used must be represented by both a set of mutually related economic, resource or social parameters and a set of permissible controlling actions (decisions). On the basis of this information, MANAGER signals parameter violations, recommends possible control actions for the elimination of the violations and helps to choose a decision by informing about all multi-step consequences of the decision being analysed. Application domains: the system is effective for management in offices, enterprises and regions, for business activity support, and may be used as a manager training system. The system runs on IBM PC/AT-compatible computers under the MS DOS or WINDOWS operating system with a minimum of 640 K memory and an EGA/VGA-type adapter.
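The monitoring part of this functionality is easy to picture. The sketch below, in Python with hypothetical parameter names, norms and actions, shows only the basic loop of comparing situation parameters against norms, warning about deviations and recommending permissible actions; the multi-step consequence modelling that MANAGER also provides is beyond such a toy.

# Minimal sketch of the monitoring loop described above: compare
# situation parameters against norms, warn about deviations, and
# recommend permissible actions. Parameter names, norms and actions
# are hypothetical; MANAGER's multi-step consequence modelling is
# not attempted here.

NORMS = {"stock_level": (100, 500), "cash_flow": (0.0, float("inf"))}
ACTIONS = {"stock_level": "adjust ordering volume",
           "cash_flow": "postpone planned investments"}

def analyse_situation(situation):
    """Warn about parameters outside their norms; recommend actions."""
    recommendations = []
    for param, value in situation.items():
        low, high = NORMS[param]
        if not low <= value <= high:
            print(f"warning: {param}={value} outside norm [{low}, {high}]")
            recommendations.append(ACTIONS[param])
    return recommendations

print(analyse_situation({"stock_level": 40, "cash_flow": 1200.0}))
# warns about stock_level and recommends "adjust ordering volume"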
4 Education
Courses on AI exist at the Taras Shevchenko National University (Kiev), the Institute of Artificial Intelligence Problems (Donetsk) and Dnepropetrovsk State University.

5 Funding
The sources of financing of AI research are the State Committee on Science and Technical Policy and the National Agency on Information, as well as some commercial structures. Projects which use AI methods are financed. AI research is financed within state programs dealing with the intellectualization of decision-making processes and hardware-software means of integrated knowledge bases. The amounts of financing are insufficient because of the difficult economic situation in Ukraine. Ukrainian scientific collectives participate in competitions for European Union grants.

6 Impediments
The obstacles to further introduction of AI systems into practice in Ukraine are the following:
- the complexity of the transformation of the Ukrainian economy from a centralized to a market form;
- the unreadiness of Ukrainian industry to use new information technologies;
- insufficient financing of research programs and design work on AI.

A Report: Multidisciplinary Conference on Consciousness, Tucson, USA, April 1996
The second international conference "Toward a Science of Consciousness" in Tucson, Arizona, USA (Tucson II), held from 8th April till 13th April 1996, was one of the biggest multi-disciplinary conferences devoted to the scientific study of consciousness in the whole history of science. The first Tucson conference two years ago (Tucson I) made a significant breakthrough, because it united many scientists from different disciplines who are involved in research on some aspect of consciousness and related problems. Specialized scientific meetings of the past were unable even to pose the relevant questions concerning consciousness studies. They were devoted to various specific problems characteristic of a particular discipline, but were not capable of making a holistic synthesis, or did not even search for one. Tucson I and the much larger Tucson II conference (about 500 active participants) have, on the other hand, defined the consciousness problem in its whole spectrum. The mutual ignorance between specialized disciplines vanished in favour of complementary theoretical and experimental studies of a multi-disciplinary nature. The participants faced the problem directly, without prejudices. Although differences remained and debates were sometimes polemic, the atmosphere was tolerant and productive. The program of Tucson II, organized by the University of Arizona, included the following topics:
April 8:
Plenary sessions:
1. Approaching the hard problem (Lockwood, Gray, Shepard/Hut, Dennett). The problem of qualia (the nature and origin of qualitative subjective experiences, "what it is like to be") and the problem of phenomenal consciousness have been identified as the "hard" problem. The problems of the system-theoretical background of consciousness and information processing have, on the other hand, been defined as the "easy" problem, because they are not as puzzling as the phenomenal qualia.
Concurrent sessions:
2. The "hard problem" and the Explanatory Gap
3. Sensory Processing
4. Biology and Evolution
5. Emergence and Hierarchical Systems
6. Social Reality
7. Aesthetics and Communication
Poster sessions:
8. Philosophy: free will, unity, function of consciousness, modularity, computation, representation, intentionality
9. Neuroscience: attention, vision, sensory processing
10. Cognitive Science and Psychology: memory, sleep and dreams, unconscious processes, hypnosis, language, ethology, emotion, mental imagery, implicit and explicit thought processes, creativity
11. Physics and Mathematics: quantum theory, quantum brain processes, space-time relativity, integrative models, thermodynamics
12. Experiential and Cultural Approaches: phenomenology, phenomenological reduction, aesthetics, hermeneutics, literature, art, music, religion, history, contemplative and mystical studies, integrative models
April 9:
Plenary sessions:
13. Neural Correlates of Consciousness (Llinas, Hobson, Bogen, Greenfield)
14. Consciousness and Folk Psychology (Blakemore, Hodgson)
15. Can Machines be Conscious? (Hillis, Lanier, Rumelhart/Lisetti, Penrose)
Concurrent sessions:
16. Philosophical Theories of Consciousness
17. Vision
18. Dreams/Unconscious Processes
19. Foundations of Quantum Theory
20. Language/Interspecies Communication
21. Sociology/Anthropology
Poster sessions:
22. Philosophy: qualia, hard problem, explanatory gap, personal identity, self
23. Neuroscience: neuroanatomy, motor control, psycho/neuropathology, memory, learning, specific brain areas, emotion
24. Cognitive Science and Psychology: neural networks and connectionism, binding of multimodal perceptions into unified experiences, cybernetics, task performance and decision-making, intelligence, linguistics
25. Physics and Mathematics: quantum theory, logic and computational theory, integrative models
26. Experiential and Cultural Approaches: sociology, anthropology, contemplative and mystical states, information technology, virtual reality, education, business and organizational studies, ethics and politics, integrative models
April 10:
Plenary sessions:
27. Inter-species Communication and Cognition: cognition in humans (Bloom), chimpanzees (Savage-Rumbaugh), birds (Pepperberg) and dolphins (Reiss)
28. Transpersonal Psychology: altered and higher states of consciousness transcending individual consciousness (Tart, Walsh)
29. The Role of Primary Visual Cortex in Conscious Vision (Stoerig, Turner, Sagi, Block)
Concurrent sessions:
30. The First-Person and Third-Person Perspectives
31. Structural Correlates of Consciousness
32. Cognitive Architectures
33. Quantum Brain Processes
34. Science and Epistemology
35. Parapsychology
April 11:
Plenary sessions:
36. Medicine and Consciousness (Franks/Lieb, Weil)
37. Quantum Processes in the Brain (Beck, Nanopoulos, Pat. Churchland, Hameroff/Penrose)
April 12:
Plenary sessions:
38. The Role of Extrastriate Visual Cortex in Conscious Vision (Koch, Tootell/Dale/Reppas/Sereno/Rosen, Logothetis, Weiskrantz)
39. Integrative Perspectives (Paul Churchland, Chalmers)
40. Phenomenology and Experiential Approaches (Searle, Varela, Velmans, Forman)
Concurrent sessions:
41. "Zombies" (cognitive beings without consciousness), Ontology and the Function of Consciousness
42. Anesthesia/Pain/Blindsight
43. Computational Models
44. Time
45. Meditation/Contemplation/Mysticism
46. Information Technology/Virtual Reality
Poster sessions:
47. Philosophy: epistemology, ontology, metaphysics
48. Neuroscience: anesthesia/pharmacology, cellular/sub-cellular processes, evolution, quantum neurodynamics, biophysics, psychoneuroimmunology
49. Cognitive Science and Psychology: cognitive architectures, cognitive models, cognitive development, theory of mind
50. Physics and Mathematics: emergence and hierarchical systems, holography, non-linearity and chaos, bioelectromagnetics, resonance, integrative models
51. Experiential and Cultural Approaches: science and epistemology, parapsychology, contemplation, health and medicine, placebo studies, altered states, out-of-body experiences, biofeedback, psychoanalysis and psychotherapy, lucid dreaming, spiritual healing, integrative models
April 13:
Plenary sessions:
52. Current Directions in Parapsychology (Bem, Nelson/Radin, Blackmore, Bierman)
53. Functions of Consciousness (Ramachandran, Schachter)
54. Consciousness and the Physical World (Stapp, Elitzur, Scott)
Searle defined consciousness as subjective qualitative processes of awareness, sentience or feeling. Although the problem of consciousness has not been solved, a consensus has emerged that consciousness must be treated as a synthetic, holistic, multi-level phenomenon. It has a multiple nature which is at the same time synthesized into a unity: multi-modal perceptions and representations are always unified into a single indivisible experience (the binding problem). Self-reference is essential for consciousness. It is connected with the qualia in the sense that it constitutes an "I" which satisfies the following "equation": objects + I (subject) = qualitative experience (of the object by the I). Almost all participants agreed that consciousness has correlates in a physically realized complex system.
There were those who treat consciousness as an emergent phenomenon of brain processes (a down-up explanation), and those who say that our phenomenal (including physical) world reveals itself only through consciousness (an up-down explanation). Further on, there were two large groups: those who argued that neural networks are sufficient for consciousness, and those who offered quantum systems as binders of neurally-processed perceptions and thoughts into a unified experience using quantum coherence. Protagonists of the second theory presented microtubular networks and other sub-cellular structures (dendritic nets, presynaptic vesicular grids, cellular membranes, etc.) as "interfaces" between quantum and neural information-processing systems. Large-scale neuroscience presented some brain areas which are candidates for conscious processing, or which would make some cognitive processes in other brain areas conscious: striate cortex (V1) or extrastriate cortex (V2-V5), prefrontal cortex, reverberatory activity in pyramidal neurons in cortical layers 5 and 6, hippocampus, thalamus (or the cortex-thalamus dialog), intralaminar nuclei, the reticular formation, and the cooperation of these areas. Some speakers pointed out that correlated 40 Hz oscillations bind representations into unified conscious experiences. Gray said that consciousness is not necessary for: analysis of sensory input and speech, learning, memory, selection of response, control of action, production of speech output. Consciousness comes too late to regulate behaviour; we are conscious of the output, not the processing itself. A conscious process is one that wins the competition of many information processes. Consciousness is thus more like fame within a large group of people (where some are famous and the others treat them as famous), not like the group alone. Specific groups of neurons encoding specific information were found. On the other hand, the brain works globally and integrates all local-group firings. Problems with spatial and temporal non-locality or parallel-distributedness were presented, also experimentally on the stage, although this did not necessarily decrease confusion (because consciousness is a multi-level process incorporating superpositions of seemingly contradictory behaviors on different levels). Greenfield said that consciousness is spatially multiple, temporally unified, continuously dynamic and derived from a stimulus. On the other hand, Libet and others report time-projections of conscious experiences (e.g., backwards in time, so that we have the impression that sensation and conscious registration occurred simultaneously, not with a delay). Hobson said that the level of conscious changes is a function of activation, the focus of consciousness is a function of input/output gating, and the form of conscious changes is a function of modulatory neurotransmitter ratios. Hillis argued that machines can be conscious. Penrose said that this may in the future be true, but only if machines would include non-computational processes as found in the collapse of the quantum wave-function. In the quantum world there is a deterministic and computable Schrödinger dynamics which is interrupted by a random and non-local, or non-computable, wave-function (objective) reduction. This reduction, which occurs during a quantum measurement, promotes a microscopic quantum phenomenon to the macroscopic classical level. It is argued that this process is relevant for subconsciousness-to-consciousness and/or memory-to-consciousness transitions.
Examples are transitions from superpositions of microtubular configurations to a selected configuration representing some cognitive information. In neural nets there are similar effects, but perhaps they are not enough for the holism of consciousness, and therefore quantum Bose-Einstein condensation of particles into an indivisible coherent quantum whole may be the final answer. Quantum events in synapses, like tunneling, may influence neurotransmitter release, which is essential for neural memory (Walker, Beck), another example of a neuro-quantum relation, one which may be disrupted by anesthetics. The following important claims were presented: (Non-)attention is evident from EEGs, but all that we can know about the quality of a subject's awareness is his own report. Abstract concepts exist without language. Consciousness can trigger "miraculous" healing and self-repairing of the organism. Large damage to the brain does not necessarily lead to loss of consciousness. Blindsight patients say that they do not see, but, if forced, they react as if they would indeed see (they actually perceive, but are not conscious of seeing). If their visual cortex is intact, but other cortical areas are paralyzed, there is no vision. So, the visual cortex is essential for vision, but alone not enough. According to "holon" theory, states of one level are rules of the other level, and vice versa, infinitely in both directions, with an overall self-referential loop. Velmans spoke about the subjective psychical projection of the state of the neural correlate of consciousness back into the physical world. So, pain does not come from a physical space of the body (e.g., a finger), but originates in the physical space of the brain, and is then projected as if it would occur in the finger. Quantum theorists favoured Froehlich coherent, laser-like, collective biophysical processes in living systems. They also discussed systems of spins and electrical dipoles, and even delocalized states over the whole of space-time as treated by string theory. In relativity theory all times and spaces exist simultaneously; only our consciousness gives us the impression of time-flow and "now" by "illuminating" (manifesting) successive portions of space-time locally. According to Hameroff and Penrose, sequences of irreversible quantum collapses represent the origin of the arrow of time, and connect the problem of consciousness with relativistic space-time geometry and quantum gravity. Forman presented mystical pure consciousness as a non-intentional wakeful state. Global EEG coherence is an essential correlate of transcendental consciousness during meditation. Parapsychologists showed evidence from numerous statistical studies for telepathy and other influences of consciousness on the physical world (probably via the quantum vacuum, which may act as an information-transfer "medium"); Nobel physicist Josephson supported them, but Blackmore expressed scepticism. There were several lecturers introducing a universal consciousness field. This was merely a brief telegraph-style summary of the main ideas. The hard problem of qualia remained unsolved. Some physicists argue that "qualia are embedded in space-time"; some cognitive neuroscientists say that this is a quasi-problem; some spiritually-oriented scientists argue that qualia are no problem at all, because they are the basic thing, and all other things must be explained starting from this experience. It is obvious that humans experience qualia, but for now they do not understand their nature at all.
On the other hand, the system-processing background of phenomenal consciousness is known increasingly better, using the theory of neural networks with their hierarchies of attractors, and roughly similar processes in quantum networks. At Tucson II it was evident that, in spite of many incoherencies between particular theories, there is a global "statistical" convergence in the newly emerging science of consciousness, obtained mainly by averaging conflicting theories and points of view into higher-level "superpositions". Because the organizers (led by Hameroff's circle in Tucson) already plan Tucson III, it can be expected that the new series of multi-disciplinary Arizona conferences will be imprinted in the history of science and perhaps history in general. Our understanding of consciousness will drastically influence science, information technology and our world-views.
By Mitja Perus

A Report: International Conference "New Directions in Cognitive Science", Saariselka, Lapland, Finland, 4-9 August 1995
The aim of the conference was to open alternative interdisciplinary directions of holistic research, with emphasis on new perspectives in informational quantum physics, cognitive neuroscience, cognitive musicology and postmodern views. It was very successfully organized by the Finnish philosophical and AI societies, and directed by P. Pylkkanen, P. Pylkko and A. Hautamaki from Helsinki. The key themes were:
- consciousness: problems of the unity of consciousness, binding of multi-modal sensory experiences into a synthetic experience, the nature of qualia (qualitative phenomenal experiences), etc.;
- non-conceptual experience, states of consciousness;
- quantum brain dynamics;
- implicate order (subquantum processes), active information and other concepts of the ontological interpretation of quantum theory by Bohm and Hiley;
- quantum non-locality, the relation of space-time and consciousness;
- neuro-quantum relations;
- levels of brain processing (neural, subcellular, quantum, sub-quantum, virtual);
- musical perception and cognition;
- new perspectives on the neurological background of cognition.
Because it is hard to summarize about 50 contributions, let us present an overview of some significantly novel ideas only. Globus presented a theory of sub-cellular and, as he argued, non-computational processing. According to this proposal, oscillating membrane biomolecules with their dipoles express sensory input; memory is realized by vacuum states of ordered water molecules within microtubules; cognitive processing is exhibited by the nano-level neuropil and its Froehlich wave and Davydov soliton phenomena; and interference in peri-membranous bioplasma is responsible for informational explication or collapse, respectively. Hiley discussed the quantum potential and its active (non-Shannon) information in connection with the problem of mind, but Kieseppa opposed. Papatheodorou presented the holomovement ("vacuum") as an "a-local, a-causal substratum of undifferentiated activity" underlying the space-time continuum. In this pre-space, proto-mind and proto-matter coexist in a hierarchy of implicate orders, and everything is informationally contained in all "other parts" of the Universe. Gould showed analogies between the ontological interpretation of quantum theory and Pribram's holonomic theory, both using non-local features. There seem to be similarities in the collective dynamics of a quantum system and a network of neurons and synapses.
Complex systems of different basic elements exhibit similar collective dynamics, which is used for information processing. Revonsuo showed the autonomy of human conscious models: they are causally independent of perceptual input and behavioral output, and have internal sources of neural stimulation (dreams, mental imagery). Hari and Loveless presented experimental evidence of "hearing backwards in time". A Finnish group reported complex cognition patterns during anaesthesia. Allwood presented dialog as collective thinking. Josephson searched for a trans-human source of music. Kaipainen, Toiviainen and Louhivuori implemented sequential recognition and generation of melodies by self-organizing mapping. Steinberg discussed physical time in contrast with internal or neurologic time. He gave an overview of the historic development of the notions of time. Three notions of time were defined: first, linear physical time as a measure of motion, arising from periodic natural phenomena and flowing from past to future; second, subjective, unmeasurable experiential time, "constantly flowing 'now', the unalterable past and the unknown future"; third, duration as "an undifferentiated persistence in which physical universe is embedded" (a sort of "pre-time"). According to Steinberg, duration is reflected in the recognition that objects do not dematerialize when they leave our visual field. Physical time is reflected in sensory-motor causality and in the internal biological clock. Experiential time is reflected in goal-directed behavior, expectations, anticipations... Steinberg mentioned that in Feynman's path-integral method interactions of particles and fields don't progress with time, but a sum of all possible interactions is present at once. In Feynman's words, "the entire space-time history (is) laid out, and we just become aware of increasing portions of it successively." Steinberg concluded that physical time is only a part of neurologic time and that therefore physical laws and physicalist cognitive science are too incomplete to be able to describe the mind's functioning. To achieve measurability and universality of time it would be necessary to exclude experience. Experience would in that case remain an enigma. Nyman and Häkkinen presented the experience of three-dimensionality of the world as a perceptual illusion which arises from the brain's mapping of multi-modal, multi-dimensional input-information into cognitive structures. The conference was successful in finding many promising ways of cognitive research, in contrast to the mechanistic and exclusively computational tradition. The paradigm is thus shifting in cognitive science as well.
By Mitja Perus

Erratum
In the paper A.P. Železnikar: Informational Frames and Gestalts, Informatica 20, p. 84, the formula for the number of formulas in a gestalt was misprinted. The Editor-in-Chief apologizes for the error.

A Report: The Third World Congress on Expert Systems, February 5 - February 9, 1996, Seoul, Korea
The congress is dedicated to providing an international forum for both scholarly and practical issues of expert systems technologies and applications. It serves to bridge the gap between expert systems theory and applications worldwide. The first world congress on expert systems (ES) was held in Florida in 1991, the second in Portugal in 1994, and the third in Korea in 1996. The fourth congress is going to be held in Mexico City, March 30 - April 3. For more information about the forthcoming event in Mexico please see http://www-cia.mty.itesm.mx/wces98.
Many countries have realised that expert and intelligent systems (IS) are among the critical technologies significant for further progress. Many expert and intelligent systems are widely used; however, many people are still afraid to admit that they use artificial intelligence (AI) techniques. While many ES and IS are designed for scientific purposes and finish as research prototypes, the percentage of successful applications is steadily growing. Participation of users is among the most positive attributes of successful implementation. Unfortunately, basic research funding has declined in the past few years. In a world of downsizing and reengineering, expert systems technology is ideal for capturing knowledge and experimental learning. In the words of Jay Liebowitz, we need to be more proactive in order to encourage funding agencies to devote increased research funds to advancing ES and IS. The congress was fertile ground for establishing cooperation and learning more about the scholarly work and practical applications of expert systems and organizations. Over 200 participants from 45 countries gathered at the third world congress on ES. Jay Liebowitz, the father of the world ES congresses, sang all the names of the participants' countries at the charming beginning of the congress. Prof. Feigenbaum introduced the recipient of the third Feigenbaum medal, Prof. Donald Michie. Prof. Feigenbaum is the pioneer and a leading promoter of expert systems. During recent years, expert systems have found their way into practically all areas of computer applications. Prof. Michie is a dean of AI and ES in Europe. He has introduced "Feigenbaum's bottleneck" as a process of extracting knowledge from experts to computer expert systems; the proposed solution is machine learning, or data mining. Prof. Michie's motto is that learning and flexibility are going to become among the most important factors for any nation. In recent years, Prof. Michie has dedicated his attention to dynamical control tasks and to different ways of extracting executable clones from expert performance traces. In his speech, Prof. Michie mentioned collaboration with Prof. Bratko's laboratory in Slovenia. Prof. Michie is an associate member of the Jozef Stefan Institute (one of the supporting organisations of Informatica). Among the invited papers, Se June Hong from the IBM Watson Research Center presented "Data Mining for Decision Support". Data mining is very close to machine learning; both refer to the process of abstracting various kinds of useful information from data. However, data mining orients towards large amounts of data, huge databases and a broader use of the newly created knowledge. Many techniques are similar across several diverse disciplines such as statistics, pattern recognition, and various machine learning subfields such as learning decision trees, lists of rules or neural networks. In addition, many subprocesses in data mining correspond to those of database technology, expert systems and decision support systems technology. Classification, associations and clustering are the three main thrusts of data mining.
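As a concrete illustration of the classification thrust, the following Python sketch with hypothetical records learns the simplest possible classifier, a one-attribute "decision stump", by picking the attribute whose per-value majority labels misclassify the fewest records. Real data mining systems grow full decision trees, rule lists or neural networks over huge databases.

# Minimal sketch of classification in data mining: learn a one-attribute
# "decision stump" from labelled records. The records and attribute
# names are hypothetical.

from collections import Counter

records = [({"age": "young", "income": "low"}, "no"),
           ({"age": "young", "income": "high"}, "yes"),
           ({"age": "old", "income": "low"}, "yes"),
           ({"age": "old", "income": "high"}, "yes")]

def best_stump(data):
    """Pick the attribute whose majority-class rule makes fewest errors."""
    best = None
    for attr in data[0][0]:
        # Count labels for each value of this attribute.
        by_value = {}
        for rec, label in data:
            by_value.setdefault(rec[attr], Counter())[label] += 1
        # Errors = records not in the majority class of their value.
        errors = sum(sum(c.values()) - max(c.values())
                     for c in by_value.values())
        rule = {v: c.most_common(1)[0][0] for v, c in by_value.items()}
        if best is None or errors < best[0]:
            best = (errors, attr, rule)
    return best

print(best_stump(records))
# -> (1, 'age', {'young': 'no', 'old': 'yes'}): one record misclassified
# ('income' ties with one error; the first attribute examined wins)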
Tibor Vamos from the Hungarian Academy of Sciences presented an invited paper, "Expert Systems and the Ontology of Knowledge Representation". The paper deals with the pragmatic aspects of the mind-brain-machine problem. Among practical projects, Dr. Vamos mentioned a twelve-year ongoing ES for early diagnosis and habilitation of babies who suffered some kind of brain injury during the perinatal period. The system has handled about 2000 cases so far. Mark Wallace from Imperial College, London, presented "Constraint Logic Programming - Status and Prospects". Constraint logic programming (CLP) combines declarative logic programming with constraints, enabling high-level control and efficient built-in algorithms. In this way, program correctness, program clarity and runtime efficiency substantially increase. There are primitive constraints and constraint agents which react to changes and add further primitive constraints. CLP has already substantially influenced industry. It seems that CLP is the most efficient and the most flexible approach for solving real-world scheduling problems.
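To give a feel for the propagation behaviour this describes, here is a minimal sketch in Python (not a CLP language): two hypothetical tasks with finite domains of start times, a horizon constraint and a precedence agent that keep pruning domains in reaction to each other's changes until a fixpoint is reached.

# Minimal constraint-propagation sketch in the spirit of the CLP
# description above: finite domains of start times plus "agents"
# (propagators) that react to domain changes by pruning further values.
# Tasks, durations and the horizon are hypothetical; real CLP systems
# embed such propagation in a logic-programming language with search,
# and also handle failure (empty domains), which this toy ignores.

domains = {"a": set(range(6)), "b": set(range(6))}
DURATION = {"a": 3, "b": 2}
HORIZON = 6  # every task must finish by this time

def ends_by_horizon(task):
    """Primitive constraint: start + duration <= HORIZON."""
    old = domains[task]
    domains[task] = {t for t in old if t + DURATION[task] <= HORIZON}
    return domains[task] != old

def before(x, y):
    """Agent: x must finish before y starts; prunes both domains."""
    changed = False
    earliest_end = min(domains[x]) + DURATION[x]
    pruned = {t for t in domains[y] if t >= earliest_end}
    if pruned != domains[y]:
        domains[y], changed = pruned, True
    latest_start = max(domains[y]) - DURATION[x]
    pruned = {t for t in domains[x] if t <= latest_start}
    if pruned != domains[x]:
        domains[x], changed = pruned, True
    return changed

agents = [lambda: ends_by_horizon("a"), lambda: ends_by_horizon("b"),
          lambda: before("a", "b")]
while any(agent() for agent in agents):  # propagate to a fixpoint
    pass
print(domains)  # -> {'a': {0, 1}, 'b': {3, 4}}

In a real CLP system, the remaining choice among the pruned start times would then be resolved by logic-programming search.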
In discussions with Jay Liebowitz and Francisco J. Cantu-Ortiz, conference chair of the forthcoming world ES congress to be held in Mexico City in 1998, an agreement was reached that the Slovenian Artificial Intelligence Society will be listed as a cooperating society of that congress. In this way, the Slovenian AI Society joins societies from the USA, Italy, ..., Singapore, and Venezuela. The Korean organizers have shown how to combine national tradition with unbelievable economic growth, knowledge, and science. For example, the owner of the International Hotel, the best hotel in Seoul and the venue of the congress, is also a vice-president of the Korean ES society and presented a paper as an active participant.

The Fourth World Congress on Expert Systems will be held in Mexico City, March 30-April 3, 1998. Papers should be submitted no later than April 30, 1997. For more information e-mail rsoto@campus.mty.itesm.mx.

By Matjaž Gams

Machine Learning List

The Machine Learning List is moderated. Contributions should be relevant to the scientific study of machine learning. Mail contributions to ml@ics.uci.edu. Mail requests to be added or deleted to ml-request@ics.uci.edu. Back issues may be FTP'd from ics.uci.edu in pub/ml-list/VX/N or N.Z, where X and N are the volume and number of the issue; ID: anonymous PASSWORD:
URL: http://www.ics.uci.edu/AI/ML/Machine-Learning.html

CC AI

A Journal for the Integrated Study of Artificial Intelligence, Cognitive Science, and Applied Epistemology

CC AI is a quarterly international multi- and interdisciplinary journal on AI, the cognitive sciences, and epistemology. The undersigned believes that CC AI deserves the full attention of the critical readers of Informatica: it offers lively contributions reaching deeply into the subjects of informatics and second-order cybernetics. At the least, the issue presented below confirms this presumption. CC AI publishes articles and book reviews relating to the evolving principles and techniques of artificial intelligence, as enriched by research in such fields as mathematics, linguistics, logic, epistemology, the cognitive sciences, and biology. The intention is to provide a forum for the discussion of such topics as cognitive modeling, logic programming, automatic learning, automated knowledge extraction, applied epistemology, AI and art, and general aspects of AI.

Subscription Information

Annual subscription dues are 1050 BEF for individuals, 2950 BEF for institutions, and 700 BEF for single issues. A combined subscription to CC AI and the journal Communication & Cognition is available at the rates of 1500 BEF for individuals, 3150 BEF for institutions, and 1000 BEF for students (excluding mail and packing). Cheques and international money orders are to be made payable to Communication and Cognition, Blandijnberg 2, B-9000 Ghent, Belgium, or transferred to the Communication and Cognition account no. 001-0454888-34, A.S.L.K. Ghent. Payment by credit card is also accepted (Mastercard, Eurocard, Diners Club, Visa, and Access).

Editor, Editorial, and Advisory Board

Fernand Vandamme is the editor-in-chief; Gertrudis Van de Vijver is the managing editor. Editorial Board: Johan D'Haeyer, Patrick Maes, Ph. Van Loocke, Marc Vanwormhouldt, and Dirk Vervenne (general coordinator). Carine van Belleghem is the publications director and Annick Goethals the administrative assistant. Advisory Board: M. Bates (BBN Systems and Technologies, Cambridge); A. Bundy (Edinburgh University); E. Chouraqui (CNRS, Marseilles); A. Clare (University of Sussex); M. Healey (University of Wales); N. Juristo (Madrid); O. Laske (New England Computer Arts Association); H. Lieberman (MIT); H. Müller (University of Ghent); P. Suppes (Stanford); D. Sriram (MIT); N. Suzuki (T.J. Watson Research Center, IBM); and A. Villa (University of Torino). Editorial Address: CC-AI, Blandijnberg 2, B-9000 Ghent, Belgium.

Information for Authors

Manuscripts should be submitted in quadruplicate to the editor, CC-AI, Blandijnberg 2, B-9000 Ghent, Belgium. The typing should cover only one side of plain paper and should be double-spaced. An abstract must accompany the manuscript. It is highly preferable to add an IBM-compatible floppy disk containing the text. As to magnetic material, floppy disks containing ASCII files will be accepted, as well as text files of the following word processors: WordPerfect, MS-Word, Wordstar, and Framework (single/double-sided, single/double-density diskettes). Diskettes should be accompanied by a rough printout. As regards bibliographic references, citations in the text should be identified as follows: name, year of publication, and pagination if necessary. In an alphabetic list of author names, called References, these items have to be repeated with a complete bibliographical description. Notes should be placed before the references, numbered consecutively, and referred to in the text by the corresponding number. Authors will receive twenty-five offprints.

A Look into CC AI

Let us take a look into CC AI, Vol. 12 (1995) No. 1-2, dedicated to Self-Reference in Cognitive and Biological Systems. The contents of this volume are the following:

— Contents of 12 (1995) 1-2: The issue is dedicated to Self-Reference in Cognitive and Biological Systems (edited by L.M. Rocha). Contributions are the following: Evolving Self-Reference: Matter, Symbols, and Semantic Closure (H.H. Pattee); The Mind-Brain Problem and the Physics of Reductionism (R. Rosen); Universal Grammar (C. Henry); Artificial Semantically Closed Objects (L.M. Rocha); Computability, Self-Reference, and Self-Amendment (G. Kampis); Metalogues: An Abridge of a Genetic Psychology of Non-natural Systems (P.R. Medina-Martins); and As If Time Really Mattered: Temporal Strategies for Neural Coding of Sensory Information (P. Cariani).

Citations

Let us list some of the most challenging citations from CC AI which characterize the aims of the journal.

— 12 (1995) 1-2, pp. 9-27, H.H. Pattee, Evolving Self-Reference: Matter, Symbols, and Semantic Closure.

[P. 9] Self-reference has many meanings. In symbol systems, like logic and language, self-reference may lead to well-known ambiguities and apparent paradoxes, as in "This sentence is false".
In material systems, like molecules and machines, self-reference is not clearly defined but may describe causal loops such as autocatalytic cycles, feedback controls, and oscillators. At the cognitive level, self-reference occurs in introspection and is often considered one aspect of consciousness. I define a specific form of self-reference that applies to a closure relation between both the material and the symbolic aspects of organisms. I argue that this view of self-reference is necessary to understand open-ended evolution, development, and learning at all levels of organization, from the origin of life to the cognitive level. This is not an entirely new view, but an elaboration and integration of ideas from several well-established areas of physics, logic, computation theory, molecular biology, and evolution theory.

[P. 14] Symbols are difficult to define in any simple way because symbols are functional, and function cannot be ascribed to local structures in isolation. The concept of symbol, like the more general concept of function, has no intrinsic meaning outside the context of an entire symbol system, as well as the material organization that constructs (writes) and interprets (reads) the symbol for a specific function such as classification, control, construction, communication, decision-making, and model-building (e.g., Pattee 1969 [1]). All these activities can be identified as functions only in specific contexts, from the local goals of individuals to the global survival of species. The symbol vehicle is only a small material structure in a large self-referent organization, but the symbol function is the essential part of the organization's survival and evolution. This autonomous structure-function self-referent organization is what is entailed by my term semantic closure.

[P. 15] Another explanation of why symbolic behavior cannot be described by laws is that laws are invented to be complete and inexorable. Therefore, one cannot amend or adjust lawful behavior itself. Laws have no alternatives. The only meaning we can attach to a choice of alternatives in a system described by deterministic laws is through measurement and control of initial conditions. Writing symbols is a time-dependent dynamic activity that leaves a time-independent structure or record. The mathematician Emil Post (1965 [2]) described the writing of symbols as "Activity in time [that] is frozen into spatial properties." Symbols are read when these structures re-enter the dynamics of laws as constraints (Pattee, 1972 [3]).

[P. 16] Any model must in some sense have behavior similar to what it models. In the symbol-based formal models that are the established format for physical theories, similar behavior is a metaphor established by a parallelism between a few selected aspects of the behavior of the symbols, determined by our local mathematical or computational rules.

[P. 17] Each scientific culture has its own reason for ignoring the matter-symbol problem. Physics, with its sharp categorical distinction between matter and symbol, does not normally require a theory of symbols, even though theories expressed in mathematical symbol systems play the primary role in physics.
Also, physicists study material systems that in most cases do not themselves contain intrinsic symbolic activities and functions ... Because organisms depend on intrinsic symbolic controls, and the origin of life requires a symbolic genetic code as a crucial step, biologists should be much more interested in the matter-symbol problem. However, this is not the case. Most biologists are material reductionists, and the discovery of the material structures that correlate with the symbolic activity and function is the only level of explanation they are looking for. Consequently, experimental and material discoveries, not theory, play the primary role in biology ... Philosophers have traditionally focused on the higher mind-body problem, but they have also found metaphysical stances that effectively minimize the matter-symbol problem, such as idealism, dualism, material reductionism, functionalism, and the newest and most effective of all, computationalism. Besides the traditional cultures of philosophy, physics, and biology, a fourth, computer-based culture, comprising the classical field of artificial intelligence (e.g., Newell, 1980 [4]; Pylyshyn, 1984 [5]) and the more recent field of artificial life (e.g., Langton, 1988 [6]), has adopted the programmable computer as a universal symbolic model.

[P. 19] Many scientists have taken the reasonable strategy of treating the matter-symbol distinction as originating at some late stage of a general process of spontaneously increasing complexity of material systems. This type of model has often been called self-organization. The older literature includes discussions of physical systems that are described simply but spontaneously grow in complexity (e.g., Yovits & Cameron, 1960 [7]; Yovits, Jacobi & Goldstein, 1962 [8]). The few general theories of self-organization were not thoroughly developed at the time (Simon, 1962 [9]; Burgers, 1963 [10]; Pattee, 1969 [11]; Kauffman, 1969 [12]). In the 1970s, new types of order production were discovered in nonequilibrium thermodynamical and nonlinear dynamical models (e.g., Glansdorff & Prigogine, 1971 [13]; Haken, 1977 [14]; Nicolis & Prigogine, 1977 [15]). More recent work on self-organization is collected in Yates (1987 [16]). Some of these developments in thermodynamics have led to speculations about possible organizing principles that modify the traditional neo-Darwinian model (e.g., Weber, Depew & Smith, 1988 [17]). These models and theories of organization are generally applied only to prebiotic, or at least pre-symbolic, matter, and therefore do not address the matter-symbol relation. Currently, with the discovery of unexpected richness in nonlinear dynamics, self-organization is usually included in the new field called the science of complexity (e.g., Stein, 1988 [18]; Nicolis & Prigogine, 1989 [19]; Jen, 1989 [20]; Zurek, 1990 [21]; Stein & Nadel, 1990 [22]; Kauffman, 1993 [23]). Its potential arises from many sources that include mathematicians, physicists, and
computer and cognitive scientists, each with characteristic but overlapping approaches, e.g., nonlinear dynamics, chaos, cellular automata, non-equilibrium thermodynamics, statistical mechanics, solid-state physics, connectionist machines, artificial neural nets, etc.

[P. 20] At the cognitive level, symbols allow our own subjective sense impressions to be compared with another's and thereby endowed with some degree of objectivity. By objectivity, here, I mean only the ability of different observers to reach a principled agreement by communicating what they observe. ... Why does a material structure need symbols to communicate or transfer its structure to other matter? The first reason, mentioned by von Neumann, is that any universal constructor that could assemble its material parts would function more efficiently if it replicated from a symbolic description of its material parts rather than replicating by material self-inspection of its parts.

Footnotes:
[1] H.H. Pattee, How Does a Molecule Become a Message? Developmental Biology Supplement 3 (1969) pp. 1-16.
[2] E. Post, quoted from M. Davis (Ed.), The Undecidable, Raven Press, Hewlett, NY (1965).
[3] H.H. Pattee, Laws, Constraints, Symbols, and Languages. Towards a Theoretical Biology 4 (1972) (Ed. C.H. Waddington), Edinburgh University Press, pp. 248-258.
[4] A. Newell, Physical Symbol Systems. Cognitive Science 4 (1980) pp. 135-183.
[5] Z. Pylyshyn, Computation and Cognition. MIT Press, Cambridge, MA (1984).
[6] C. Langton, Artificial Life. Addison-Wesley, Redwood City, CA (1988).
[7] M.C. Yovits & S. Cameron (Eds.), Self-Organizing Systems. Pergamon, NY (1960).
[8] M.C. Yovits, G.T. Jacobi & G.D. Goldstein (Eds.), Self-Organizing Systems. Spartan, Washington, DC (1962).
[9] H.A. Simon, The Architecture of Complexity. Proc. Am. Philos. Soc. 106 (1962) pp. 467-482.
[10] J.M. Burgers, On the Emergence of Patterns of Order. Bull. Am. Math. Soc. 69 (1963) pp. 1-25.
[11] H.H. Pattee, How Does a Molecule Become a Message? Developmental Biology Supplement 3 (1969) pp. 1-16.
[12] S.A. Kauffman, Metabolic Stability and Epigenesis in Randomly Connected Nets. J. Theoretical Biology 22 (1969) p. 437.
[13] P. Glansdorff & I. Prigogine, Thermodynamics of Structure, Stability, and Fluctuations. Wiley, London (1971).
[14] H. Haken, Synergetics. Springer, Berlin (1977).
[15] G. Nicolis & I. Prigogine, Self-Organization in Non-Equilibrium Systems. Wiley, NY (1977).
[16] F.E. Yates, Self-Organizing Systems: The Emergence of Order. Plenum, NY (1987).
[17] B.H. Weber, D.J. Depew & J.D. Smith, Entropy, Information, and Evolution. MIT Press, Cambridge, MA (1988).
[18] D.L. Stein (Ed.), Lectures in the Sciences of Complexity. Addison-Wesley, Redwood City, CA (1988).
[19] G. Nicolis & I. Prigogine, Exploring Complexity. Freeman, NY (1989).
[20] E. Jen (Ed.), Lectures in Complex Systems. Addison-Wesley, Redwood City, CA (1989).
[21] W.H. Zurek (Ed.), Complexity, Entropy, and the Physics of Information. Addison-Wesley, Redwood City, CA (1990).
[22] D.L. Stein & L. Nadel (Eds.), Lectures in Complex Systems. Addison-Wesley, Redwood City, CA (1990).
[23] S.A. Kauffman, The Origins of Order. Oxford University Press (1993).

— 12 (1995) 1-2, pp. 29-43, R. Rosen, The Mind-Brain Problem and the Physics of Reductionism.

[P. 29] The various modes of Reductionism, which differ in detail, all agree that there is a list, or algorithm, of a purely syntactic nature, which itemizes such cognitive conditions. This list describes the property $P$ in question to a finite-state "machine", as software, and thus enables the machine to recognize the property and/or to generate some actual system $x$ such that $P(x)$ is true.

[P. 30] Thus, Reductionism asserts a kind of "fractionation" of an arbitrary predicate or property of a material system like $x$, as a conjunction of essentially independent sum-properties $P_i$, so that we can write

$$P(x) = \bigwedge_{i=1}^{N} P_i(x) \qquad (1)$$

where the truth or falsity of the propositions $P_i(x)$ can be determined on purely cognitive grounds. Moreover, as asserted above, the truths of the constituent summands on the right-hand side of (1) are each individually necessary, and all together sufficient, for $P(x)$ to be true.

[P. 32] Let us now turn briefly to what Reductionism says about properties like $P_i$ above, into which we have fractionated our arbitrary property $P$ of a given material system $x$. ...
we can partition, or fractionate, any material system $x$ into independent pieces $x_j$; by "independent", we mean intuitively that these pieces can be separated from the larger system $x$, and from each other, in such a way that their individual properties or predicates $P_k(x_j)$ are (a) entirely independent of any such fractionation process, or indeed of any larger context whatever, and (b) are precisely the individual summands appearing in (1) above ... In any case, the essential point here is to require that each $P_i(x)$, each property or predicate of $x$, be of the form

$$P_i(x) = P_k(x_j) \qquad (2)$$

and hence, that we can rewrite (1) above as

$$P(x) = \bigwedge_{j,k} P_k(x_j) \qquad (3)$$

The assertion is that every property or predicate or proposition, of the form $P(x)$, about a material system $x$, is expressible in the form (3).

[P. 33] The soul of Reductionism ... is the expression (2) above, which identifies properties or predicates $P_i$ of the original, intact system $x$ with those of certain of its subsystems (fractions) $x_i$; we thus suppose a set of operators on $x$, which isolate these subsystems from $x$, of the form

$$F_i(x) = x_i$$

These fractions, we recall, are required to be "context-independent", so that their properties are the same in any environment.

[P. 34] The basic feature of these suppositions is that fractionation operations like $F_i$ are in principle reversible. Indeed, the upshot of the above discussion is that our original system $x$ is to be regarded as some kind of direct product; e.g. $x = x_1$