Informatica — A Journal of Computing and Informatics
The Slovene Society INFORMATIKA, Ljubljana
YU ISSN 0350-5596

EDITOR-IN-CHIEF: Anton P. Železnikar, Volaričeva ulica 8, 61111 Ljubljana
ASSOCIATE EDITOR: Rudolf Murn, Jožef Stefan Institute, Ljubljana

Volume 15, Number 1, February 1991

CONTENTS

- Monitoring of Real-time Systems — V. Kobold, M. Špegel — 1
- Informational Models of Dictionaries I — A. P. Železnikar — 11
- Structured Object-oriented System Decomposition — S. Mrdalj, V. Jovanović — 22
- Neural Nets: Survey and Application to Waveform Processing — I. Bruha — 27
- Case Tools — Software Development and Maintenance Aid (A Survey) — S. Bošnjak, L. Sereš — 43
- Implementation of Denotational Semantics — M. Mernik, V. Žumer — 48
- Significance Level Based Classification with Multiple Trees — A. Karalič, V. Pirnat, B. Robič — 54
- On Parallel Matrix Multiplication — Polona Blaznik — 59
- A Hierarchical Multiprocessor System — P. Kolbezen, P. Zaveršek — 65
- Determining the Position of Bodies in Space from a Single 2-D Image — Barbara Lakner — 77
- Design and Use of a Special Functionally Designed Computer Keyboard for a Network of Personal Computers — M. Baar — 83
- Book News — 84

MONITORING OF REAL-TIME SYSTEMS                                INFORMATICA 1/91

Keywords: real-time programming, real-time monitoring, debugging, analysis

Viktorija Kobold, Železarna Ravne, Ravne na Koroškem
Marjan Špegel, Institut Jožef Stefan, Ljubljana

ABSTRACT

Standard approaches to software performance analysis are not directly applicable to real-time systems owing to their time criticality. A tool which can monitor and verify whether the system meets its timing requirements is therefore very useful. A real-time monitor enables us to observe the behavior of a real-time process during its execution. Emphasis must be placed on real-time programming constructs and on programming languages for real-time systems. Some important research issues of a programming methodology are presented. Monitoring a process means determining the values of certain parameters associated with a time sequence of events and modifying them. Some possibilities for the implementation of real-time monitoring are presented.

SUMMARY

In real-time systems, standard methods of system performance analysis cannot be applied directly because of the systems' time criticality. For this reason, a tool which can determine whether all timing requirements have been met is very useful. During development it is necessary to use an appropriate programming approach and to choose a programming language that meets the timing-criticality requirements. Observing a process during its execution means determining the values of certain parameters associated with the time sequence of events. The implementation of a monitoring tool itself depends on the environment in which it will be used.

1 Introduction

In recent years, there has been rapidly increasing use of computers as passive (monitoring) and active (controlling) components of real-time systems (e.g., air traffic control, aerospace, aircraft, industrial plants, hospital patient monitoring systems) [LS87]. For real-time applications, the problems of safety and reliability are particularly important because a failure may cause a disaster (e.g., destruction of human life and property). By definition, a real-time system is a system which responds to events quickly enough to affect its environment within required response times. Such a system consists of multiple cooperating tasks, which implies detailed consideration of intertask communication and concurrency problems in program and system design. Real-time systems are divided into two types: 'soft' and 'hard'. In soft real-time systems, tasks are performed by the system as fast as possible, but they are not constrained to finish by specific times [CSR87]. On the other hand, hard real-time systems are defined as those in which the correctness of the system depends not only on the logical results of computation, but also on the time at which the results are produced [SR87]. In the rest of this paper we concentrate on hard real-time systems, referred to simply as real-time systems. Real-time systems are further divided into centralized systems and distributed systems. A centralized system is one in which the processors are located at a single point in the system (e.g., a multiprocessor system). In contrast, in a distributed system the processors are distributed at different points in the system [CSR87]. An example of such a system is a local area computer network.
Such a system consists of a collection of communicating and cooperating processors or computers (nodes) that work toward a common goal. Due to the growing number of real-time applications, there is also a growing need for real-time software to control them and for software tools such as timing tools, schedulability analyzers, real-time monitors and real-time debuggers. Timing tools are used to estimate or measure the timing characteristics of executing a given software module or function, such as the worst-case response time of the system to a particular type of event. A schedulability analyzer can analyze the schedulability of a given real-time task set under a specific scheduling policy. A real-time monitor can monitor and verify whether the system meets its timing requirements at runtime. Finally, a real-time debugger should be able to capture unseen timing bugs at runtime with minimum system interference. It is known that it is difficult to design and analyze software systems for real-time applications owing to their time criticality [SR87]. Standard approaches to software performance analysis are therefore not directly applicable to real-time systems. In particular, 'timing bugs' are very difficult to capture and eliminate from real-time systems. In the rest of this paper, we concentrate on monitoring systems, which can support performance evaluation, testing and debugging of real-time systems. In section 2 we give a description of real-time programming and of a real-time programming event monitor. In section 3 a definition of a real-time monitor is given and the monitoring process is presented. In section 4 some principles of the implementation of monitoring systems are described. Finally, a schedulability analyzer is introduced, which can analyze whether or not a given real-time task set can meet its timing constraints under a specific scheduling policy.
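The simplest kind of timing tool described above can be approximated in software by repeatedly measuring a module's execution time and recording the worst case observed. A minimal sketch follows; the function names are hypothetical, and note that an observed maximum only bounds the true worst case from below (real timing tools rely on static analysis or exhaustive measurement):

```python
import time

def worst_case_time(func, runs=1000):
    """Return the worst observed execution time of func over many runs."""
    worst = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        func()
        elapsed = time.perf_counter() - start
        worst = max(worst, elapsed)   # keep the worst case seen so far
    return worst

def sample_task():
    # Stand-in for a software module under test.
    sum(i * i for i in range(1000))

print("worst observed: %.6f s" % worst_case_time(sample_task))
```

In a hard real-time setting such a measurement is only a sanity check; it cannot by itself guarantee that a deadline is always met.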
2 Real-Time Programming

Real-time programming involves the design of multiple program modules concurrently executing and interacting with one another through the sharing of resources and interprogram communication of data and events [PS85]. The multiple program modules or processes may be part of one program (with subtasks created by the program) or separate programs (to be run concurrently by a real-time operating system). These processes execute concurrently at different priority levels and must use real-time programming constructs. Real-time programming constructs are used to control interaction and are often invoked through the system services of a real-time operating system. For example, they can be used to control concurrent access to data, enforce mutual exclusion of critical sections, perform safe interprocess communication, and synchronize the scheduling of one another. Real-time programs cannot be tested simply by running them with data sources, because their execution depends upon asynchronously occurring events. Hence, testing and verification of some critical aspect of the design often requires forcing time sequences of events. The real-time monitor software permits the addition of 'action' routines which are triggered by event-reporting routines. Each action routine must be designed for the specific tests being carried out. However, it does not permit flexible forcing of concurrent tasks to synchronize in such a way that conditions for a particular test are set up and the results documented in the ensuing trace report. The important research issues of a programming methodology for real-time systems include [Sta88]:

• Support for the management of time.

1. Language constructs should support the expression of timing constraints.

2. The programming environment should provide the programmer with primitives to control and keep track of the resource utilization of software modules.

3.
Language constructs should support the use of scheduling algorithms.

• Schedulability check.

• Reusable real-time software modules. They reduce the software development cost and enhance the quality of the resulting software. They have the added difficulty of meeting different timing requirements for different applications.

• Support for distributed programs and fault tolerance.

In conclusion, programming languages should have explicit constructs to express the time-related behavior of modules, and the semantics must be unambiguous. In the rest of this section the features of monitoring languages are introduced and an event monitor is described.

Monitoring languages. A monitoring language is a notation in which a user can specify predicates and actions. It should allow predicates which can access any state variable. In order to enable the user to meet the requirement of completeness, the monitoring language must depart from the variable-accessing mechanisms built into the programming language. According to [PN81], monitoring languages can be classified into:

1. typical high-level programming languages (which were not designed for monitoring);

2. embedded monitoring languages (monitoring statements may be interspersed between programming language statements);

3. separate monitoring languages (predicate and action specifications may range over the entire state space).

Which language a programmer will use depends on why he monitors the program in execution and what he hopes to achieve by doing so.

2.1 Real-time programming event monitor

In order to study and learn real-time programming, it is useful to have a real-time programming event monitor such as the one described by [Sch88]. The event monitor produces a complete time-stamped event history sequence. This sequence is associated with an execution of the real-time programs, with each event identified by a type and a triggering program module.
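The behaviour of such an event monitor can be sketched as follows. This is a simplified, hypothetical design, not the monitor of [Sch88] itself: each event-reporting call appends a time-stamped record identifying the event type, the triggering module, and a small amount of extra information.

```python
import time

event_history = []  # the complete time-stamped event history sequence

def report_event(event_type, module, info=None):
    """Event-reporting routine: append one time-stamped event record."""
    event_history.append({
        "time": time.perf_counter(),  # time stamp
        "type": event_type,           # e.g. 'enter', 'exit', 'syscall'
        "module": module,             # triggering program module
        "info": info,                 # event-specific extra information
    })

def routine_a():
    report_event("enter", "routine_a")
    report_event("queue_fill", "routine_a", info={"size": 3})
    report_event("exit", "routine_a")

routine_a()
for rec in event_history:
    print(rec["type"], rec["module"])
```

The standard reports described below can then be produced by filtering and summarizing this record list after the run.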
For example, monitored events include all calls to system services, entrance to and exit from routines, entering and exiting a critical section, the filling or emptying of a queue, the detection of a value of a variable in an alarm range, etc. [Sch89]. Event records are stored in sequence of occurrence and contain a time stamp, identification of the event itself, identification of the routine making the system call or signaling the significant event, and a small amount of additional information related to the particular event. Standard reports include three different kinds of reports. The first is a report of all events, containing each event, the triggering routine, and the time of occurrence in time sequence. The included events are calls to operating system services and user-defined events such as entering and leaving routines, critical sections, etc. The second is the same report as the first, but restricted to events associated with a single routine or a set of routines. The last report includes special events such as delays of scheduled tasks, response times, peak queue sizes and the like. In conclusion, the event monitor can be a useful aid in teaching real-time programming.

3 Program Monitoring and Analysis

The monitoring and analysis of the progress of a program in execution has traditionally been part of the program development cycle. There are two reasons why one might want to examine the dynamic aspects of a program. The first is to evaluate the performance of a program, and thus to assess its overall behavior. The second is to demonstrate the presence of programming errors, and to isolate erroneous program code and correct it [Pla84], which is referred to as 'debugging a program'. Actually, we can say that monitoring is the extraction of dynamic information concerning the execution of a computational process [Sno88]. Monitoring is an essential part of many program development tools.
Monitoring a process means measuring or determining the values of certain parameters associated with a time sequence of events and modifying them. It also means observing the process's trajectory through the program state space and, if needed, modifying this trajectory artificially [PN81]. A monitoring system must not affect the timing constraints of the target process (i.e., the monitored process). If the monitor satisfies this requirement, it is called a real-time monitor. A monitor is a very useful tool because it makes it possible to observe the behavior of a real-time process during its execution, in order to collect genuine data for statistics (system and performance analysis), or to detect illegal or unexpected process states (identifying erroneous behavior, debugging). This is particularly important in an environment where simulation of the system behavior is not possible, or in cases where software errors only manifest themselves when the production system is run [Pla84]. However, it is almost impossible to detect or fix a 'timing' bug in real-time programs [Gla80]. A timing problem further complicates system modification, reconfiguration and maintenance [Mok83]. In the past, most researchers have resorted to 'ad hoc' monitoring of programs written in a high-level language, even though in practice most debuggers operate at a level close to that of the target machine in execution. Another, more fundamental problem is that a software-based debugger which can monitor both control-flow and data-space changes may impose a heavy overhead on the execution of the target program. Because of this, software-based monitoring is impractical for most kinds of performance analysis, or for debugging in 'live' situations. Performance overhead is the principal factor which constrains the development of modern, high-level-language-oriented monitoring tools and their application [CLW90].
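The overhead of software-based monitoring is easy to demonstrate. In the following sketch, Python's line-level trace hook is used purely as an illustration: the same workload is run with and without a trace function installed, and even a trace function that does nothing measurably slows execution.

```python
import sys
import time

def workload():
    total = 0
    for i in range(20000):
        total += i * i
    return total

def timed(func):
    start = time.perf_counter()
    func()
    return time.perf_counter() - start

def null_tracer(frame, event, arg):
    # A do-nothing trace function; even this costs one call per line executed.
    return null_tracer

plain = timed(workload)          # uninstrumented run
sys.settrace(null_tracer)
traced = timed(workload)         # same workload under line-level tracing
sys.settrace(None)
print("slowdown: %.1fx" % (traced / plain))
```

The measured slowdown varies by interpreter and machine, but it illustrates why purely software-based monitoring is unsuitable for timing-critical targets.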
3.1 The monitoring process

What is to be monitored is expressed in a monitoring statement which consists of two parts, a predicate and an action. The predicate is a boolean expression defined on the state space of the target process. When the predicate becomes true, the action modifies the target process. For example, actions may consist of operations normally associated with performance and data analysis (e.g., saving current variable values for later processing, incrementing a counter) and with debugging (e.g., halting the monitored process). In general, the predicate identifies individual states or a set of states in the state space of the target program. Predicates may be divided into three classes: process predicates, state predicates and real-time predicates. The process predicate assigns a truth value to a target process. Process predicates arise in program performance analysis (for instance, how often are two specific statements executed in sequence?) and in system security (e.g., warn me if any files have been opened but not yet closed at log-out time). Next, the state predicate is a boolean function defined on the state space of the target program. It divides the state space into two regions, one in which it yields a true value and another in which it evaluates to false. Finally, the real-time predicate is a state predicate which can be evaluated without delaying the target process. The monitoring functions can be divided into two classes [Pla84]:

1. Tracing low-level events that change the process state and reconstructing a high-level interpretation of the process state.

2. Executing monitoring statements (i.e., evaluating predicates and executing actions if the predicates become true).

As a matter of fact, a monitor process can be designed according to two policies. The first is to process as little information as possible, in order to achieve a short processing time.
The other is to update all dynamic information structures incrementally, assuming that each increment can be executed within a specifiable amount of processing time. Monitoring requires that the entire state space of a program be accessible, which implies two requirements for monitoring. The first is the ability to exercise selective and close control over the execution of the program. This may be carried down to the finest level. The second, referred to as the 'completeness' of a monitor [PN81], is the ability to define monitoring predicates both in terms of the control flow of the program and in terms of specific changes in its data space. This means that all conditions on the state of the target process that have interpretations at the source language level can be stated and their occurrence detected. Moreover, a requirement for debugging is the ability to inspect and modify details of the program state in execution. Typical conditions include the following:

• fraction of CPU time spent in supervisor state;

• I/O channel activity, disk accesses;

• addresses generated, paging or cache activity.

Monitoring of conditions may be defined at three levels [CLW90], namely the primitive level, the abstract level and the conditional level. The primitive level implies the use of machine-level instructions. For instance, conditions at the primitive level would include the execution of an instruction at a particular location, or the accessing of data at a particular location. Completeness at the primitive level is achieved by providing an implementation of three types of primitive monitoring functions: the code breakpoint, the data breakpoint and the watchpoint. The code breakpoint defines a point in the execution of the target process. The data breakpoint identifies access to a specific memory location, and the watchpoint recognizes a change in the value of a specific memory location. At the abstract level, the notation and semantics of high-level languages are used.
For example, an abstract condition could be a change in the value of some program variable, or entry to a particular procedure. The conditional level of monitoring involves the description of a process state which may or may not ever be reached. An example of this type of condition is the reaching of a particular program execution path. A condition defined at the abstract level or at the conditional level can in principle be implemented by translating it into one or more primitives. To do this, it is necessary for the monitoring system to determine absolute addresses of the target process at run-time. In certain cases it will be necessary to mirror the stack operation of the target process. All these problems related to this kind of monitoring imply a need for dynamic software structures for monitoring.

3.2 Monitoring of single processor systems

Execution monitors (such as debugging aids and software performance measurement tools) may share the processor with the target process. With two processes, the problem of synchronization between the target and the monitor system is not easy to solve. A monitoring system which slows down the target process is usually useless for real-time applications, where the execution monitor must leave the timing behaviour of the target process unchanged. The problem in implementing such monitors lies in switching between the two processes. This can be solved in two ways. First, the original target program is augmented with the code needed for monitoring. In the second method, the user can enter the monitoring program step by step while the target program is running or ready to run. The target process and the monitor have to be implemented as two different processes executed on the same processor. The multiplexing of the processor between the two processes can be achieved in several ways:

1. The execution monitor assumes the role of the CPU and interprets the target program.
In this case, however, the execution of the target process is slowed down too much [PN81], which may be unacceptable for many applications.

2. The CPU supports a trace mode in which it generates a trap after each instruction, and an appropriate trap handler can be used to divide the processor between the two processes. Obviously, tracing each instruction considerably delays the target process.

3. Replacing instructions in the target program, at locations where the execution monitor should gain control, with calls to the execution monitor ('patching') allows full-speed execution in program parts where the execution monitor does not have to intervene. A problem may arise if the number of patches becomes large.

4. The performance of the monitoring system is improved by increasing the degree of parallelism through the use of additional hardware.

The latter method achieves real-time monitoring if we assume that it is possible to construct a hardware breakpoint device which is a true observer of the target process's activities.

3.3 Monitoring of real-time distributed systems

The monitoring of real-time distributed systems involves the collection and interpretation of information (e.g., event time stamps, synchronization sequences, race conditions, register status, transaction identifications, interrupt activities). This information can be used for improving the performance or for testing and debugging of distributed systems. Nevertheless, monitoring a real-time distributed system is much more difficult than monitoring a centralized, sequential computing system because of multiple asynchronous processes, critical timing constraints and significant communication delays [TFC90]. First of all, asynchronous processes behave nondeterministically and their behavior is irreproducible. Therefore, it is hard to understand and interpret the obtained results.
Secondly, the correctness of a real-time distributed system is determined by meeting the timing constraints imposed by the real-world processes. Finally, geographically dispersed nodes may introduce significant communication delays. This can cause improper synchronization among the processors. Moreover, the inter-processor communication cost in a distributed system is not negligible compared to the processor execution cost. The noninvasive monitoring system presented by [TFC90] supports process-level activities (e.g., internode and intranode communication), function-level activities (e.g., procedure calls), and instruction-level activities (e.g., step-by-step instruction traces). Monitoring process-level activities provides a high-level view of the target system's behavior. For example, typical events are the creation and termination of a process, process synchronization, interprocess communication and external signals.

4 Implementation of Monitoring System

The techniques used to implement an execution monitor depend on the environment in which the monitor will be used. The earliest execution monitoring techniques are:

• user-controlled breakpoints;

• tracing, observing and setting values of variables;

• local modification of the program (patching).

Early execution monitors were obviously developed for assembly languages. Then came efforts to integrate debugging and other monitoring facilities into a programming language and its compiler. The simplest technique for dynamic analysis involves the insertion of 'probe' instructions (e.g., print statements) into the program text [CLW90]. This may be performed by preprocessing the program source, but the modifications to the program code may introduce unwanted side effects.
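Probe insertion can be combined with the predicate/action scheme of Section 3.1. In the hypothetical sketch below, probe calls inserted into the program text evaluate a watchpoint-style predicate (has a watched variable changed value?) and, when it fires, run an action that records the change.

```python
watchpoints = {}   # variable name -> last value seen by a probe
trace_log = []     # the action appends change records here

def probe(name, value):
    """Inserted 'probe instruction': evaluate the watchpoint predicate
    (has the watched variable changed value?) and run the action."""
    if name in watchpoints and watchpoints[name] != value:
        trace_log.append((name, watchpoints[name], value))  # action: record change
    watchpoints[name] = value

def target_program():
    x = 0
    probe("x", x)          # probe inserted after the assignment
    for i in range(3):
        x += i
        probe("x", x)      # probe inserted after each update
    return x

target_program()
print(trace_log)
```

As the surrounding text warns, the inserted probes themselves consume time in the target program, which is exactly the side effect that makes this approach problematic for real-time targets.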
The monitoring process can be implemented in two different ways [PN81]:

• systems that interleave entities of the monitoring language with the programming language;

• systems that allow the specification of the monitoring task as a separate 'monitoring program'.

These correspond to the classification of monitoring languages in section 2: typical high-level languages are not designed for monitoring at all; in embedded monitoring languages, monitoring statements may be interspersed between programming language statements, so data references are necessarily relative to the current point of control; in separate monitoring languages, the predicate and action specifications may range over the entire state space. A further step towards real-time monitoring is taken by introducing a second processor which is able to execute the monitor concurrently with the target process. The implementation of a monitor which shares the processor with the target process is described in Section 4.1. Principles of monitoring on distributed systems are described in Section 4.2.

4.1 Principle of Monitoring Implementation on Single Processor Systems

An implementation of a real-time monitor on a single processor system is described by [Pla84] and [PN81]. The idea is to use a FIFO (first-in first-out) queue inserted between the target processor and the monitor. The FIFO queue is a register which is able to store information from the system bus. The execution monitor should be synchronized with the output of the FIFO queue. It should run at nearly the same speed as the target process to prevent the FIFO queue from overflowing; in this case the FIFO queue acts as a delay line. A second device, called the 'bus listener' or target processor interface, is used to listen to the transactions occurring between the target processor and other parts of the system. The data are then fed into the FIFO register. At the output of the FIFO, the temporarily stored data about memory transactions are used to manipulate the 'phantom' memory.
The phantom memory is a dual-port memory which is accessed by the monitor process and by the target processor interface. The monitor process may read the contents of the phantom memory at any time. It may also lock the output of the FIFO; during that interval the data are kept in the FIFO. The monitor process may also directly interpret the sequence of memory transactions, using the data path from the FIFO. The last building block, a multiple breakpoint register, is added to speed up the monitor process. Connected to the output of the FIFO, it reports to the monitor any memory transaction referencing a location belonging to a previously defined set of memory locations. Other functions of the monitoring system, such as tracing low-level events and executing monitoring statements, are implemented in software. Processes are written in a high-level, block-structured programming language. This kind of monitoring system has the following disadvantages: first, it limits the feasibility of real-time monitoring to cases where procedure calls and returns do not happen too frequently, and second, it limits the complexity and number of monitoring statements that can be submitted to the monitor.

4.2 Principles of Monitoring Implementation on Distributed Systems

Much research has been done and many tools for monitoring have been developed. Even so, they are not yet practical enough for monitoring real-time distributed computing systems, due to their invasive nature. [TFC90] have developed a real-time monitoring system that ensures minimal interference in the execution of a distributed target computing system. Their tool is used to support testing and debugging as well as to evaluate performance. This monitoring system extracts information directly from traffic on the internal buses of a target system and is described in Section 4.2.1. The next example of a real-time monitor represents an invasive monitoring system because it needs extra kernel support.
Finally, we introduce an approach which alleviates the invasive nature of a monitor.

4.2.1 Noninvasive Monitoring

A real-time distributed computing system consists of a hardware part and a software part. The hardware part includes a collection of nodes and a communication network. Each node has its own CPU, peripherals, memory, and communication interface. The software part includes the operating system, the communication module, and a collection of application processes; it is executed on each node of the real-time distributed computing system. The main purpose of the noninvasive assumptions (e.g., the target system is a distributed/multiprocessor system with a master node and slave nodes, communication among processes is via shared variables, input to the target distributed system is asynchronous, procedure calls are implemented by system calls, recursive calls of a procedure are not allowed) is to ease the identification of the events of interest (such as interprocess communication, interprocess synchronization, process creation and termination, input and output operations, and procedural and recursive calls), and hence to simplify the trigger conditions and reduce the complexity of the postprocessing mechanism. The system architecture of the noninvasive monitoring system consists of two major components: the interface module and the development module. The interface module can be considered the front end of the monitoring system. Its main function is to latch the internal states of the target system based on predefined conditions set by the user. It copies the internal states of the target node's processor and records data from the buses of the target node into the memory buffer unit. The development module is the host computer for the interface module. This module is a general-purpose microprocessor-based system. It contains all the supporting software for the initialization of the interface module and for postprocessing activities.
The development module is basically independent of the target node processor. This is achieved by separating the target-dependent functions into the interface module. The development module provides an interactive interface to the user. It is responsible for all the testing and debugging activities, including:

• initialization of the monitoring system;

• controlling the interface module to latch the target node execution history; and

• performing postprocessing on the recorded execution history.

The development module consists of the development processor unit and the development memory unit. First, the development processor unit supports functions such as postprocessing, initialization of the interface module, and providing a user-friendly interface. Second, the development memory unit consists of two parts: the development processor memory and the memory configuration unit. The program execution history from the target node is latched into the memory configuration unit. Since the noninvasive monitoring system does not steal CPU time from the target real-time distributed computing system, it does not interfere with target system execution. The noninvasiveness of monitoring is achieved by extracting information directly from traffic on the internal buses of the target distributed system.

4.2.2 ARM

ARM is an example of an Advanced Real-Time Monitor/Debugger which monitors and debugs real-time tasks [Tok90]. It can also analyze the target system's runtime behavior in real time. Besides this, it can visualize the system's scheduling decisions and events, and analyze the number of tasks, the number of missed and met deadlines, the number of scheduling events, and the total CPU utilization. This tool has been developed for use with the ARTS real-time kernel for distributed real-time applications. The monitoring can be done directly on the actual target system, or the tool can be run with the Scheduler 1-2-3 simulator, which is mentioned later in this paper. The approach is based on monitorability analysis, which is used to predict the maximum overhead caused by monitoring/debugging activities [TK88]. As we have seen, particularly in real-time monitoring it is very important to predict the maximum interference and capability of the monitoring process itself. As a matter of fact, this real-time monitor is an invasive monitoring system in the sense that it needs extra kernel support.

4.2.3 Relational Approach to Monitoring

Traditional monitoring techniques are inadequate for monitoring complex systems (e.g., multiprocessors or distributed systems). A new approach is described in which a historical database forms the conceptual basis for the information processed by the monitor [Sno88]. Monitoring is concerned with retrieving information and presenting this information in a derived form to the user. The monitor is therefore fundamentally an information-processing agent, with the information describing time-varying relationships between entities involved in the computations. In other words, monitoring is an information-processing activity. The generated information is structured in the relational model. Attention is focused on querying a relational database model, so that the user can specify what information is to be collected. This approach also controls the amount of monitoring data collected. To summarize, this approach permits advances in specifying the low-level data collection, specifying the analysis of the collected data, performing the analysis, and displaying the results.

5 Scheduling Analysis

For real-time embedded system analysis, it is necessary to incorporate timing information into the analysis. However, there are several problems in building real-time development tools. For instance, basically correct software actions which are too early or too late can lead to unsafe conditions.
Furthermore, the predictability of the analyzer depends on what scheduling policies the target system uses and what types of real-time tasks it has [TK88]. For this reason, real-time software must be verified to adhere to its critical timing constraints before it is used. This verification process is denoted as schedulability analysis [Sto87]. A schedulability analyzer can analyze or verify whether or not a given real-time task set can meet its timing constraints under a specific scheduling policy. In analyzing the scheduling algorithms, different performance metrics can be adopted. In dynamic real-time scheduling, it is very important to determine the percentage of tasks which meet their deadlines (the weighted guarantee ratio) [BS88]. On the other hand, real-time systems must be analyzed for worst-case schedulability. In this case we should know how many events can occur in the worst case. A well-known solution is due to Leinbaugh (guaranteed response time) [Lei80]. The performance analysis is done by simulating the behavior of the algorithms. More complex analysis can be done by using Scheduler 1-2-3, which has been developed for the ARTS distributed real-time kernel [Tok90]. It uses analysis methods to determine whether a feasible schedule exists for a given task set and under what conditions deadlines cannot be met. Scheduler 1-2-3 can be used to predict the timing effects of software and hardware modifications, and it can be integrated with other test tools and the real-time monitor. Therefore, it can be used as a closed-form analyzer or as a simulator. For example, schedulability analysis under the rate monotonic algorithm [Kor90] is done by means of a closed-formula analysis, while for other scheduling algorithms Scheduler 1-2-3 provides a simulator.

6 Conclusions

In this paper we introduced a monitoring system which can support performance evaluation, testing, and debugging of real-time systems. A real-time monitor can monitor and verify whether the system meets its timing requirements at runtime; it must not affect the timing constraints of the monitored system. We discussed real-time programming constructs, which are used to control interaction and are often invoked through the system services of a real-time operating system. Furthermore, we must take into account the role of programming languages in real-time programming: programming languages should have explicit constructs to express the time-related behavior of modules, and their semantics must be unambiguous. Real-time monitoring was presented as observing a sequence of states of a process (i.e., predicates) and assigning a truth value to each of them. A positive evaluation of a predicate indicates an action, which can be used to record information about the process. A real-time monitor on a single-processor system shares a processor with the target process; the problem of synchronization between the processes is essential. Furthermore, monitoring of real-time distributed systems is much more difficult because of multiple asynchronous processes, critical timing constraints, and significant communication delays. The inter-processor communication cost is not negligible compared to the processor execution cost, and this factor must be explicitly taken into account in scheduling. Some of the various existing implementations of a real-time monitor on a single processor and on distributed systems were presented.

References

[BS88] S. R. Biyabani and J. A. Stankovic. The integration of deadline and criticalness in hard real-time scheduling. In Proceedings of the Real-Time Systems Symposium, pages 152-160, December 1988.

[CLW90] C. C. Charlton, P. H. Leng, and D. M. Wilkinson. Program monitoring and analysis: software structures and architectural support. Software-Practice and Experience, 20(9):859-867, September 1990.

[CSR87] S. C. Cheng, J. A. Stankovic, and K. Ramamritham. Scheduling algorithms for hard real-time systems - a brief survey. July 1987.

[Gla80] Robert L. Glass. Real-time: the 'lost world' of software debugging and testing. Communications of the ACM, 23(5):264-271, May 1980.

[Kor90] Barbara Koroušić. Real-time executives for embedded microprocessor applications. Informatica, 14(4):58-63, 1990.

[Lei80] Dennis W. Leinbaugh. Guaranteed response times in a hard-real-time environment. IEEE Transactions on Software Engineering, SE-6(1):85-91, January 1980.

[LS87] Nancy G. Leveson and Janice L. Stolzy. Safety analysis using Petri nets. IEEE Transactions on Software Engineering, SE-13(3):386-397, March 1987.

[Mok83] A. K. Mok. Fundamental Design Problems of Distributed Systems for the Hard Real-Time Environment. PhD thesis, Massachusetts Institute of Technology, May 1983.

[Pla84] Bernhard Plattner. Real-time execution monitoring. IEEE Transactions on Software Engineering, SE-10(6):756-764, November 1984.

[PN81] Bernhard Plattner and Jurg Nievergelt. Monitoring program execution: a survey. Computer, 13(11):76-93, November 1981.

[PS85] J. L. Peterson and A. Silberschatz. Operating System Concepts. Reading, MA: Addison-Wesley, 1985.

[Sch88] James D. Schoeffler. A real-time programming event monitor. IEEE Transactions on Education, 31(4):245-250, November 1988.

[Sch89] James D. Schoeffler. Real-time programming and its support environment. IEEE Transactions on Education, 32(3):377-381, August 1989.

[Sno88] Richard Snodgrass. A relational approach to monitoring complex systems. ACM Transactions on Computer Systems, 6(2):157-196, May 1988.

[SR87] J. A. Stankovic and K. Ramamritham. The design of the Spring kernel. In Proceedings of the Real-Time Systems Symposium, pages 146-157, December 1987.

[Sta88] John A. Stankovic. Real-time computing systems: the next generation. In Tutorial: Hard Real-Time Systems, pages 14-37, 1988.
[Sto87] Alexander D. Stoyenko. A schedulability analyzer for real-time Euclid. In Proceedings of the Real-Time Systems Symposium, pages 218-227, December 1987.

[TFC90] J. J. P. Tsai, K. Fang, and H. Chen. A noninvasive architecture to monitor real-time distributed systems. Computer, 23(3):11-23, March 1990.

[TK88] H. Tokuda and M. Kotera. A real-time tool set for the ARTS kernel. In Proceedings of the 9th IEEE Real-Time Systems Symposium, 1988.

[Tok90] Hideyuki Tokuda. ARTS real-time scheduler analyzer/debugger. May 1990.

INFORMATIONAL MODELS OF DICTIONARIES I

INFORMATICA 1/91

Keywords: data, dictionary, electronic dictionary, information, informational dictionary, organization, structure, terminology, understanding

Anton P. Železnikar, Volaričeva ul. 8, 61111 Ljubljana

A computer-aided dictionary can be comprehended, structured, and organized in three basic ways: as a data-structured information (a classical computer program), as an information-oriented structure (an advanced electronic dictionary), and as a complex informational entity in the sense of the informational arising capability. All three concepts can substantially surpass the performance of a book-form dictionary, which always performs as a fixed, unchangeable data entity. Computer-aided and information-oriented dictionaries can bring into the foreground the most complexly imaginable semantic and pragmatic connectedness of information items. Under these circumstances it is possible to introduce various aids for the construction, development, and understanding of the information items constituting an advanced dictionary oriented also to the most pretentious user. Further, bilingual and multilingual dictionaries can offer the translation of sentences from one language into another at the user's request.
In monolingual dictionaries, the demands on syntactic, semantic, pragmatic, and stylistic requirements can be covered strictly and efficiently, delivering in any respect improved and corrected texts in written natural languages. This essay stresses the perspectives of an informational approach to the design of future electronic dictionaries.

Informacijski modeli slovarjev I (Slovene abstract). A computer-supported dictionary can be understood, structured, and organized in three different ways: as data-structured information (a classical computer program), as an information-based structure (an advanced electronic dictionary), and as a complex informational entity in the sense of the capability of informational arising. All three concepts can substantially exceed the capabilities of a book dictionary, which represents only a fixed, unchangeable data entity. Computer-supported and information-based dictionaries can take into account even the most complex semantic and pragmatic connectedness of informational units. In doing so, various aids can be introduced for the construction, development, and understanding of the informational units which constitute an advanced dictionary designed also for the most demanding user. Apart from this, a bilingual or multilingual dictionary can also have the property of translating sentences from one language into another at the user's request. In monolingual dictionaries, the requirements concerning the syntactic, semantic, pragmatic, and stylistic characteristics of the language can be fulfilled consistently and efficiently, and written texts in natural languages can thus be improved and corrected (proofread). This essay emphasizes the advantages of the informational approach in the design of future electronic dictionaries.

1. Introduction

...The semantic system of a language has to do with meanings and thus with the relation between the conventionalized symbols that constitute language and the external reality about which we need to communicate through language. ...
(NCD, 24)

What is an informational dictionary and how does it differ from a common, book-form dictionary? Even book-form dictionaries can differ essentially among each other in their intention, extensiveness, and purpose. For the most pretentious users of dictionaries, who search not only for the minimal meaning, translation, or pure concept of a word, but also for several other attributes like word contexts, phrases, idioms, and whole sentences, paragraphs, etc., and who investigate informational items in an epistemological and hermeneutic (semiotic and historical) way, a convenient book-form dictionary may not be a satisfactory solution at all. Already the semantic and pragmatic conceptual study of words calls for a careful examination of the meanings belonging to various informational items and for the study of origins, going or descending back into the domain of different languages, the modern and the antique ones, in a recurrent meaning-improving manner. To this type of searching and construction of meaning, the intention of creating a new, innovative, and adequate meaning has to be taken into the conceptualization and design of an informational dictionary. In this respect, the process of using an informational dictionary, extracting the meaning from it, and constructing a resulting meaning of an informational item approaches, in principle, the problem of understanding as information (UI1, UI2). The tool one would like to have is a particular, let us say informational, expert system which could deliver complex, linguistic (semiotic), inter-linguistic (multi-linguistic), and particular information concerning the possible and variously related meanings of an informational item (word, phrase, sentence). More precisely, this informational tool should perform in any respect as a sufficiently sophisticated informational expert dictionary system (IEDS for short).
In this essay we will develop the concepts and symbolic formalism of IEDS's, rooted in the informational algebra (IIA), and suggest their informational structure and organization in the proposed algebraic way. In most cases we shall prefer the so-called formal hermeneutic approach to understanding (UI2), which considers the development of the meaning belonging to an informational item (term, headword) through the item's historicity, but we shall also give an outlook to the so-called direct (non-historical) parallel-cyclic approach to understanding, which can occur at the conferring of a newly arising or arisen meaning in an innovative way. In the course of the discourse concerning the problem of understanding in the use of informational dictionaries, several approaches are possible. The most direct approach is the so-called spontaneous-circular understanding as used, for instance, by an inexperienced and linguistically unpretentious user of a dictionary. A professional writer, for instance, will insist on using an extremely hermeneutical method based on elementary spontaneous-circular understanding of an informational item, but also on the most considered expertise of understanding. While in the first case the approach to understanding will be characteristically understanding, using a regular informing of understanding, in the second case this process will take the form of expertizing, giving attention to the specialized emphases, which can not only improve the content of meaning but can also enrich or innovate it essentially. In an implicit or explicit form, all these approaches to the understanding of informational items of a dictionary will concern the moving from one informational item to another inside the information's semantic and pragmatic nets (mutually connected or not) within an informational dictionary. In some respect, a satisfactory informational dictionary will perform as a »well-connected« (completely connected) net of included informational items.
Theoretically, the well-connected net will guarantee access from an arbitrary informational item of the dictionary to any other concerning item. In general, an arising dictionary will always enlarge and enrich its nets and thus improve its well-connectedness in an informational manner. The question of building up the process of dictionary usage goes back to the question of how a writer or speaker performs in writing or speaking a text. Further, how does a language interpreter or a professional writer search for and construct the meaning of words, phrases, and sentences by using single-language, cross-language, and professional (technical) dictionaries, handbooks, lexicons, encyclopedias, scientific, technical, and philosophical scriptures, etc.? How does someone construct the meaning of an informational item by gathering information from various and semantically different documents, memory, and one's own intention? And finally, how does someone invent one's own meaning, although considering the usual and existing meaning of an informational item? The discussion in the previous passage leads our attention to the problem of discourse occurring between a user's metaphysical disposition (as an individual, conscious and unconscious information) and an informational dictionary (as a system information). The moving inside different semantic-pragmatic nets of a dictionary will largely depend on the dialog between the user and the informational expert facilities of the dictionary. Besides the problems of understanding and expertness of this bipolar informational game, also the problem of discursiveness will come into the focus of our attention. On the basis of this introductory analysis, the following scheme of our formal informational investigation can be put into question: What could be a general informational structure of an informational dictionary?
Which are the basic questions of the design and usage of a dictionary through the elementary possibilities (approaches) of understanding? Which could be the consequences of the discursive nature of understanding between the user and an informational dictionary (ITD)? How does a dictionary function as a discursive informational expert dictionary system? What is an informational model? The informational model (IM) of something will be a system of informational formulas describing that something as an informational entity (Železnikar, IIA). Electronic dictionary, ED for short, will be the term conceptualized by the Japan Electronic Dictionary Research Institute (look at OED) and marking a sophisticated project and implementation of dictionaries in natural languages. Thus, an ED is an information-aided or information-oriented structure which essentially surpasses the so-called book-form dictionaries and possibly also several performances (functionalities) of living human dictionaries. This holds true in particular in cases of inter-lingual dictionaries concerning several natural and ancient languages simultaneously. In this essay we shall distinguish three main informational models (IM's) of dictionaries, that is, a data-structured dictionary, an information-structured dictionary, and, finally, an informational dictionary. The second basic aspect of an electronic dictionary we must keep in mind is the question of understanding which concerns an ED, that is, self-understanding as the quality of the programmed computing system and the so-called other-understanding (understanding of the electronic system's users and designers). The system understanding represents the system functionality, its particular performance or intelligent capability in comparison to living and other artificial systems.

2. Data and Information

...No living language stands still, however much we might wish at times that it would. ...
(NCD, 24)

Before we start the discourse on data-structured, information-structured, and informational dictionaries, it is advisable to clarify the concept of data, i.e. a data entity, and that of information, i.e. an informational entity, both conceptually and in an informationally formal (logically informational) way. So we have to point out clearly the conceptual difference of the theoretical distinction between so-called data and information(al) entities. What is data as an informationally theoretical entity? How does it inform and how can it not be informed? A written text (fixed on a sheet of paper) always appears as data. It is protected against any change or illegal informational arising. A written text can inform any of its readers, certainly in different manners, depending on the informational circumstances concerning the text and the reader. We agree that a written text cannot be changed in a writing way; thus the copies of the same text perform as equivalent data. Let data as a particular informational operand be marked by δ. We agree upon the fact that data inform in an open way, that an informational receptor can be informed by data. This openness of the data δ possibility is symbolically captured by the expression δ |=. On the other hand, we have to consider two further facts concerning the lasting (absolute memorizing, steadiness) of data, roughly as δ |= δ, and the impossibility of data to be informed (informational unchangeableness), roughly as ⊭ δ. The last formula, ⊭ δ, is closed in regard to δ because of δ |= δ; otherwise it is open. Further, formula δ |= δ has to be particularized because operator |= is here a memorizing operator, that is, a non-arising (fixed) informational operator. Thus, instead of δ |= δ we can introduce δ |=_M δ, where |=_M marks a pure memorizing (data storing) informational operator.
After this conceptualization of data δ we can adopt the following informational implication:

(d1) δ ⇒ (δ |=; δ |=_M δ; ⊭ δ)

In this formula (a system of parallel informational formulas), operators |= and ⊭ are the most general informational operators in the sense that

(d2) (δ |=) ⇒ (δ |=_M ∘ |=_M); (⊭ δ)

Here, the operator composition |=_M ∘ |=_M performs as an operator which is closed (non-informing) to any other informational entity except the memorizing data δ. Further, a more precise denotation of the operator would be |=_δ, to which the operator composition reduces in a logical way. Finally, the entity δ as data implies

(d3) δ ⇒ (δ |=; δ |=_δ δ; ⊭ δ)

where |= is a general informational operator, whereas |=_δ is closed against any other operand except δ and, in this way, represents also a general informational operator of non-informing, that is, operator ⊭. Additionally, this operator is the operator of preserving or strict memorizing of the data entity δ, that is, a particularly structured operator, denoted by |=_δ. We see how data δ is an informational exception in regard to regular (arising) informational entities. A question which arises on account of formula (d3) is what the expression

(d4) δ |=_δ; |=_δ δ

could represent. By definition, this data entity cannot inform or be informed by any other informational entity except itself. Such an entity would perform as informational noise or concealment for other informational entities. However, the entities (processes) δ |=_δ and |=_δ δ which constitute (d4) could inform and could be informed, for instance, as the processes (δ |=_δ) |=; |= (δ |=_δ); (|=_δ δ) |=; |= (|=_δ δ); etc. A paragon of such a concept could be, for instance, the so-called Being as information (a universal constant or unity of being which never changes, as taught by the Eleatics and some modern existentialists). If data is information which as an informational entity never changes during an informational process, information is the entity which ever or always changes.
By this qualification, changing data perform as information. But this can never take place in the case of a book as seen from the side of the book itself. A book can neither textually change itself nor can it be textually changed by some other informational means. But this does not hold in the case of an electronic book (dictionary) if the user has access to the text of the book. What is information as an informationally theoretical entity, and what is the clear distinction in comparison to data? How does information inform and how is it informed? Let information as a particular informational operand be marked by α. We agree upon the fact that information informs in an open way and that the entities impacted by it can be foreseeable but also unforeseeable. This openness of the information α possibility is symbolically captured by α |=. On the other hand, we have to consider two further facts concerning the open and spontaneous cycling of information, roughly as α |= α, and the possibility of information to be informed, roughly as |= α. The last formula, |= α, is unlimitedly open in regard to α as well as in regard to any other informational entity which could impact α. After this conceptualization of information α we can adopt the following informational implication:

(i1) α ⇒ (α |=; α |= α; |= α)

In this formula (a system of parallel informational formulas), operators of the form |= are the most general informational operators in the sense of (i2). As we can see, the concept of information is in some sense controversial to the concept of data. By definition, data never changes or, at least, does not change within a domain of observation. On the contrary, information ever changes or, at least, possesses the possibility to change and to be changed by itself and other informational entities.
Thus, the so-called Being as a philosophical concept appears as the most universal data, in some respect opposite to the idea of god, which embodies the most pretentiously developing, although self-sufficient, informational entity. The advantage of the concept of information lies in the possibility to express ideas controversial to the concepts themselves. We can make a direct comparison between the concept of information α and the concept of data δ in the following formal way:

(id1) α ⇒ (α |=; |= α)
(id2) δ ⇒ (δ |=_M δ)

We use operators '⇒' and ';' between formulas as informational operators of implication and of 'or/and', respectively. Thus, information α implies α |= or/and |= α, i.e. an open informing of α in an outward and inward direction. In this sense, data δ is a sub-concept of information α. However, data δ preserves its form (data information), thus δ |=_M δ, where |=_M is the symbol of an absolute (non-arising) memory operator with the meaning

(d5) (δ |=_M δ) ⇒ (δ = δ)

So-called changeable data become information, for which formula (d5) holds between two events of their change. On the contrary, information can change by itself alone, without any outward influence.

3. A General Informational Structure and Organization of a Dictionary

... in English, word order is a dominant factor in determining meaning, while the use of inflectional endings to mark the grammatical function of individual words within a sentence plays a clearly subordinate role ... Other languages show markedly different patterns, such as Latin with its elaborate set of paradigms for nouns, verbs, adjectives, and pronouns and its highly flexible word order. ... (NCD, 24)

In any case, a dictionary is a kind of expert information dedicated to correct, that is, commonly (socially) understandable speaking, writing, and language forming, creation, and translation.
Usually and formally, dictionaries implement implicitly (rarely explicitly) the so-called semantic nets, that is to say, networks of connections among various informational items with semantically and pragmatically related meanings. A semantic net is an informational connectedness of items which originates in the possibilities of distinct informational items to inform other informational items and to be informed by them. An informational net is determined by an informational system with identified and not yet identified, that is, possible forms of informing. As such, a dictionary implements a complex informational system of informational markers (for instance, words) with their attributes and the forms of meaning corresponding to them. But the meaning of an item is nothing other than an informational formula with its own meaning, which points to the meanings of other items occurring in the formula, in an informationally parallel and cyclic way. Thus, informational markers with their actual and possible meanings create an implicit informational net which arises in the process of searching in a dictionary, also through the intention or requirements of the user. Let us introduce the basic formal terms which constitute the informational structure of a dictionary. An informational dictionary, as an ordered collection of informational items, their attributes, and meanings, will be marked by ∂. A dictionary ∂ informs as a regular information, and its informing (understanding) will be marked by 𝔇.
In principle,

(g1) ∂ |=; ∂ |= 𝔇; 𝔇 |= ∂; 𝔇 |= 𝔇

is a direct, straightforward (potentially developing) relation which says that a dictionary ∂ informs as such (per se), that is, ∂ |=; that it informs a kind of understanding 𝔇 (belonging to the dictionary and/or somebody else); that this process, as a form of understanding, can influence the content of a dictionary in the form 𝔇 |= ∂ (in case of a dynamically changeable dictionary ∂); and that any understanding of a dictionary can inform as such (per se) in the form 𝔇 |= 𝔇. The hermeneutical approach to this basic situation would be

(g2) (∂ |= 𝔇) |= ∂ or also (∂ |= 𝔇) |= 𝔇

We see how informing 𝔇 assumes the role of understanding of item ∂ and vice versa. In a real case, these basic formulas will appear in a much more complex (reasonably decomposed, developed, and/or arisen) formal context into which several other components, for instance, various attributes and the meanings of items, will enter. An informational item (word, clause, phrase, sentence, paragraph, etc.) of a dictionary will be denoted by ω. An informational item (word) informs in a specific way, for instance, within an instantaneous user's metaphysical disposition or as an attributed meaning within the dictionary, as a form of informing marked by 𝔚. Formally, we have the following formulas, expressing this part of the definition:

(g3) ω |=; 𝔚 |= ω; ω |= 𝔚; (ω |= 𝔚) |= ω; (𝔚 |= ω) |= 𝔚; ω ∈ ∂; 𝔚 ∈ 𝔇

A possible consequence of formulas (g2) and (g3) is, for instance,

(g4) (((ω |= 𝔚) |= ∂) |= 𝔇) |= ω; (((𝔚 |= ω) |= 𝔇) |= ∂) |= 𝔚

In these formulas, the informing 𝔚, as a component of the item's understanding, already hides the meaning of ω. We see how the complexity of the hermeneutic understanding of an item in the dictionary grows with the decomposition (or identification) of the concept. Entity ω is a marker which marks the corresponding attribute(s) and meaning(s), that is, α(ω) and μ(ω), respectively.
In a dictionary ∂, an item ω always assumes the informational triplet (or, in general, an n-tuple) (ω, α(ω), μ(ω)). This triplet can be read as an informational correspondence of ω, α(ω), and μ(ω) and expressed formally by

(g5) (ω ∝ α(ω)) ∝ μ(ω)

where ∝ is an informational operator with the meaning 'corresponds'. In general,

(g6) ω_1, ... , ω_m ∈ ∂; (ω_i ∝ α(ω_i)) ∝ μ(ω_i); i = 1, ... , m

where ω_1, ... , ω_m are elements of dictionary ∂. The consequence of this concept is

(g7) (ω ∈ ∂) ⇒ (((ω ∝ α(ω)) ∝ μ(ω)) ∈ ∂)

This is certainly only a static definitional expression of the content ((ω ∝ α(ω)) ∝ μ(ω)) entering into a dictionary ∂. While ω belonging to ∂ is a single informational item, the attribute and the meaning of ω, marked by α(ω) and μ(ω), respectively, are, in general, informational sets of items, that is, sets of formulas depending on other informational items belonging to ∂. In some respect, an informational dictionary appears as an in-itself closed (completed) information. The attribute of ω is a set of informationally distinct data entities (for instance, pronunciation, grammatical items, the kind and time of ω's origin, graphical explanation, pictorial illustration, etc.). Certainly, an attribute item can have semantic connections to other items in the dictionary. The meaning of ω is a set of informationally distinct entities expressed in the form of informational formulas of informational operands and operators (as well as parentheses), and ω within ∂ is nothing else than the marker of this set of formulas. How is the meaning of a marker structured and organized into the so-called semantic-pragmatic net within a dictionary? If we look into a book dictionary, the meaning of an item usually appears as a non-empty collection of phrases, that is, composite structures of words. The formally declared meaning of item ω, marked by μ(ω), can be informationally divided, in general, as

(g8) μ(ω) = (π_1(ω), ... , π_n(ω))

However, formula (g8) is an informationally static expression.
In fact, partial meanings of ω, that is, phrases π_1(ω), …, π_n(ω), inform the resulting meaning μ(ω) of ω, and thus, instead of formula (g8), the informing of these partial entities to the entire meaning of ω in δ is, in general, a more adequate situation:

(g9) (π_1(ω), …, π_n(ω)) ⊨ μ(ω)

This informing simultaneously satisfies the condition that, besides the complete inclusion of phrases in the meaning of an item, phrases of the meaning are in no way isolated or mutually and also otherwise independent items, that is,

(g10) π_1(ω), …, π_n(ω) ⊂ μ(ω);
(g11) π_1(ω), …, π_n(ω) ⊨ π_1(ω), …, π_n(ω)

In fact, the mutual informing of phrases (g11) produces a resulting phrase of ω's meaning, marked by μ(ω), also in the user domain, where

(g12) (π_1(ω), …, π_n(ω) ⊨ π_1(ω), …, π_n(ω)) ⊨ μ(ω)

or, in a more complete form,

(g13) ((ω ≈ α(ω)) ≈ (π_1(ω), …, π_n(ω) ⊨ π_1(ω), …, π_n(ω))) ⊨ μ(ω)

It is to be stressed that operator ≈ has to be understood as an informationally active operator and not only as a static relation lacking the dynamics of understanding as information. Operator ≈ is a particularized form of the general operator of informing, that is, operator ⊨. Although the proposed schemes (or scenarios) of meaning of an item concern understanding, we have not yet introduced an explicit indication of understanding. This type of formalization we shall develop in some of the following sections, expressing the explicit forms of informing belonging to the distinct items of meaning. Thus, we shall proceed to some concrete structures and organizational schemes of an electronic dictionary. In general, it is possible to introduce several distinct types of understanding concerning a composite informational item ω of a dictionary and particularly the understanding of the meaning, which corresponds to the dictionary item component μ. Various types of understanding can be constructed as parallel-cyclic, hermeneutic, and hermeneutic parallel-cyclic (mixed) modes of understanding.
4. Terminology, Symbols, and Relations Concerning Dictionaries

... The major systems that make up the broad comprehensive system of language itself are four in number: lexicon, grammar, semantics, and phonology. The one that dictionary editors and dictionary users are most directly concerned with is the vocabulary or lexicon, the collection of words and word elements which we put together in various ways to form larger units of discourse: phrases, clauses, sentences, paragraphs, and so forth. ... (NCD)

A dictionary as an informational entity will be denoted by δ. Particular dictionaries as well as sub-dictionaries of a dictionary will be denoted by subscripted δ's. A subscript will be an integer or a letter combination. Particularly important dictionaries will be, for instance: the word dictionary, marked by δ_w; the concept dictionary, marked by δ_c; the co-occurrence dictionary, marked by δ_co; and the bilingual dictionary, marked by δ_b. These particular dictionaries appear as information entities within the Japan EDR electronic dictionary (OED). Already particularized dictionaries can be split, for instance, in the following way: the word dictionary δ_w into the general vocabulary dictionary δ_gv and the technical terminology dictionary δ_tt; the concept dictionary δ_c into the concept classifications and the concept descriptions; the co-occurrence dictionary δ_co into the Japanese co-occurrence dictionary δ_Jco, the English co-occurrence dictionary δ_Eco, and the Slovene co-occurrence dictionary δ_Sco, for instance; the bilingual dictionary δ_b into the Japanese-English dictionary δ_JE and the English-Japanese dictionary δ_EJ, or into the Slovene-English dictionary δ_SE and the English-Slovene dictionary δ_ES, etc. Furthermore, the general vocabulary dictionary δ_gv can be split into the Japanese general vocabulary dictionary δ_Jgv and the English general vocabulary dictionary δ_Egv, or into the Slovene general vocabulary dictionary δ_Sgv and the English general vocabulary dictionary δ_Egv, etc.
Similarly, the technical terminology dictionary δ_tt can be split into the Japanese technical terminology dictionary δ_Jtt and the English technical terminology dictionary δ_Ett, or into the Slovene technical terminology dictionary δ_Stt and the English technical terminology dictionary δ_Ett, etc. By this surface structure of embedded dictionaries we have obtained a rough structure of an electronic dictionary which can be informationally expressed by the following system of operationally interconnected informational entities:

(D1) δ = (δ_w, δ_c, δ_co, δ_b); δ_w = (δ_gv, δ_tt); δ_gv = (δ_Jgv, δ_Egv); δ_tt = (δ_Jtt, δ_Ett); δ_co = (δ_Jco, δ_Eco, δ_Sco); δ_b = (δ_JE, δ_EJ)

The next step in this system determination is to define the relations (interconnections, operations, also understanding) among the occurring entities (entries) and the deep structure of any of them, which brings new terms and symbols to the surface. What are the data entries of all these dictionaries; how are they structured, interconnected, and understood, for instance, in the Japanese EDR electronic dictionaries (OED)? What is the so-called word entry? The word entry ω is composed of a headword, marked by ω_h, which is the surface representation of a word; its grammatical information ω_g (grammatical features); and the corresponding headconcept ω_hc, which is the concept represented by words in the context. Thus, for instance, we can decompose the concept dictionary δ_c in (D1), which is an informational network structure comprising a set of concept entries γ = (γ_n, γ_a), with nodes γ_n which represent the concepts and arcs γ_a which indicate the relation between one concept and another. The formula

(h1) ω = (ω_h, ω_g, ω_hc)

is already an inter-dictionary structure. Further, there are relations among headwords ω_h or among headconcepts ω_hc which are defined within the word entries ω.
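As an illustration, the word-entry structure just described (headword, grammatical information, headconcept) can be sketched as simple records in Python; the field and class names are illustrative, not the actual EDR schema:

```python
# Sketch of EDR-style entry records; names and fields are illustrative.
from dataclasses import dataclass

@dataclass
class WordEntry:
    """A word entry: headword (surface form), grammatical info, headconcept."""
    headword: str
    grammar: str
    headconcept: str

@dataclass
class ConceptRelation:
    """A labeled relation between two headconcepts in the concept network."""
    source: str
    label: str        # concept relation label
    target: str

entry = WordEntry("taxi", "noun", "<vehicle for hire>")
rel = ConceptRelation("<vehicle for hire>", "is-a", "<vehicle>")
print(entry.headword, rel.label)   # → taxi is-a
```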
Concept entries ω_c define possible relations between headconcepts ω_hc with concept relation labels λ_c. Thus,

(h2) ω_c = (ω_hc, λ_c, ω_hc′)

Co-occurrence entries ω_co define all of the possible co-occurrences between headwords ω_h with co-occurrence relation labels λ_co. So,

(h3) ω_co = (ω_h, λ_co, ω_h′)

Inter-lingual correspondence entries ω_b, in concrete cases bilingual entries ω_JE, ω_EJ, ω_SE, ω_ES, etc., define correspondence between headwords ω_h of different languages, in concrete cases headwords ω_Jh, ω_Eh, ω_Sh, etc. For instance,

(h4) ω_JE = (ω_Jh, ω_Eh), etc.

Here, δ_JE, δ_EJ, δ_SE, etc. are bilingual dictionaries. In formulas (h1), …, (h4), relations between different entries are implicit. It is possible to make these relations explicit in a general way, with the intention to mark these relations and make them later on completely definite by further particularization of operators and/or composition and decomposition of formulas.

5. Understanding as the Processing Part among the Parallel Dictionaries within an Electronic Dictionary

Understanding in the framework of an ED can be conceived from at least three points of view: as system understanding implemented within the programmed computer system which comprises the ED; as understanding of the designers who develop the ED by means of an ED development system and also understand the development system itself; and, at last, as understanding of the ED performance and capability by the ED users who have this system at their disposal, yet can still adapt or alter it to some reasonable extent, for instance, by a built-in or outward possibility of learning. In the process of ED development, the designers and constructors start with a top-down approach to solving the problem of, for instance, the very sophisticated ED not yet being definitely depicted.
At the very beginning, they may have in mind the classical data-information system (D1) of the ED, which from this point on can be put into the process of the so-called informational arising, where the occurring entities can be informationally decomposed deeper and deeper to the necessary details. Thus, for instance, the informationally static system (D1) can be put into a formally arising form as

(D2) δ ⊨ (δ_w, δ_c, δ_co, δ_b); δ_w ⊨ (δ_gv, δ_tt); δ_gv ⊨ (δ_Jgv, δ_Egv); δ_tt ⊨ (δ_Jtt, δ_Ett); δ_b ⊨ (δ_JE, δ_EJ)

System (D2) is a developing formal system which calls for the missing details, that is, for the decomposition possibilities concerning the data structure of the ED on the one side and the system-functional, developmental, and user-friendly understanding on the other side. From this point on, understanding as a systematic informational approach can be put into the game of system development. As we have seen, hermeneutic and parallel-cyclic approaches to understanding can be applied simultaneously. What is the built-in quality of understanding in a dictionary? How could it be developed out of the basic, static, already listed informational formulas? A dictionary δ as a complex informational entity δ ⊨ (δ_w, δ_c, δ_co, δ_b) possesses a quality (mode) of understanding by which its informational components are informed. Thus, it is possible to put into discourse two basic questions: (1) What is the internal understanding 𝔇 of the dictionary δ? (2) How can an electronic dictionary δ be used (understood) by a user u through his/her understanding U? We see how the problem of understanding can become informationally interweaved from the one and the other side. At the very beginning of the concept we have merely δ as an arising information, as the intention of its development, that is, of its design and implementation. A basic concept δ informs and is informed, that is, (δ ⊨; ⊨ δ). This basic formula can be developed, decomposed, particularized, universalized, etc. in several ways.
It can be read in the following way: as soon as the intention of a dictionary δ informing is given, it is permitted to transit to the system of formulas (δ ⊨; ⊨ δ). Thus, the first consequence of this implicative informing is, for instance, δ ⊨ (δ_w, δ_c, δ_co, δ_b), which becomes the basis, although it possesses a static structure in which understanding of its components is undeveloped, that is, not yet expressed informationally. We conclude that understanding 𝔇 of δ is implicit and can be expressed through the development of the concept, by an arising system of formulas in which the right (δ ⊨) and the left (⊨ δ) informational operands of operator ⊨ are coming into existence, respectively. In principle, by understanding, informational processes are closed into loops of understanding. The concept of the so-called self-understanding of an informational entity α, and of understanding of α by the entity β through β's understanding, can be the following. Informing of α as its implicit informational process can be marked by ℑ_α, where ℑ_α marks the entire informing of α. Thus, the understanding component of α within its implicit informing ℑ_α can be marked by 𝔄, yielding 𝔄 ⊆ ℑ_α. Under these circumstances, the understanding part of α's informing can be formalized by the system of formulas

(D3) 𝔄 ⊆ ℑ_α; (α ⊨ 𝔄) ⊨ α

Further, if entity β understands α by its understanding 𝔅 as a part of the entire β's informing ℑ_β, then

(D4) 𝔄 ⊆ ℑ_α; (α ⊨ 𝔄) ⊨ α; (((β; α ⊨ 𝔄) ⊨ α) ⊨ 𝔅) ⊨ β

In this scheme, entity β considers the understanding 𝔄 of α within its own understanding 𝔅. Usually, the understanding of something as an informational process produces a particular information called the meaning μ of something. For instance, in the case of self-understanding 𝔄 of α, we have the meaning μ_𝔄(α). In the case of the so-called other-understanding, the meaning μ_𝔅(α) or even μ_𝔅(μ_𝔄(α)), etc. can come into existence.
In a discourse between entities α and β, system (D4) extends into

(D5) (((β; α ⊨ 𝔄) ⊨ α) ⊨ 𝔅) ⊨ β; (((α; β ⊨ 𝔅) ⊨ β) ⊨ 𝔄) ⊨ α

After this discussion, it is possible to answer the question concerning the understanding occurring among parallel dictionaries within an electronic dictionary. The basic formula δ ⊨ (δ_w, δ_c, δ_co, δ_b) and the subsequent formulas of (D2) give a firm orientation in which direction the system design of the system understanding should develop. Here, understanding concerns the necessary processing among dictionaries guaranteeing the achievement of the goals of an ED, that is, to be functional, efficient, user-friendly, and innovative. The decomposition of δ ⊨ can begin by the following steps:

(D6) (1) … (2) … (3) …

In formula (2) of (D6), the particularized operators ⊨_poi and ⊨_del mean »points« (or »marks«) and »delivers«, respectively. Entities 𝔚 and 𝔊 in (3) mark the understandings belonging to the word entity ω and to its corresponding concept γ, respectively, informing the meaning μ(ω). From the last system of formulas one can see the course of a dictionary development considering the so-called functions of understanding. In the next sections of the essay we shall point to some further informational details of an electronic dictionary, disclosing the structure of some very concrete concepts. In these concepts, problems of monolingual, bilingual, and multilingual dictionaries will be treated from the developmental, operational, and implementational points of view. Further, a clear conceptual distinction among various concepts of dictionaries has to be developed, for instance, concerning formal models of data-structured (classically programmed), information-structured (advanced conceptualized), and the so-called informational dictionaries. At this point, a question of developing tools for these different sorts of dictionaries can arise.
Certainly, several questions will remain unanswered, but a new informational way for the development of the most sophisticated and advanced dictionaries will begin to leave its specific trace.

References

(OED) An Overview of the EDR Electronic Dictionary, Technical Report TR-024, Japan Electronic Dictionary Research Institute, Tokyo, 1990.
(NCD) Webster's Ninth New Collegiate Dictionary, Merriam-Webster Inc., Publishers, Springfield, MA, 1986.
(ITD) A.P. Železnikar, An Informational Theory of Discourse I, Informatica 13 (1989) 4, 16-37.
(IIA) A.P. Železnikar, An Introduction to Informational Algebra, Informatica 14 (1990) 1, 7-28.
(UI1) A.P. Železnikar, Understanding as Information (Part One: A General Philosophy and Theory of Understanding as Information), Informatica 14 (1990) 3, 8-30.
(UI2) A.P. Železnikar, Understanding as Information (Part Two: A Formal Theory of Understanding as Information), Informatica 14 (1990) 4, 5-30.

A comment. The listed references concern only the first part of the article. Further, this article is private research and cannot be used or reproduced in any manner whatsoever without written permission, except in the case of brief quotations embodied in critical articles and reviews. For information, address the author.

STRUCTURED OBJECT-ORIENTED SYSTEM DECOMPOSITION

INFORMATICA 1/91

Keywords: object-oriented design, structured system design, object-oriented models

Stevan Mrdalj, Eastern Michigan University, Ypsilanti, Michigan 48197, USA
Vladan Jovanović, University of Detroit, Detroit, Michigan 48221, USA

ABSTRACT: The aim of this paper is to present a structured method for object-oriented system design with special emphasis on discovering objects within the system and creating an object-oriented model of the system. First, a technique for decomposing an object into its components according to the actions required by the operations of that object is introduced.
Second, system decomposition is presented as a coherent top-down process based on an object-oriented model. Lastly, reusability and extensibility of such an approach to system design are discussed.

REZIME: This paper presents a structured method for object-oriented system design, with special emphasis on how objects are discovered within a system and how an object-oriented model of the system is created. First, a technique is described for decomposing objects into components according to the actions required to realize their operations. Then the process of system decomposition is presented as a coherent top-down process based on the use of an object-oriented model. Finally, the possibility of reusing objects and of extending objects by modifying or adding operations is discussed.

1. INTRODUCTION

The object-oriented paradigm has a broader impact on system development than the traditional, functional, or data-oriented paradigms. The objective of object-oriented design is not only to create a model of the system, but to do so by reusing existing objects. Thus, object-oriented design requires more than just choosing objects and arranging them in class hierarchies. It needs a structured technique to: (1) avoid confusion in defining objects, (2) arrange objects into a system model, and (3) reuse already defined objects from the object library. Hence, here we propose a formalized approach for structured object-oriented system decomposition (SOOSD). In the context of creating the object-oriented model of the system, most of the existing methods [13,1,3,8,5,15,16,2] are intuitive. They provide some informal rules for identifying the objects and their operations and can be categorized as direct decomposition methods. Also, they place little emphasis on the objects' complexity and decomposition, and do not support different levels of either object or system abstraction.
On the contrary, SOOSD focuses on the discovery and arrangement of the objects of interest in the real world and on creating an object-oriented model of reality. It allows us to develop an object-oriented model of a system as a leveled and incremental top-down decomposition process in which already existing objects can be reused. SOOSD is based on: (1) a specification using an object-oriented model and (2) a structured object decomposition technique.

2. OBJECT-ORIENTED MODEL

We use an object-oriented model called the Abstract Object Model (AOM) as a formal specification tool to naturally and efficiently design the structure of a complex system. In AOM, all things or concepts in the designer's work environment that are visible or otherwise tangible to the designer are modeled as abstract objects. An abstract object encapsulates the problem space inside a set of pre-defined operations that manipulate and access that space. We refer the reader to [10] for a full description of AOM. Here we concentrate only on the following characteristics that are used in the object decomposition process:

1. AOM recognizes two kinds of objects: simple and composite. Simple objects occupy coherent space which cannot be further decomposed into meaningful objects, or one is not interested in their further decomposition. Conversely, composite objects are an aggregation of simple and/or other composite objects. Figure 1 illustrates a graphical representation of the object aggregation structure. (Figure 1: Object Aggregation Diagram.)

2. The state of a composite object is a collection of its components' states. Thus, Figure 2 illustrates the assumption that the composite object's state can be changed or accessed only by changing or accessing the state of at least one of its components. (Figure 2: Access to the object space.)

3. As a consequence of the previous characteristics, the operations of composite objects are composed of the messages sent only to the components of that object.
Figure 2 depicts that the response to message a consists of messages b, c, and d.

3. STEPWISE OBJECT DECOMPOSITION

We use the technique called Stepwise Object Decomposition (SOD) to discover "new" objects and to start the specification of such discovered lower-level abstract objects [11]. SOD is based on the following two principles:

1. Actions required to construct operations of the given object determine components of that object.
2. The operations of an object are determined by the needs of the objects in which the given object is aggregated.

This is illustrated in Figure 3, where the actions required for the operations of object X determine that objects Y and Z become components of object X, while the messages received by objects Y, Z, and W determine their own operations. The usage of these principles in the process of object decomposition is summarized in the following algorithm:

1. List all operations O of object X.
2. For each operation from O:
2.1. List actions A required for that operation;
2.2. For each action a from A:
2.2.1. Associate a with a corresponding object Y;
2.2.2. Assign Y to the list C of X's components;
2.2.3. Assign a to O of Y.
3. For each object from C repeat steps 1 through 3.

(Figure 3: Message Flow Diagram.)

The object that the designer chooses to decompose with this algorithm becomes the context of the decomposition process. Obviously, if the starting object is the system itself, the result will be the entire system specification.

4. SYSTEM DECOMPOSITION

AOM allows us to view any individual level of system abstraction as an abstract object, as well as to view the entire system as an abstract object on the highest level. For example, consider a system which maintains customers' requests for transportation and the assignment of taxis to customers as an abstract object called TAXI-DISPATCH. Decomposition of the TAXI-DISPATCH object starts from the messages that it receives from its environment.
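The decomposition algorithm above can be sketched in Python; this is a minimal sketch, not the authors' tool, and the action table is an illustrative fragment drawn from the paper's TAXI-DISPATCH example:

```python
# Sketch of the SOD algorithm: walking an object's operations, each required
# action is associated with the (hypothetical) object that performs it, which
# thereby becomes a component and gains that action as one of its operations.

# Designer-supplied knowledge: for each (object, operation), the list of
# (action, performing-object) pairs. Names follow the paper's example.
ACTIONS = {
    ("TAXI-DISPATCH", "AnswerCall"): [
        ("FindAvailable", "TAXI"),
        ("ReceiveRequest", "DISPATCHER"),
        ("MonitorTransport", "DISPATCHER"),
    ],
    ("DISPATCHER", "ReceiveRequest"): [
        ("GiveName", "CUSTOMER"),
        ("GiveRoute", "CUSTOMER"),
        ("FindAvailable", "TAXI"),
        ("Assign", "DISPATCH"),
    ],
}

def decompose(obj, operations, model=None):
    """Steps 1-3 of SOD: build {object: {"ops": set, "components": set}}."""
    model = model if model is not None else {}
    node = model.setdefault(obj, {"ops": set(), "components": set()})
    node["ops"].update(operations)
    for op in operations:                                  # step 2
        for action, target in ACTIONS.get((obj, op), []):  # steps 2.1-2.2
            node["components"].add(target)                 # step 2.2.2
            model.setdefault(target, {"ops": set(), "components": set()})
            if action not in model[target]["ops"]:         # step 2.2.3
                model[target]["ops"].add(action)
                decompose(target, {action}, model)         # step 3
    return model

model = decompose("TAXI-DISPATCH", {"AnswerCall"})
print(sorted(model["TAXI-DISPATCH"]["components"]))  # → ['DISPATCHER', 'TAXI']
```

Running the sketch on the AnswerCall operation alone already discovers DISPATCHER and TAXI as components of TAXI-DISPATCH, and, one level down, CUSTOMER, TAXI, and DISPATCH as components of DISPATCHER.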
Let us say that these messages define the following list of operations: {AnswerCall, BuyCar, Hire}. These operations can be compared with the basic functions of the TAXI-DISPATCH object that are required by its environment. In order to discover the components of the TAXI-DISPATCH object, one needs to define the actions required to accomplish its operations. Let us start with the AnswerCall operation, for which the list of all required actions is {FindAvailable, ReceiveRequest, MonitorTransport}. Next, one associates all these actions with the objects which should be responsible for performing them. For example, by associating the ReceiveRequest action with the object which has to perform it, one discovers the DISPATCHER object as one of TAXI-DISPATCH's components. Of course, at the same time ReceiveRequest becomes the DISPATCHER's operation. After specification of all TAXI-DISPATCH's operations one obtains its aggregation structure as displayed in Figure 4. (Figure 4: TAXI-DISPATCH's components.) Next, we decompose the system components into their sub-components. For example, let us decompose the DISPATCHER object with the following list of actions required for operation ReceiveRequest: {GiveName, GiveRoute, FindAvailable, Assign}. Now one again has to assign all these actions to the corresponding objects. In that way one discovers the CUSTOMER, TAXI, and DISPATCH objects and their operations. Just as before, one may keep decomposing components of the previously discovered composite objects until they are composed only of simple objects. After specification of all TAXI-DISPATCH's components one obtains the complete aggregation structure of the system as shown in Figure 5. Notice that the system has been simplified and its structure is only partially presented so that the basic ideas of the decomposition process can be emphasized.

5.
REUSABILITY AND EXTENSIBILITY

Within the framework of developing complex and long-lasting systems, we identify two objectives that SOOSD should support: (1) Reusability - the ability of objects to be reused, in whole or in part, for the construction of an existing system or a new system; (2) Extensibility - changes to the objects to accommodate modifications of their requirements. The simplest kind of reusability is the use of an object type as it already exists. For example, the CUSTOMER object in Figure 5 gets its specification through the specification of the DISPATCHER object. Later on, it is reused in the aggregation of the DISPATCH object. This kind of reusability is limited because object types can only be reused in the construction of the system if there is a need for exactly the same behavior as provided by that object type. Thus, it does not solve the reusability problem in general. Very often, object types need to be extended or modified to fit into a new aggregation. (Figure 5: TAXI-DISPATCH's aggregation diagram.) Object types can be extended and modified by means of incremental design and inheritance.

Incremental design. Since an object can be aggregated into more than one composite object, it can obtain its specification according to the needs of more than one object. For example, the TAXI object from Figure 5 gets its initial specification through the specification of the TAXI-DISPATCH object. But then, by decomposing the DISPATCHER object, action FindAvailable is required from the TAXI object. That operation has to be added to the TAXI object prior to its aggregation into the DISPATCHER object. Later on, during the decomposition of the DISPATCH object, action AssignDrive is required from the TAXI object. That operation again has to be added to the previous specification of the TAXI object.

Inheritance. Another case of reusability is the usage of inheritance to define new object types out of existing ones by adding new components and operations.
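In Python, this extension mechanism can be sketched as follows; the EMPLOYEE/DRIVER pair and the operation names are illustrative taxi-domain examples, not the authors' actual specification:

```python
# Minimal sketch of extension by inheritance: an existing object type is
# reused unchanged, and a subtype attaches a newly required operation
# instead of modifying the original type. Names are illustrative.

class Employee:
    """Existing object type with components SSN and NAME."""
    def __init__(self, ssn, name):
        self.ssn, self.name = ssn, name

    def start_job(self):
        return f"{self.name} starts work"

class Driver(Employee):
    """Subtype: inherits Employee's components and operations, and adds
    the specialized operation Drive required by another composite object."""
    def drive(self, taxi_id):
        return f"{self.name} drives taxi {taxi_id}"

d = Driver("123-45-6789", "Ann")
print(d.start_job())    # inherited operation  → Ann starts work
print(d.drive("T-7"))   # specialized behavior → Ann drives taxi T-7
```

The design point is that the base type stays untouched, so every composite object that already aggregates it is unaffected by the extension.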
For example, the EMPLOYEE object from Figure 5 gets its initial specification through the specification of the TAXI-DISPATCH object. But then one may discover the operation Drive for the EMPLOYEE object, which is required by the TAXI object. Instead of adding that operation to the EMPLOYEE object, one defines a new object DRIVER as a subtype of the EMPLOYEE object, shown in Figure 5 by the dashed line. The newly required operation Drive can now be attached to the DRIVER object as its specialized behavior. At this point we may also reconsider the DISPATCHER object and make it a subtype of the EMPLOYEE object. In this case the DISPATCHER and DRIVER objects inherit EMPLOYEE's components SSN and NAME, as well as its operation StartJob. Conceptually, lower-level objects could be subtypes with specialized functionalities or even special cases of the more abstract objects. An object library consisting of a set of object type hierarchies in the background may significantly reduce the effort in the decomposition process. As can be seen, an open design architecture with open-ended sets of extensions to an existing design is promoted. This is important for long-lasting systems because a system's functionality changes over time. However, the types of objects from which the system is composed will probably remain more or less the same over time. The changes are most likely to occur when an object gets a new operation, an existing operation changes behavior, or a new object arises. At that time, it will only be necessary to add new operations, modify existing ones, or aggregate existing objects into new ones. Therefore, SOOSD enables maintenance of a system model as a process of reusing objects across time, and not only across applications.

6.
RELATION TO OTHER WORK

There are a few works which explore the merging of structured system analysis and design techniques [4,17] and object-oriented design [12,14]. But none of them views object-oriented system design as a leveled process that starts from an entire system as an object, or from any high-complexity object, and decomposes it into lower-level objects, as is the case in our approach. This is the essential difference between the existing object-oriented design methods and our object-oriented top-down system decomposition. Although we use a top-down decomposition approach, the solution to system decomposition is a digraph that combines one object aggregation diagram and many object inheritance diagrams. We also concentrate on making object types reusable through aggregation, incremental design, and inheritance - three powerful mechanisms for sharing specifications and promoting their reuse. That is what makes our decomposition approach distinct from most object-oriented designs, which use only inheritance as a tool for reusability [9]. Finally, SOOSD is built upon the abstraction mechanisms (such as encapsulation, classification, aggregation, and generalization/specialization) from semantic data models [7] and object-oriented programming languages.

7. CONCLUSION

Structured object-oriented system decomposition represents a refinement of structured system analysis and design using object-oriented principles. It allows us to start system decomposition from the system-object and work top-down to the complete design solution, using the objects' operations to discover their components. We have now had about three years of successful experience in using SOOSD to design and implement very large systems. We have found SOOSD to be extremely useful in the initial design of systems, because it provides system decomposition according to currently existing object operations. But as with any software project, we have done as much re-design as design.
In such cases, SOOSD continues to play an important role in reusing object types through aggregation and inheritance.

8. REFERENCES

[1] G. Booch, "Object-Oriented Development", IEEE Trans. on Software Eng., Vol. SE-12, No. 2, 1986, pp. 211-221.
[2] P. Coad and E. Yourdon, Object-Oriented Analysis, Prentice-Hall, 1990.
[3] W. Cunningham and K. Beck, "A Diagram for Object-Oriented Programs", Proc. of OOPSLA'86, Sep. 29 - Oct. 2, 1986, pp. 361-367.
[4] T. DeMarco, Structured Analysis and System Specification, Prentice-Hall, 1979.
[5] R.M. Ladden, "A Survey of Issues to be Considered in the Development of an Object-Oriented Development Methodology for Ada", Software Engineering Notes, Vol. 13, No. 3, 1988, pp. 24-31.
[6] P. Lyngbaek and W. Kent, "A Data Modeling Methodology for the Design and Implementation of Information Systems", Proc. of the Inter. Workshop on Object-Oriented Database Systems, September 1986, pp. 6-17.
[7] R. King and D. McLeod, "Semantic Data Models", in S.B. Yao (ed.), Principles of Database Design, Vol. 1, Prentice-Hall, 1985.
[8] B. Meyer, "Reusability: The Case for Object-Oriented Design", IEEE Software, March 1987, pp. 50-64.
[9] J. Micallef, "Encapsulation, Reusability, and Extensibility in Object-Oriented Programming Languages", Journal of Object-Oriented Programming, Vol. 1, No. 1, April/May 1988, pp. 12-35.
[10] S. Mrdalj, "Abstract Object Model: Data Model for Object-Oriented Information System Design", Informatica, Vol. 14, No. 2, April 1990, pp. 1-11.
[11] S. Mrdalj, "Stepwise Object-Oriented System Design", Proc. of the IEEE International Conf. on Computer Systems and Software Engineering - CompEuro'90, May 7-9, 1990, pp. 520-521.
[12] E. Seidewitz and W. Stark, "Towards a General Object-Oriented Software Development Methodology", SIGAda Ada Letters, July/Aug. 1987, pp. 54-67.
[13] R.F. Sincovec and R.S. Wiener, "Modular Software Construction and Object-Oriented Design Using Ada", J. of Pascal, Ada & Modula-2, March/April 1984, pp. 29-34.
[14] P.T.
Ward, "How to Integrate Object Orientation with Structured Analysis and Design", IEEE Software, March 1989, pp. 74-82.
[15] A.I. Wasserman, P.A. Pircher and R.J. Muller, "An Object-Oriented Structured Design Method for Code Generation", ACM SIGSOFT, Vol. 14, No. 1, January 1989, pp. 32-55.
[16] R. Wirfs-Brock and B. Wilkerson, "Object-Oriented Design: A Responsibility-Driven Approach", Proc. of OOPSLA'89, October 1-6, 1989, pp. 71-75.
[17] E. Yourdon and L. Constantine, Structured Design: Fundamentals of a Discipline of Computer Program Design, Prentice-Hall, 1975.

NEURAL NETS: SURVEY AND APPLICATION TO WAVEFORM PROCESSING

Keywords: neural nets, waveform processing

Ivan Bruha, Dept. of Computer Science and Systems, McMaster University, Hamilton, Ont., Canada L8S 4K1

Abstract

This paper surveys artificial neural nets as essential tools for pattern recognition. The author concentrates on the characteristics of well-known types of currently exploited neural nets and on the implementation of a multilayer perceptron. Aspects, specification, and comparison of various types of neural nets will be presented as well. Furthermore, a decision-supporting system for neurological diagnosis will be described. The actual input to our system is an evoked-potential waveform. It is analyzed by a syntactic pattern recognition algorithm based on a regular attributed grammar. Its semantic functions return a list of numerical features. The second step of the waveform processing includes a two-layer perceptron which processes the above list of numerical features. Rules of thumb for optimal adjustment of the perceptron's parameters will be discussed, too.

1. Pattern Recognition: A Survey

Perceptrons and the other neural nets we discuss in this paper have a strictly numerical character; therefore, we will focus on the numerical approach only. A pattern is represented by a so-called feature vector x = (x_1, x_2, ..., x_N) of numbers, called features.
The number of features is usually fixed, and therefore all feature vectors form an N-dimensional space, called the feature space. Let a pattern recognition problem comprise R classes; the r-th class will be designated by the symbol z_r, r = 1,...,R. Then, a pattern recognition system with numerical representation of patterns (called a numerical classifier) is a device with N inputs (one for each feature) and one output, which classifies an input feature vector x into one of the given classes, i.e. it yields one of the symbols z_r. The relation between its input and output is called the decision rule and can be written as the function

  z = d(x)

Pattern recognition is one of the oldest and most widely investigated areas of artificial intelligence (AI). Although some researchers do not recognize pattern recognition as a part of AI, there are others (including the author of this paper) who accept Minsky's definition of AI as 'the science of making machines do things that would require intelligence if done by men' and consider artificial intelligence a 'club' of several theoretical and experimental disciplines, including pattern recognition. Pattern recognition as a discipline has been thoroughly described, analyzed, and surveyed in a great number of books and monographs; the reader might look at e.g. [Du73], [Fu82], [Tou74], [Wa85], or [Bru80]. Nevertheless, in this section we present the necessary terminology. The principal function of a pattern recognition system is to make a decision concerning the class membership of the patterns received as input. Patterns are facts, observable statements, situations, events, or objects of any type that are to be recognized. The patterns are grouped into two or more sets, so-called classes, according to the given problem. Any such system (usually a computer) cannot, of course, observe real-world objects directly, since it is only able to read in the data of its world, i.e. numbers and symbols.
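As a minimal illustration of a decision rule z = d(x), the sketch below classifies a feature vector by the nearest class prototype (a hypothetical nearest-mean classifier; the prototypes and class symbols are invented for the example and are not taken from the paper):

```python
import math

# Hypothetical class prototypes (means), one per class symbol z_r;
# invented for this example, not from the paper.
PROTOTYPES = {"z1": [0.0, 0.0], "z2": [4.0, 4.0]}

def d(x):
    """Decision rule z = d(x): map a feature vector x to a class symbol.

    This realization assigns x to the class whose prototype is nearest,
    i.e. a nearest-mean classifier -- one simple instance of d.
    """
    return min(PROTOTYPES, key=lambda z: math.dist(x, PROTOTYPES[z]))
```

Any function from the feature space onto the class symbols is a decision rule in this sense; the discriminant-function classifiers described next are another, more general, realization.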
Therefore, before the actual classification, we have to represent patterns by a suitable formal description. There are in fact two distinct possibilities for pattern representation: (a) numerical (statistical) description: a pattern is represented by a sequence of numbers, called features; (b) symbolic description: a pattern is described either by a string of (terminal) symbols, or by more general structures such as relational structures, semantic nets, frames, formulas of predicate calculus, etc. The feature space is divided by the decision rule into R disjoint subsets R_r (called decision regions), where R_r contains the features x such that z_r = d(x). The boundaries between decision regions are called decision boundaries. Their adjustment is the principal problem of classifier design. There are several ways of describing decision boundaries. Among them, the most general approach uses so-called discriminant (or decision) functions. The R discriminant functions g_r(x) have to be selected so that

  g_s(x) > g_r(x), r = 1,...,R, r ≠ s

for all x ∈ R_s. Consequently, the decision boundary between adjacent regions R_r and R_s is defined by the equation

  g_r(x) - g_s(x) = 0

The decision rule for a classifier with discriminant functions thus has the form: for a given x, compute all g_r(x), r = 1,...,R, and classify x into the class z_s for which

  g_s(x) = max_r g_r(x)     (1)

see Fig. 1.

Fig. 1. Classifier with discriminant functions

The most common type of numerical classifier is the one with linear discriminant functions:

  g_r(x) = sum_{n=0}^{N} w_nr x_n, r = 1,...,R     (2)

where w_nr, n = 0,1,...,N, r = 1,...,R, are so-called weights, and x_0 = 1. This latter condition allows w_0r to be interpreted as a threshold of the discriminant function. The structure of a linear classifier is illustrated in Fig. 2. Decision boundaries formed by a linear classifier are parts of hyperplanes (see Fig. 3).
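The decision rule (1) with linear discriminant functions (2) can be sketched as follows (a minimal example, not the author's code; the weight matrix layout is our own choice):

```python
import numpy as np

def classify(x, W):
    """Classify feature vector x with linear discriminant functions.

    W is an (R, N+1) weight matrix; row r holds [w_0r, w_1r, ..., w_Nr].
    x_0 = 1 is prepended so w_0r acts as the threshold of g_r.
    Returns the index r of the class with the largest g_r(x), as in (1).
    """
    x = np.concatenate(([1.0], np.asarray(x, dtype=float)))  # x_0 = 1
    g = W @ x            # g_r(x) = sum_n w_nr * x_n, all r at once
    return int(np.argmax(g))
```

For instance, with W = [[0, 1, 0], [0, 0, 1]] the rule classifies x into class 0 whenever x_1 > x_2; the decision boundary g_0(x) - g_1(x) = x_1 - x_2 = 0 is a hyperplane.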
The problem of finding decision boundaries is thus reduced to that of finding optimal values for all the weights of the linear classifier. One possible and often used way of adjusting a classifier is to utilize a learning (training) process. In such a case, a teacher (the designer of the classifier) has to collect a reasonable set (training set) of typical, representative examples (training patterns), including the correct class membership of each training pattern (desired class). During the learning process, training patterns with their desired classes are presented (usually many times) by the teacher to the classifier, which performs as a student, i.e. it modifies its weights (utilizing the information involved in the training set) according to an appropriate learning algorithm. Among a large collection of learning algorithms for linear classifiers, we introduce the simplest one, called Rosenblatt's algorithm (also known as the perceptron, fixed-increment, delta, or LMSE algorithm):

1. Initialize the weight vectors w_r = [w_0r, w_1r, ..., w_Nr], r = 1,...,R, arbitrarily.

2. For each training pattern x = [x_0, x_1, ..., x_N] (where we added x_0 = 1 for convenience) with its desired class z = z_s do:

2.1. Compute g_r(x), r = 1,...,R, according to (2).

2.2. If equation (1) is satisfied, i.e. the pattern x is correctly classified, do nothing. However, if

  g_s(x) < g_r(x) for some r ≠ s     (3)

then change the weight vectors as follows (c is a positive constant):

  w_s += c x
  w_r -= c x for all r satisfying (3)

(where A += B means: add B to A; similarly -= denotes a decrement).

3. If the weights were modified for at least one training pattern, return to step 2, i.e. repeat the training for the entire training set. If not, save the weights and terminate the learning process.

Fig. 2. Linear classifier

Fig. 3. Decision boundaries of a linear classifier

A simple model of a learning linear classifier was developed by Widrow and Hoff [Wid60].
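The steps above can be sketched in Python (a minimal sketch, not the author's implementation; ties g_r(x) = g_s(x) are treated as errors, a common convention, and max_epochs is our own safeguard for non-separable data):

```python
import numpy as np

def train_perceptron(patterns, desired, R, c=1.0, max_epochs=100):
    """Rosenblatt's fixed-increment learning for an R-class linear classifier.

    patterns: list of N-dimensional feature vectors; desired: class indices.
    Returns a weight matrix W of shape (R, N+1); x_0 = 1 is prepended.
    """
    N = len(patterns[0])
    W = np.zeros((R, N + 1))                 # step 1: (arbitrary) initialization
    X = [np.concatenate(([1.0], np.asarray(p, float))) for p in patterns]
    for _ in range(max_epochs):
        changed = False
        for x, s in zip(X, desired):         # step 2: sweep the training set
            g = W @ x                        # step 2.1: compute all g_r(x)
            # step 2.2: every class r != s with g_r(x) >= g_s(x) violates (1)
            wrong = [r for r in range(R) if r != s and g[r] >= g[s]]
            if wrong:
                W[s] += c * x                # reinforce the desired class
                for r in wrong:
                    W[r] -= c * x            # weaken the competing classes
                changed = True
        if not changed:                      # step 3: whole set correct -> stop
            return W
    return W
```

If the classes are linearly separable, the loop terminates with weights that classify every training pattern correctly; otherwise it stops after max_epochs sweeps.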
Their adaline (adaptive linear neuron) is able to recognize R = 2 classes only (using a single discriminant function), and its features can only be equal to +1 or -1, but otherwise its structure and learning are equivalent to those presented above. The adaline has found many useful applications in technical areas.

2. Rosenblatt's Perceptron

Development, investigation, and theory of neural nets is not a new discipline at all. The first attempts to model the behaviour of biological nerve cells were made in the 1940's. Their results encouraged the foundation of cybernetics, a discipline (or a 'club' of disciplines) which attempted to combine concepts from biology, psychology, engineering, and mathematics. Nevertheless, more serious research and experimentation with neural nets started in the 1960's with the publication of Rosenblatt's book [Ros61]. Rosenblatt developed a cognitive model of the brain, which was called the perceptron. Its structure is shown in Fig. 4. It is very similar to a linear classifier (Fig. 2); however, the features are not input directly to its weights, but instead are preprocessed by a so-called Φ-processor, which is represented by M functions x'_m = Φ_m(x_1,...,x_N), m = 1,...,M. The Φ-processor transforms the original features into new ones x'_m, m = 1,...,M, which are then processed by a linear classifier. Hence, the discriminant functions of Rosenblatt's perceptron are

  g_r(x) = w_0r + sum_{m=1}^{M} w_mr Φ_m(x), r = 1,...,R     (4)

Consequently, the decision boundaries in the original feature space need not be just hyperplanes but may take more general shapes. The functions of the Φ-processor can be of any form, but the following cases are the most interesting ones:

- The Φ-processor consists of quadratic functions; thus the decision boundaries are quadratic surfaces.
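As an illustration of the quadratic case, a Φ-processor can be sketched as follows (a hypothetical example; the choice and ordering of the monomials are our own, not the paper's):

```python
import numpy as np

def phi_quadratic(x):
    """Quadratic Phi-processor: map [x_1,...,x_N] to all monomials of
    degree <= 2 (original features, squares, and cross products).

    A linear classifier applied to these new features x'_m, as in (4),
    has hyperplane boundaries in the transformed space, which correspond
    to quadratic surfaces in the original feature space.
    """
    x = np.asarray(x, dtype=float)
    squares = x * x
    cross = [x[i] * x[j] for i in range(len(x)) for j in range(i + 1, len(x))]
    return np.concatenate((x, squares, np.asarray(cross)))
```

For example, for N = 2 the two original features are mapped to M = 5 new features [x_1, x_2, x_1^2, x_2^2, x_1 x_2].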
- The functions Φ_m are boolean; the case with randomly arranged connections (bonds) and boolean gates was extensively studied by Rosenblatt's group, since they expected that this would be the best model of the brain.
- If the Φ-processor is an identity (or is eliminated), we get the 'standard' linear classifier or Widrow's adaline.

The perceptron is a technical model for the following hypothesis of a nerve system's performance. This hypothesis states that the nerve system of a living organism is more or less chaotically (randomly) organized when it arises. Its organization takes place during the life of the organism as a consequence of its adaptation to its environment and learning. The system creates suitable bonds (connections) within its Φ-processor and modifies its weights. As a metaphor, one could say that a living organism is a general computer whose program is formed during its life. After the program has been created (the system has learned), only a negligible amount of the computer hardware is utilized; this is the cost of the versatility of the computer. Perceptrons were applied to many problems in many technical disciplines. In some cases they were 100% successful; in others, however, they did not yield satisfactory results. The fundamental disadvantage of a perceptron is its low ability to generalize acquired knowledge; that is the principal difference between perceptrons and living organisms. For instance, a perceptron trained to recognize squares and circles at a certain place on its input retinal matrix is not able to satisfactorily recognize the same squares and circles situated at another place on the matrix. A precise theoretical investigation of the perceptron's limitations was presented by Minsky and Papert [Min69]. Analysis of the perceptron's behaviour revealed that the above hypothesis is not quite correct. Some bonds already seem to be determined when the organism arises, i.e.
in our metaphorical example, the designer of a computer does not propose only its hardware, but some parts of its software as well. In spite of these limitations, the perceptron became one of the fundamental models of neural nets, has had many technical applications, and, at present, has successful successors.

Fig. 4. Perceptron. g_1 to g_R are linear discriminant functions

3. Neural Nets

3.1. Paradigm of Connectionism

Particularly, the latter definition reflects not only the internal structure of a neural net, but also the procedures (actions) that are carried out by the net. Now, following the paradigm of connectionism and the above definitions, we can observe the following aspects of any artificial neural net. Minsky and Papert's criticism of perceptrons strongly diminished interest in their investigation, and consequently research in the field of perceptrons, their applications, and generalizations was neglected for many years. Moreover, research activities in the 1970's were focused on symbolic approaches and knowledge-intensive modelling. However, a few research centres continued to investigate the neural (or connectionist) approach and parallel processing systems, especially in computer vision; see e.g. Hough, Duda and Hart [Du72], Grossberg [Gro78], Anderson [And72], Kohonen [Koh77], and others (more references are in [Lip87], [Mat87], [Om87]). After more than a decade of intensive research, however, the apparent limitations of the knowledge-based and symbolic approaches to artificial intelligence have been realized, returning the interest of AI researchers to neural net modelling. Consider just the following: standard serial von Neumann computers are excellent at multiplication; they can multiply two large numbers in a fraction of a microsecond, but - even in the late 80's - any computer with the best available knowledge base and the best available algorithm needs hours to recognize objects in a picture.
That is evidently one of the reasons for the recent explosion of interest in parallel distributed processing and neural network models. Neural nets have exhibited promising results for a number of problems such as pattern recognition, speech processing, computer vision, association, etc. (see e.g. Kohonen [Koh84], Sejnowski and Rosenberg [Sej86], Hopfield and Tank [Hop86], Rumelhart and McClelland [Ru86a], Hinton and Sejnowski [Hin86], Carpenter and Grossberg [Car86], etc.). A fair survey of types of neural nets and related problems can be found in [Gro88], [Lip87], [Sim90]. The paradigm of connectionism can be specified as follows: intelligence is an emergent property of a large parallel net of simple uniform nodes. Indeed, although there are various types of neural nets, they all attempt to achieve fast and robust behaviour by massive interconnections among their elementary units (nodes) and by exploring promising cases (hypotheses) simultaneously, in parallel. An elementary unit (node) usually has a large number of inputs, either from the environment that is to be processed or from the outputs of other nodes. The input signals to a unit are usually weighted, and their sum is processed by a nonlinearity whose output is either a binary number (0 or 1) or a real number in a certain interval. The weights of nodes are usually adjustable by a learning (training) algorithm. In the following, we will discuss all relevant aspects of neural nets and their specifications. We will then survey a few famous types of neural nets and focus on the multilayer perceptron, one of the best models for pattern recognition purposes (in particular, it is the best model for our task of waveform analysis).

(1) Set of processing elements (neurons, units, nodes)

A neural net consists of a large number of simple elements (neurons, units, nodes) of the same type. Each neuron n has its activity (state, output) x_n.
It is either a real number, usually from the closed interval [0, 1], an integer from the set {0, 1,...,K}, or a binary number 0, 1 or +1, -1.

(2) Topology of neural nets

Neurons are connected within a net by means of a large number of connections (synapses). Each connection is accompanied by a certain strength or weight. Therefore, the topology of a neural network is characterized by connections, their weights, and so-called interconnection schemes. We will briefly discuss each of these factors.

(a) Connections (synapses). Neurons of a net are connected together by oriented synapses, so we can view a neural net as a graph with oriented edges. There are several types of connections (graphs); among them the most typical ones are:
• total connection: each node is connected with all nodes;
• uniform local connection: each neuron is connected with its neighbours only;
• layered networks: neurons are grouped into layers which are ordered; neurons can be connected to neurons of the same layer, to those of the next 'higher' layer, to any 'higher' layer, to a 'lower' layer, etc.; some of the possible layered types are shown in Fig. 5.

(b) Weights (connection strengths). If the neuron m is connected to the neuron n, then the corresponding connection (synapsis) has a certain strength or weight w_mn, usually characterized by a real number from a certain interval. All weights of a neural net form a matrix W = [w_mn], which is called the (long-term) memory of the net. Weights can be either positive numbers (in such a case the connection is called excitatory) or negative (inhibitory connection). If a weight has zero value, then we have to distinguish either a static zero (there exists no connection between the two nodes at all) or a dynamic zero (the initial zero value of the weight has not been modified yet). If w_mn = w_nm for all connections, then the neural net is called symmetrical. If w_mn * w_nm = 0 for all connections, then we have a one-way connection network.
Otherwise, the net is of a general asymmetrical type.

(c) Interconnection schemes. We distinguish feedforward and feedback schemes. If information (signals) flows in one direction only, the net exhibits so