Paper received: 27.04.2010 Paper accepted: 21.10.2010

A User-Centered Approach for a Tabletop-Based Collaborative Design Environment

Christophe Merlo1,2* - Nadine Couture1,3
1 ESTIA, Biarritz, France
2 IMS/LAPS, UMR 5218, Bordeaux 1 University, France
3 LaBRI, Bordeaux 1 University, France
* Corresponding author's address: ESTIA, Technopole Izarbel, Bidart, France, c.merlo@estia.fr

In a global product development market, collaboration between team members has become a key factor for the success of product design projects and innovation. Most of the time, such collaborative situations are supported by traditional tools such as paper-based methods or single-user IT tools. Our aim is to enhance direct interactions between users through IT tools by proposing physical devices dedicated to users' business tasks. The proposed collaborative environment is based on the use of tabletop technology as an output device and physical devices as input devices. An electronic pen and a Wiimote device have been implemented and combined with design tools and tabletop technology to allow such direct interactions and to enhance collaborative design situations. A specific analysis of designers' activities has been carried out to help define the input devices and the test scenarios. The results of these tests are presented and validate the feasibility of such a collaborative system.
©2010 Journal of Mechanical Engineering. All rights reserved.

Keywords: collaborative design activities, user interactions, tabletop, physical devices

0 FROM COLLABORATIVE DESIGN TO 3D USER INTERACTION

In a worldwide context, companies must develop increasingly complex and innovative products in order to remain competitive. Several approaches, methods and tools have been developed over many years, for example concurrent engineering [1], multi-disciplinary teams and collaborative Information Technology (IT) systems [2]. In such a context, collaboration between team members has become a key factor for the success of product design projects and innovation. Furthermore, collaboration is seen as an effective and concrete articulation between designers involved in a collective action with shared design objectives [3].

We focus on collocated collaborative situations involving designers. Through such collective processes and the many interactions between all stakeholders, involving a repetitive cycle of perception, decision and action [5], designers make technical decisions and choices [4] that constrain the product for its whole lifecycle. We first study creativity sessions occurring in the early phases of a design project, as an example of an innovative and collaborative situation; we then study project reviews occurring later in the design process, as an example of designers interacting to propose solutions or to control their work. In both situations, multidisciplinary stakeholders interact to exchange viewpoints through adequate intermediate objects [6].

Collaborative tools have been proposed to support such design interactions and intermediate objects in association with CAD systems: Maher [7] proposes CSCW (Computer-Supported Collaborative Work) tools, and Rosenman [8] proposes multiple views of product functionalities in addition to CAD views. Virtual Reality techniques improve the DMU (Digital Mock-Up) to analyze and evaluate product design at different steps of its development [9] and [10]. Despite such research works [11] and [12] and tools, these collaborative situations are not well supported in most companies. Moreover, creative tasks are often supported by sketching and are thus based on paper-supported work.
IT support is generally limited to one vertical screen, which is not always wide enough for all the stakeholders, and a unique input mode (one mouse and one keyboard), which makes it difficult for several designers to act upon the visualized object through a CAD system in an interactive mode. To improve this situation, we consider that two fundamental aspects can be addressed: first, the possible interactions between users and the IT system are limited to a single user; second, the use of a mouse and a keyboard is not always the best way of interacting with the IT system to achieve a dedicated task. This study builds on the works recently synthesized in Johnson [13]. Nevertheless, companies are far from integrating these concepts and techniques.

The aim of our research is to propose new tools to support these interactions between designers by combining physical input and output devices allowing multiple users' interactions, and by developing an IT framework based on a multidisciplinary model for design collaboration. In this paper, only the physical devices dimension is studied. The first objective is to foster multidisciplinary, collocated and synchronous collaboration among designers. The second objective is to develop a new way of interaction between them, corresponding to the fact that some specific design tasks may benefit from the use of a specific device rather than traditional mouse manipulation. Therefore, we implemented a collaborative environment that proposes direct interactions between all the designers and the 2D or 3D data [14], using dedicated interactive devices. The approach followed is based on the use of tabletop technology combined with physical interface devices, in order to allow designers to behave as they would with paper-based supports. In the next section, we review works on shared interactive surfaces and tables with regard to the proposed collaborative environment.

1 SHARED INTERACTIVE SURFACES AND TABLES: STATE OF THE ART

The shared interface, which allows multiple users to interact simultaneously on the same device, is an old concept, already explored by the end of the 1980s at Xerox PARC in Palo Alto [15]. Wellner [16] proposed the DigitalDesk, the first tabletop that allowed interacting with an IT system through an interactive table and the use of physical devices. For fifteen years, such devices remained rare [17] and [18], but recently several interactive multi-touch tables have been marketed (the Microsoft Surface, the Mitsubishi MERL DiamondTouch, the IntuiFace from IntuiLab or the Ilight from Immersion). However, providing such devices is not sufficient to support the interactions between co-located users. Concurrently, many interaction styles have been developed using a wide variety of devices (mouse, stylus, keyboard, microphone, etc.) and a large variety of interaction techniques (drag and drop, pull-down menus, tabs, collapsible trees, etc.). Recent work on interactive tables and multi-touch tablets, like the iPhone from Apple, the Lemur from JazzMutant or Jeff Han's surfaces from his company Perceptive Pixel, really questions a foundation of HCI: interaction through a single pointer. The goal of our work is to explore user interaction on a large visualization surface, with devices offering more than a single pointer, in the context of collaborative design.
Firstly, for collocated collaboration, large shared displays such as walls or tables are especially useful. A large surface allows a group of users to work together while providing enough room for both personal/private and public spaces. Several researchers in the tabletop community provide ad-hoc solutions for new ways of interaction. Mixed-presence drawing surfaces and tangible interfaces [19] have been experimented with: TIDL, RemoteDT, DiamondSpin, Buffer Framework and the very recent DigiTable and T3. Other works, such as Verlinden and Horvath [20], explore a different point of view which enables each designer involved in a collocated situation to manipulate virtual models with their own system combining an I/O pad and a projector.

Secondly, a person naturally acts by watching the space around her/him to gather information and by manipulating physical objects. Those physical objects can be grasped or moved. Similarly, some special physical objects can serve as a means to interact with a computer system in a natural way, as in everyday life. Let us look at one of the several types of sensing-based interaction: pen-based interaction. Subrahmonia [21] underlines that pen-based interaction is still the most convenient form of input in a large number of applications. For example, in the preparation of a first draft of a document, using a pen allows concentration on content creation. The pen is a socially acceptable form of capturing information in meetings: it is quieter than typing and creates a minimal visual barrier. The pen is also well adapted for applications that need privacy and for entering annotations/markings. Most of the time, annotations remain informal and are considered as mere supports to a verbal exchange. It is important to consider annotations as complex and composite elements that can play a central role in design co-operation. Today, pen hardware technology has improved, together with user interfaces and handwriting recognition algorithms. There are still, however, a number of challenges that need to be addressed before pen computing can support the features listed above from Subrahmonia's work [21] at an acceptable level of user satisfaction.

Thirdly, the context of design leads us to take into account the assembly of two elements such as CAD parts, as it is a very common task in this context. For this task, the user has to manipulate two sets of 6 degrees of freedom at the same time, and classical user interfaces such as the 2D mouse or the keyboard are impractical for this assembly task. From a more general point of view, interacting with 3D data is a challenge, particularly well described in [22]. Despite those difficulties, in this paper we explore the integration of three main concepts: the use of pen-based interaction, on large shared displays, for 2D user interaction with sketches/drawings and for 3D user interaction with CAD parts. In the next section, new types of handling devices (i.e. devices usable with the hand) are studied, with the aim of improving both business tasks and collaboration among designers.

2 DEVELOPMENT OF PROTOTYPES IN ORDER TO EXPERIMENT WITH NEW INTERACTION TECHNIQUES

We developed an operational prototype to experiment with two tasks: handling a 2D object and handling a CAD part in the 3 dimensions.

2.1 Research Method

The method followed aims at supporting the implementation of prototypes that must answer users' needs.
This method (Fig. 1) is a combined user-centered and technological approach composed of several activities: after an initial definition of the studied domain activity, two parallel but integrated phases are engaged, a user-centered one and a technological one; finally, these two phases merge through a combined evaluation activity and then the definition of improvements. The prototype is then ready for more industrial tests.

Fig. 1. Prototype development activities: domain identification; a user-centered phase (design activities analysis, scenarios, user tests) in parallel with a technological phase (state of the art, technical choices, design, implementation and tests); evaluation; improvements definition

During the user-centered phase, we apply different techniques inspired by ergonomics. Firstly, the designers' professional context is studied in order to identify, characterize and analyze their collaborative design activities. Secondly, scenarios are defined for experiments in order to evaluate the prototypes in a realistic environment. Finally, we proceed with the user tests. The technological phase is a traditional research phase: firstly, an extensive state of the art to evaluate previous work and plan our solutions, with the definition of comparison criteria; secondly, technological choices leading to the design of a prototype. The choice of a technology is made by taking into account the type of interaction identified during the analysis activity of the user-centered phase. In our approach, the key element in the early stage of design of the 2D or 3D user interface consists of identifying the users' major needs, taking into account the users' skills and experience in doing the two targeted tasks. The design of the prototype is a multidisciplinary activity integrating a mechatronics designer, a software designer and the end user, in order to design the right interaction technique and the most suitable device. We identified 16 different technologies/devices, which were compared using a set of criteria. At this preliminary stage, all criteria are very generic and based on technical and economical aspects as well as human aspects and the ease of customizing the device. Our final choice, pen- and Wiimote-based devices, is explained in the next section. Finally, the design and implementation of the prototype and technical tests validate that the prototype is operational. The implementation influences the definition of the users' scenarios.

2.2 Prototype Implementation

The implemented prototype (Fig. 2) is based on a platform that includes a video projector assembled on a moveable tripod in order to work on any horizontal flat surface like a table or a desk. This set-up is related to Bricks [23] and the IP Network Design Workbench [24] and uses the prototype designed and built in [25]. A graphical representation of data is displayed on the surface of a tabletop. Direct data manipulation is enabled by acting with physical tools on the projection space. Several persons can work together to accomplish a common task around the common space that is the surface of the table.

Fig. 2. Description of the prototype environment (video projector above the table)

As a physical input device, a homemade electronic pen is used. It is composed of an infrared diode of 950 nm, which allows a Nintendo Wii Remote (Wiimote) device to identify the position of the cursor on the table, and a contact micro switch with a very weak displacement (<1 mm) for activating the infrared diode (Fig. 3).
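The mapping from what the receiving Wiimote's infrared camera sees to coordinates on the projected screen can be expressed as a planar homography estimated from the four-corner calibration step described in Section 4. The following Python sketch illustrates this principle under our own assumptions; it is not the code of the actual prototype, which relies on the Smoothboard toolchain described below. We assume the Wiimote reports IR dot positions on its native 1024 x 768 camera grid and that the four corners of the useful zone are clicked with the pen in a known order; the numeric values are hypothetical.

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 planar homography H such that dst ~ H @ src,
    from four point correspondences (direct linear transform)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # H is the null vector of A, i.e. the smallest right singular vector.
    _, _, vt = np.linalg.svd(np.array(A))
    return vt[-1].reshape(3, 3)

def ir_to_screen(H, ir_point):
    """Map a raw IR camera point to projected-screen pixel coordinates."""
    p = H @ np.array([ir_point[0], ir_point[1], 1.0])
    return p[0] / p[2], p[1] / p[2]

# Calibration: the pen is pressed on the four corners of the useful zone.
# Raw IR readings (hypothetical) against a 1280 x 800 projected image.
corners_ir = [(112, 85), (918, 102), (931, 690), (105, 671)]
corners_px = [(0, 0), (1280, 0), (1280, 800), (0, 800)]
H = homography(corners_ir, corners_px)

x, y = ir_to_screen(H, (500, 400))  # cursor position for one IR frame
```

In the running system, the mapped point is then injected as a mouse event, which is essentially the service that Smoothboard provides in our set-up.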
Note that industrial solutions are available, for example the Anoto technology, whose pen enables the digital capture, transfer and processing of handwritten text and drawings on a paper sheet.

Fig. 3. First 2D prototype: pen device version 2

In addition, two Nintendo Wiimotes and a Nunchuk are used to provide the whole set of functions. The first Wiimote is used as a receiver and must be placed so that it looks at the screen projected onto the table. The angle between this device and the screen should be 45 degrees for optimal reception, and its distance must not exceed 4 to 5 meters. During use, the field between this Wiimote and the pen must be kept clear. The second Wiimote is an active device for transmitting the user's commands. For 3D interaction, the pen has a pointer role and must be used close to the table, under 1 centimeter, for good selection recognition. Finally, an LED rail (10 centimeters long) is used as a base for the Wiimote devices.

To integrate the hardware with the business software, three tools were used: the Microsoft .NET Framework 3.5 Service Pack; the BlueSoleil software, which allows Bluetooth communication between the PC and the Wiimotes; and Smoothboard 1.6, a Wiimote Whiteboard freeware that contains a customizable floating toolbar allowing effortless control of presentations. Its built-in annotation feature allows writing and highlighting directly on top of any application or document. Finally, we configured the devices to emulate the mouse and some key combinations coupled to the functions of the collaborative DMU software (Product View).
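The key-combination coupling can be pictured as a simple binding table. The sketch below is purely illustrative: the button names follow the Wiimote layout, but the bound combinations and the Product View functions they would trigger are hypothetical, since the actual bindings were configured in Smoothboard rather than programmed.

```python
# Hypothetical binding table: Wiimote buttons to emulated mouse/key
# actions coupled to DMU functions (all combinations are illustrative).
BUTTON_BINDINGS = {
    "A": "left_click",       # point at and select with the pen
    "B": "middle_drag",      # rotate the 3D scene
    "PLUS": "ctrl+plus",     # zoom in
    "MINUS": "ctrl+minus",   # zoom out
    "HOME": "ctrl+f",        # fit the assembly in the view
}

def on_wiimote_button(button, inject):
    """Forward a Wiimote button press as its bound emulated action;
    'inject' abstracts the OS-level mouse/keyboard event injection."""
    action = BUTTON_BINDINGS.get(button)
    if action is not None:
        inject(action)
```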
3 USER STUDY

3.1 Aim of the User Study

The aim of the user-centered experiments is to compare the interaction between the users in a day-to-day activity and in the prototype environment. This evaluation is influenced both by the software and the way its functions are made available, and by the implemented handling devices. Various criteria have been defined. We thus intend to identify the advantages and the disadvantages of each handling device for the following qualifications of effectiveness:
• Handiness (number of actions, time of the actions, position of people);
• Precision;
• Intuitiveness (time needed to get to grips with the device);
• Representativeness of the expected results;
• Transparency of the object that one holds in one's hands, i.e. the ability to use the interface while doing other actions.

The method followed consists in defining a campaign of experiments based on a set of scenarios, combined with two different configurations (working on the table and working on a wall) and applied to a panel of users. The evaluation will allow identifying improvements for the handling devices. We have chosen the table and the wall because they are the most common supports in collaborative situations of discussion and thinking. Table 1 shows the quantitative variables to be collected.

Table 1. Measured quantitative criteria
Criteria | Measured parameter | Reference value
Velocity for achieving the task | Time | Time obtained with a mouse
Precision | Distance between 3D parts at the end of the task | Precision obtained with a mouse
Richness of marking possibilities | Quantity and readability of the text / symbols / sketches | Marking made by an expert
Level of collaboration | Number of interactions / time | Number of interactions between experts with a mouse

3.2 The Set-Up of the User Study

15 subjects participated in our user study. They are all researchers in different fields: mathematics, mechanical design and computer science. They were not paid. 7 of the volunteers were female and 8 were male, aged from 22 to 42 years. Their level of study was an MSc or a PhD in technical or scientific disciplines. Nearly half of the subjects knew the DMU software used (Product View) but were not experts in using it. Subjects were divided into design task experts, 2D/3D software users and novice users. None of them had used such handling devices previously. This preliminary study was undertaken in order to determine basic tasks that would be representative of collaborative situations in design. Consequently, people without any specific knowledge of design or of the handling devices should be able to participate in the experiments.

In order to efficiently collect and exploit the results of the user study, three persons managed the experiments. The first person explained the task and conducted the tests. The second one observed how the users were interacting with sketches, drawings and CAD parts. This observer recorded all the actions achieved by the users for further analysis. Controls were visual, but some parameters were also measured (time, for example). The third person guided the users through a questionnaire. This questionnaire was designed to get qualitative and subjective feedback on our user interfaces. Each scenario was thus defined by a grid for the observer's evaluation, an instruction form for the users and the final questionnaire for the users. An objective of half an hour per user for achieving the scenarios was defined.

3.3 The First Targeted Task: Handling a 2D Object

The final aim of the first prototype was deduced from the analysis of the designers' activities. Our intent was to propose a new type of interaction between designers involved in a creativity session. During such sessions, designers usually interact with objects represented on sheets of paper: sketches, drawings, images, etc. Their activities consist of drawing, writing or marking. Sometimes they redraw a section of the paper at a bigger scale. During paper-based sessions, each designer has his or her own tools (pencil, pen, ruler, rubber, etc.) and may use them after, or even during, another designer's interaction. During software-based sessions, each designer has to wait for another designer's interaction to end before taking the devices and interacting. This is exactly the situation that we intend to improve, in order to come close to paper-based situations. To simulate direct interactions, we selected the following functions because of their representativeness; they are the first ones to be used by designers as well as the most often used:
• Visualize images;
• Enlarge / reduce an image;
• Draw hand-made sketches;
• Write readable text.

The 2D image editor software associated with the pen device proposes four basic functions that will be used for testing:
• to open a 2D document,
• to modify the scale of the display,
• to create a separate layer,
• to draw on this layer.

The first two functions concern the visualization of the 2D image. The final two functions allow managing markings; a minimal sketch of such a layer-based editor is given below.
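To make the layer-based marking concrete, here is a minimal Python sketch of the data structure behind such an editor; it is an illustration under our own assumptions, not the 2D software actually used. Strokes are stored in document coordinates on a separate layer, so that changing the display scale rescales the image and the markings consistently.

```python
from dataclasses import dataclass, field

@dataclass
class Stroke:
    points: list           # pen positions in document coordinates
    width: float = 2.0

@dataclass
class AnnotationLayer:
    strokes: list = field(default_factory=list)

    def begin_stroke(self, x, y):
        self.strokes.append(Stroke(points=[(x, y)]))

    def extend_stroke(self, x, y):
        self.strokes[-1].points.append((x, y))

@dataclass
class Document:
    image: object          # the opened 2D document (bitmap, sketch, ...)
    scale: float = 1.0     # display scale factor (zoom)
    layer: AnnotationLayer = field(default_factory=AnnotationLayer)

    def to_document(self, sx, sy):
        """Convert screen (pen) coordinates to document coordinates,
        so that markings stay attached to the image when zooming."""
        return sx / self.scale, sy / self.scale

    def pen_down(self, sx, sy):
        self.layer.begin_stroke(*self.to_document(sx, sy))

    def pen_move(self, sx, sy):
        self.layer.extend_stroke(*self.to_document(sx, sy))
```

Because the strokes live on their own layer, they can be hidden, exported or cleared without touching the underlying image, which matches the separation between the visualization functions and the marking functions described above.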
3.4 The Second Targeted Task: Handling a CAD Part in the 3 Dimensions

Usually, design project reviews aim at controlling large assemblies of 3D models by visualizing them on large vertical screens. Stakeholders interact orally with each other and with one operator, who is the only one able to manipulate the 3D models. During informal design reviews involving a few designers, these designers look at a traditional workstation screen, but only one of them is able to interact with the workstation at a time. The most common functions concern the visualization and the visual analysis of the assemblies, in order to have an overview of the parts made by other teams and to control the coherency of the definition and position of the whole set of parts. As a consequence, the functions to be implemented are:
• Visualize 3D parts and assemblies;
• Make positive / negative zooms, translations and rotations of the 3D scene;
• Translate and/or rotate selected parts with regard to the whole assembly;
• Make dynamic sections of the parts in order to "look inside" the assemblies.

4 EVALUATION OF THE PROTOTYPES

We completed the two prototypes and tested them in very basic situations. These first experiments should be considered as a cognitive-walkthrough-based user study [26]. They validate the technical approach, but they are not significant enough for evaluating the interest or limitations of our approach in real business situations. In particular, the set of users is not heterogeneous enough; however, they are all familiar with the tasks and none of them had ever used such a pen device. Moreover, the tests took place under laboratory conditions.

Fig. 4. Pen device for marking (left), then pen and Wiimote devices in a 3D situation (right)

Before using the system, a quick calibration step is required: the corners of the useful zone of the screen on the table are identified with the pen, to allow the first Wiimote to identify the working zone. Then the pen and the second Wiimote can be used with both the 2D and the 3D software. Figure 4 illustrates the way the pen is used to add markings on a 2D image and how it is combined with the Wiimote for 3D tasks. Considering the initial needs that guided the design of the prototypes, three user-centered scenarios have been formalized:
• 1st scenario: handling of a 3D CAD assembly.
• 2nd scenario: marking on a 2D document.
• 3rd scenario: drawing in a creative context.

In the first scenario, two users start the exercise with three CAD parts that are not assembled. They are expected to modify their relative positioning in order to approach the ideal positioning of the parts. Each one has at least one part to move. The useful functions are zooming, rotating and translating. The exercise must be achieved within fifteen minutes; this time limit was defined to prevent the users from learning the devices so thoroughly that it would modify the results of the tests.

Fig. 5. The first scenario in the wall configuration

This situation (Fig. 5) allows evaluating the way the users collaborate and work with the two handling devices, the pen and the Wiimote, even if the situation is very close to a mouse-use situation. The pen is used to point at and select objects or actions, and the Wiimote device is used to achieve dynamic movements of the 3D part such as translation, rotation, etc. The precision criterion of Table 1 is measured on the final positions of the parts, as illustrated by the sketch below.
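The precision criterion of Table 1 (distance between 3D parts at the end of the task) can be computed from logged part poses. The following Python sketch is a plausible reconstruction under our own assumptions, not the measurement code of the study: each part pose is assumed to be logged as a 4x4 homogeneous matrix, and the error is the mean distance between the achieved and the ideal positions of the parts.

```python
import numpy as np

def position(pose):
    """Extract the translation component of a 4x4 homogeneous pose."""
    return pose[:3, 3]

def positioning_error(achieved, ideal):
    """Mean Euclidean distance between achieved and ideal part
    positions: the Table 1 precision measure (lower is better)."""
    errors = [np.linalg.norm(position(a) - position(i))
              for a, i in zip(achieved, ideal)]
    return float(np.mean(errors))

# Hypothetical log: final poses of the three parts of the first scenario.
achieved = [np.eye(4) for _ in range(3)]
ideal = [np.eye(4) for _ in range(3)]
ideal[1][:3, 3] = [0.0, 5.0, 0.0]   # second part should sit 5 mm away

print(positioning_error(achieved, ideal))  # mean error of 5/3 mm
```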
The second scenario (Fig. 6) is dedicated to one single user. A screenshot representing a 3D assembly is shown. The assembly contains many positioning errors, and the user has to identify them and make markings on the screenshot to describe them as precisely as possible. This experiment is focused on the way the pen device is used, especially for writing. It lasts ten minutes at most.

Fig. 6. The second scenario: resulting marking

The third scenario is supposed to take place during a creativity session where two users have to interact in order to define a new design. A sketch of a car design is projected onto the screen. The users have to modify the design of the car by erasing and drawing upon it. They have different objectives that should generate interactions between them, and even conflicts: the first user must introduce sharp edges with hard angles, and the second user must maximize the glass surfaces. This scenario also requires two users, but it is more focused on the way the pen device helps to formalize ideas through sketches. Only the pen device is used. It lasts five minutes. Two experts achieved the scenarios beforehand in order to measure initial values and calibrate the future observations. Each user had a similar environment, but two main configurations were tested: horizontal or vertical screen.

5 RESULTS AND DISCUSSION

The experiments have generated a lot of information resulting from the observations, the measurements and the answers of the users' panel to the final questionnaire. Here, general results are presented, as the large quantity of data has not yet been entirely exploited. First of all, all the scenarios were achieved before the time limit. All users were able to manipulate the different handling devices after a very brief description of their use. Then several statistics were generated.

Concerning the first scenario, dedicated to 3D manipulations, only about 1 user in 4 prefers the handling devices to the mouse. This corresponds to the fact that manipulating 3D objects requires high precision. Moreover, the users had to learn how to use the handling device, whereas they were more expert in mouse use. Analyzing these statistics in greater depth, we established that nearly 90% of the users found the handling devices easy to understand for 3D manipulations, and 70% found that they foster exchanges between users. The average time for this scenario was 13 minutes (from 8 to 15 minutes), and the time needed to understand how to use the devices was approximately 2 minutes (between 1 minute minimum and 3 minutes maximum). Furthermore, 56% said that the handling devices would help them in a design project review. However, only 11% validated the statement that the handling devices are more precise, easier to use and quicker for achieving a task than a mouse. Therefore, two key points have been identified:
• First, the implemented devices do not require specific learning and they have an added value compared to the mouse for several 3D tasks;
• Second, they suffer from a lack of precision that reduces their added value in a 3D context.

Furthermore, the scenario was built for a generic validation, and several kinds of tasks were defined and tested. We must consider that different tasks should perhaps be associated with different types of devices, each of them more specialized. This conclusion is also justified by the following results, which allow comparing the performances of the handling devices on basic tasks: selection of an object, translation, rotation, positioning and zooming. Positioning, as a combination of selections and links between objects, is a very technical task, and the handling devices received a very bad evaluation for it (less than 2 out of 6). More basic and non-technical tasks such as selection and zooming received a good evaluation: nearly 4 out of 6. Finally, rotation and translation are intermediate tasks considering the technical level required of the user, and they also received a good evaluation (3.3). Such tasks may already be familiar to most users, who know CAD systems or video games.
As far as scenarios 2 and 3, dedicated to 2D interactions, are concerned, the general ratio is reversed: 3 out of 4 users prefer the pen device to the mouse, arguing that the gestures are more "natural" than with the mouse. This can be explained by the fact that using a pen corresponds to years and years of apprenticeship since childhood. More precisely, 70% admitted that the pen device was easy to understand. 70% also said that it helped exchanges between users, and 75% that it was helpful for creative sessions or annotations/markings. Finally, the pen device seemed more precise and simpler to 30% of the users. During the tests, users expected the pen device to work like a traditional pen, but they were surprised by the fact that it was similar yet not identical. A traditional pen has a thin and precise lead, whereas the pen device has a larger tip, and the location of the digital projection depends on the angle between the pen device and the table. This point is also a source of improvement for the pen device. Detailed results illustrate this conclusion by underlining the lack of facilities in the case of marking: the evaluation is only 2.7 out of 6. The zoom task and the draw task were evaluated at 4 and 4.3 out of 6. A commercial pen is certainly the solution for our further experimentations. Using Nintendo Wiimote devices was a way to develop low-cost prototypes to conduct experiments, see e.g. Duval [27].

One of the initial objectives was to propose new handling devices that help collaboration during specific design activities. The starting point was the curb on collaboration induced by the existing keyboard and mouse devices, which cannot easily be shared between people working together on one computer. We consider mixed and interactive interfaces/devices, in particular Tangible User Interfaces (TUI), as systems able to mitigate these disadvantages. Leaving the paradigm of "virtual reality", where the user is immersed in the virtual mock-up, we enter the paradigm of "augmented virtuality", where the user interacts with the virtual mock-up by means of real (i.e. physical) objects. They make it possible to add new types of handling devices, that is to say devices manipulable by the hand and dedicated to specific tasks such as design tasks, to the traditional keyboard and mouse, and even to commercial pens and SmartBoard products that correspond to generic collaborative tasks. These new devices allow the user to carry out inputs corresponding to specific business functionalities.

The first results of the presented experiments demonstrate that the implemented handling devices have a real potential for achieving design tasks. Several basic tasks were performed successfully by the user panel, and some of them seemed to be performed better with the handling devices than with a mouse (selection, writing and sketching, for example). Some other tasks were too specific and showed the limits of our prototypes. Several improvements were then considered. First, the precision of the pen device must be improved by identifying better components and focusing on the real manipulations of the users. Second, we used existing software and were thus obliged to use functions implemented for a mouse and a keyboard; as a consequence, we must study similar functions optimized for the type of handling devices that we proposed. For example, a rotation is possible dynamically in Product View by acting upon a thin circle around the part; if we replace it with a large circle, located on the part or outside it, we may expect better results concerning precision, as the sketch below illustrates.
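A minimal Python sketch of this idea, under our own assumptions rather than Product View's actual implementation: the rotation handle is modeled as a ring of configurable radius and thickness, and widening the ring enlarges the band in which an imprecise pen point still engages the rotation.

```python
import math

def hits_rotation_ring(px, py, cx, cy, radius, thickness):
    """True if the pen point (px, py) falls on the ring handle centered
    on (cx, cy). A thicker ring tolerates less precise pointing devices
    such as our pen prototype."""
    d = math.hypot(px - cx, py - cy)
    return abs(d - radius) <= thickness / 2

def drag_angle(px, py, cx, cy):
    """Rotation angle induced by dragging the pen around the center."""
    return math.atan2(py - cy, px - cx)

# Thin, mouse-oriented circle: a 4-pixel band around a radius of 120.
print(hits_rotation_ring(128, 0, 0, 0, radius=120, thickness=4))   # False
# Larger circle proposed for the pen device: a 30-pixel band.
print(hits_rotation_ring(128, 0, 0, 0, radius=120, thickness=30))  # True
```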
Third, working on a table or on a wall may result in different perceptions by the users: our experiments did not analyze this point, which might have influenced the results. More dedicated scenarios must be created in order to identify different practices and then more specific devices. Finally, we used the same handling devices for several tasks. We have to analyze each task and the obtained results more deeply in order to propose, for some of them, evolutions of the handling devices or even other types of handling devices. For example, for a 3D manipulation such as a rotation, a more specific handling device should be easier to understand and to manipulate. In addition, information should be gathered to evaluate whether the learning from the first scenario has an influence on the subsequent scenarios, and likewise whether the wall or table configuration had an impact during the first scenarios.

6 CONCLUSION AND FUTURE WORK

Our approach, based on the use of tabletop technology and physical devices, aims at proposing a new way of interaction between designers and collaborative design IT tools. The experiments carried out validate this feasibility work by demonstrating the added value of the implemented handling devices compared to standard keyboard and mouse devices for most basic tasks. Technical tasks also show some limitations: the devices are still too generic for very technical tasks, and there is a lack of precision for general tasks. Therefore, in future work, their precision and their appropriateness to the tasks should be improved. Further work will also focus on the business activities in order to improve both the software behavior and the devices. To this end, first, the IT environment has to be improved in order to propose adequate functions for each handling device. Second, more realistic scenarios in relation with the context of use (wall/table, one task/one device, etc.) should be defined in order to identify the real added value of dedicated physical devices vs. standard devices (mouse, commercial pens, etc.).

7 REFERENCES

[1] Prasad, B. (1996). Concurrent engineering fundamentals - vol. 1. Prentice-Hall, Englewood Cliffs.
[2] Gonzalez, V., Mark, G. (2005). Managing currents of work: Multi-tasking among multiple collaborations. Proceedings of the 9th European Conference on CSCW, Paris, p. 143-162.
[3] Legardeur, J., Merlo, C., Franchisteguy, I., Bareigts, C. (2003). Co-operation and co-ordination during the design process: Empirical studies and characterization. Proceedings of the International CIRP Design Seminar, Grenoble, p. 385-396.
[4] Klein, M., Sayama, H., Faratin, P., Bar-Yam, Y. (2003). The dynamics of collaborative design: Insights from complex systems and negotiation research. Concurrent Engineering: Research and Applications, vol. 11, no. 3, p. 201-209.
[5] Gibson, J.J. (1977). The theory of affordances. In: Shaw, R.E., Bransford, J. (eds.), Perceiving, Acting and Knowing, Lawrence Erlbaum, Hillsdale.
[6] Boujut, J.F., Blanco, E. (2003). Intermediary objects as a means to foster co-operation in engineering design. Journal of Computer Supported Collaborative Work, vol. 12, no. 2, p. 205-219.
[7] Maher, M.L., Rutherford, J.H. (1997). A model for synchronous collaborative design using CAD and database management. Research in Engineering Design, vol. 9, p. 85-98.
[8] Rosenman, M.A., Gero, J.S. (1996). Modelling multiple views of design objects in a collaborative CAD environment. Computer-Aided Design, vol. 28, no. 3, p. 193-205.
[9] Moreau, G., Fuchs, P. (2001). Virtual reality in the design process: application to design review and ergonomic studies. Proceedings of ESS, Marseille, p. 123-130.
[10] Lehner, V., DeFanti, T. (1997). Distributed virtual reality: Supporting remote collaboration in vehicle design. IEEE Computer Graphics and Applications, vol. 17, no. 2, p. 13-17.
[11] Yoon, J., Oishi, J., Nawyn, J., Kobayashi, K., Gupta, N. (2004). FishPong: Encouraging human-to-human interaction in informal social environments. Proceedings of the ACM Conference on CSCW, Chicago, p. 374-377.
[12] van der Auweraer, H. (2008). Frontloading design engineering through virtual prototyping and virtual reality: Industrial applications. Proceedings of the International Symposium Series on Tools and Methods of Competitive Engineering TMCE, Izmir.
[13] Johnson, G., Gross, M., Hong, J., Do, E. (2009). Computational support for sketching in design: A review. Foundations and Trends in Human-Computer Interaction, vol. 2, no. 1, p. 1-93.
[14] Rivière, G., Couture, N. (2008). The design of a tribal tabletop. Proceedings of the IEEE International Workshop on Horizontal Interactive Human-Computer Systems (Tabletop), p. 29-30.
[15] Stefik, M., Foster, G., Bobrow, D.G., Kahn, K., Lanning, S., Suchman, L. (1987). Beyond the chalkboard: computer support for collaboration and problem solving in meetings. Communications of the ACM, vol. 30, no. 1, p. 32-47.
[16] Wellner, P. (1993). Interacting with paper on the DigitalDesk. Communications of the ACM, vol. 36, no. 7, p. 86-96.
[17] Bérard, F. (2003). The magic table: computer-vision based augmentation of a whiteboard for creative meetings. Proceedings of the Workshop on Projector-Camera Systems (Procams), IEEE International Conference on Computer Vision, Nice.
[18] Rekimoto, J. (2002). SmartSkin: An infrastructure for freehand manipulation on interactive surfaces. Proceedings of the CHI Conference on Human Factors in Computing Systems, p. 113-120.
[19] Tuddenham, P., Robinson, P. (2007). T3: Rapid prototyping of high-resolution and mixed-presence tabletop applications. Proceedings of the IEEE International Workshop on Horizontal Interactive Human-Computer Systems, p. 11-18.
[20] Verlinden, J., Horvath, I. (2008). Enabling interactive augmented prototyping by a portable hardware and a plug-in-based software architecture. Strojniški vestnik - Journal of Mechanical Engineering, vol. 54, no. 6, p. 458-470.
[21] Subrahmonia, J., Zimmerman, T. (2000). Pen computing: Challenges and applications. Proceedings of the International Conference on Pattern Recognition, vol. 2, p. 20-60.
[22] Bowman, D.A., Kruijff, E., LaViola, J.J., Poupyrev, I. (2005). 3D User Interfaces: Theory and Practice. Addison-Wesley.
[23] Fitzmaurice, G., Ishii, H., Buxton, W. (1995). Bricks: laying the foundations for graspable user interfaces. Proceedings of the 13th SIGCHI Conference on Human Factors in Computing Systems, p. 442-449.
[24] Kobayashi, K., Hirano, M., Narita, A., Ishii, H. (2003). A tangible interface for IP network simulation. Proceedings of the CHI Conference on Human Factors in Computing Systems, p. 800-801.
[25] Couture, N., Rivière, G., Reuter, P. (2008). GeoTUI: A tangible user interface for geoscience. Proceedings of the Second ACM International Conference on Tangible and Embedded Interaction TEI, p. 89-96.
[26] Polson, P.G., Lewis, C., Rieman, J., Wharton, C. (1992).
Cognitive walkthroughs: a method for theory-based evaluation of user interfaces. International Journal of Man-Machine Studies, vol. 36, no. 5, p. 741-773.
[27] Duval, T., Fleury, C., Nouailhas, B., Aguerreche, L. (2008). Collaborative exploration of 3D scientific data. Proceedings of the ACM Symposium on Virtual Reality Software and Technology, p. 303-304.