Logistics & Sustainable Transport, Vol. 11, No. 1, February 2020, 90-100, doi: 10.2478/jlst-2020-0006

Analysing picking errors in vision picking systems

Ela Vidovič 1, Brigita Gajšek 1
1 University of Maribor, Faculty of Logistics, Celje, Slovenia

Abstract — Vision picking gives users access to real-time digital order information while freeing them from handheld radio-frequency devices. Smart glasses, as one enabler of vision picking, provide visual and voice cues to guide order pickers. Most glasses also have navigation features installed that can sense the order picker's position in the warehouse. This paper explores picking errors in vision systems through a literature review and experimental work in a laboratory environment. The results show that vision picking systems are effective for active error prevention when compared with established methods such as paper picking and cart-mounted displays. A serious competitor to vision picking systems are pick-to-light systems. A strong advantage of vision picking systems is that most errors are detected early in the process and not at the customer's site, so the cost of fixing an error is minimal. As a consequence, most errors mainly affect the order picker's productivity, which is reduced by the corrective work. Nonetheless, the distinctive feature of the system is extremely efficient error detection.

Key words – vision picking, error prevention, head-mounted display, smart glasses, pick-by-vision.

I. INTRODUCTION

Order picking is the process of withdrawing items from storage locations according to customer orders. It is the most expensive and most labour-intensive process in a warehouse, causing up to 55 % of all warehousing costs (Tompkins et al., 1996; Vujica Herzog et al., 2018). Experts attribute this to the high proportion of warehouses that employ the classical man-to-goods method for order picking (Murray, 2017). The situation is likely to remain unchanged because of the method's low starting costs and the flexibility it offers. However, man-to-goods systems are often characterised by low productivity as well as a high rate of human (order picker) errors (Rammelmeier et al., 2011). Typical picking errors include picking the wrong item, omitting an item, picking the wrong number of pieces, picking the wrong batch, or picking items past their use-by date. The damage that an error causes to the company depends on the process activity and the physical location where the error is detected. Errors discovered in the warehouse mainly cause additional handling costs, while errors detected by customers can also lead to a deterioration of the company's market reputation, loss of customers, penalties, negative publicity, physical harm, damage to health or even death. Error prevention and early error detection are therefore important for stable and successful business operation. They can be accomplished reactively, by employing 100 % additional control, or proactively, by preventing errors from occurring (Rammelmeier et al., 2011). In general, proactive solutions are more cost-effective and as such often the preferred choice. Active error prevention is usually achieved by implementing technological solutions such as voice picking, pick-to-light or handheld scanners. Another possible solution, made widely available by recent advances in augmented reality (AR) technology, is vision-based picking.
This solution is still in the testing phase in laboratories, with pilot applications emerging in companies. The systems are based on head-mounted displays (HMDs) and sensors that determine the user's location in the warehouse. A display resistant to dust, dirt and water is often mounted on the frame of glasses; this kind of equipment is known as smart glasses. Users may wear the glasses with or without lenses and/or a safety helmet. Smart glasses offer Wi-Fi and Bluetooth connectivity, so users can search online, interact with company software in real time and access the Internet. Users can interact with the smart glasses by voice, touch and head movements if the glasses have an integrated touchpad, voice control and head tracking. They provide most of the features and capabilities of a modern Android smartphone, minus cellular connectivity, in a hands-free wearable device. A lower investment cost is most often emphasised as an advantage of the vision system over other hands-free picking solutions such as pick-by-voice and pick-by-light (D'Halleweyn & Pleysier, 2015): hardware, infrastructure and training costs are much lower, and the solution is very flexible. The introduction of a vision picking system is therefore very attractive because of the promised low investment cost. However, the effects of its use are less well researched. This paper examines the types and frequency of picking errors in order picking with a vision system. We compare the use of the vision system with other systems used in practice. Additionally, the paper contributes to the understanding of the performance of vision picking systems in practice. For this purpose, we reviewed the scientific literature and performed a practical test in a laboratory setting.

II. THEORETICAL BACKGROUND

The order picking process is the central process in warehouses, where order pickers prepare dispatches for end-customer orders. In order picking systems, subsets of items are assembled from the total assortment according to a customer order (Langen, 2001; Rammelmeier et al., 2011). The most common type of order picking is manual picking based on the man-to-goods principle. The goods are usually provided statically on shelves, and the order picker usually receives the process-relevant information in the form of a list. The order picking process has a significant influence on areas such as distribution and production. The average order picker prepares over 1,000 different order positions in a work shift, so the possibility of making an error is high. In response, different solutions are implemented to reduce the risk of error and increase work efficiency. One of them are vision systems, with the pick-by-vision method as their core element.

A. Pick-by-vision

Pick-by-vision is a picking method that can be based, among other technologies, on AR and operates by providing the order picker, equipped with an HMD, with visual instructions. AR is defined by three main characteristics: it combines the real and virtual environment, allows interaction in real time, and establishes a spatial relationship between the real and virtual environment (Azuma, 1997). Pick-by-vision systems display the information that is necessary for the picking process in different formats (text, symbols, pictures, directional signs).
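The order information shown on the HMD can be thought of as a small, structured payload per order line. The following Python sketch illustrates such a payload; the class and field names (PickLine, item_id, quantity, shelf, image_url) are our own illustrative assumptions and not part of any particular vendor's software.

from dataclasses import dataclass
from typing import Optional

@dataclass
class PickLine:
    """One order line as it could be visualised on the HMD (illustrative only)."""
    item_id: str                      # unique item identification
    quantity: int                     # number of pieces to pick
    shelf: str                        # storage shelf / location code
    image_url: Optional[str] = None   # optional product picture

    def as_display_text(self) -> str:
        # Text rendering of the line, one of the formats mentioned above
        return f"Shelf {self.shelf}: pick {self.quantity} x item {self.item_id}"

# Example: a single instruction shown to the order picker
line = PickLine(item_id="A-1042", quantity=3, shelf="R2-S4-L07")
print(line.as_display_text())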
The key elements of a general pick-by-vision system are a display in the form of an HMD, a computer acting as the scene generator, some kind of input device, and a tracking system (Rammelmeier et al., 2011). A mobile computer the size of a handheld device, worn on the user's body, controls the HMD. The interaction between the computer and the order picker takes place via an adjustment/confirmation button, by voice, or by the user's head movements. Pick-by-vision systems can be divided into systems without a tracking system and systems with one. Pick-by-vision systems without tracking display each item line of a picking order statically on the HMD. Carrying a pick list becomes unnecessary because all information needed for picking is visualised on the HMD, and the order picker has free hands. Through the visual channel the picker receives the item number, item quantity and storage shelf, and in some cases also a picture of the product. For more advanced presentation of information, the user's position in the work environment and the orientation of his or her head must be continuously monitored by a tracking system. Pick-by-vision systems with a tracking system therefore provide, in addition to the data shown by systems without tracking, dynamic visualisation for wayfinding in the warehouse. Pick-by-vision systems are recognised as systems with high potential for error prevention because the needed instructions are permanently visible to the user on the HMD; the voice instructions of a pick-by-voice system, for example, are only available to the user for a short period (Rammelmeier et al., 2011).

B. Picking errors

Picking is the most error-prone process in a warehouse (Li et al., 2012). Various kinds of picking errors can occur, for example picking the wrong item, picking the wrong quantity of a specific item, or omitting an item. They can occur at any activity along the picking process, from reading the instructions, to searching for the location, picking, sorting, or even packing the items (Li et al., 2012). A major disadvantage of man-to-goods picking is its high error rate, which varies between different organisational and technical/technological implementations. The occurrence of errors is usually described with the picking error rate in Equation (1):

error rate [%] = (number of defective items · 100 %) / (number of items)      (1)

The error rate of a conventional order picking system, in which order pickers pick items with the help of a paper sheet, is on average about 0.26 % (Rammelmeier et al., 2011); a short worked example of Equation (1) is given at the end of this subsection.

Errors can be classified according to different criteria. For the purposes of this paper, we divide picking errors according to the type of error into five mutually exclusive subgroups (Günthner et al., 2009; Lolling, 2003; Dullinger, 2005; Rammelmeier et al., 2011):
• mispick – a false item is picked as a substitute for or in addition to the correct product;
• wrong quantity – the number of pieces of the correct item is too high or too low;
• omission error – an order line item has been forgotten;
• condition error – an incorrect action was carried out on the item, for example a damaged, expired or improperly labelled item, or incorrect placement on the picking cart;
• additional item – an additional item has been picked alongside the correctly picked items.
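As a minimal illustration of Equation (1) and of the classification above, the following Python sketch computes the picking error rate from a tally of errors by subgroup; the counts and the number of picked items are invented for the example only.

from collections import Counter

def picking_error_rate(defective_items: int, total_items: int) -> float:
    """Equation (1): error rate [%] = number of defective items * 100 % / number of items."""
    return defective_items * 100.0 / total_items

# Hypothetical shift: errors tallied by the five subgroups defined above (invented counts)
errors = Counter({"mispick": 2, "wrong quantity": 1, "omission": 1,
                  "condition": 0, "additional item": 0})
picked_items = 1500
print(round(picking_error_rate(sum(errors.values()), picked_items), 2))  # 0.27 %, close to the 0.26 % cited for paper picking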
III. METHODOLOGY

The order picking process is the central process in warehouses. Because of its work intensity — more than 1,000 picking tasks per worker in one shift — there is a high probability of error. Taking preventive measures is gaining in importance due to the increasing organisation of business customers in a just-in-sequence manner and the increasing volume of online sales in the business-to-customer (B2C) retail channel. Vision picking is envisioned as one of the better technological ways to combat errors. We posed several research questions with the aim of helping to clarify the dilemma of using smart glasses for picking purposes:
RQ1: Does the implementation of vision picking impact picking error rates?
RQ2: What error types appear during vision picking?
RQ3: Are vision picking systems user-friendly?
The research was conducted in three phases. The first phase involved the analysis of scientific papers, reports and publications resulting from experimental work. The second phase was our own experimental work with smart glasses in a laboratory environment. The third phase placed the results of the second phase in the wider framework of the work of other authors to date.

A. Literature review

Our literature review is based on openly accessible scientific databases. We also included several reports from the universities of Munich, Stuttgart and Bremen. The selected literature was limited to sources in English describing results from experimental testing of vision-based picking systems. Sixteen different sources describing eleven experimental studies carried out between 2008 and 2015 were included in the further analysis. Table 1 gives an overview of all sources included in the study, including year of release, type of publication, number of participants tested and picking methods used.

Tab. 1 Sources included in literature review
ID of experiment | Author(s) | Publication type | Number of tested persons | Picking method
1 | Reif & Walch (2008) | Scientific paper | 17 | HMD, HMD(AR), PbV, PbP
2 | Reif et al. (2009); Reif & Günthner (2009); Günthner et al. (2009) | Conference paper; scientific paper; university report | 16 | HMD, PbP
3 | Schwerdtfeger et al. (2009); Rammelmeier et al. (2011) | Conference paper; university report | 19 | HMD, HMD(AR), PbP
4 | Iben et al. (2009) | Scientific paper | 16 | HMD, PbP
5 | Weaver et al. (2010); Baumann (2013) | Conference paper; university report | 12 | HMD, PbV, PbP, PbP(graphical)
6 | Schwerdtfeger et al. (2011) | Scientific paper | 34 | HMD(AR)
7 | Guo et al. (2014) | Conference paper | 8 | HMD, CMD, PbL, PbP
8 | Herter (2014); Pickl (2014) | University report; university report | 16 | PbL, PbVi, PbP, PbV
9 | Wu et al. (2015) | Conference paper | 8 | HMD, PbL
10 | Funk (2015) | Conference paper | 16 | HMD(AR), PbV, PbP, OpAR
11 | Guo et al. (2015) | Scientific paper | 12 | HMD
HMD – head-mounted display; HMD(AR) – head-mounted display with location tracking; PbV – pick-by-voice; PbP – pick-by-paper; CMD – cart-mounted display; PbL – pick-by-light; OpAR – order pick AR; PbVi – pick-by-vision

B. Experimental work in laboratory environment

We tested the effect of using a head-mounted display — Vuzix M300 smart glasses — in a picking process on the number and types of picking errors. A test warehouse environment was established at the Faculty of Mechanical Engineering, University of Maribor. The research protocol is described below and summarised in Fig. 1. Thirty-seven different items were stored in a warehouse rack on 4 shelves with a capacity of 60 storage locations.
Each storage location and each item was assigned a unique identification and marked with a unique QR code. Fourteen persons, mostly students, aged between 21 and 44, tested the selected HMD. Participants were first introduced to the experiment: they had to become familiar with the laboratory environment, the smart glasses and the picking protocol. The introduction phase lasted approximately half an hour. After that, each participant performed 4 hours of continuous picking without breaks.

Fig. 1 Methodology for experimental work

Each participant used an eyeglass frame on which an LCD display is placed in front of the right eye. Communication with the vision system was possible using buttons on the eyeglass frame. Instructions were given in English and shown on the LCD display. The participant was permitted to move to the following picking activity only after a successful confirmation scan of the location ID or item ID, as required by the protocol (a simplified sketch of this confirmation logic is given at the end of this subsection). A technical assistant, who provided working instructions and technical support and was responsible for recording the picking process, constantly supervised the work. The result of the experimental work in the laboratory environment was 14 films. In parallel, a questionnaire was prepared to describe the experience of working with smart glasses in a systematic way; each participant completed it after the picking experience.
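The confirmation logic of the protocol can be summarised by the Python sketch below: the participant may only proceed once the scanned QR code matches the expected location or item ID, and every wrong scan counts as an initial error that must be corrected. This is a simplified reconstruction written for illustration; the helper names scan_qr(), display(), confirm() and pick_task() stand in for the actual smart-glasses software, which is not reproduced here.

def scan_qr() -> str:
    """Placeholder for reading the next QR code from the smart-glasses camera."""
    return input("scan> ")   # stand-in so the sketch can be run in a terminal

def display(message: str) -> None:
    """Placeholder for showing a message on the HMD."""
    print(message)

def confirm(expected_id: str, step_name: str) -> None:
    """Block the next picking activity until the correct code is scanned.
    Every wrong scan corresponds to an 'initial error' and must be repeated."""
    while True:
        scanned = scan_qr()
        if scanned == expected_id:
            return
        display(f"Wrong {step_name} scan ({scanned}), expected {expected_id} - repeat")

def pick_task(location_id: str, item_id: str, quantity: int) -> None:
    display(f"Go to location {location_id}")
    confirm(location_id, "location")   # confirmation scan of the location ID
    display(f"Pick {quantity} x item {item_id}")
    confirm(item_id, "item")           # confirmation scan of the item ID
    # Note: the quantity itself was not verified by the system in our set-up,
    # which is why wrong-quantity errors could pass through as actual errors.

# Example call for one task: pick_task("R2-S4-L07", "A-1042", 2)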
IV. RESULTS

A. Literature review

Studies similar to ours have already been conducted in the past (Table 1). Their results are diverse and in many cases contradictory, and a significant difference can be recognised in the frequency of observed errors. Nine of the eleven analysed studies evaluated the pick-by-vision approach on the basis of direct comparisons with other picking approaches; two of them compared pick-by-vision systems that differed in the type of HMD used.

Tab. 2 Results from literature review
ID of experiment | Author(s) | Experimentally determined error rate
1 | Reif & Walch (2008) | 0.12 % (HMD)
2 | Reif et al. (2009); Reif & Günthner (2009); Günthner et al. (2009) | 1.23 % (HMD), 0.7 % (HMD(AR))
3 | Schwerdtfeger et al. (2009); Rammelmeier et al. (2011) | 1.23 % (HMD), 0.7 % (HMD(AR))
4 | Iben et al. (2009) | 0.74 % (HMD)
5 | Weaver et al. (2010); Baumann (2013) | 0.1 % (HMD)
6 | Schwerdtfeger et al. (2011) | 0 % (HMD(AR), frame guidance), 6 % (HMD(AR), tunnel guidance), 17 % (HMD(AR), arrow guidance)
7 | Guo et al. (2014) | 0.6 % (HMD)
8 | Herter (2014); Pickl (2014) | 9.625 % (HMD(AR), initiated), 0.125 % (HMD(AR), actual)
9 | Wu et al. (2015) | 1.0 % (HMD)
10 | Funk (2015) | 9.75 % (HMD)
11 | Guo et al. (2015) | 2.18 % (HMD(AR), blurred), 2.13 % (HMD(AR), translucent)

Error rates of vision systems based on HMDs range between 0 % (Schwerdtfeger et al., 2011) and 9.75 % (Funk et al., 2015). The large differences between the identified error rates can be explained to some extent by the type of confirmation mode used. With the confirmation mode the order picker reports to the vision system that the task is completed, after which the next picking task may be released; for example, after completing the required task the order picker presses a dedicated button on the frame of the glasses for verification. Three studies report error rates of 0.1 % (Weaver et al., 2010) and 0.12 % (Reif & Günthner, 2009) when the confirmation mode was realised in the form of voice messages. When the confirmation mode was realised as a push button placed on the user's waist, the reported error rate was 1.23 % (Schwerdtfeger et al., 2009). In the other listed studies, participants achieved error rates of 0.6 % (Guo et al., 2014), 1.0 % (Wu et al., 2015), and 2.13 % and 2.18 % (Guo et al., 2015). These studies did not use conventional confirmation modes; instead, the Wizard-of-Oz method was used. In this case the wizard, or experiment leader, constantly monitors the progress of the picking process. When the required task is completed, the wizard initiates a confirmation action that triggers the release of the following task. The wizard also detects the occurrence of errors made by the participant in the experiment.

Weaver and co-authors (2010) reported one of the lowest error rates, 0.1 %. The result could be low because they did not take into account process errors such as putting items into the wrong container. The authors mention the appearance of this specific error, but it is not clear from the paper how many times it was actually noticed or how the error rate would change if it were taken into account in the calculation. This study was also the only one without wrong-quantity errors, although the configuration of the system allowed their appearance. Similarly, Herter (2014) reported a 0.125 % error rate with the vision system in place. The author revealed an interesting relationship between initial errors and actual errors. Initial errors are a group of three types of errors, namely reaching by hand into a wrong box on either the shelf or the cart, wrong placement of the cart, and a box containing more or fewer items than it should. When reaching by hand into a wrong place, the participant does not necessarily pick an item; by recording these initial errors, the author wanted to show how often such errors result in an actual error. An actual error was defined as wrong parts or a wrong amount of parts in a box at the end of the task. In pick-by-vision, an average of 9.375 initial errors per participant resulted in 1.25 actual errors. Among the compared approaches (PbL, PbP, PbV), pick-by-vision had the highest initial error rate, 9.625 % per participant, but these initial errors resulted in only 0.125 % actual errors. Guo et al. (2015) recorded the highest error rates (2.18 % and 2.13 %, respectively) among the studies in Table 1; the authors mention the short introduction phase before starting regular work as a possible cause.

Five of the studies listed in Table 1 recorded the observed errors by the five subtypes. The result of their observations is shown in Figure 2.

Fig. 2 Error occurrence in previous studies

In summary, the most frequent errors are false item selection (44.9 % of all errors) and wrong quantity (39.9 %). Somewhat less frequent are omission errors (9.7 %), followed by condition errors (4.0 %) and additional items (1.5 %). Participants' responses have shown that they perceive work with pick-by-vision as more accurate and faster than with PbP, PbV and PbL systems; this also proved statistically significant in the collected quantitative data. The authors of the scientific literature note a steep learning curve and a high level of motivation to work with PbVi.

B. Experimental work in laboratory environment

Fourteen participants made 216 errors during 2,619 tasks. Of these, the smart glasses detected 183 errors (84.7 % of all errors) and each time reminded the participant to take corrective action.
Resumption of work was possible only after the error had been eliminated. These initial errors are important to observe because they have a major impact on productivity. In our system configuration, the average duration of a task containing an error was 68.5 % longer than that of an error-free task (Table 3) because of the additional activities needed to eliminate initial errors. We recorded 26 tasks with more than one error: in 23 tasks two errors were made, in one task three errors, and in two tasks four errors. The most frequent error was a scan of the wrong location, with 114 occurrences (52.7 % of all errors). The second most frequent error was an item scan instead of a location scan, with 27 occurrences (12.5 %). Details are presented in Table 4.

Tab. 3 Comparison of task duration with and without error
Error type | Average task duration with error, A [s] | Average task duration without error, B [s] | Difference (A-B) [s] | Difference [%]
Initial errors:
Scan of wrong item | 40.0 | 40.7 | -0.7 | -1.6
Scan of wrong ID code on the item | 66.4 | 40.7 | 25.8 | 63.4
Item scan instead of location scan | 70.1 | 40.7 | 29.4 | 72.4
Location scan instead of item scan | 68.7 | 40.7 | 28.1 | 69.1
Scan of wrong location | 62.8 | 40.7 | 22.1 | 54.4
Other type of wrong scan | 102.9 | 40.6 | 62.2 | 153.1
Total average (initial errors) | 68.5 | 40.6 | 27.8 | 68.5
Actual errors:
Item on wrong final location (condition error) | 52.2 | 40.7 | 11.5 | 28.3
Wrong item pick (mispick) | 81.1 | 40.7 | 40.4 | 99.4
Wrong quantity | 70.4 | 40.7 | 29.7 | 73.1
Total average (actual errors) | 67.9 | 40.6 | 32.3 | 79.5

Tab. 4 Number of recorded errors by error type (in 2,619 tasks)
Error type | Number of errors
Initial errors:
Scan of wrong item | 1
Scan of wrong ID code on the item | 27
Item scan instead of location scan | 28
Location scan instead of item scan | 3
Scan of wrong location | 114
Other type of wrong scan | 10
Initial errors together | 183
Actual errors:
Item on wrong final location (condition error) | 11
Wrong item pick (mispick) | 2
Wrong quantity | 20
Actual errors together | 33
Total | 216

Despite the preventive measures built into the software (scan of the ID code at the target location, scan of the ID code on the item), the experimental work finished with 33 undetected, i.e. actual, errors (15.3 % of all errors; 1.2 % error rate). In 11 cases items were put on the wrong location regardless of checking the destination's suitability with a control scan. Twenty times participants moved the wrong quantity of items; there was no system control to prevent these errors. In two cases participants moved the wrong item regardless of checking the item's suitability with a control scan. The calculated error rate of the vision system used is thus 1.2 %. If we exclude the wrong-quantity errors, which the system could hardly have prevented because no control was built in for them, the calculated error rate decreases to 0.5 %.

Figure 6 shows graphically the error rates in groups of ten tasks, in the sequence in which the work progressed. The error rate of initial errors between groups of ten tasks remained roughly the same as the work progressed; the linear trend line is nearly constant (k = 0.018). The error rate of actual errors, however, appears at first glance to grow significantly as the work progresses. We therefore also show trend lines and error rates for the case in which wrong-item and wrong-location errors are excluded; the direction coefficient of this linear trend function also approaches zero, with k = 0.013. The number of errors increases with time and with the participants' work experience within the time frame of 4 hours of continuous use. The trend line increases slightly even though its direction coefficient approaches zero; the increase could be associated with growing fatigue.

Fig. 6 Error rates in relation to work progress
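The rates quoted above follow directly from the raw counts, and a trend-line slope of the kind reported as k can be obtained with an ordinary least-squares fit. The short Python sketch below reproduces the calculations; the counts (216, 183, 33, 20 errors in 2,619 tasks) are taken from the text, while the per-group-of-ten error rates used for the fit are invented placeholders, since the underlying per-group data are not reprinted here.

import numpy as np

tasks = 2619
all_errors, detected_initial, actual = 216, 183, 33
wrong_quantity = 20                                  # no system control existed for this error type

print(round(100 * detected_initial / all_errors, 1))        # 84.7 -> share of errors caught by the glasses
print(round(100 * actual / tasks, 2))                       # 1.26 -> the ~1.2 % actual error rate
print(round(100 * (actual - wrong_quantity) / tasks, 2))    # 0.5  -> rate without the uncontrollable errors

# One way to obtain a trend-line slope k over groups of ten tasks (rates below are invented placeholders)
groups = np.arange(1, 11)
rates = np.array([0.8, 1.0, 0.9, 1.1, 1.0, 1.2, 1.1, 1.3, 1.2, 1.4])
k, intercept = np.polyfit(groups, rates, 1)                 # degree-1 least-squares fit
print(round(k, 3))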
Participants learned how to use the smart glasses and the picking protocol within an hour or less. Most participants perceived working with the glasses as fast. Some, however, observed a slowdown in the performance of the smart glasses after three hours of work, when the battery was partly discharged. More than 60 % of participants experienced right-eye problems after prolonged use of the smart glasses; the display was placed in front of the right eye, and participants often mentioned the need for frequent blinking. A major problem that we did not find in the literature review is the difficulty of reading the instructions on the smart glasses' display because of the inability to focus the view: 85.7 % of participants had, at least occasionally, to close their left eye while trying to sharpen the vision of the right eye. The phenomenon occurred when the view changed from the shelf to the display positioned close to the eye. Almost all participants described wearing the smart glasses as physically disturbing. They frequently reported ear pain due to the weight of the device; more rarely, they reported pain in the area of the nose where the smart glasses sit, as well as a burning sensation in the eyes and headaches. 78.6 % of participants reported at least slight difficulty in reading the instructions while moving around the storage environment.

Participants also evaluated the entire experience of using the smart glasses with the NASA TLX questionnaire, on the basis of which an effort index of 49.79 was calculated (Figure 7). The NASA Task Load Index (NASA-TLX) is a widely used, subjective, multidimensional assessment tool that rates perceived workload — in our case, the work with smart glasses. Mental Demand describes how much mental and perceptual activity was required. Physical Demand describes how much physical activity was required. Temporal Demand describes how much time pressure the participant felt due to the pace at which the task elements occurred. Overall Performance describes how successful the participant was in performing the task. Effort describes how hard the participant had to work (mentally and physically) to accomplish his or her level of performance. Frustration Level describes how irritated, stressed and annoyed versus content, relaxed and complacent the participant felt during the task. On average, Performance received the highest score, 15.76 % of all points, but the differences between participants are huge. We can conclude that each participant experienced working with the smart glasses very individually. The participant whose smart glasses kept sliding off his head (Gračner et al., 2019) certainly felt the strongest physical demand and effort.

Fig. 7 Results from the NASA TLX questionnaire
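For reference, the standard NASA-TLX workload score is a weighted average of the six subscale ratings, with the weights obtained from 15 pairwise comparisons of the subscales. The sketch below shows that calculation in Python with invented ratings and weights, since individual participant data are not reproduced here.

def nasa_tlx_score(ratings: dict, weights: dict) -> float:
    """Weighted NASA-TLX workload score: sum(rating * weight) / 15.
    Ratings are on a 0-100 scale; the weights come from 15 pairwise comparisons."""
    assert sum(weights.values()) == 15
    return sum(ratings[d] * weights[d] for d in ratings) / 15

# Invented example for one participant (not measured data)
ratings = {"mental": 55, "physical": 60, "temporal": 45,
           "performance": 30, "effort": 65, "frustration": 50}
weights = {"mental": 3, "physical": 4, "temporal": 2,
           "performance": 1, "effort": 4, "frustration": 1}
print(round(nasa_tlx_score(ratings, weights), 2))   # 55.67 for these invented numbers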
V. DISCUSSION

The calculated error rate of 0.5 % from our laboratory experiment matches the results of comparable scientific studies, which mostly report error rates between 0.1 % and 0.7 %. The types of observed errors are also similar, with the exception of the omission error: in our case this type of error did not occur because of a built-in software safeguard, which made it technically impossible to omit a task from an order list. Our results show a large number of initial errors. Herter (2014) came to a similar conclusion. This is also in line with the observation that order pickers make more errors using HMDs than order pickers using other methods or technologies; however, users of HMDs achieve lower actual error rates because of built-in preventive elements or systems, which are able to detect a large variety of errors immediately when they occur. We would like to emphasise the importance of the system configuration and its major impact on error rates. The more safeguards the system has built in, the lower its error rate; the safeguards prevent initiated errors from becoming actual errors. A low error rate, however, does not mean that the system reaches maximal employee productivity: more initial errors mean more time spent correcting them. However, this time is probably less expensive than penalties or losing customers.

It is important to take the time to introduce workers to new technologies and procedures, as this can significantly help reduce errors and increase productivity. Consistent with previous findings, participants described the work pace as fast, and we also detected a steep learning curve. The calculated NASA TLX effort score (49.79) is among the highest in comparison to previous studies, but it is still comparable to them. Through the laboratory work, together with the participants, we compiled a comprehensive list of recommendations for the manufacturer of the smart glasses and for future users, who need to pay close attention to the design of the device, its functionality and, indirectly, the composition of the picking system as a whole. For our participants the smart glasses were too heavy, the cables were annoying, and the device could not be fully adjusted to the different anatomical characteristics of individuals. Participants also disliked scanning the ID codes on the lower and higher shelves, as they had to bend down or stretch upwards.

RQ1: Does the implementation of vision picking impact picking error rates? Yes, the implementation of vision picking impacts picking error rates. Actual error rates decrease thanks to built-in safeguards. Prevention and error detection can be achieved with software, a real-time location system, installed sensors and cameras, machine vision, etc.

RQ2: What error types appear during vision picking? The types of errors that appear depend on the system configuration. Installed preventive safeguards can make some types of errors absent or very rare. In general, initial errors are quite common in vision systems; how many of them are detected and corrected depends on the system configuration, and some will certainly proceed to actual errors. All five types of actual errors can theoretically appear, but an excellent and perfectly configured system will produce almost none.

RQ3: Are vision picking systems user-friendly? Each vision system must be evaluated individually; no general assessment is possible. They are certainly more user-friendly than PbP, since much less effort is required to do the same job and no searching for information is needed. A more user-friendly system has an ergonomically designed HMD. HMDs are in a phase of intensive evolution, and we believe that the most user-friendly version is still in the development phase. The user must be aware that he or she will wear the device for at least 8 hours per day and that picking is the most physically demanding activity in a warehouse.

VI. CONCLUSIONS
The use of vision systems in the picking process results in lower actual error rates than conventional methods such as PbP. With the vision system, the most common errors during picking were the selection of the wrong product, the wrong quantity, and process errors. The errors in our experiment can mostly be attributed to a lack of concentration in combination with the inability of the system to detect all types of errors. Use of a vision system does not always lead to the most effective picking method: since the correction of initial errors results in additional work, the productivity of the picker is lowered. The consequences of the errors are mainly reflected in prolonged picking time and reduced motivation of the picker; in real-life implementations, however, the consequences may be more severe. Despite a steep learning curve, a high level of motivation and high user acceptance rates, the vision system still has many drawbacks. The fact that all participants in our experiment characterised the wearing of the glasses as physically disruptive, coupled with the reported inability to sharpen their vision, pain in the eyes, nausea and headaches, suggests that designers of vision systems have to take care of its elements from the viewpoints of ergonomics, productivity and energy consumption.

REFERENCES
1. N. Vujica Herzog, B. Buchmeister, A. Beharic and B. Gajšek, "Visual and optometric issues with smart glasses in Industry 4.0 working environment", Advances in Production Engineering & Management, vol. 13, no. 4, pp. 417-428, 2018. https://doi.org/10.14743/apem2018.4.300
2. J. A. Tompkins, J. A. White, Y. A. Bozer and J. M. A. Tanchoco, "Facilities Planning", 2nd ed., John Wiley & Sons, Inc., Hoboken, NJ, 1996.
3. A. D'Halleweyn and L. Pleysier, "PERSBERICHT: Smart glasses in logistiek snel terugverdiend", 2015. https://vil.be/2015/persbericht-smart-glasses-in-logistiek-snel-terugverdiend/#.XdTjo_ZFwuV
4. K. H. Langen, "Strategien für die Kommissionierung mit Regalbediengeräten, Karussellägern und Sortern", Jahrbuch der Logistik, Verlagsgruppe Handelsblatt, Düsseldorf, 2001.
5. R. T. Azuma, "A survey of Augmented Reality", Presence: Teleoperators and Virtual Environments, Hughes Research Laboratories, Malibu, 1997.
6. X. Li, I. Y. H. Chen, S. Thomas and B. A. MacDonald, "Using kinect for monitoring warehouse order picking operations", in Proceedings of the Australasian Conference on Robotics and Automation, vol. 15, p. 6, 2012.
7. W. A. Günthner, N. Blomeyer, R. Reif and M. Schedlbauer, "Pick-by-Vision: Augmented Reality unterstützte Kommissionierung", TU München, Abschlussbericht, 2009.
8. A. Lolling, "Analyse der menschlichen Zuverlässigkeit bei Kommissioniertätigkeiten", Shaker Verlag, Aachen, 2003.
9. K. H. Dullinger, "Das richtige Kommissionier-Konzept – eine klare Notwendigkeit", Jahrbuch Logistik, Verlagsgruppe Handelsblatt, Düsseldorf, 2005.
10. B. Schwerdtfeger, R. Reif, W. A. Günthner, G. Klinker, D. Hamacher, L. Schega, I. Bockelmann, F. Doil and J. Tümler, "Pick-by-Vision: A first Stress Test", in IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 115-124, 2009. https://doi.org/10.1109/ISMAR.2009.5336484
11. B. Schwerdtfeger, R. Reif, W. A. Günthner and G. Klinker, "Pick-by-vision: there is something to pick at the end of the augmented tunnel", Virtual Reality, vol. 15, no. 2-3, pp. 213-223, 2011.
12. M. Murray, "Order Picking In The Warehouse - Supply Chain / Logistics", The Balance, 2017. https://www.thebalance.com/order-picking-in-the-warehouse-2221190
13. T. Rammelmeier, S. Galka and W. A. Günthner, "Active Prevention of Picking Errors by Employing Pick-by-vision", in M. Schenk (ed.), 4th International Doctoral Students Workshop on Logistics, Otto-von-Guericke-Universität Magdeburg, Magdeburg, pp. 79-83, 2011.
14. R. Reif and W. A. Günthner, "Pick-by-vision: augmented reality supported order picking", The Visual Computer, vol. 25, no. 5-7, pp. 461-467, 2009.
15. R. Reif, B. Schwerdtfeger and G. Klinker, "Pick-by-Vision comes on age: evaluation of an augmented reality supported picking system in a real storage environment", in 6th International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa, Pretoria, pp. 23-31, 2009.
16. R. Reif and D. Walch, "Augmented & Virtual Reality applications in the field of logistics", The Visual Computer, vol. 24, no. 11, pp. 987-994, 2008.
17. H. Iben, C. Ruthenbeck and T. Klug, "Visual Based Picking Supported by Context Awareness: Comparing Picking Performance Using Paper-based Lists Versus Lists Presented on a Head Mounted Display with Contextual Support", in Proceedings of the International Conference on Multimodal Interfaces, pp. 281-288, 2009.
18. K. A. Weaver, H. Baumann, T. Starner, H. Iben and M. Lawo, "An Empirical Task Analysis of Warehouse Order Picking Using Head-Mounted Displays", in Conference on Human Factors in Computing Systems - Proceedings, vol. 3, pp. 1695-1704, 2010. https://doi.org/10.1145/1753326.1753580
19. H. Baumann, "Order picking supported by mobile computing", University of Bremen, 2013. https://d-nb.info/1072047004/34
20. A. Guo, S. Raghu, X. Xie, S. Ismail, X. Luo, J. Simoneau, S. Gilliland, H. Baumann, C. Southern and T. Starner, "A Comparison of Order Picking Assisted by Head-Up Display (HUD), Cart-Mounted Display (CMD), Light, and Paper Pick List", in ACM International Symposium on Wearable Computers, Seattle, pp. 71-78, 2014.
21. A. Guo, X. Wu, Z. Shen, T. Starner, H. Baumann and S. Gilliland, "Order Picking with Head-Up Displays", Computer, vol. 48, no. 6, pp. 16-24, 2015.
22. S. Pickl, "Augmented Reality for Order Picking Using Wearable Computers with Head-Mounted Displays", University of Stuttgart, 2014. https://elib.uni-stuttgart.de/handle/11682/348
23. J. Herter, "Augmented Reality supported Order Picking using Projected User Interfaces", University of Stuttgart, 2014. https://d-nb.info/1063936284/34
24. X. Wu, M. Haynes, Y. Zhang, Z. Jiang, Z. Shen, A. Guo, T. Starner and S. Gilliland, "Comparing Order Picking Assisted by Head-Up Display versus Pick-by-Light with Explicit Pick Confirmation", ResearchGate, 2015. https://www.researchgate.net/publication/301372398_Comparing_order_picking_assisted_by_head-up_display_versus_pick-by-light_with_explicit_pick_confirmation
25. M. Funk, A. Sahami Shirazi, S. Mayer, L. Lischke and A. Schmidt, "Pick from here! An interactive mobile cart using in-situ projection for order picking", in ACM International Joint Conference on Pervasive and Ubiquitous Computing, Osaka, pp. 601-609, 2015.

AUTHORS
A. Ela Vidovič is a student at the Faculty of Logistics, University of Maribor, Celje, Slovenia.
B. Brigita Gajšek, Ph.D., is a professor at the Faculty of Logistics, University of Maribor, Celje, Slovenia (e-mail: brigita.gajsek@um.si).

Manuscript received 26. 11. 2019. Published as submitted by the author(s).