ELEKTROTEHNIŠKI VESTNIK 89(3): 81–93, 2022
ORIGINAL SCIENTIFIC PAPER

An augmented reality application for depicting space using the principles of linear perspective

Katarina Bebar 1, Bea Tomšič Amon 1, Franc Solina 2
1 Faculty of Education, University of Ljubljana, Department of Visual Art Education, Kardeljeva ploščad 16, 1000 Ljubljana, Slovenia
2 Faculty of Computer and Information Science, University of Ljubljana, Večna pot 113, 1000 Ljubljana, Slovenia
E-mail: katarina.bebar@gmail.com, bea.tomsic@pef.uni-lj.si, franc.solina@fri.uni-lj.si

Abstract. The linear perspective drawing system is taught in schools with the operational aim of developing a sense of how to construct the illusion of space, and with the overall aim of developing observation, spatial imagery and visualisation skills. Despite these aims, teaching practice in the first years of faculty study has made it clear that students have considerable difficulties with the visual representation of the space that surrounds them. Because mobile applications are contemporary, effective and, above all, widely accessible teaching tools, we have designed and tested the effectiveness of an innovative didactic tool: a purpose-developed, augmented reality-based mobile application for the Android system. The application acts as an interface and, through digitisation, draws attention to the regularities that arise in the transfer from real space to visual space, and to the differences between experiencing and understanding the world through interfaces and without them. Since the results of our research have shown that an augmented reality-based mobile application used in linear perspective drawing classes can help solve problems with spatial image representations and raise awareness of linear perspective concepts, in this paper we present its design and the conditions for its operation.

Keywords: augmented reality, mobile learning, spatial perception, art education, linear perspective, new media didactics

Prikaz prostora na osnovi linearne perspektive s pomočjo obogatene resničnosti

Sistem risanja linearne perspektive v šolah učimo z operativnim ciljem razvijanja občutka za gradnjo iluzije prostora ter s splošnim ciljem razvoja sposobnosti opazovanja, prostorske predstavljivosti in vizualizacije. Kljub tovrstnim ciljem pa se je skozi pedagoško prakso v prvih letnikih fakultet izkazalo, da imajo, kadar gre za likovno reprezentacijo prostora, ki jih obkroža, študenti precejšnje težave. Ker so mobilne aplikacije sodobno, učinkovito in predvsem vsem dostopno učno orodje, smo zasnovali in preizkusili učinkovitost inovativnega didaktičnega orodja, namensko razvite mobilne aplikacije za sistem Android, ki temelji na obogateni resničnosti in deluje kot vmesnik ter z digitalizacijo opozarja na naravo in zakonitosti, ki se pojavljajo pri prehodu iz realnega v vizualni prostor, oziroma na razlike med izkušnjo in razumevanjem sveta prek vmesnikov in svetom brez njih. Ker so rezultati naše raziskave pokazali, da lahko mobilna aplikacija, ki temelji na obogateni resničnosti, pri pouku risanja linearne perspektive pomaga pri reševanju težav s prostorskimi predstavami slik in ozaveščanju o konceptih linearne perspektive, v tem prispevku predstavljamo zasnovo in pogoje za njeno delovanje.
Received 26 April 2022
Accepted 17 May 2022

1 INTRODUCTION

In 1974 Arnheim [1] wrote a statement that transcends the period from which it comes in both directions, namely, that the pursuit of surpassing or enhancing works of art through the application of newer materials and techniques is entirely consistent with the practice of contemporary artists and teachers. According to Hockney [2], this very impulse or yearning to push the boundaries of what is possible in the field of representing space led to the discovery of a range of optical devices that were further developed into new media tools and devices for creation and teaching, which are still in use today and opened up a whole new world to artists and teachers. In line with the European digital and pedagogical orientation and development, today students are increasingly presented with teaching content using pedagogical approaches that are very different from the traditional ones, thus allowing students to have learning experiences using the new technologies and their content, which Radu says are more accessible than ever before [3]. Creating high quality didactic tools and environments that motivate learning about and understanding things that are not directly accessible to the senses has become one of the greatest challenges in educational science [4].

1.1 The invention and development of linear perspective

A system for drawing linear perspective was apparently first described in Baghdad in the 11th century by the mathematician Ibn al-Haytham, known to the Western world as Alhazen. The basis for the system of drawing according to the principles of linear perspective apparently came from his studies of optics [5]. The same historical origin of linear perspective is also described by Edgerton [6], who, like Hockney [2], emphasizes that linear perspective was not invented in the Renaissance, but rather rediscovered, when it was attributed to the ancient Greeks rather than to Arabic science. Belting [5] notes that ancient Greece knew the concept of perspective but did not have the scientific apparatus to develop such a system. Ptolemy in the 2nd century AD already knew perspective but called it “scenografia.” The geometric method for drawing linear perspective was also described by Vitruvius as early as the 1st century BC.

Throughout history, visual artists have invented many analog optical devices, such as lenses, concave mirrors, the camera lucida, and the camera obscura, in an effort to better understand human perception of space and to bring it as close as possible to its real image in their pictorial representations of space. Hockney [2] claims that many Western painters, such as Dürer, Cagnacci, Hals, Rembrandt, Velázquez, Holbein, etc., had used such optical devices since the early 15th century. In the theses that Hockney developed with C. M. Falco, he proposed that the progress in the realistic representation of the world in Western art since the Renaissance was mainly due to optical instruments, whose use gave painters a new insight into how to view and represent real space according to the newly discovered principles of linear perspective.
In the 21st century, linear perspective is no longer exclusively the domain of art and mathematics, but a concept that we use in various conceptual transformations and operate with in a broader framework: in the fields of visual arts, computer games, theatre and film scenography, art history, sociology, etc. [7, 8, 9].

1.2 Use of linear perspective in art education

The Slovenian Curriculum for Art Education [10] states that “The fundamental task of art education is the development of a student’s artistic ability or competence, which results from an understanding of visual (natural, personal, social, and cultural) space and is expressed in the active transformation of that space into an art space”. Gardner [11] understands the artistic ability or competence as the ability to imagine space and defines it (along with the ability to perceive the visual world correctly, to make transformations or modifications of original perceptions, and to recreate aspects of one’s visual experiences) as an integral part of spatial intelligence. Gardner [11] adds that “we activate spatial skills when recognizing objects and scenes and when working with graphic representations of two-dimensional or three-dimensional versions of real-world scenes or other symbols, such as maps, diagrams, or geometric figures”. In order to effectively perceive and understand space and the objects within it, we need to develop spatial skills from an early age, which are, among other things, “closely related” to representations of three-dimensional space on a two-dimensional surface [12]. Therefore, Tomšič Čerkez and Zupančič [13] refer to the development of spatial skills or spatial perception as “one of the most important goals of arts education”. This is because, roughly speaking, we understand learning as a process of acquiring skills and understanding, and the consequence of this process is the ability to perform and/or understand something that we could not perform or understand before [4]. Therefore, for the best possible representation of space according to the principles of linear perspective, or of objects on a two-dimensional surface, it is not enough for children to be well acquainted with the real features of space; they must also be able to skilfully coordinate the different shapes with the features of visual perception that enable us to see two-dimensional, drawn objects as three-dimensional ones. The construction of appropriate spatial representations or perspective drawings can be performed by children only on the condition that they are well acquainted with the theoretical starting points and receive high-quality didactic approaches in the classroom [14]. However, individual differences in children’s development should also be taken into account, as they are reflected in “maturation and learning processes, development of psychomotor skills, cognition and acquisition of knowledge about the environment, as well as in their development of skills and the need to demonstrate knowledge” [15]. This development can be divided into several phases or periods (e.g. doodling, symbolic function, etc.), although it should be emphasized that the time of onset and the duration of each phase may vary. The tendency to represent space in a linear perspective peaks in children around the age of fourteen [12], because only then are they able to understand and apply the mathematical knowledge required to correctly represent space according to the principles of drawing a linear perspective [16, 11].
1.3 Virtual and augmented reality in education

“The use of diverse and high quality teaching aids and tools is certainly one of the most important didactic decisions today in the teacher’s planning of art lessons. We consider teaching tools to be of a high quality if they are age appropriate, understandable, clear, have a suitable format to be shown to all students in the class, and are also suitable for display with tools” [17]. In recent years, there has been a strong interest in the use of new technologies [18, 19] that can bring virtual information into the learning process or into the real, physical environment of students via smartphones, webcams, and glasses, allowing them to interact with virtual content [3]. In contrast to the early 1990s, when we could only experience virtuality via a bulky laptop-connected screen, the latter is now available to all smartphone owners, whose hardware and software allow digitally transmitted content to be combined with virtual information in real time and almost anywhere within the confines of real space [20, 21]. Education in which we use portable new media such as cell phones, tablets, and laptops to teach or disseminate new material is now referred to as mobile education [22]. Because this is a relatively new phenomenon, the theoretical basis of mobile education is still in its infancy [23].

When we talk about virtuality in the classroom, we are usually talking about learning materials, information, or experiences that we provide to students through virtual or augmented reality. The main difference between the two is that virtual reality completely replaces the real space, while augmented reality merely complements it [24], thus enhancing the user’s sensory and cognitive reality [25]. Experiences in a virtual environment are simulated. For this reason, they can be similar to the real environment or completely different from it [26]. In augmented reality, the concept of experience is different: the experiences are directly related to the real environment and the events that take place in it. Thus, virtual reality for educational purposes can be used in cases where we want students to experience situations that are dangerous or impossible to experience in the real environment (e.g., presentations and scientific studies of various cultural and historical objects and/or events). Augmented reality in the educational process can be used especially when we use virtual elements to illustrate phenomena and other things that we would otherwise not be able to perceive with our senses at a given moment in the real world. Its use also offers, among other things, the possibility of manipulating virtual objects based on the real world. Using interaction techniques supported by augmented reality technology, users can change the position, shape, and/or graphical properties of virtual objects, move around them, and view them from different angles. Augmented reality, which has the ability to place virtual elements seemingly in the real world, can thus be a medium that improves the efficiency and attractiveness of learning and teaching through a unique experience of combining or simultaneously experiencing the real and the virtual world.

Augmented reality allows teachers to convey information and students to interact between the real and virtual worlds in ways they have never known before [27].
Recent research in the field of using augmented reality in the teaching process also shows that it successfully addresses different learning styles and increases the motivation of the students involved. For example, research by Saltan and Arslan [28] has shown that augmented reality learning can address multiple learning styles simultaneously (visual, auditory, and kinaesthetic), unlike many traditional teaching tools. The results of Radu’s analysis [3], which included a study of twenty-six publications comparing the learning effectiveness of students with and without augmented reality-based applications, show that the use of augmented reality improves the effectiveness of learning spatial concepts, the comprehension of learning material, and collaboration with peers, and increases motivation. Radu lists the positive and negative effects of augmented reality on students, identifies important aspects of individual didactic tools or teaching aids when used in the classroom, and defines their impact on learning in three stages.

Our hypothesis is that using computer vision techniques to depict the perspective lines and the vanishing point overlaid on a live video image, captured by a mobile phone, sensitises the users to correctly interpret the structure of the observed 3D environment. In this way they could correctly draw the environment using the tools of linear perspective. This hypothesis is tested by comparing two groups of students given the same task: to draw their observed environment using linear perspective. It is shown that the students who had the experience of using the new application perform their task better and faster.

2 MATERIALS AND METHODS

Children learn about linear perspective in the ninth grade of primary school, since it is only at this age that they reach the appropriate artistic maturity or level of artistic expression that enables them to understand the schematisation and systematic nature of representing space according to the principles of linear perspective. The fact that space is represented on a two-dimensional surface according to the principles of linear perspective can be recognised by the fact that the size, shape and position of objects are determined by a network of imaginary lines that converge in a single point on the horizon. This point is called the vanishing point, and it lies on the horizon, the line that separates the ground from the sky in the drawing. The horizon is always at the height of the viewer’s eyes and determines the height from which we want to show the scene we are drawing [29].

To correctly depict linear perspective, the following conditions must be met:
• the eye is at rest,
• the image plane is perpendicular to the ground plane,
• the distance between the eye and the image plane and the distance between the eye and the observed object are constant.
Compliance with the above conditions assures a correct representation of space on a two-dimensional surface with a well-defined geometric starting point [30], which gives the impression of depth or three-dimensionality.
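These conditions correspond to the standard central (pinhole) projection model. As a brief illustration (our notation, not used elsewhere in the paper), place the eye at the origin, let the viewing direction be the $z$ axis and the image plane lie at distance $d$ from the eye; a scene point $(X, Y, Z)$ then projects to
\[
x' = d\,\frac{X}{Z}, \qquad y' = d\,\frac{Y}{Z}.
\]
For a family of parallel lines with direction $(D_X, D_Y, D_Z)$, $D_Z \neq 0$, the points $P + tD$ project, as $t \to \infty$, to
\[
\left( d\,\frac{D_X}{D_Z},\; d\,\frac{D_Y}{D_Z} \right),
\]
which is independent of $P$. All lines of such a family therefore share a single vanishing point, and for horizontal directions ($D_Y = 0$) this point lies on the line $y' = 0$, the horizon at the height of the eye.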
Drawing linear perspective can also be an excellent tool to determine the level of development of spatial representation. This is because pedagogical practice in the early college years has shown that students have problems with the visual representation of space, resulting in unconvincing art products whose spatial representations do not seem plausible: when drawing a linear perspective, they tilt and twist the space, often place the horizon too high, in an unnatural position, they bend lines that should be straight, etc. To solve the problem of understanding and representing the space according to the principles of linear perspective, we have developed a mobile application for the Android system based on augmented reality. It draws attention to the differences between experiencing the world through interfaces and without them, and enables an easier transition from the real (through the digital) to the art space. We integrated new media technology, i.e. a cell phone with an augmented reality-based application, into the educational process with the specific goal that users, while observing their visual environment through the application’s perspective infrastructure, become aware of their own observation to the extent that their drawn representations improve in terms of correctly constructed drawings of the space according to the principles of linear perspective. The broader goal is to stimulate the students to raise questions such as why they see as they see, how they construct an image according to the requirements of optical realism, and what lies behind the images.

Our research group has extensive experience in developing educational applications and applications involving image processing with computer vision methods, such as a novel user interface for video observation over the Internet [31], dynamic anamorphosis as a special computer-generated user interface that adapts itself to the position of the user in the space [32], a study of the missing eye contact in videoconferencing together with a simple method for improving it [33], educational applications for teaching spatial design [34], and user interface design for people with severe learning difficulties [35]. We therefore approached the research problem described in the hypothesis with confidence that we could solve it.

Figure 1. Matrix test, taken from lectures by Črtomir Frelih (Source: personal archive of Katarina Bebar)

2.1 Mobile application for detection of linear perspective

The idea for the application comes from and is based on lectures and research by Črtomir Frelih (Figure 1). Repeating his experiments with a matrix, similar to those used by teachers, where a transparency with drawn projection rays is placed over a two-dimensional photograph of real space and moved until the rays intersect with the visible edges of the architecture, shows that even a simple perspective grid thrown over a photograph helps in understanding the mechanisms of vision that the Renaissance rediscovered with the construction rules of linear perspective. Using such tools certainly sensitises students to the question of creating an image according to the rules of linear perspective, allowing for an artistic reconstruction of visual space.

To use the application in our research, the analog design of the above learning device was upgraded and transformed into a new media didactic tool, an application for Android mobile phones. The nature of the new media enables easy accessibility, mass use, portability and, last but not least, the possibility of customising its content.
The mobile application finds and records the elements of linear perspective (the vanishing point, the orthogonal lines and the horizon) in the space perceived by the phone camera and displayed in real time on the phone screen, and keeps them on the screen even when the camera is moved and its viewing angle changes, as shown in the examples in Figures 2 and 3. See also the video demonstration of our application [36].

Figure 2. Demonstration of how our mobile app works by moving the mobile phone from the center position to the right side. The perspective lines, the vanishing point and the horizon move accordingly. Photo: Katarina Bebar

Figure 3. Demonstration of how our mobile app works by moving the mobile phone from the center position towards the ceiling. The perspective lines, the vanishing point and the horizon move accordingly. Photo: Katarina Bebar

Applications similar to ours found on the World Wide Web are primarily aimed at architects and, at first glance, give the impression that similar applications already exist on the market (Morpholio Trace [37] and Procreate [38]), but they differ from ours in their principle of operation and interaction with the user. Most applications that work with perspective grids (matrices) to visualise perspective elements and/or simplify rendering according to these rules work on the basis of static images. This means that the applications record the perspective elements on a photograph, so they cannot effectively illustrate the real-time changes that occur when the camera is moved. Another difference worth mentioning is that these applications require the user to determine the vanishing point themselves and to align the drawn perspective lines with the edges in the image. This means that users of these applications must be well acquainted with the laws of drawing linear perspective.

Our application was developed in the Computer Vision Laboratory at the Faculty of Computer and Information Science, University of Ljubljana [39]. It uses computer vision methods to detect straight edges in the captured images and combines them into perspective lines to illustrate the horizon and the vanishing point in real time, thus sensitising the user to the perception of three-dimensional space and the role of perspective lines in its two-dimensional representation [36]. Its main task is to perceive the space under different conditions and to raise awareness of its composition by automatically detecting edges in an image of the space captured in real time and drawing a matrix over it with all the key elements of linear perspective. This means that the user can track the changes in proportions and relationships that occur when the orientation of the image plane or the distance between the eye and the image plane, i.e. the conditions for drawing a linear perspective, are changed. Unlike the other applications, ours autonomously detects and illustrates the vanishing point and aligns the computer-generated perspective lines with the edges in the image, following the movement of the user or the camera in space [36]. The user therefore needs no prior knowledge of linear perspective principles to use it.
2.2 Description of our mobile didactic application

We have developed a mobile phone application that searches for perspective lines in captured images in real time, thus further sensitising the user to the perception of three-dimensional space and the role of perspective lines in its two-dimensional representation.

The application supports the following functionalities:
• detection of linear perspective elements in an image captured by the mobile phone camera, depicting the perspective lines, the vanishing point and the horizon;
• extraction of linear perspective elements or perspective grids in the captured image;
• continuous and dynamic changing of the proportions and relationships between the elements of the linear perspective as the device is moved.

The application requires:
• a mobile device running the Android mobile operating system and suitable lighting of the scene or interior space captured by the camera of the mobile device.

The application uses the following technologies and tools:

1) OpenCV software library
• Android version 3.4.2;
• the following classes of the software library:
  • Mat – a class for working with multidimensional matrices, used to store the pixels of a captured image and as input to various image processing methods;
  • Size – a class defining the size to which the source image is resized;
  • Point – a class for working with points, used to store information about a found point and to define distances and quadrants;
  • Rect – a class for defining and working with rectangles;
  • Vec and Scalar – classes for storing vector data, used to store data about distances and colours;
  • LineSegmentDetector – a class containing methods for finding line segments;
• and the following functions:
  • resize – resizing the image,
  • cvtColor – converting the image from one colour space to another,
  • pointPolygonTest – checking whether a selected point lies inside a selected polygon,
  • line – drawing a line in the image.
For the display on the computer we also used the CommandLineParser class, which makes it easier to pass arguments when running the program, and the VideoCapture and VideoWriter classes, which allow capturing and displaying individual images from the video. The imshow function is used to display the processed images.

2) Android mobile operating system
• The Android NDK is used to write part of the application in native code using programming languages such as C and C++. The main part of the application is written in the C++ programming language, as this makes it portable, i.e. the application runs both on a computer and on a mobile device. The OpenCV library is also implemented in C++.
• The view, the user interface and the image capture from the camera are programmed in the Java programming language, which enables calling the native methods through the Android NDK.

3) Tools
• The integrated development environments Android Studio (to support the development of the application and the user interface, debugging on the device of choice, and code compilation) and CLion (to support writing the C++ code, debugging using GDB, and compilation using CMake and GCC) are used.
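For illustration only, the following sketch shows how the desktop variant mentioned above could be wired together from the listed OpenCV classes (CommandLineParser, VideoCapture, imshow). It is our own minimal example, not the application's source code; detectPerspective is a hypothetical placeholder for the processing phases described in the next subsection.

```cpp
// Minimal desktop harness sketch (assumption: illustrative only, not the authors' code).
#include <opencv2/opencv.hpp>
#include <cctype>

// Hypothetical placeholder for Phases I-VI of Section 2.2.1.
static cv::Mat detectPerspective(const cv::Mat& frame) {
    return frame.clone();   // the real application would draw the perspective grid here
}

int main(int argc, char** argv) {
    // CommandLineParser simplifies passing the input (camera index or video file).
    cv::CommandLineParser parser(argc, argv,
        "{@input | 0 | camera index or path to a video file}");
    cv::String input = parser.get<cv::String>("@input");

    cv::VideoCapture cap;
    if (input.size() == 1 && std::isdigit(input[0]))
        cap.open(input[0] - '0');     // e.g. "0" opens the default camera
    else
        cap.open(input);              // otherwise treat the argument as a file path
    if (!cap.isOpened()) return 1;

    cv::Mat frame;
    while (cap.read(frame)) {
        cv::Mat out = detectPerspective(frame);
        cv::imshow("linear perspective", out);   // display the processed image
        if (cv::waitKey(1) == 27) break;         // ESC quits
    }
    return 0;
}
```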
2.2.1 Operation of our application

The operation of the application is divided into the following phases or steps, which consist of image processing and mathematical operations to find the elements of the linear perspective and place them in the real-virtual space.

Figure 4. Captured image of a corridor

Phase I: The camera captures the image of a corridor (Figure 4).

Phase II: Image preparation for further processing
The image is scaled down to a width of 640 pixels and a height of 360 pixels (a reduction of the original image size along the x and y axes) to avoid stepped edges. The colour image is then converted to greyscale (Figure 5). These are the requirements for the optimal performance of the Line Segment Detector (LSD) algorithm in the next phase.

Figure 5. Conversion of the colour image to greyscale

Phase III: LSD (Line Segment Detector) processing
A main tool of computer vision is edge detection. It detects distinct changes between pixels in an image that correlate with changes in the real space:
• discontinuities in depth,
• discontinuities in surface orientation,
• changes in material properties, and
• changes in the illumination level, etc.
Ideally, the result of using an edge detector on a captured image is a set of interconnected curves that characterise all of the above changes or situations. Using an edge detection algorithm on an image significantly reduces the amount of data to be processed, filtering out information that is considered less relevant while still preserving the important structural features of the image. The edges extracted in this way fall into two groups: angle-dependent and angle-independent edges. The angle-independent edges usually reflect properties inherent to three-dimensional objects (such as texture and surface shape), while the angle-dependent edges, which can move or change with changes in the viewing angle, usually reflect the geometry of the scene.

One of the many algorithms for detecting or finding edges is the LSD algorithm. It is used in our application mainly because of its fast computation (which means a good performance of the application, i.e. fast responsiveness to changes in the image of the real space in real time). The LSD algorithm is a set of different mathematical methods whose common goal is to identify points in a digital image where the brightness of the image changes significantly, or points where there is a brightness discontinuity [40]. These points are usually organised in a set of curved line segments called edges. To understand how the LSD algorithm works, the concepts of the contour, the gradient and the edge are important. The gradient represents the change when the grey level of an image changes from dark to light or vice versa. When gradient changes are fast, so are the transitions between grey levels in the image; especially when they occur over a short distance, these changes are perceived as contours or edges in the image (Figure 6). Figure 6 shows a brightness discontinuity or edge segment in the image (left) and a zoom of the edge segment showing the gradient or transition between dark and light grey (right). The LSD algorithm computes gradients and thus edges using matrices or masks that assign each pixel one of 256 greyscale values, where a value of 0 represents black and a value of 255 represents white. Sudden changes in the intensity of the greys in a certain area of the image, together with changes in the intensity of the pixels, i.e. the pixels to which the algorithm assigns the highest values, imply line segments or edges. The result of the LSD algorithm is a set of disconnected lines in the form of curve segments, each indicating a potential edge in the captured image.

Figure 6. Changes of the grey level at an edge in an image
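A minimal sketch of Phases II and III as they could be written with OpenCV 3.4 follows; the 640 × 360 size and the greyscale conversion are as given in the text, while the function and variable names are ours, not the application's.

```cpp
// Sketch of Phases II-III (assumption: illustrative, not the application's source code).
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Vec4f> findSegments(const cv::Mat& frame) {
    // Phase II: scale the image down to 640x360 and convert it to greyscale,
    // the input required for the Line Segment Detector to perform well.
    cv::Mat small, grey;
    cv::resize(frame, small, cv::Size(640, 360));
    cv::cvtColor(small, grey, cv::COLOR_BGR2GRAY);

    // Phase III: LSD returns every detected edge as a line segment (x1, y1, x2, y2).
    cv::Ptr<cv::LineSegmentDetector> lsd = cv::createLineSegmentDetector();
    std::vector<cv::Vec4f> segments;
    lsd->detect(grey, segments);
    return segments;   // the later phases filter and merge these segments
}
```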
Phase IV: Filtering the edge lines
In the first part of this phase, the algorithm arranges the found edges (Figure 7) into quadrants centred on the calculated centre, i.e. the current vanishing point estimate (if this has not yet been calculated, the image centre is used, as we expect the user to initially point the phone towards the centre of the scene), and then sorts them according to the quadrants and their slopes. This is done by looking at all the edges returned by the LSD algorithm and filtering them by quadrant. In the odd quadrants, it selects and keeps the edges with a slope between 0.2 and 1.2 radians (11.4° and 68.7°), and in the even quadrants it selects and keeps the edges with a slope between -0.2 and -1.2 radians (-11.4° and -68.7°). Edges that do not conform to the specified slope are discarded. This gives a refined set of edges which contribute to the estimate of the position of the vanishing point (Figure 8).

Figure 7. Original edge segments
Figure 8. Filtered edge segments

Combining shorter edge segments into longer segments
Incomplete image data and/or incomplete results of the edge detection algorithm can lead to two types of deficiencies: breaks in the curves (after applying the edge detection algorithm, a curve can be a set of unconnected segments) or spatial deviations from the ideal line, circle or ellipse. Our method detects and corrects such deficiencies. It works by combining the disconnected line segments of the detected edges in the transformed space into so-called object candidates, i.e. longer segments, which are important for the determination of the vanishing point. In our application, the algorithm does this in the following steps:
1) It arranges the edge segments by their length, from the longest to the shortest.
2) Around each selected edge segment, it constructs a polygon that is extended to the centre of the image in one direction and to the edge of the image in the other.
3) The shorter edges contained in the polygon are used to increase the length of the longer edge.

Figure 9. Two short edge segments (within the white polygon) will be joined into a single edge segment.
Figure 10. Two short edge segments are joined into a single edge segment.
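The slope filter of Phase IV could look roughly as follows; this is an illustrative sketch under our own assumption about how the quadrants are numbered, and is not the application's actual code.

```cpp
// Sketch of the Phase IV slope filter (assumption: quadrant convention is ours).
#include <opencv2/opencv.hpp>
#include <vector>
#include <cmath>

std::vector<cv::Vec4f> filterBySlope(const std::vector<cv::Vec4f>& segments,
                                     const cv::Point2f& centre) {
    std::vector<cv::Vec4f> kept;
    for (const cv::Vec4f& s : segments) {
        // Angle of the segment with respect to the horizontal, in radians.
        double angle = std::atan2(double(s[3] - s[1]), double(s[2] - s[0]));
        // The segment midpoint decides in which quadrant around the centre it lies.
        cv::Point2f mid((s[0] + s[2]) / 2.f, (s[1] + s[3]) / 2.f);
        bool right = mid.x > centre.x;
        bool below = mid.y > centre.y;            // image y grows downwards
        // Perspective lines on opposite diagonals slope in opposite directions:
        // keep slopes between 0.2 and 1.2 rad on one diagonal and between
        // -0.2 and -1.2 rad on the other, as described in the text.
        bool wantPositive = (right == below);
        double a = std::fabs(angle);
        bool inRange = (a > 0.2 && a < 1.2);
        bool signOk = wantPositive ? (angle > 0.0) : (angle < 0.0);
        if (inRange && signOk)
            kept.push_back(s);                    // contributes to the vanishing point
    }
    return kept;
}
```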
Phase V: Estimation of the vanishing point using the RANSAC algorithm
Lines or edges that are parallel in real space are usually seen as converging in a two-dimensional photograph. If these edges are extended, they intersect in at least one point of the plane. This point is called the intersection point in mathematical terminology and the viewpoint or vanishing point in art-theoretical language. In order to determine the position of the vanishing point in real time on a dynamic image of a real space, the application first has to detect the edges in the image. This is done by the edge detector: the colour image is converted into a greyscale image, from which the algorithm calculates the positions of the edges based on the greyscale intensity of the pixels. The result of the processing at this point is a set of shorter line segments which are filtered and merged into several longer segments.

The application estimates the position of the vanishing point in the following six steps:
1) A two-dimensional graph in a transformed space is created: for each edge segment, a point is entered into the graph with the slope of the line on which the segment lies on one axis and the offset of that line on the other axis.
2) The points in the transformed space are further weighted by the length of the corresponding segment. This means that longer segments have more influence on the estimate of the vanishing point; in our case, this amounts to entering the selected segment into the graph several times, depending on its length.
3) The RANSAC algorithm is then used to fit lines to the points in the transformed space. RANSAC (Random Sample Consensus) is an algorithm whose primary function is to find a mathematical model in a set of collected data [41]. The algorithm works by selecting the smallest possible number of random samples from all the data, analysing them, and defining a mathematical model that represents the entire data set. Finally, it checks the goodness of fit, i.e. how well the mathematical model describes the given data set, by calculating for each sample an estimate of its deviation from the resulting mathematical model. As the algorithm is based on chance, it does not guarantee finding the correct result, so a maximum number of trials has to be defined. This ensures that if the algorithm is unable to define a suitable mathematical model from the selected samples, the processing of the image is abandoned.

Figure 11. Example of how RANSAC performs. The task is to find the line which is best supported by the data points.

4) The resulting lines, lying on collinear points, are then mapped back to points in the original image space.
5) The resulting points represent an estimate of the current vanishing point (for the current image).
6) To avoid jerky movements, the positions of the vanishing points from the ten previous images are averaged to obtain the current vanishing point.

Phase VI: Determination and plotting of the perspective lines
To determine the perspective lines, all the lines obtained in Phase IV are kept as perspective lines if they intersect in the vanishing point or pass less than 15 pixels from it. Finally, the elements obtained in this way (the vanishing point and the perspective lines) are scaled back to the size of the original captured image and drawn over it (Figure 12).

Figure 12. Perspective lines and the vanishing point in the captured image
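As an illustration of Phase V, the sketch below estimates a vanishing point with a RANSAC-style consensus over pairwise intersections of the segment lines. Note that this is a simplification of the method described above, which fits lines to weighted points in the transformed (slope, offset) space; the 15-pixel tolerance mirrors the threshold used in Phase VI, and all names are ours.

```cpp
// Simplified RANSAC-style vanishing point estimate (assumption: our own compact
// re-implementation of the idea, not the application's dual-space code).
#include <opencv2/opencv.hpp>
#include <vector>
#include <random>
#include <cmath>

// Perpendicular distance from point p to the infinite line through segment s.
static double lineDist(const cv::Vec4f& s, const cv::Point2f& p) {
    cv::Point2f a(s[0], s[1]), b(s[2], s[3]);
    cv::Point2f d = b - a;
    double len = std::hypot(d.x, d.y);
    return std::fabs(d.x * (p.y - a.y) - d.y * (p.x - a.x)) / len;
}

// Intersection of the lines through two segments (returns false if nearly parallel).
static bool intersect(const cv::Vec4f& s1, const cv::Vec4f& s2, cv::Point2f& out) {
    cv::Point2f p(s1[0], s1[1]), r(s1[2] - s1[0], s1[3] - s1[1]);
    cv::Point2f q(s2[0], s2[1]), t(s2[2] - s2[0], s2[3] - s2[1]);
    double den = r.x * t.y - r.y * t.x;
    if (std::fabs(den) < 1e-6) return false;
    double u = ((q.x - p.x) * t.y - (q.y - p.y) * t.x) / den;
    out = cv::Point2f(p.x + float(u) * r.x, p.y + float(u) * r.y);
    return true;
}

cv::Point2f estimateVanishingPoint(const std::vector<cv::Vec4f>& segs,
                                   int trials = 200, double tol = 15.0) {
    if (segs.size() < 2) return cv::Point2f(-1.f, -1.f);
    std::mt19937 rng(42);
    std::uniform_int_distribution<size_t> pick(0, segs.size() - 1);
    cv::Point2f best(0.f, 0.f);
    int bestScore = -1;
    for (int i = 0; i < trials; ++i) {
        // Minimal sample: two segments define one candidate vanishing point.
        cv::Point2f cand;
        if (!intersect(segs[pick(rng)], segs[pick(rng)], cand)) continue;
        // Consensus: count segments whose supporting line passes within tol pixels.
        int score = 0;
        for (const cv::Vec4f& s : segs)
            if (lineDist(s, cand) < tol) ++score;
        if (score > bestScore) { bestScore = score; best = cand; }
    }
    return best;  // in the application this estimate is further averaged over ten frames
}
```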
3 PRACTICAL EVALUATION OF OUR MOBILE APPLICATION

The study was conducted as a quantitative study and also included a qualitative analysis of the results. The sample consisted of 150 students of art education and architecture. Due to the nature of the study, the sample was conditioned by the courses or subjects that cover the basics of perspective drawing of spaces and are generally conducted in the first years of study at the two faculties. The study was a single-factor, non-random pedagogical experiment with two comparison groups (experimental and control) and modalities. Both groups initially benefited from the traditional implementation of the teaching and learning process using traditional didactic tools and resources such as photographs and other images.

Then the experimental group started to use the mobile app as a tool to solve an art task (Figure 14), while the control group solved the art task without the mobile app (Figure 13). As part of the evaluation, the students made a drawing that focused on spatial keys, taking into account the conditions for drawing a linear perspective with a 90-degree inclination of the projection plane. The students were asked to draw within the parameters of a real closed space, a corridor; the artistic work took three school hours. The students in the experimental group (EG) received a link to the application by email on the day of the evaluation. At the beginning they downloaded the application to their phones and began using it according to the instructions. The students were asked to observe the changes in the various proportions and relationships occurring when the orientation of the image plane and/or the distance between the eye and the image plane changes. In this way, they took different positions, i.e. spatial views, and observed the changes in the construction of the perspective drawing in real time on the screen. They were not allowed to use the application while drawing, since they might take a screenshot in a certain spatial view and perform the drawing or the art task by simply copying it.

Based on the pedagogical evaluation, differences were determined in the quality of the artistic representation of the real space according to the principles of drawing a linear perspective between the experimental (EG) and the control group (CG); we were interested to know
• whether the students perceived and became aware of the problems that arise when transferring from the real space to the art space more quickly, and consequently solved them more efficiently, than without the application;
• how the mobile application affects the perception and understanding of the differences between the real and the artistic space;
• and how it affects the learning process, the students’ motivation and the implementation of the art tasks.

The results of the evaluation show that the EG students solved the art tasks significantly better and faster than the CG students, as shown in Figures 13 and 14. The first drawing (Figure 13) was made by a CG student who was not familiar with the application. In his drawing there are irregularities such as a tilted and inverted space, multiple vanishing points and horizons, and deformation and curvature of lines (orthogonal lines), similar to the fisheye effect of a camera, i.e. the effect of a circular lens that creates a distorted, curved image of the object due to the wide angle. The aesthetics of the student’s drawing also indicate that the drawing was made quickly; the use of spatial keys is present (reduction of size, differentiation of details) but indistinct (e.g. low intensity of lines and shading). The result is a work of art that is not convincing enough.

Figure 14 presents a drawing made by an EG student who was familiar with our application. In his drawing there is only one horizon with a properly defined vanishing point and proper framing. The perspective drawing looks more complete and is also better laid out in terms of composition. The spatial keys are used more intensively, e.g. through a correct differentiation of details and light intensity. The drawing is more expressive, the lines are well drawn and express the student’s personal style. The use of the application significantly affects the differences in the students’ motivation to complete the art task.
A comparison of the results of the CG and EG students shows that 92.2 % of the EG students were motivated to solve the art task, compared to 58.5 % of the CG students. The use of the presented augmented reality-based mobile application proves to be an effective tool for drawing the space according to the principles of linear perspective; it raises awareness of the background and methods of constructing spatial drawings, and it qualifies as a tool that helps teachers and students to achieve their goal, provided that the teachers are well prepared for their task.

Like any other didactic tool used in the classroom, the application has positive and negative features. Among the most positive are its high motivational effect and its simple use, since the mobile phone is a device with which almost every teacher and student is familiar. Other important features of the application are its capacity for simultaneous drawing of the orthogonal lines, the vanishing point and the horizon in real time on the image of the real space.

Figure 13. Drawing by a CG student with hand-drawn perspective lines. The perspective lines do not intersect in a single vanishing point and the horizon is not defined uniquely.

Figure 14. Drawing by an EG student. The perspective lines intersect as expected in a single vanishing point.

4 CONCLUSIONS

When using our application for depicting the space according to the principles of linear perspective, good lighting and a suitable space are very important. Before using the application, the concepts and contents concerning the presentation and visualisation of the spatial keys that create an illusion of depth, such as differentiation of details, light intensity and textures, should be known. The application is meant to be used only as a supplement to other teaching and learning activities. Attention should be paid to a proper teaching and learning process and to the interaction between teacher and student. It is important when and in what way the application is included in the teaching process. Using our application can provide a good basis for developing short debates and simultaneously raising questions on the topic, all of which takes time to consider when preparing particular lessons. The focus of our further research will be on drawing complex spatial formations using the principles of drawing with linear perspective.

REFERENCES

[1] Rudolf Arnheim. Art and Visual Perception: A Psychology of the Creative Eye. University of California Press, 1974.
[2] David Hockney. Secret Knowledge: Rediscovering the Lost Techniques of the Old Masters. London: Thames & Hudson, 2006.
[3] Iulian Radu. “Augmented reality in education: A meta-review and cross-media analysis”. In: Personal and Ubiquitous Computing 18 (2014), pp. 1533–1543. DOI: 10.1007/s00779-013-0747-y.
[4] Peter Goodyear and Symeon Retalis. “Learning, technology and design”. In: Technology-Enhanced Learning. Brill Sense, 2010, pp. 1–27. DOI: 10.1163/9789460910623_002.
[5] Hans Belting. Florence and Baghdad: Renaissance Art and Arab Science. Cambridge, MA: Belknap Press, 2011.
[6] Samuel Y. Edgerton. The Renaissance Rediscovery of Linear Perspective. New York: Basic Books, 1975.
[7] S. Aguilera. A New Perspective, Universal Edition. El Sobrante, CA: Artistech Books, 2008.
[8] Aleksandar Čučaković and Marijana Paunović.
“Perspective in Stage Design: An Application of Principles of Anamorphosis in Spatial Visualisation”. In: Nexus Network Journal 18.3 (2016), pp. 758–743. DOI: 10.1007/s00004-016-0297-5.
[9] James Elkins. The Poetics of Perspective. Cornell University Press, 1994.
[10] Učni načrt za Likovno vzgojo. Tech. rep. Ljubljana: Zavod RS za šolstvo, 2011.
[11] Howard E. Gardner. Multiple Intelligences: The Theory in Practice, A Reader. Basic Books, 1993.
[12] Janja Batič. Arhitekturno oblikovanje pri likovni vzgoji v osnovni šoli. Ljubljana: Založba Genija, 2010.
[13] Beatriz Tomšič Čerkez and Domen Zupančič. Play Space [Prostor igre]. Ljubljana: Univerza v Ljubljani, Pedagoška fakulteta in Fakulteta za arhitekturo, 2011.
[14] Alexandra Shlahova. “Problems in the Perception of Perspective in Drawing”. In: Journal of Art & Design Education 19.1 (2002), pp. 102–109. DOI: 10.1111/1468-5949.00207.
[15] Matjaž Duh and Tomaž Vrlič. Likovna vzgoja v prvi triadi devetletne osnovne šole. Ljubljana: Založba Rokus, 2003.
[16] Jožef Muhovič. Über die Natur des Raumes = De natura spatii. Ljubljana: Jožef Muhovič (rokopis), 1988.
[17] Tonka Tacol. Likovno izražanje. Ljubljana: Debora, 2003.
[18] Jorge Bacca-Acosta, Silvia Baldiris, Ramón Fabregat, Sabine Graf, and Kinshuk. “Augmented Reality Trends in Education: A Systematic Review of Research and Applications”. In: Educational Technology and Society 17.4 (2014), pp. 133–149. DOI: 10256/17763.
[19] Nor Saidin, Noor Abd Halim, and Noraffandy Yahaya. “A review of research on augmented reality in education: advantages and applications”. In: Personal and Ubiquitous Computing 8.13 (2015), pp. 1–8. DOI: 10.5539/ies.v8n13p1.
[20] Mark Billinghurst and Andreas Duenser. “Augmented Reality in the Classroom”. In: Computer 45.7 (2012), pp. 56–63. DOI: 10.1109/MC.2012.111.
[21] Christopher Wasko. “What Teachers Need to Know About Augmented Reality Enhanced Learning Environments”. In: International Journal of Computer Applications 57 (2013), pp. 17–21. DOI: 10.1007/s11528-013-0672-y.
[22] Dragana Glušac. Elektronsko učenje. Zrenjanin: Tehnički fakultet “Mihajlo Pupin”, Univerzitet u Novom Sadu, 2012.
[23] Matthew Kearney, Sandra Schuck, Kevin Burden, and Peter Aubusson. “Viewing mobile learning from a pedagogical perspective”. In: Research in Learning Technology 20 (2012). DOI: 10.3402/rlt.v20i0.14406.
[24] Ronald T. Azuma. “A Survey of Augmented Reality”. In: Presence: Teleoperators and Virtual Environments 6.4 (Aug. 1997), pp. 355–385. DOI: 10.1162/pres.1997.6.4.355.
[25] Shen Zheng. “Research on mobile learning based on augmented reality”. In: Open Journal of Social Sciences 3.12 (2015), pp. 179–182. DOI: 10.4236/jss.2015.312019.
[26] Jaron Lanier. Dawn of the New Everything: Encounters with Reality and Virtual Reality. Henry Holt and Company, 2017.
[27] Mehmet Kesim and Yasin Ozarslan. “Augmented Reality in Education: Current Technologies and the Potential for Education”. In: Procedia – Social and Behavioral Sciences 47 (2012), pp. 297–302. DOI: 10.1016/j.sbspro.2012.06.654.
[28] Fatih Saltan and Ömer Arslan. “The Use of Augmented Reality in Formal Education: A Scoping Review”. In: Eurasia Journal of Mathematics, Science and Technology Education 2.13 (2017), pp. 503–520. DOI: 10.12973/eurasia.2017.00628a.
[29] Tone Rački. Veščina risanja 1. Ljubljana: Javni sklad RS za kulturne dejavnosti, 2006.
[30] Radovan Ivančević. Perspektive. Zagreb: Školska knjiga, 1996.
[31] Bor Prihavec and Franc Solina.
“User interface for video observation over the internet”. In: Journal of Network and Computer Applications 21.4 (1998), pp. 219–237. DOI: 10.1006/jnca.1999.0074.
[32] Robert Ravnik, Borut Batagelj, Bojan Kverh, and Franc Solina. “Dynamic anamorphosis as a special, computer-generated user interface”. In: Interacting with Computers 26.1 (2014), pp. 46–62. DOI: 10.1093/iwc/iwt027.
[33] Aleš Jaklič, Franc Solina, and Luka Šajn. “User interface for a better eye contact in videoconferencing”. In: Displays 46 (2017), pp. 25–36. DOI: 10.1016/j.displa.2016.12.002.
[34] Tilen Žbona, David Možina, Klemen Petrovčič, Luka Debevec, Franc Solina, and Borut Batagelj. “Uporaba novih medijev pri poučevanju prostorskega oblikovanja v osnovni šoli”. In: Vzgoja in izobraževanje v informacijski družbi - VIVID 2014: zbornik referatov = Education in Information Society: Conference Proceedings. Ed. by Vladislav Rajkovič, Mojca Bernik, and Uroš Rajkovič. Kranj: Fakulteta za organizacijske vede, 2014, pp. 259–264.
[35] Erika Pavlin, Žiga Elsner, Tadej Jagodnik, Borut Batagelj, and Franc Solina. “From illustrations to an interactive art installation”. In: Journal of Information, Communication and Ethics in Society 13.2 (2015), pp. 130–145. DOI: 10.1108/JICES-02-2014-0007.
[36] Real-time reconstruction of linear perspective (video of a demonstration). URL: https://youtu.be/xi2WwDqopzo (visited on 01/29/2022).
[37] Morpholio Trace. URL: https://www.morpholioapps.com/trace/ (visited on 01/29/2022).
[38] Procreate. URL: https://procreate.art (visited on 01/29/2022).
[39] Franc Solina, Katarina Bebar, Borut Batagelj, and Juš Debelak. “Detektor linearne perspektive”. In: Decades, Speculum Artium 2018 (13.–15. september), 10. mednarodni festival novomedijske kulture. Trbovlje, Slovenia, 2018.
[40] Rafael Grompone von Gioi, Jérémie Jakubowicz, Jean-Michel Morel, and Gregory Randall. “LSD: a Line Segment Detector”. In: Image Processing On Line 2012.2 (2012), pp. 35–55. DOI: 10.5201/ipol.2012.gjmr-lsd.
[41] Robert C. Bolles and Martin A. Fischler. “A RANSAC-Based Approach to Model Fitting and Its Application to Finding Cylinders in Range Data”. In: Proceedings of the 7th International Joint Conference on Artificial Intelligence - Volume 2. IJCAI'81. Vancouver, BC, Canada: Morgan Kaufmann Publishers Inc., 1981, pp. 637–643. DOI: 10.5555/1623264.1623272.

Katarina Bebar received her Professor Diploma in Art Education in 2009 and her Ph.D. degree in 2021 from the University of Ljubljana, Slovenia, Faculty of Education, Department of Fine Art Education. Her interests are focused on didactics, pedagogy, digitization, virtualization and advanced technologies. Her fields of activity are the transfer of new contents through new media into the creative, cultural and economic sectors. In 2011–2012 she worked as an Assistant in Art Education at the University of Ljubljana. She then joined the private sector, where she worked in the aerospace industry. From 2018 to 2019 she was a researcher in the field of education at the Faculty of Computer and Information Science, University of Ljubljana, and at the end of 2019 she joined the Cabinet of the Slovenian Minister of Culture. Currently, she is with the Slovenian Association of Fine Arts Societies.

Bea Tomšič Amon is an associate professor of Didactics of Art Education at the Department of Art Education at the Faculty of Education, University of Ljubljana.
In 1987 she graduated as an architect from the Faculty of Architecture and Urbanism in Buenos Aires, Argentina. In 1993 she graduated from the Academy of Fine Arts, University of Ljubljana, and then received her M.Sc. degree in Sociology of Culture from the Faculty of Arts in Ljubljana and her Ph.D. degree on Experiential Learning and Spatial Design from the Faculty of Education, Ljubljana. Her areas of interest are visual art education, pedagogy of architecture, spatial perception, theory of architecture, geometry and art, and interdisciplinary education.

Franc Solina is a full professor of computer science at the University of Ljubljana, Faculty of Computer and Information Science, Slovenia. He received his Ph.D. degree in computer and information science from the University of Pennsylvania in the USA. His research interests include 3D modelling from images and the use of computer vision in human–computer interaction, heritage science, and art installations. He is a Fellow of the IAPR, a Life Senior Member of the IEEE, a member of ICOMOS, the Slovenian Association of Fine Arts Societies and the Slovenian Academy of Engineering, and a regular member of the European Academy of Sciences and Arts in Salzburg.