Organizacija, Volume 58, Issue 2, May 2025, Research Papers

Received: 16th January 2025; Accepted: 22nd March 2025

The Influence of Benefits and Trust on Consumers' Attitudes towards Artificial Intelligence: The Moderating Role of Threats

Matjaž IRŠIČ, Tomaž GJERGJEK
University of Maribor, Faculty of Economics and Business, Department of Marketing, Maribor, Slovenia, matjaz.irsic@um.si, tomaz.gjergjek@student.um.si

Background/Purpose: This article explores consumers' perception of the benefits of intelligent service robots (ISR) in the purchasing process, their trust in artificial intelligence (AI), their perception of AI-related threats, and the impact of these variables on consumer attitudes toward AI. Additionally, the study examines the moderating effect of perceived AI-related threats on the relationship between perceived benefits and trust on one side and the formation of consumer attitudes toward AI on the other.

Methods: The research was conducted in the first half of 2024 on a judgmental sample of 224 employed consumers in the Republic of Slovenia. Data were collected through a structured online questionnaire. For the empirical analysis, a non-parametric approach using SEM-PLS modelling was applied to examine relationships between the studied research constructs.

Results: The findings indicate that perceived benefits of ISR have a strong and positive impact on consumer attitudes toward AI, while perceived AI-related threats strongly and negatively influence these attitudes. Moreover, the results reveal that perceived AI-related threats significantly and negatively moderate the effect of consumers' perceived trust in AI on the formation of their attitudes toward AI.

Conclusion: The results of this study contribute significantly to the theoretical understanding of employed consumers' attitudes toward AI. They also provide practical implications for companies in developing predictive models of consumer behaviour and defining effective marketing strategies to encourage AI adoption in the purchasing process.

Keywords: Artificial intelligence (AI), Consumer attitudes, Perceived AI-related threats, Perceived benefits of intelligent service robots (ISR), Perceived consumer trust

JEL Classification: M21, M31

DOI: 10.2478/orga-2025-0009

1 Introduction

Artificial intelligence (AI) refers to technologies capable of performing tasks that typically require human intelligence (Stein et al., 2024), such as visual perception, speech recognition, decision-making, and natural language processing. AI systems are designed to learn from experience and improve over time using algorithms and statistical models (Ahmad et al., 2023; Russell & Norvig, 2010). Consequently, AI has a transformative impact on how we live and work (Lockey et al., 2021), enhancing efficiency, accuracy, and decision-making.

AI has numerous applications across various fields and industries, including healthcare, finance, retail, transportation, education, and marketing (Cavallo, 2019; Cao, 2022; Bughin et al., 2018; Bharadiya, 2023; Özüdoğru & Cakir, 2021; Huang & Rust, 2018).

The marketing industry, in particular, has widely adopted AI, streamlining various market exchange processes such as customer segmentation and personalized advertising. AI can analyse customer data to identify behavioural patterns and provide personalized recommendations and advertisements based on customer preferences and purchase history (Basha, 2023).
It supports the evolution of marketing toward automated, data-driven value creation, optimizing operations by automating tasks and enabling precise marketing strategies (Kirova & Boneva, 2024; Martinez-Lopez & Casillas, 2013). Additionally, it enhances product and service customization by analysing consumer purchases and interests (Trawnih et al., 2022; Shank et al., 2019).

AI is transforming the way companies interact with customers, leading to improved customer experiences and satisfaction. AI technologies, such as chatbots, virtual assistants, and predictive analytics, offer numerous benefits to consumers by enhancing service quality, personalizing experiences, and increasing purchasing efficiency (Aksu & Sener, 2024; Trawnih et al., 2022; Xu et al., 2021). As a result, the rapid adoption of AI is reshaping the consumer buying process and significantly influencing consumer behaviour, including attitudes toward AI (Mendez-Suarez et al., 2024).

Recent research indicates that AI has a significant impact on consumer trust (Chi & Vu, 2022). Studies have observed a positive relationship between empathetic AI responses and consumer trust, as they improve communication quality between AI systems and consumers, fostering AI acceptance as a service agent (Chi & Vu, 2022; Huang & Rust, 2018). Previous research has primarily focused on factors such as transparency, explainability, accuracy, reliability, automation, anthropomorphism, and mass data extraction as key antecedents and challenges of trust in AI technology (Lockey et al., 2021; Hasan et al., 2021; Zarifis & Cheng, 2022). However, there is a lack of detailed research examining consumer trust and the benefits of AI as key factors influencing consumer attitudes toward AI.

On the other hand, the adoption of AI technologies has raised concerns regarding privacy, security, and job displacement (Mirbabaie et al., 2022). Therefore, it is essential to understand consumers' perceived experiences with AI, both in the buying process and in general, as these perceptions shape their attitudes toward AI and influence their willingness to engage with AI technologies (Kieslich et al., 2021). Negative attitudes toward AI may lead to skepticism regarding its capabilities, concerns about potential risks and ethical implications, and ultimately, reduced adoption (Ikkatai et al., 2022). Additionally, some consumers exhibit significant hesitation and fear toward autonomous systems (Hinks, 2020).

We argue that consumer attitudes toward AI technologies are a crucial factor strongly influencing behavioural patterns and the willingness to adopt AI in the buying process. While consumers recognize the benefits of AI and trust its capabilities, they also perceive potential threats, such as job displacement, changes in work tasks, ethical and security dilemmas, and other possible negative consequences of AI implementation in different environments. However, research exploring this "dual role" of consumers, who both recognize the advantages and perceive the threats of AI, remains limited.

This study contributes to the theoretical understanding of consumer attitudes toward AI and the factors influencing their formation by focusing on employed consumers. This approach provides a more comprehensive assessment of attitudes toward AI, exploring the interplay between the perceived benefits of AI in the purchasing process, general trust in AI, and perceived risks of AI both in the purchasing process and the workplace.
To address the identified gaps in the literature, we formulated the following research questions: (a) How do the potential benefits of AI for consumers, consumer trust in AI, and perceived threats of AI influence consumer attitudes toward AI? (b) Do perceived threats moderate the relationship between perceived benefits, trust, and consumer attitudes toward AI?

Furthermore, the findings of this research are expected to provide valuable insights for policymakers and companies, helping them design and market AI-based products and services that address consumer concerns and preferences, mitigate perceived threats, and overcome adoption barriers to improve consumer attitudes toward AI.

2 Literature Review and Hypothesis Development

2.1 Consumers' benefits of AI

To fully exploit the economic and societal advantages of AI technologies, it is vital for companies to comprehend and quantify their benefits for consumers (Ahmad et al., 2023) in order to know how consumers feel about their AI products and to market them better (Haleem et al., 2022). Perceived benefits are beliefs about the positive outcomes associated with a cognitive, affective, or behavioural response of consumers to a real or perceived threat (Chandon et al., 2000; Liu et al., 2012). Grewal et al. (2021) suggest that the realized and anticipated benefits of AI for consumers are based on customized offers achieved through data-led personalization, optimization, and innovation.

According to the majority of researchers, AI offers consumers several benefits: it enhances decision-making and problem solving (Sivarajah et al., 2017; Topol, 2019; Bastani et al., 2021; Chen et al., 2019), increases efficiency, productivity, and customization (Grewal et al., 2021), and enhances the consumer experience (Trawnih et al., 2024), which relates to the interactions between the consumer and the company during the consumer journey and encompasses multiple dimensions: emotional, cognitive, behavioural, sensorial, and social (Puntoni et al., 2021; Lemon and Verhoef, 2016; Brakus et al., 2009).

From the marketing point of view, the last-mentioned benefit of AI, i.e. enhanced consumer experience, significantly reshapes exchange processes by strengthening customer engagement through interaction and by increasing efficiency (Xu et al., 2021). By analysing customer data, AI can create a detailed profile of each consumer and use this information to provide customized recommendations and offers (Kadambi et al., 2018). The data capture experience provides benefits to consumers because it can make them feel as if they are served by the AI: the provision of personal data allows consumers access to customized services, information, and entertainment, often for free (Puntoni et al., 2021).

In the buying process, consumers often encounter intelligent customer service robots (i.e. chatbots and virtual assistants), which can significantly influence their experience with AI. Chatbots are automated software programs that can simulate conversation with human users. They can be used to provide customer support, answer common questions, and provide recommendations. Virtual assistants are similar to chatbots but are designed to provide more personalized assistance to users (Jenkins, 2021) by offering quick and efficient support and reducing wait times. They can also be available 24/7, providing consumers with access to support outside of regular business hours.
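To make the item-to-item personalization logic mentioned above concrete, the following minimal Python sketch computes product-to-product similarity from a purchase matrix and recommends co-purchased items. All data, product names, and function names here are hypothetical illustrations; production recommenders of the kind described by Kadambi et al. (2018) operate on vastly larger, implicit-feedback data.

```python
import numpy as np

# Hypothetical purchase matrix: rows = consumers, columns = products.
# A 1 means the consumer has bought that product.
purchases = np.array([
    [1, 1, 0, 0],   # consumer A
    [1, 1, 1, 0],   # consumer B
    [0, 0, 1, 1],   # consumer C
    [0, 1, 1, 1],   # consumer D
])
products = ["phone", "case", "charger", "headphones"]

def cosine_similarity(a, b):
    """Cosine similarity between two product purchase vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recommend(bought_idx, k=2):
    """Return the k products most similar to the one just bought."""
    target = purchases[:, bought_idx]
    scores = [
        (cosine_similarity(target, purchases[:, j]), products[j])
        for j in range(purchases.shape[1])
        if j != bought_idx
    ]
    return [name for _, name in sorted(scores, reverse=True)[:k]]

# A consumer who just bought a phone gets item-based suggestions.
print(recommend(products.index("phone")))   # -> ['case', 'charger']
```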
Consequently, companies can improve consumer satisfaction and loyalty, leading to increased revenue and consumer retention. Chatbots and virtual assistants can also reduce the need for human support staff, leading to cost savings for companies. Predictive analytics can be used to identify trends and patterns in consumer behaviour, which can be used to develop targeted marketing campaigns and identify new opportunities for growth (Mariani et al., 2023).

Despite a number of studies on specific elements of consumers' benefits of AI, and on the factors through which consumers' cognitive, affective, and behavioural reactions to the implementation of AI technology can be explained, a research gap remains. To fill this gap, our research aims to contribute a more comprehensive insight into different viewpoints on consumers' benefits of a specific manifestation of AI (i.e. intelligent consumer service robots) and into the potential impact of these benefits on consumers' attitudes towards AI.

As suggested by Gao et al. (2022), potential consumer stimuli of AI fall into five groups: perceived interactivity of consumers, perceived personalization of consumers, consumers' engagement, consumers' value co-creation, and consumers' ability readiness. In our opinion, the first four groups of stimuli suggested by Gao et al. (2022) have the characteristics of consumers' benefits of AI as well.

Perceived interactivity and personalization are two of the most critical stimuli, with the former relating to consumers' subjective assessment of their interaction with AI technology overall (Scardamalia and Bereiter, 2014; Gao et al., 2022) and the latter relating to the potential of AI technology to provide consumers with customized and personalized services (Neuhofer et al., 2015; Gao et al., 2022). AI-based devices with high levels of interactivity not only enable consumers to engage, but also provide them with opportunities to share information and emotional support with others (Roy et al., 2019; Gao et al., 2022). In addition, high levels of personalized offerings provide consumers with customized suggestions or solutions through algorithmic analysis to satisfy their personal preferences and needs (Heer, 2019; Gao et al., 2022).

Consumer engagement is the mental state of consumers who are creating experiences with a company in a specific service relationship (Brodie et al., 2011). AI systems are useful only if consumers first recognize the value of the suggestions provided by AI, before they can accept the AI itself (Gao et al., 2022). Among the actors involved in the value co-creation process, consumers have been identified as a particularly significant contributor that companies can effectively exploit (Tran and Vu, 2021). According to Zhang and Chen (2008), companies that focus on co-creation with consumers can gain new competences and achieve a stronger competitive advantage. On the other side, consumers' cooperation with companies and their empowerment in the process of creating a new product (AI technology) influence the level of benefits they perceive from the AI, as well as their level of satisfaction.

In our opinion, such a framework offers a good starting point for the following hypothesis:

Hypothesis 1: Consumers' perceived benefits of intelligent consumer service robots (ICSRs) have a positive and significant effect on consumers' attitudes towards AI technology.
2.2 Consumers' trust in AI

Although there is no universally accepted scholarly definition of this concept, we can define trust as 'a belief by one party in a relationship that the other party will not act against his or her interests, where this belief is held without undue doubt or suspicion and in the absence of detailed information about the actions of the other party' (Tomkins, 2001; Laaksonen et al., 2008). One party may trust the other party's benevolence (a belief that one party acts in the interests of the other), honesty (a belief that the other party's word is reliable and credible), and competence (a belief that the other party has the necessary expertise to perform as required) (Buttle, 2010).

Therefore, trust is a vital aspect of consumer behaviour, influencing consumers' attitudes and decision-making processes towards products and services (Rousseau et al., 1998), and it is linked to consumers' expectations of the services provided by companies (Chi and Vu, 2022); namely, the two components of trust are the intention to accept vulnerability and the positive expectations of consumers on which that intention is based (Lockey et al., 2021).

In the context of AI, trust can be defined as the willingness of individuals to rely on AI systems and accept their recommendations or decisions. Trust in AI can be influenced by various factors, including the perceived reliability, competence, and ethical standards of the system and its operators (Mayer et al., 1995). A deeper understanding of how AI system features relate to consumers' motivation and responses has yet to be reached. From this perspective, consumers' trust in AI is defined as a common ground of belief extended by consumers to AI devices (Chi and Vu, 2022).

As AI technologies are increasingly integrated into various aspects of daily life, the importance of trust in AI is growing (Wang et al., 2019). Trust plays a crucial role in ensuring the safe and effective use of AI, as well as promoting public acceptance of these technologies. Some researchers have shown that consumers are more likely to adopt and use new technologies when they trust the technology and its providers (Riegelsberger et al., 2003). On the other hand, a lack of trust in technology can lead to resistance and reluctance to use it. Therefore, building and maintaining trust is essential for the successful adoption and integration of AI technologies into consumers' everyday buying processes (Komiak and Benbasat, 2006).

However, building trust in AI is not always easy, as AI systems often operate in complex and opaque ways, making it difficult for consumers to understand how decisions are made (Lu et al., 2025). Additionally, concerns about privacy, security, and bias can erode trust in AI systems (Kaplan and Haenlein, 2019). As a result, there is a need for greater transparency and accountability in AI systems to increase trust and confidence in their use (European Commission, 2020).

Another challenge to building trust in AI is the lack of regulation and standardization in the industry. As AI technologies continue to evolve and develop, there is a need for clear guidelines and standards to ensure the ethical and responsible use of AI. This will not only help build trust among consumers but also promote innovation and growth in the industry (Floridi et al., 2018).
The adoption of new technologies by the public is strongly influenced by the level of trust that individuals have in those technologies (Siau and Wang, 2018). This is especially true for AI technologies, which are often viewed as complex and potentially dangerous. Research has shown that trust is a key factor in the adoption of AI technologies, and that a lack of trust can be a significant barrier to adoption (Hasan et al., 2021; Venkatesh et al., 2003).

One of the main reasons why trust is important for the adoption of new technologies is that it reduces uncertainty and perceived risk. When individuals are uncertain about the potential risks and benefits of a new technology, they may be hesitant to adopt it. Trust helps to reduce this uncertainty by providing individuals with a sense of confidence that the technology will perform as expected and that their personal information will be protected (Zarifis and Cheng, 2022; Morgan and Hunt, 1994).

Another important factor in the role of trust in the adoption of AI technologies is the social influence of trust. Consumers are often influenced by the opinions and behaviours of others when making decisions about new technologies. If individuals perceive that others trust a new technology, they are more likely to adopt it themselves. On the other hand, if there is a lack of trust in a new technology, this can lead to a negative perception and reduced adoption (Lockey et al., 2021; Luhmann, 1988).

In our opinion, trust plays a crucial role in consumers' adoption of AI technologies. To promote the adoption of AI, it is important for AI developers and policymakers to prioritize building trust with consumers by addressing concerns related to transparency, ethics, and security. By building trust, AI technologies can be adopted more widely and effectively, allowing their potential benefits to materialize and fostering positive consumer attitudes towards AI. Hence, our hypothesis is proposed as follows:

Hypothesis 2: Consumers' trust in AI has a positive and significant effect on consumers' attitudes towards AI technology.

2.3 Consumers' perceived threats of AI

As members of the general public (outside the buying process), consumers show, despite the perceived benefits of AI, considerable restraint when it comes to the broad societal diffusion of AI applications; this restraint may even border on actual fear of such technology (Kieslich et al., 2021; Hinks, 2020; McClure, 2018; Liang, 2017). Understanding both benefits and threats enables companies to take a more comprehensive approach to threat assessment (Ahmad et al., 2023; Tepylo et al., 2023). If companies know how people feel about their AI products, they can market them better (Ahmad et al., 2023; Haleem et al., 2022).

There are numerous articles discussing the threats of AI tools for the general public. The majority of researchers identify the following sources of threat: job displacement (Mirbabaie et al., 2022), economic inequality (Brynjolfsson and McAfee, 2014), ethical and legal concerns (Huang et al., 2023; Wach et al., 2023; Kieslich et al., 2021), lack of transparency (Jones, 2018), potential for different types of bias (Buolamwini and Gebru, 2018), and risk of potential misuse and abuse (Tufekci, 2018). AI has the potential to automate many tasks that are currently performed by humans, which may lead to job loss and unemployment.
Recent research has suggested that up to 47% of US jobs are at risk of automation in the next few decades (Frey and Osborne, 2017). While some new jobs may be created by the development of AI, the displacement of jobs is likely to have a significant impact on the labour market and may disproportionately affect low-skilled workers and those in industries that are most susceptible to automation, such as manufacturing and transportation (Mirbabaie et al., 2022; Autor, 2015).

The displacement of jobs can also lead to economic inequality. Those who are most impacted by job loss may not have the skills or resources to adapt to new jobs or industries, which can lead to long-term unemployment and reduced income. This may exacerbate existing economic inequalities and create a widening gap between the rich and poor (Brynjolfsson and McAfee, 2014). In addition, the development of AI may create a new class of "winner-takes-all" industries, where a few companies and individuals benefit greatly from the advances in AI technology, while others are left behind (Brynjolfsson and McAfee, 2014).

As AI technology continues to advance, there are growing concerns about its ethical and legal implications. One of the main ethical concerns surrounding AI is the potential for the technology to be used in ways that violate privacy and human rights. Facial recognition technology has been criticized for its potential use in mass surveillance and tracking of individuals without their consent (Huang et al., 2023; Wach et al., 2023; Kieslich et al., 2021; Crawford and Calo, 2016). The possibility for AI to be prejudiced or racist is yet another ethical worry. Because AI systems are trained on historical data, they may learn and perpetuate existing biases and inequalities. This might result in unfairness in the recruiting, financing, and criminal justice systems.

In addition, the lack of diversity in the tech industry may contribute to biased AI systems, as the people designing and developing these systems may not represent the diversity of the population they are intended to serve (O'Neil, 2016). There are also legal concerns surrounding AI, particularly in the area of liability. As AI systems become more autonomous and make decisions that impact human lives, questions arise about who is responsible if something goes wrong (Mirbabaie et al., 2022; Calo, 2015).

One of the major challenges with AI systems is their lack of transparency and potential for bias. AI systems can be very complex, and it can be difficult to understand how they make decisions. This lack of transparency can make it difficult to identify errors or biases in the system, which can have significant consequences (Jones, 2018). One way in which bias can manifest in AI systems is through biased data. AI systems learn from the data they are trained on, and if that data is biased, the system can learn to make biased decisions. For example, if a facial recognition system is trained on a dataset that is predominantly male and white, the system may not perform as well on images of women or people with darker skin tones. This can have serious implications for areas such as law enforcement or hiring decisions (Buolamwini and Gebru, 2018). In addition to biased data, AI systems can also perpetuate and amplify existing social biases. If an AI system is trained on data that reflects existing gender or racial biases, the system may learn to perpetuate these biases in its decisions.
This can lead to discrimination and exacerbate existing inequalities (O'Neil, 2016).

While AI has the potential to bring significant benefits to consumers, there is also a risk of potential misuse and abuse. This can occur in a variety of ways, such as the use of AI for malicious purposes (cyberattacks or the spread of misinformation) (Ye et al., 2016), or through the unintended consequences of AI systems (perpetuation of biases or amplification of harmful behaviours) (O'Neil, 2016). This can lead to discriminatory outcomes, such as biased hiring decisions or the denial of access to services for certain groups of people. AI systems can amplify harmful behaviours, such as the spread of hate speech or the promotion of extremist content, by prioritizing engagement over accuracy or truth (Tufekci, 2018).

According to some previous research, threats of AI are first processed cognitively (Kieslich et al., 2021; Witte, 1992) and can therefore shape consumers' attitudes towards AI. Consequently, we hypothesize:

Hypothesis 3: Consumers' perceived threats of AI have a negative and significant effect on consumers' attitudes towards AI technology.

2.4 Consumers' Attitudes towards AI

According to Eagly and Chaiken (1993), attitudes are described as "evaluative judgments about objects, people, or events that are expressed by positive or negative affect, cognition, or behaviour". Positive, negative, or neutral attitudes as evaluations can be communicated through affective, cognitive, and behavioural reactions (Fishbein and Ajzen, 1975).

A number of factors affect how attitudes are formed, namely personal beliefs, social influence, and cognitive processes such as perception and learning. Personal beliefs refer to an individual's thoughts and convictions about an object or issue. Experiences, socialization, and media exposure can all have an impact on these beliefs (Ajzen and Fishbein, 1980). Social influence refers to the impact that others have on an individual's attitudes and behaviour. It can take many forms, including conformity, social comparison, and persuasion (Cialdini and Goldstein, 2004). In order to make sense of their surroundings, people organize and interpret sensory data through a process known as perception. Learning, by contrast, describes the process by which people acquire new facts and understanding about a subject. Both perception and learning can shape an individual's attitudes towards an object or issue (Petty and Cacioppo, 1986).

To successfully design, develop, launch, communicate, and promote new AI-intensive products, companies must first understand their consumers' attitudes towards AI, as current consumer perceptions appear to be divided (Mendez-Suarez et al., 2024). It is essential to understand consumers' views on AI in order to reduce perceived risks, enhance potential benefits, strengthen their trust, and diminish perceived threats. Consumers with more favourable attitudes towards AI are more likely to hold positive views of AI and a more receptive attitude toward AI in marketing communications (Lobera et al., 2020; Chen et al., 2022; Mendez-Suarez et al., 2024).

Several theoretical frameworks have been proposed to explain how individuals form attitudes towards new technologies such as AI.
The Technology Acceptance Model (TAM), developed by Davis (1989), posits that perceived usefulness and perceived ease of use are the primary determinants of an individual's intention to use a technology. This model has been used to study public attitudes towards a wide range of technologies, including AI (Venkatesh et al., 2003).

Another relevant theoretical framework is the Social Cognitive Theory (SCT) developed by Bandura (1986). According to SCT, individuals learn attitudes and behaviours through observation and modelling of others, as well as through their own experiences (Bandura, 1986). In the context of AI, SCT could be applied to understand how individuals form attitudes towards AI based on their exposure to AI technologies and their perceptions of AI in the media.

The Technology Risk Framework (TRF) developed by Slovic (1999) is another relevant framework. The TRF suggests that public attitudes towards technologies are influenced by three main factors: dread risk, unknown risk, and personal control. Dread risk refers to the perceived potential for a technology to cause catastrophic harm, unknown risk refers to uncertainties surrounding the technology, and personal control refers to the perceived ability of an individual to control the risks associated with the technology (Slovic, 1999).

The Attitude-Behavioural Intention (ABI) model, developed by Moon and Kim (2001), suggests that attitudes towards AI are influenced by perceived usefulness, perceived ease of use, and perceived risks associated with AI. These attitudes, in turn, influence an individual's intention to use or not use AI.

Another relevant model is the Cognitive-Affective-Conative (CAC) model proposed by Cacioppo et al. (2007). This model suggests that attitudes towards AI are formed through cognitive (i.e. beliefs about AI), affective (i.e. emotions towards AI), and conative (i.e. behavioural) processes. This model has been used to study individuals' attitudes towards a range of technologies, including AI (Kraus, 2017; Stein et al., 2024).

While theoretical frameworks and models provide a useful starting point for understanding consumers' attitudes towards AI, empirical studies are necessary to gain a more comprehensive and nuanced understanding of these attitudes. Although a growing body of research has explored consumers' attitudes towards AI, examining factors such as trust, risk perception, benefits, drawbacks, and ethical considerations, there are still gaps and limitations in the literature that need to be addressed when investigating these attitudes. This, therefore, provides a good platform for empirical research.

2.5 The moderating role of consumers' perceived threats of AI

In order to gain a comprehensive insight into consumers' attitudes towards AI as a consequence of their perceived benefits of AI and perceived trust in AI, it is of great importance not to overlook consumers' perceived threats of AI. Although consumers evaluate specific benefits of AI and develop a particular level of trust in it during the buying process, they inevitably face various threats of AI in everyday life, which are not necessarily derived from their interactions and experiences in the buying process. Such threats can arise as a result of different factors, for example personal opinions, readiness to adopt AI devices, and a large number of influences from the external environment (i.e. social, economic, cultural, technological, educational, etc.).
Therefore, we posit that consumers' perceived threats of AI may moderate, i.e. affect the strength of, the impact of consumers' potential benefits of AI and of consumers' perceived trust in AI on their attitudes towards AI. Thus, our study proposes:

Hypothesis 4: Consumers' perceived threats of AI negatively and significantly moderate the effect of consumers' perceived benefits of AI on consumers' attitudes towards AI.

Hypothesis 5: Consumers' perceived threats of AI negatively and significantly moderate the effect of consumers' perceived trust in AI on consumers' attitudes towards AI.

3 Research Methodology and Results

3.1 Sample and collection of data

The data for the empirical research were collected through a highly structured online questionnaire from January 2024 to June 2024. The respondents were employed consumers aged 18 to 64 in the Republic of Slovenia who had used intelligent consumer service robots (ICSRs) in their purchasing process.

In the first step, the questionnaire was distributed to a convenience sample of 600 respondents, using filter questions regarding their age range, employment status, and experience with ICSRs in the purchasing process. In the second step, a non-random judgmental sampling method was applied to select valid responses based on the required respondent parameters for our research. Among the received questionnaires, 224 were deemed valid.

A chi-square test of early and late respondents showed no significant differences (p > 0.05) in gender, age, years of employment, or monthly income. Therefore, the possibility of non-response bias was ruled out. The characteristics of the respondents in terms of gender, age, years of employment, and monthly income are presented in Table 1.

Table 1: Respondents' demographic characteristics

Criteria          Category              Frequency    %
Gender            Male                  123          54.9
                  Female                101          45.1
Age               18-24 years old       58           25.9
                  25-34 years old       76           33.9
                  35-49 years old       72           32.1
                  50-64 years old       18           8.1
Working years     Below 3 years         37           16.5
                  3-5 years             55           24.6
                  5-10 years            51           22.8
                  Above 10 years        81           36.1
Monthly income    Below 1000 euro       39           17.4
                  1000-1500 euro        78           34.8
                  1501-2000 euro        67           29.9
                  Above 2000 euro       40           17.9
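As an illustration of the non-response bias check described above, the following Python sketch runs a chi-square test comparing early and late respondents on one demographic criterion. The wave counts used here are hypothetical, since the paper does not report the actual early/late split; only the testing logic is shown.

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table: gender split of early vs. late respondents.
# The actual wave counts are not reported in the paper.
#            Male  Female
waves = [
    [70, 55],   # early respondents
    [53, 46],   # late respondents
]

chi2, p, dof, expected = chi2_contingency(waves)
print(f"chi2 = {chi2:.3f}, df = {dof}, p = {p:.3f}")

# p > 0.05 indicates no significant difference between the two waves on this
# criterion, i.e. no evidence of non-response bias. The same test is repeated
# for age, years of employment, and monthly income.
```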
3.2 Analysis of data

The research is quantitative, using a non-parametric approach to SEM-PLS modelling of the relations between the main research constructs: consumers' perceived benefits of ICSRs, consumers' perceived trust in AI, and consumers' perceived threats of AI as independent research constructs, and consumers' attitudes towards AI as the dependent research construct. In addition, the moderating impact of consumers' perceived threats of AI was analysed. Figure 1 shows the conceptual framework developed.

Figure 1: Conceptual framework

3.2.1 Measurement model

All items for the main constructs used in our empirical study were drawn from relevant authors who empirically investigated the constructs analysed in our research, and were measured on a five-point Likert scale (5 - strongly agree to 1 - strongly disagree).

The items of the consumers' perceived benefits of intelligent consumer service robots (ICSRs) scale were generated through a literature review. Ultimately, we drew on the S-O-R framework suggested by Jacoby (2002) and Koo and Ju (2010) and modified by Gao et al. (2022), which covers different aspects of possible consumer stimuli appearing as possible consumer benefits. These act as external stimuli (S), can affect consumers' internal cognitions and emotions (O), and eventually drive their behavioural responses (R). According to this comprehensive definition, perceived consumer benefits of ICSRs fall into four groups: perceived interactivity, perceived personalization, customer engagement, and value co-creation (Gao et al., 2022), measured with 19 items.

Consumers' perceived trust in AI was measured by six items validated by Pelau et al. (2021) and implemented by Chi and Vu (2022), who investigated the impact of anthropomorphism, empathy response, and interaction on customer trust in AI. Such a measurement therefore fits our research objectives as well.

The items for measuring consumers' perceived threats of AI, adapted for our empirical research, were derived from the psychometric instrument for measuring threats suggested and applied by Ahmad et al. (2023), and encompass 14 items.

Consumers' attitudes towards AI were measured by the ATTARI-12 attitude scale suggested by Stein et al. (2024), which incorporates 12 items reflecting the psychological trichotomy of cognition, emotion, and behaviour as the main components of attitude, and captures both positive and negative aspects of the attitude towards AI. In the authors' opinion, it thereby eliminates some weaknesses of other known attitude scales, i.e. the General Attitudes Towards Artificial Intelligence Scale - GAAIS (Schepman and Rodway, 2020), the Attitudes Towards Artificial Intelligence Scale - ATAI (Sindermann et al., 2020), the AI Anxiety Scale - AIAS (Wang and Wang, 2019), and the Threats of Artificial Intelligence Scale - TAI (Kieslich et al., 2021).

First, we tested the convergent validity of the research constructs using item loadings, the Cronbach's alpha coefficient (CA), average variance extracted (AVE), and composite reliability (CR). The results of the PLS analysis show that all research constructs and items exhibit satisfactory AVE, CA, CR, and item loadings (all loadings are higher than 0.65 for the sample size n = 224). We can therefore conclude that they demonstrate overall satisfactory reliability and satisfactory convergent validity. A detailed list of all construct items, means, standard deviations, Cronbach's alpha, AVE and CR values, and item loadings is provided in Table 2.

The validity of the research constructs in our reflective measurement model and of the individual items was also tested by exploratory factor analysis in order to estimate convergent validity. All items of our research constructs have main loadings above 0.65, while cross-loadings are below 0.3 (Fornell and Larcker, 1981). According to these results, we can conclude that convergent validity is satisfactory.

In addition, we assessed discriminant validity with the Heterotrait-Monotrait (HTMT) criterion, which indicates the correlations between the research constructs, as suggested by Henseler et al. (2015) and Kline (2015). The results in Table 3 show that the criterion for discriminant validity is met for all research constructs, because all values are lower than 0.85.
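For transparency, the reliability and convergent-validity indices reported below in Table 2 follow standard formulas. The following minimal sketch computes Cronbach's alpha, composite reliability (CR), and average variance extracted (AVE); the loading vector mimics the six trust items, but the response data fed to the alpha computation are simulated, not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha from an (n_respondents x n_items) score matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    error_var = (1 - loadings ** 2).sum()
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + error_var)

def ave(loadings):
    """Average variance extracted: mean of the squared standardized loadings."""
    return float((loadings ** 2).mean())

# Illustrative standardized loadings for a six-item construct (made-up numbers
# in the spirit of the trust items CPT1-CPT6).
loadings = np.array([0.71, 0.67, 0.66, 0.68, 0.69, 0.79])
print(f"CR  = {composite_reliability(loadings):.2f}")
print(f"AVE = {ave(loadings):.2f}")

# Alpha requires raw responses; simulate 224 respondents on a 5-point scale.
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(224, 6)).astype(float)
print(f"alpha = {cronbach_alpha(scores):.2f}")   # near 0 for random answers
```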
Table 2: Construct items, means (M), standard deviations (SD), Cronbach's alpha (CA), composite reliability (CR), average variance extracted (AVE), and item loadings

Research construct / Item                   M      SD     Loading   CA     CR     AVE
Consumers' perceived benefits of ICSRs      2.88   0.80             0.71   0.85   0.66
  CPB1                                      2.31   0.73   0.81
  CPB2                                      3.85   0.76   0.86
  CPB3                                      3.90   0.85   0.84
  CPB4                                      2.21   0.71   0.79
  CPB5                                      3.08   0.66   0.77
  CPB6                                      2.14   0.89   0.75
  CPB7                                      2.87   0.45   0.67
  CPB8                                      3.18   0.76   0.68
  CPB9                                      2.06   0.64   0.66
  CPB10                                     4.02   1.05   0.78
  CPB11                                     3.15   0.78   0.69
  CPB12                                     2.85   0.62   0.81
  CPB13                                     2.76   0.89   0.74
  CPB14                                     3.10   0.98   0.68
  CPB15                                     2.23   0.56   0.71
  CPB16                                     2.06   1.04   0.72
  CPB17                                     2.89   1.19   0.66
  CPB18                                     3.15   0.93   0.71
  CPB19                                     2.85   0.65   0.79
Consumers' perceived trust in AI            3.96   0.75             0.68   0.73   0.66
  CPT1                                      4.05   0.60   0.71
  CPT2                                      4.14   0.71   0.67
  CPT3                                      3.65   0.75   0.66
  CPT4                                      4.17   0.79   0.68
  CPT5                                      3.90   0.94   0.69
  CPT6                                      3.86   0.72   0.79
Consumers' perceived threats of AI          3.50   0.85             0.81   0.85   0.77
  CPTH1                                     3.61   1.13   0.74
  CPTH2                                     4.05   1.39   0.66
  CPTH3                                     3.55   0.92   0.67
  CPTH4                                     4.08   0.95   0.73
  CPTH5                                     4.16   0.69   0.69
  CPTH6                                     4.03   0.77   0.70
  CPTH7                                     3.78   0.62   0.68
  CPTH8                                     3.32   0.96   0.79
  CPTH9                                     3.09   0.74   0.67
  CPTH10                                    2.92   1.13   0.84
  CPTH11                                    3.01   0.60   0.74
  CPTH12                                    3.45   0.73   0.78
  CPTH13                                    2.88   0.61   0.67
  CPTH14                                    3.14   0.71   0.66
Consumers' attitudes towards AI             3.75   0.77             0.77   0.89   0.85
  CA1                                       4.03   0.78   0.71
  CA2                                       4.15   0.82   0.73
  CA3                                       3.87   0.75   0.69
  CA4                                       3.94   0.63   0.72
  CA5                                       3.85   0.96   0.79
  CA6                                       4.06   0.88   0.78
  CA7                                       3.67   0.65   0.67
  CA8                                       3.05   0.89   0.66
  CA9                                       3.15   0.92   0.66
  CA10                                      4.02   0.77   0.81
  CA11                                      3.55   0.62   0.82
  CA12                                      3.67   0.61   0.67

Table 3: HTMT ratios for discriminant validity assessment

Research constructs                          1       2       3       4
1 Consumers' perceived benefits of ICSRs     -
2 Consumers' perceived trust in AI           0.812   -
3 Consumers' attitudes towards AI            0.797   0.774   -
4 Consumers' perceived threats of AI         0.841   0.816   0.825   -

3.2.2 Structural Research Model Assessment and Results

In the next step of our analysis, we tested the structural research model, which is derived from the measurement model explained in the previous step, and tested the research hypotheses. As suggested by Hair et al. (2018), we assessed the proportion of variance explained in order to determine the accuracy of the model's predictions. In our research, the structural model explains 27% of the variance of consumers' attitudes towards AI (R² = 0.27). Next, the Stone-Geisser cross-validated redundancy (Q²) was calculated, which indicates the quality of the model's prediction. Because Q² = 0.81 in our study, i.e. above zero, the predictive relevance of our model can be confirmed.

Table 4 presents the results of the hypothesis testing, including path coefficients (β), t-values, p-values, and final results.

Table 4: Hypotheses testing results

Research hypothesis                      β        t-value   p-value    Result
H1 Benefits -> Attitudes                 0.39     1.64      < 0.001    Supported
H2 Trust -> Attitudes                    0.08     3.23      > 0.01     Not supported
H3 Threats -> Attitudes                  -0.38    2.81      < 0.001    Supported
H4 Benefits x Threats -> Attitudes       -0.09    0.87      > 0.01     Not supported
H5 Trust x Threats -> Attitudes          -0.22    2.35      < 0.01     Supported

The results in Table 4 reveal that the impact of consumers' perceived benefits of ICSRs on consumers' attitudes towards AI is positive and statistically significant, while consumers' perceived trust in AI has a positive but statistically non-significant impact on consumers' attitudes towards AI. In addition, the impact of consumers' perceived threats of AI on consumers' attitudes towards AI is negative and statistically significant. Therefore, we can confirm research hypotheses H1 and H3, but we cannot support research hypothesis H2.

The moderating effect of consumers' perceived threats of AI on the impact of consumers' perceived benefits of ICSRs on consumers' attitudes towards AI is not significant. On the other hand, consumers' perceived threats of AI significantly moderate the impact of consumers' perceived trust in AI on consumers' attitudes towards AI. Therefore, we can confirm research hypothesis H5, while research hypothesis H4 is not supported.
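The moderation hypotheses H4 and H5 correspond to interaction terms between the moderator (perceived threats) and each predictor. The sketch below illustrates this product-term logic and the bootstrap-based significance testing typical of PLS-SEM, using simulated data; it is a conceptual illustration under those assumptions, not the authors' actual estimation, and all variables are synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 224   # same sample size as in the study

# Simulated standardized construct scores (synthetic data, for illustration).
trust = rng.standard_normal(n)
threats = rng.standard_normal(n)
# Attitudes generated so that threats dampen the trust effect, as in H5.
attitudes = (0.3 * trust - 0.35 * threats
             - 0.2 * trust * threats + rng.standard_normal(n))

def z(x):
    """Standardize a variable to zero mean and unit variance."""
    return (x - x.mean()) / x.std(ddof=1)

# Design matrix: intercept, predictors, and the standardized product term
# that carries the moderation effect.
X = np.column_stack([np.ones(n), z(trust), z(threats), z(trust * threats)])
y = z(attitudes)

def paths(X, y):
    """Ordinary least squares path estimates."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

beta = paths(X, y)

# Bootstrap the interaction coefficient, mirroring the resampling logic
# used for significance testing in PLS-SEM.
boot = np.array([
    paths(X[idx], y[idx])[3]
    for idx in (rng.integers(0, n, n) for _ in range(2000))
])
t_value = beta[3] / boot.std(ddof=1)
print(f"interaction beta = {beta[3]:.2f}, bootstrap t = {t_value:.2f}")
```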
4 Discussion

4.1 Theoretical and managerial implications

In a world shaped by AI technologies that are supposed to make human life safer, healthier, and more convenient, it is important to understand how people (and particularly consumers) evaluate the very notion of AI, and to identify the factors that account for notable variance in this regard (Stein et al., 2024). Consumers' perception of AI therefore becomes of great importance. A comprehensive insight into their attitudes towards AI (i.e. the cognitive, affective, and behavioural components) contributes significantly to the knowledge of how they feel and what their possible reactions are (use of AI in buying processes as well as in everyday life in general).

This research has provided comprehensive insights into the multifaceted landscape of consumers' attitudes towards AI and the factors that shape these attitudes. It constructs an integrated analysis framework and research model of three independent research constructs to measure their impact on consumers' attitudes towards AI, within which we explored the moderating influence of one of them. The research therefore systematically extends previous analyses of the factors, and their interrelations, that influence consumers' attitudes towards AI. In addition, the implementation of specific measurement frameworks for the individual research constructs, based on previous studies and originally used for other purposes, strongly supported our research objectives and added to the value of our empirical study.

The study examined five fundamental hypotheses, providing a deep understanding of the complex relations between consumers' perception of the benefits of ICSRs, trust in AI, threats of AI, and, consequently, their attitudes towards AI. In our opinion, the key findings of our research may significantly contribute to the rapidly growing field of consumer perception of AI.

Although a large number of previous studies investigated how different research constructs and variables, including benefits and trust, influence customers in the process of shaping their attitudes towards AI (Lobera et al., 2020; Stein et al., 2024; Bergdahl et al., 2023; Gerlich, 2023; Sartori and Bocca, 2023; Aksu and Sener, 2024), no previous research has focused on the moderating role of consumers' perceived threats of AI and their interrelations with consumers' perceived benefits and trust, which may shape their attitudes towards AI.
In line with our hypothesis H3, it was confirmed that consumers who perceive AI as a threat manifest more negative attitudes towards AI. Our research showed a significant negative relation (β = -0.38, p < 0.001) between their concerns about AI threats and their attitudes towards AI. This underlines the importance of addressing consumers' concerns regarding how AI might affect different fields of their life, as doing so is essential for encouraging more positive attitudes.

However, with regard to our hypothesis H4, it is important to emphasize that the negative influence of perceived threats of AI does not weaken the positive and strong relation between consumers' perceived benefits of a specific AI tool (ICSRs) and their attitudes towards AI, as the moderating effect was non-significant (β = -0.09, p > 0.01). This finding holds particular relevance for chief marketing officers of companies that deploy AI technology in buying processes, as it sheds light on the possible negative influences of consumers' perceived threats of AI on AI adoption. Thus, companies that enable the use of AI technology by consumers in their selling process should strengthen the bundle of AI benefits perceived by consumers, because perceived benefits, despite possible threats, significantly impact consumers' attitudes towards AI.

By examining the impact of consumers' perceived trust on their attitudes towards AI (hypothesis H2), the research offers a comprehensive understanding of consumers' perceived threats as a moderator (hypothesis H5). Namely, there is a statistically non-significant positive impact of consumers' perceived trust on their attitudes towards AI (β = 0.08). However, consumers' perceived threats of AI statistically significantly and negatively moderate the relationship between consumers' perceived trust and their attitudes towards AI (β = -0.22, p < 0.01). The results clearly show that AI providers should take into consideration the important negative role of consumers' perceived threats in shaping their attitudes towards AI and should try to mitigate the influence of such threats on consumers' perception. Consequently, AI developers and policymakers should focus on the specific threats perceived by consumers and adopt personalized approaches to address them effectively.

In addition, the results of our research enable companies to better understand all three components of customers' attitudes towards AI in the exchange process (i.e. the cognitive, affective, and behavioural components). Consumer experiences improved through AI-driven marketing activities can enhance the effectiveness and efficiency of exchange processes between companies and consumers. Hence, knowledge about the factors that influence customers' attitudes may support companies in establishing predictive models of consumer behaviour and in defining efficient marketing strategies aimed at encouraging the adoption of AI technology in the buying process (Hicham et al., 2023; Verma et al., 2021).

4.2 Research limitations and directions for future research

Despite its contributions to understanding consumers' attitudes towards AI, this research has several limitations. Firstly, the study focused only on a non-random judgmental sample of consumers who use an AI-related product (i.e. intelligent consumer service robots), which may introduce sample bias and not fully represent the broader population.
The reliability of the results depends on respondents providing honest and consistent answers, but self-reported data can be influenced by social desirability bias and a limited understanding of AI concepts. Second, since data for both endogenous and exogenous research constructs were collected from the same respondents in the same location at the same time, there is a potential for research bias, such as common method bias (MacKenzie and Podsakoff, 2012). Third, the study was conducted over a limited period, affecting the depth of data collection and analysis. A more extensive study with a larger and more diverse sample could provide deeper insights. Because the study relied solely on surveys, future work might benefit from including other methods, such as interviews or focus groups, to enrich the findings. Limited demographic information about respondents may hinder analysis of how factors such as age, gender, income, and working years influence attitudes toward AI. Finally, the rapidly evolving field of AI means consumers' perceptions may change over time, and this research represents a snapshot of their answers at a particular moment. These limitations should be considered when interpreting the findings.

Considering these limitations, future research in this field can benefit from the following suggestions: longitudinal studies should track changes in consumers' attitudes over time to identify evolving trends and shifts as AI technology progresses; cross-cultural studies examining AI attitudes across different cultures can reveal unique concerns and expectations and offer a more nuanced understanding of global perceptions; in-depth qualitative research, such as interviews and focus groups in combination with quantitative surveys, will help uncover the deeper reasons behind consumers' attitudes that surveys alone might miss; and contextual analysis investigating how various applications and contexts of AI shape consumers' responses can provide insights into specific areas of concern.

Analysing consumers' attitudes toward AI applications in different industries will help address sector-specific concerns. Furthermore, ethical considerations should be taken into account, because research into the ethical aspects of AI, including the development of ethical frameworks, is essential for responsible AI development. Exploring the factors that contribute to trust in AI, such as transparency and accountability, can guide the creation of more trustworthy AI systems. As AI technology continues to evolve, understanding and shaping consumers' attitudes remains an ongoing process. Following the suggested areas for future research and addressing individual concerns and ethical issues can help provide clearer and more balanced perspectives on AI. This approach aims to benefit both the industry and society. As AI impacts various aspects of life, establishing a transparent and responsible relationship between AI and the public (not only consumers in the buying process) is crucial. This research offers foundational insights that can guide future developments and improve the integration of AI into society.

4.3 Conclusion

Understanding consumer attitudes towards AI is crucial for several reasons. First, it helps companies tailor their products and services to meet consumer expectations. The perceived benefits of AI, such as increased efficiency and personalized experiences, play a significant role in shaping these attitudes.
Consumers who recognize the advantages of AI are more likely to embrace it in their daily lives. However, the level of trust that consumers have in AI technologies can significantly influence their acceptance. Building this trust requires transparency, accountability, and ethical conduct on the part of AI developers.

Moreover, consumers' perceived threats associated with AI cannot be overlooked. Concerns about privacy and data security often deter individuals from fully adopting AI solutions. It is essential for companies to address these threats through clear communication and robust security measures. The balance between the benefits and the threats consumers perceive ultimately determines consumer sentiment. Companies need to ensure that their AI applications enhance the user experience without compromising personal safety.

Finally, understanding consumer attitudes also enables policymakers to create regulations that protect users. By listening to consumer concerns, regulation can enhance trust in AI technologies, setting the groundwork for wider adoption. Therefore, a comprehensive understanding of consumer perceptions of AI (its benefits and threats, and the importance of trust) is essential for its successful integration into society. This understanding ultimately drives innovation while ensuring that AI development remains aligned with public values and expectations.

Literature

Ahmad, M., Alhalaiqa, F., & Subih, M. (2023). Constructing and testing the psychometrics of an instrument to measure the attitudes, benefits, and threats associated with the use of Artificial Intelligence tools in higher education. Journal of Applied Learning & Teaching, 6(2), 114-120.

Ajzen, I., & Fishbein, M. (1980). Understanding attitudes and predicting social behaviour. Englewood Cliffs, NJ: Prentice-Hall.

Aksu, S., & Sener, B. C. (2024). Factors affecting consumers' online purchasing attitudes towards ads guided by artificial intelligence. Imgelem, 14, 373-400. https://doi.org/10.53791/imgelem.1482365

Autor, D. (2015). Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives, 29(3), 3-30. https://doi.org/10.1257/jep.29.3.3

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Prentice-Hall, Inc.

Basha, M. (2023). Impact of artificial intelligence on marketing. East Asian Journal of Multidisciplinary Research (EAJMR), 2(3), 993-1004. https://doi.org/10.55927/eajmr.v2i3

Bastani, H., Bastani, O., & Park Sinchaisri, W. (2021). Improving human sequential decision-making with reinforcement learning. arXiv. https://doi.org/10.48550/arXiv.2108.08454

Bergdahl, J., Latikka, R., Celuch, M., Savolainen, I., Soares Mantere, E., Savela, N., & Oksanen, A. (2023). Self-determination and attitudes toward artificial intelligence: Cross-national and longitudinal perspectives. Telematics and Informatics, 82. https://doi.org/10.1016/j.tele.2023.102013

Bharadiya, J. (2023). Artificial intelligence in transportation systems: A critical review. American Journal of Computing and Engineering, 6(1), 34-45. https://doi.org/10.47672/ajce.1487

Brakus, J. J., Schmitt, B. H., & Zarantonello, L. (2009). Brand experience: What is it? How is it measured? Does it affect loyalty? Journal of Marketing, 73(3), 52-68. https://doi.org/10.1509/jmkg.73.3.052

Brodie, R. J., Hollebeek, L. D., Jurić, B., & Ilić, A. (2011). Customer engagement: Conceptual domain, fundamental propositions, and implications for research. Journal of Service Research, 14(3), 1-20. https://doi.org/10.1177/1094670511411703
Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. WW Norton & Company.

Bughin, J., Catlin, T., Hirt, M., & Willmott, P. (2018). Why digital strategies fail. McKinsey Quarterly, 1, 61-75.

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency (pp. 77-91). Proceedings of Machine Learning Research, 81, 1-15.

Buttle, F. (2010). Customer relationship management: Concepts and technologies. Butterworth-Heinemann.

Cacioppo, J. T., Tassinary, L. G., & Berntson, G. G. (Eds.). (2007). Handbook of psychophysiology (3rd ed.). Cambridge University Press.

Calo, R. (2015). Robotics and the lessons of cyberlaw. California Law Review, 103(3), 513-563.

Cao, L. (2022). AI in finance: Challenges, techniques, and opportunities. ACM Computing Surveys, 55(3), 1-38. https://doi.org/10.1145/3502289

Cavallo, J. (2019, September 10). Confronting the criticisms facing Watson for Oncology: A conversation with Nathan Levitan, MD, MBA. https://ascopost.com/issues/september-10-2019/confronting-the-criticisms-facing-watson-for-oncology/

Chandon, P., Wansink, B., & Laurent, G. (2000). A benefit congruency framework of sales promotion effectiveness. Journal of Marketing, 64(4), 65-81. https://doi.org/10.1509/jmkg.64.4.65.18071

Chen, Y., Chen, J., Wang, C., & Zhang, C. (2019). Application of artificial intelligence in forest resource management: A review. Forests, 9(4), 163.

Chen, H., Chan-Olmsted, S., Kim, J., & Mayor Sanabria, I. (2022). Consumers' perception on artificial intelligence applications in marketing communication. Qualitative Market Research: An International Journal, 25(1), 125-142. https://doi.org/10.1108/QMR-03-2021-0040

Chi, N. T. K., & Vu, N. H. (2022). Investigating the customer trust in artificial intelligence: The role of anthropomorphism, empathy response, and interaction. CAAI Transactions on Intelligence Technology (pp. 260-273). The Institution of Engineering and Technology, Wiley. https://doi.org/10.1049/cit2.12133

Cialdini, R. B., & Goldstein, N. J. (2004). Social influence: Compliance and conformity. Annual Review of Psychology, 55, 591-621. https://doi.org/10.1146/annurev.psych.55.090902.142015

Crawford, K., & Calo, R. (2016). There is a blind spot in AI research. Nature, 538, 311-313.

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-340.

Eagly, A. H., & Chaiken, S. (1993). The psychology of attitudes. Fort Worth, TX: Harcourt Brace Jovanovich.

European Commission. (2020). White paper on artificial intelligence: A European approach to excellence and trust. Brussels, Belgium.

Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention, and behaviour: An introduction to theory and research. Reading, MA: Addison-Wesley.

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., ... & Luetge, C. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689-707. https://doi.org/10.1007/s11023-018-9482-5

Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39-50. https://doi.org/10.2307/3151312
Journal of Marketing Research, 18(1), 39-50. https://doi.org/10.2307/3151312
Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254-280. https://doi.org/10.1016/j.techfore.2016.08.019
Gao, L., Li, G., Tsai, F., Gao, C., Zhu, M., & Qu, X. (2022). The impact of artificial intelligence stimuli on customer engagement and value co-creation: The moderating role of customer ability readiness. Journal of Research in Interactive Marketing, 17(2), 317-333. https://doi.org/10.1108/JRIM-10-2021-0260
Gerlich, M. (2023). Perceptions and acceptance of artificial intelligence: A multi-dimensional study. Social Sciences, 12(9), 502. https://doi.org/10.3390/socsci12090502
Grewal, D., Guha, A., Satornino, C. B., & Schweiger, E. B. (2021). Artificial intelligence: The light and the darkness. Journal of Business Research, 136, 229-236. https://doi.org/10.1016/j.jbusres.2021.07.043
Hair, J., Risher, J., & Sarstedt, M. (2018). When to use and how to report the results of PLS-SEM. European Business Review, 31(1), 2-24. https://doi.org/10.1108/EBR-11-2018-0203
Haleem, A., Javaid, M., Qadri, M. A., Singh, R. P., & Suman, R. (2022). Artificial intelligence (AI) applications for marketing: A literature-based study. International Journal of Intelligent Networks, 3, 119-132. https://doi.org/10.1016/j.ijin.2022.08.005
Hasan, R., Shams, R., & Rahman, M. (2021). Consumer trust and perceived risk for voice-controlled artificial intelligence: The case of Siri. Journal of Business Research, 131, 591-597. https://doi.org/10.1016/j.jbusres.2020.12.012
Heer, J. (2019). Agency plus automation: Designing AI into interactive systems. Proceedings of the National Academy of Sciences, 116(6), 1844-1850. https://doi.org/10.1073/pnas.1807184115
Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modelling. Journal of the Academy of Marketing Science, 43(1), 115-135. https://doi.org/10.1007/s11747-014-0403-8
Hicham, N., Nassera, H., & Karim, S. (2023). Strategic framework for leveraging artificial intelligence in future marketing decision-making. Journal of Intelligent Management Decision, 2, 139-150. https://doi.org/10.56578/jimd020304
Hinks, T. (2020). Fear of robots and life satisfaction. International Journal of Social Robotics, 13, 327-340. https://doi.org/10.1007/s12369-020-00640-1
Huang, M. H., & Rust, R. T. (2018). Artificial intelligence in service: From expert systems to deep learning. Journal of Service Research, 21(2), 155-172. https://doi.org/10.1177/1094670517752459
Huang, R., Tlili, A., Xu, L., Ying, C., Zheng, L., Metwally, A. H. S., Ting, D., Chang, T., Wang, H., & Mason, J. (2023). Educational futures of intelligent synergies between humans, digital twins, avatars and robots - the iSTAR framework. Journal of Applied Learning & Teaching, 6(2), 1-16. https://doi.org/10.37074/jalt.2023.6.2.33
Ikkatai, Y., Hartwig, T., Takanashi, N., & Yokoyama, H. M. (2022). Octagon measurement: Public attitudes toward AI ethics. International Journal of Human-Computer Interaction, 38(17), 1589-1606. https://doi.org/10.1080/10447318.2021.2009669
Jacoby, J. (2002). Stimulus-organism-response reconsidered: An evolutionary step in modelling (consumer) behaviour. Journal of Consumer Psychology, 12(1), 51-57.
https://doi.org/10.1207/S15327663JCP1201_05
Jenkins, M. (2021). How AI is improving customer experience. Forbes. https://www.forbes.com/sites/mikejenkins/2021/06/28/how-ai-is-improving-customer-experience/?sh=54cf9b57669c
Jones, C. (2018). The ethics of artificial intelligence. In M. Anderson & S. L. Anderson (Eds.), Machine ethics (pp. 137-160). Cambridge University Press. https://doi.org/10.1017/CBO9780511978036
Kadambi, R., Kakade, S., & Tandon, A. (2018). Amazon recommendations: Item-to-item collaborative filtering. Amazon.com, Inc.
Kaplan, A. M., & Haenlein, M. (2019). Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15-25. https://doi.org/10.1016/j.bushor.2018.08.004
Kieslich, K., Luenich, M., & Marcinkowski, F. (2021). The Threats of Artificial Intelligence Scale (TAI). International Journal of Social Robotics, 13, 1563-1577. https://doi.org/10.1007/s12369-020-00734-w
Kirova, M., & Boneva, M. (2024). Artificial intelligence: Challenges and benefits for business. 14th International Scientific Conference “Business and Management 2024” (pp. 253-260). Vilnius Tech. https://doi.org/10.3846/bm.2024.1277
Kline, R. B. (2015). Principles and practice of structural equation modelling. Guilford Publications.
Kohtamaki, M., Vesalainen, J., Henneberg, S., Naude, P., & Ventresca, M. J. (2012). Enabling relationship structures and relationship performance improvement: The moderating role of relational capital. Industrial Marketing Management, 41(8), 1298-1309. https://doi.org/10.1016/j.indmarman.2012.08.001
Komiak, S. Y., & Benbasat, I. (2006). The effects of personalization and familiarity on trust and adoption of recommendation agents. MIS Quarterly, 30(4), 941-960.
Koo, D. M., & Ju, S. H. (2010). The interactional effects of atmospherics and perceptual curiosity on emotions and online shopping intention. Computers in Human Behaviour, 26(3), 377-388. https://doi.org/10.1016/j.chb.2009.11.009
Laaksonen, T., Pajunen, K., & Kulmala, H. I. (2008). Co-evolution of trust and dependence in customer-supplier relationships. Industrial Marketing Management, 37(8), 910-920. https://doi.org/10.1016/j.indmarman.2007.06.007
Lemon, K. N., & Verhoef, P. C. (2016). Understanding customer experience throughout the customer journey. Journal of Marketing, 80(6), 69-96. https://doi.org/10.1509/jm.15.0420
Liang, Y., & Lee, S. A. (2017). Fear of autonomous robots and artificial intelligence: Evidence from national representative data with probability sampling. International Journal of Social Robotics, 9, 379-384. https://doi.org/10.1007/s12369-017-0401-3
Liu, M. T., Brock, J. L., Shi, G. C., Chu, R., & Tseng, T.-H. (2012). Perceived benefits, perceived risk, and trust. Asia Pacific Journal of Marketing and Logistics, 25(2), 225-248. https://doi.org/10.1108/13555851311314031
Lobera, J., Fernandez Rodriguez, C. J., & Torres-Albero, C. (2020). Privacy, values and machines: Predicting opposition to artificial intelligence. Communication Studies, 71(3), 448-465. https://doi.org/10.1080/10510974.2020.1736114
Lockey, S., Gillespie, N., Holm, D., & Someh, I. A. (2021). A review of trust in artificial intelligence: Challenges, vulnerabilities and future directions. Proceedings of the 54th Hawaii International Conference on System Sciences (pp. 5463-5472). HICSS.
Lu, C. C. A., Yeh, C. C. R., & Lai, C. C. S. (2025).
The role of intelligence, trust and interpersonal job characteristics in employees’ AI usage acceptance. International Journal of Hospitality Management, 126, 104032. https://doi.org/10.1016/j.ijhm.2024.104032
Luhmann, N. (1988). Familiarity, confidence, trust: Problems and alternatives. In D. Gambetta (Ed.), Trust: Making and breaking cooperative relations (pp. 94-107). Blackwell.
MacKenzie, S. B., & Podsakoff, P. M. (2012). Common method bias in marketing: Causes, mechanisms, and procedural remedies. Journal of Retailing, 88, 542-555. https://doi.org/10.1016/j.jretai.2012.08.001
Mariani, M. M., Machado, I., & Nambisan, S. (2023). Types of innovation and artificial intelligence: A systematic quantitative literature review and research agenda. Journal of Business Research, 155, 1-14. https://doi.org/10.1016/j.jbusres.2022.113364
Martinez-Lopez, F. J., & Casillas, J. (2013). Artificial intelligence-based systems applied in industrial marketing: An historical overview, current and future insights. Industrial Marketing Management, 42(4), 489-495. https://doi.org/10.1016/j.indmarman.2013.03.001
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709-734. https://doi.org/10.5465/amr.1995.9508080335
McClure, P. K. (2018). “You are fired”, says the robot. Social Science Computer Review, 36(2), 139-156. https://doi.org/10.1177/0894439317698637
Mendez-Suarez, M., Delbello, L., de Vega de Unceta, A., & Ortega Larrea, A. L. (2024). Factors affecting consumers’ attitudes towards artificial intelligence. Journal of Promotion Management, 30(7), 1141-1158. https://doi.org/10.1080/10496491.2024.2367203
Mirbabaie, M., Bruenker, F., Moellmann (Frick), N. R. J., & Stieglitz, S. (2022). The rise of artificial intelligence – understanding the AI identity threat at the workplace. Electronic Markets, 32, 73-99. https://doi.org/10.1007/s12525-021-00496-x
Moon, J. W., & Kim, Y. G. (2001). Extending the TAM for a World-Wide-Web context. Information & Management, 38(4), 217-230. https://doi.org/10.1016/S0378-7206(00)00061-6
Morgan, R. M., & Hunt, S. D. (1994). The commitment-trust theory of relationship marketing. The Journal of Marketing, 58(3), 20-38. https://doi.org/10.1177/002224299405800302
Neuhofer, B., Buhalis, D., & Ladkin, A. (2015). Smart technologies for personalized experiences: A case study in the hospitality domain. Electronic Markets, 25(3), 243-254. https://doi.org/10.1007/s12525-015-0182-1
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishing Group.
Özüdoğru, G., & Cakir, H. (2021). Investigation of pre-service teachers’ opinions about using non-linear digital storytelling method. Kastamonu Education Journal, 29(2), 452-459. https://doi.org/10.24106/kefdergi.744216
Pelau, C., Dabija, D. C., & Ene, I. (2021). What makes an AI device human-like? The role of interaction quality, empathy and perceived psychological anthropomorphic characteristics in the acceptance of artificial intelligence in the service industry. Computers in Human Behaviour, 122, 106855. https://doi.org/10.1016/j.chb.2021.106855
Petty, R. E., & Cacioppo, J. T. (1986). The elaboration likelihood model of persuasion. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 19, pp. 123-205). Academic Press.
Puntoni, S., Walker Reczek, R., Giesler, M., & Botti, S. (2021).
Consumers and artificial intelligence: An experiential perspective. Journal of Marketing, 85(1), 131-151. https://doi.org/10.1177/0022242920953847
Riegelsberger, J., Sasse, M. A., & McCarthy, J. D. (2003). Shiny happy people building trust? Photos on e-commerce websites and consumer trust. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 121-128). https://doi.org/10.1145/642611.642634
Rousseau, D. M., Sitkin, S. B., Burt, R. S., & Camerer, C. (1998). Not so different after all: A cross-discipline view of trust. Academy of Management Review, 23(3), 393-404. https://doi.org/10.5465/amr.1998.926617
Roy, S. K., Singh, G., & Hope, M. (2019). The rise of smart consumers: Role of smart servicescape and smart consumer experience co-creation. Journal of Marketing Management, 35(15/16), 1480-1513. https://doi.org/10.1080/0267257X.2019.1680569
Russell, S. J., & Norvig, P. (2010). Artificial intelligence: A modern approach (3rd ed.). Pearson.
Sartori, L., & Bocca, G. (2023). Minding the gap(s): Public perceptions of AI and socio-technical imaginaries. AI & Society, 38(2), 443-458. https://doi.org/10.1007/s00146-022-01422-1
Scardamalia, M., & Bereiter, C. (2014). Smart technology for self-organizing processes. Smart Learning Environments, 1(1), 1-13. https://doi.org/10.1007/s40561-014-0001-8
Schepman, A., & Rodway, P. (2020). Initial validation of the general attitudes towards Artificial Intelligence Scale. Computers in Human Behavior Reports, 1, 100014. https://doi.org/10.1016/j.chbr.2020.100014
Shank, D. B., Graves, C., Gott, A., Gamez, P., & Rodriguez, S. (2019). Feeling our way to machine minds: People’s emotions when perceiving mind in artificial intelligence. Computers in Human Behaviour, 98, 256-266. https://doi.org/10.1016/j.chb.2019.04.001
Siau, K., & Wang, W. (2018). Building trust in artificial intelligence, machine learning, and robotics. Cutter Business Technology Journal, 31(2), 47-53.
Sindermann, C., Sha, P., Zhou, M., Wernicke, J., Schmitt, H. S., Li, M., Sariyska, R., Stavrou, M., Becker, B., & Montag, C. (2020). Assessing the attitude towards artificial intelligence: Introduction of a short measure in German, Chinese, and English language. Kuenstliche Intelligenz, 35(1), 109-118. https://doi.org/10.1007/s13218-020-00689-0
Sivarajah, U., Kamal, M., Irani, Z., & Weerakkody, V. (2017). Critical analysis of big data challenges and analytical methods. Journal of Business Research, 70, 263-286. https://doi.org/10.1016/j.jbusres.2016.08.001
Slovic, P. (1999). Trust, emotion, sex, politics, and science: Surveying the risk-assessment battlefield. Risk Analysis, 19(4), 689-701. https://doi.org/10.1111/j.1539-6924.1999.tb00439.x
Stein, J. P., Messingschlager, T., Gnambs, T., Hutmacher, F., & Appel, M. (2024). Attitudes towards AI: Measurement and associations with personality. Scientific Reports, 14, 2909. https://doi.org/10.1038/s41598-024-53335-2
Tepylo, N., Straubinger, A., & Laliberte, J. (2023). Public perception of advanced aviation technologies: A review and roadmap to acceptance. Progress in Aerospace Sciences, 138, 100899. https://doi.org/10.1016/j.paerosci.2023.100899
Tomkins, C. (2001). Interdependencies, trust and information in relationships, alliances and networks. Accounting, Organizations and Society, 26(2), 161-191. https://doi.org/10.1016/S0361-3682(00)00018-0
Topol, E. J. (2019). High-performance medicine: The convergence of human and artificial intelligence.
Nature Medicine, 25(1), 44-56. https://doi.org/10.1038/s41591-018-0300-7
Tran, T. B. H., & Vu, A. D. (2021). From customer value co-creation behaviour to customer perceived value. Journal of Marketing Management, 37(9/10), 993-1026. https://doi.org/10.1080/0267257X.2021.1908398
Trawnih, A., Al-Masaeed, S., Alsoud, M., & Alkufahy, M. (2022). Understanding artificial intelligence experience: A customer perspective. International Journal of Data and Network Science, 6, 1471-1484. https://doi.org/10.5267/j.ijdns.2022.5.004
Tufekci, Z. (2018, March 10). YouTube, the Great Radicalizer. The New York Times.
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425-478.
Verma, S., Sharma, R., Deb, S., & Maitra, D. (2021). Artificial intelligence in marketing: Systematic review and future research direction. International Journal of Information Management Data Insights, 1(1), 1-8. https://doi.org/10.1016/j.jjimei.2020.100002
Wach, K., Duong, C. D., Ejdys, J., Kazlauskaite, R., Korzynski, P., Mazurek, G., Paliszkiewicz, J., & Ziemba, E. (2023). The dark side of generative artificial intelligence: A critical analysis of controversies and risks of ChatGPT. Entrepreneurial Business and Economics Review, 11(2), 7-24. https://doi.org/10.15678/EBER.2023.110201
Wang, Y. Y., & Wang, Y. S. (2019). Development and validation of an artificial intelligence anxiety scale: An initial application in predicting motivated learning behaviour. Interactive Learning Environments, 27(1). https://doi.org/10.1080/10494820.2019.1674887
Witte, K. (1992). Putting the fear back into fear appeals: The extended parallel process model. Communication Monographs, 59, 329-349. https://doi.org/10.1080/03637759209376276
Xu, Y., Liu, X., Cao, X., Huang, C., Liu, E., Qian, S., & Zhang, J. (2021). Artificial intelligence: A powerful paradigm for scientific research. The Innovation, 2(4), 100179. https://doi.org/10.1016/j.xinn.2021.100179
Ye, H., Cheng, X., Yuan, M., Xu, L., Gao, J., & Cheng, C. (2016). A survey of security and privacy in big data. International Symposium on Communications and Information Technologies (ISCIT) (pp. 268-272). IEEE. https://doi.org/10.1109/ISCIT.2016.7751634
Zarifis, A., & Cheng, X. (2022). A model of trust in Fintech and trust in Insurtech: How artificial intelligence and the context influence it. Journal of Behavioural and Experimental Finance, 36, 1-7. https://doi.org/10.1016/j.jbef.2022.100739
Zhang, X., & Chen, R. Q. (2008). Examining the mechanism of the value co-creation with customers. International Journal of Production Economics, 116(2), 242-250. https://doi.org/10.1016/j.ijpe.2008.09.004

Matjaž Iršič holds a Ph.D. in Marketing and is an Assistant Professor at the University of Maribor. He is employed at the Faculty of Economics and Business, where he teaches marketing courses at undergraduate and postgraduate levels. His primary research interests lie in strategic marketing and relationship marketing. His scientific and professional work includes over 300 bibliographic units, among them 16 scientific articles, more than 20 professional articles, three textbooks, and four scientific monographs in the field of marketing. He has presented more than 30 papers at international conferences and has actively participated in fundamental and applied international research projects.
Tomaž Gjergjek holds a Bachelor’s degree in Economics (UN) and is currently a master’s student in the Marketing Management program. He has experience in e-commerce, financial market management, and business growth automation. As an e-commerce manager, he runs an Amazon FBA business and specializes in market research, digital marketing, and sales campaign optimization. He has gained international experience through studies at universities across Europe, and his professional interests also include the impact of artificial intelligence on marketing and business strategies.

Appendix: Scales of measurement

Consumers’ perceived benefits of intelligent consumer service robots (ICSRs) (5-point Likert scale ranging from strongly agree (5) to strongly disagree (1))

Perceived interactivity
ICSRs can accurately provide me with the information I need.
When I encounter a problem, ICSRs can provide me with a solution.
ICSRs can effectively collect consumer feedback.
ICSRs can effectively promote two-way communication with a seller.

Perceived personalization
ICSRs store my preferences and offer me extra services based on my preferences.
ICSRs do a pretty good job guessing what kinds of things I might want and making suggestions.
ICSRs know what I want.
The ICSR setup can be personalized to my needs.
The service provided by ICSRs is customized exactly to my question.

Consumers’ engagement
I feel like I can be myself when using ICSRs.
The things I did with the ICSRs are in line with what I really wanted to do.
Using ICSRs has become a part of my daily consumption.
I think I have a strong emotional connection with ICSRs.

Value co-creation
I actively responded to the questions of the ICSRs so that the company can understand my needs.
I participated in the solicitation or evaluation of new product/service ideas proposed by the ICSRs.
I participated in the experience or promotion of new products recommended by the ICSRs.
I actively gave feedback about my experience, questions, and improvement suggestions to the ICSRs.
I actively recommended that others use ICSRs to purchase products/services.
I actively help other consumers solve their problems.

Consumers’ perceived trust in AI (5-point Likert scale ranging from strongly agree (5) to strongly disagree (1))
I trust that AI will take care of me.
I trust that people are safe when interacting with AI.
I trust that AI will deliver the best services.
I trust that AI will recommend the best services for my needs and demands.
I trust that AI will offer more efficient services than human beings.
I trust that AI will offer a modern look to service firms.

Consumers’ perceived threats of AI (5-point Likert scale ranging from strongly agree (5) to strongly disagree (1))
AI causes a lack of human interaction.
AI causes some legal problems.
AI decreases creativity and critical thinking.
AI tools do not replace the classical offline buying process.
AI causes some security concerns.
AI causes some technical problems.
AI causes over-reliance on technology.
AI causes some ethical dilemmas.
Using AI tools requires a constant Internet connection.
AI has difficulty handling complex tasks in the buying process.
There is a risk of acquiring inaccurate, incorrect, or biased information.
AI provides over-detailed, redundant, or excessive content.
Using AI tools will reduce some skills and abilities of the people who use them.
I see AI tools as a threat to human ethics.
Consumers’ attitudes towards AI (5-point Likert scale ranging from strongly agree (5) to strongly disagree (1))
AI will make the world a better place. (Cognitive)
I have strong positive emotions about AI. (Affective)
I want to use technologies that rely on AI. (Behavioural)
AI has more advantages than disadvantages. (Cognitive)
I look forward to future AI developments. (Affective)
AI offers solutions to many world problems. (Cognitive)
I prefer technologies that feature AI. (Behavioural)
I am not afraid of AI. (Affective)
I would rather choose a technology with AI than one without it. (Behavioural)
AI solves problems rather than creates them. (Cognitive)
When I think about AI, I have mostly positive feelings. (Affective)
I would not avoid technologies that are based on AI. (Behavioural)
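
As an illustration only, and not part of the study’s own analysis, the sketch below shows one conventional way to turn the twelve attitude items above into per-respondent component scores (cognitive, affective, behavioural) and to check each component’s internal consistency with Cronbach’s alpha. The study itself estimated latent constructs with SEM-PLS rather than simple composites; the Python code, the column names att1 through att12, and the synthetic data are hypothetical placeholders, not the authors’ dataset or software.

import numpy as np
import pandas as pd

# Hypothetical mapping of the attitude components to questionnaire columns,
# one column per item in the order the items appear above.
ATTITUDE_ITEMS = {
    "cognitive":   ["att1", "att4", "att6", "att10"],
    "affective":   ["att2", "att5", "att8", "att11"],
    "behavioural": ["att3", "att7", "att9", "att12"],
}

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def score_components(responses: pd.DataFrame) -> pd.DataFrame:
    # Mean score per respondent for each attitude component.
    return pd.DataFrame(
        {name: responses[cols].mean(axis=1) for name, cols in ATTITUDE_ITEMS.items()}
    )

if __name__ == "__main__":
    # Synthetic 5-point responses: 10 respondents x 12 items (placeholder data only).
    rng = np.random.default_rng(seed=0)
    data = pd.DataFrame(
        rng.integers(1, 6, size=(10, 12)),
        columns=[f"att{i}" for i in range(1, 13)],
    )
    print(score_components(data))
    for name, cols in ATTITUDE_ITEMS.items():
        print(name, round(cronbach_alpha(data[cols]), 3))

With real survey data, negatively worded items would first be reverse-coded before scoring; and because PLS-SEM weights indicators when forming latent variables, these equally weighted composites should be read only as a rough approximation of the construct scores used in the paper.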