Supporting the Ethical Use of Artificial Intelligence Applications in Universities – A Research Based on Students' Opinions

Anca Draghici, Politehnica University of Timisoara, Romania, anca.draghici@upt.ro
Caius Luminosu, Politehnica University of Timisoara, Romania, caius.luminosu@upt.ro
Angela Repanovici, Transilvania University of Brasov, Romania, arepanovici@unitbv.ro
Manolis Koukourakis, University of Crete, Library, Rethymno, Greece, manolis@uoc.gr
Valerij Dermol, International School for Social and Business Studies, Celje, Slovenia, valerij.dermol@mfdps.si
Ilie Mihai Taucean, Politehnica University of Timisoara, Romania, ilie.taucean@upt.ro

International Journal of Management, Knowledge and Learning | ISSN 2232-5697, Volume 13 (2024), 81–92 | https://www.doi.org/10.53615/2232-5697.13.81-92

Purpose: The purpose of the research was to identify a feasible way to support the ethical use of AI in universities, based on identifying how the AI software application ChatGPT is used in education at higher education institutions by students in different majors.

Study design/methodology/approach: The research methodology consisted of the following stages: (1) a survey based on a questionnaire designed according to the Technology Acceptance Model (TAM) framework to collect students' opinions on the use of AI ChatGPT; (2) the survey results were used to design the materials of the AI Transmedia Campaign and to identify the best channels for their distribution; (3) after implementing the AI Transmedia Campaign, a feedback survey was conducted to ascertain the effectiveness of the approach in creating ethical behaviour towards AI software applications in general, and AI ChatGPT in particular.

Findings: The research shows a gap in regulating the ethical use of AI applications in higher education. The AI Transmedia Campaign was well received and appreciated by all categories of students.

Originality/value: The research is international and was carried out in higher education institutions in Romania, Greece, and Slovenia (the size of the research sample, 2942, proves the scope of the study). The research results have characterised students' behaviour (cognitive response and intention to use) towards AI ChatGPT and demonstrated the utility of the AI Transmedia Campaign realised in the context of the implementation of the RespectNET project (2021-1-IT02-KA220-HED-000027578, https://respectnet.eu/).

Introduction

Artificial intelligence (AI) has proliferated in the last several years and is seen as a tool that can aid our progress in a variety of areas, including healthcare (Arslan, 2023; Takagi et al., 2023), education (Alqahtani et al., 2023), image processing, natural language processing, smart robots, autonomous vehicles, and speech and facial recognition. Numerous definitions of AI have been proposed since the initial models were made public. AI was first characterised by the pioneer Marvin Minsky as "enabling machines to do things that require human intelligence" (a definition corroborated by the study of Jiang et al., 2022). "Computational systems that are capable of engaging in human-like processes such as learning, adapting, synthesising, self-correcting, and using data for complex processing tasks" is how AI is characterised in other studies (Crompton & Burke, 2023). ChatGPT, the most popular AI-based tool from OpenAI, was launched in November 2022 (Bhattacharya et al., 2023).
It is becoming increasingly popular in the public sphere, in academic settings, and in education (Li et al., 2023). According to the company's official website, GPT, which stands for "generative pre-trained transformer", is trained on a vast amount of text and code data (https://chataigpt.org). This allows it to analyse a vast amount of online data, which enables it to comprehend requirements and produce text that appears human (Li et al., 2023; Bhattacharya et al., 2023). As a result, it can compose formal emails, examine an application's code, summarise a text, answer practically any type of query, and draw research-based conclusions (Gozalo-Brizuela & Garrido-Merchán, 2023; Bhattacharya et al., 2023; Li et al., 2023). According to Bhattacharya et al. (2023), this AI solution also rejects improper or unjustified queries and challenges questions based on false premises. Moreover, Bhattacharya et al.'s (2023) study lends credence to the notion that AI ChatGPT poses a risk to higher education, since students can utilise it to quickly produce essays, assignments, or projects without citing the sources of their ideas. At the same time, this software might also be useful when we are at a loss for ideas. Furthermore, the study by Greitemeyer and Kastenmüller (2023) looks at the connection between students' intentions to utilise AI ChatGPT for unethical activity and their personality attributes. They concluded that the only characteristic that significantly influenced students' intentions to use AI ChatGPT for academic cheating was honesty-humility, and they believe that educational institutions and pertinent stakeholders should develop educational initiatives that instruct the public on how to use AI language models in a way that upholds academic integrity. In addition, according to the research of Yu (2023), American students employing AI ChatGPT performed very well on tests and assignments: AI ChatGPT was used by 89% of the study's American participants to do homework, 53% to compose papers, 48% to assist with tests, and 22% to create a map of the topics to be covered.

All the studies mentioned in the specialised literature of recent years demonstrate that AI applications are present in the life of universities, with students trying and testing various ways in which they can be useful. The literature has identified both positive and negative aspects of using AI software applications, and there is a general apprehension towards them in the academic community (manifested among teachers, researchers, and students alike). All this motivated the development of the present research.

The objective of the research was to investigate the current situation in the use of ChatGPT by students from different universities (a diagnostic survey-type study based on a specially designed questionnaire). For this first research phase, a scenario based on the Technology Acceptance Model (TAM) was developed. The results of the diagnostic study contributed to the development of the AI Campaign (the second research stage), which was carried out during the implementation of the Erasmus+ RespectNET project. The dissemination of the AI Campaign in universities and the collection of feedback regarding its usefulness confirmed the academic community's need for information on the ethical aspects of AI use. The paper has three parts: the methodological explanations, the results obtained, and the research conclusions.
The Research Scenario

In the first part of the research, the developed scenario was based on the Technology Acceptance Model (TAM) framework (Park, 2009). Perceived usefulness, according to TAM, is the degree to which a person believes that using a specific technology would improve their work performance or simplify their life. Perceived ease of use, on the other hand, refers to the degree to which a person believes that using the technology would be free of effort. According to TAM, these two elements directly impact a person's attitude towards using the technology, which in turn influences their intention to use it; the actual usage of the technology follows the intention to use it. TAM has been extensively used in many fields to understand and forecast behaviour related to the adoption and use of technology. It has been applied to research how well various technologies, from simple software programs to complex information systems, are accepted (Tao et al., 2022; Bailey et al., 2022; Al-Adwan et al., 2023). Figure 1 shows the adopted structure of the questionnaire used in the survey. By including these TAM aspects in our questionnaire, we obtain important information on students' acceptance and use of ChatGPT in a learning environment. Finding out how students felt about the ChatGPT application was the first study's main goal: to investigate how much students know about this program, whether they believe it may help them with their schoolwork, and how they anticipate artificial intelligence (AI) changing education in the future.

Figure 1: The structure of the questionnaire used in the first survey.

A survey based on a questionnaire was created and used to find out students' opinions and perceptions about AI ChatGPT and how much they utilise it in creating their projects or in learning. The designed questionnaire comprised twenty questions, and the respondents were informed about the confidentiality policy of the data collection process. The SurveyMonkey software was used to gather data online. As a result, 2942 valid answers were gathered from students enrolled in four universities: Politehnica University of Timisoara and Transilvania University of Brasov, both from Romania, the University of Crete, Rethymno, Greece, and the International School for Social and Business Studies, Celje, Slovenia. Microsoft Excel was used for data analysis (Table 1).

In the second stage of the research, the aim was to identify the usefulness of the AI Campaign (spread among the same university populations) by collecting feedback using a Google Forms questionnaire (https://mfdps.1ka.si/a/1be487f0) consisting of 10 questions. The research objective was to demonstrate the AI Campaign's usefulness in creating adequate, ethical behaviour in using AI software applications, such as AI ChatGPT. Microsoft Excel was used for data analysis and for the graphical representation of the research results (Table 1).

Table 1: Methods and tools used in the research.
Research stage 1 – Investigation of students' opinions on using AI ChatGPT. Objectives: students' opinions and perceptions about AI ChatGPT and their behaviour in using this software (usefulness, ease of use, use cases, attitude towards use, intention to use). Methods: survey based on a questionnaire developed according to TAM (20 questions, Figure 1); distribution to 4 university student populations (2 from Romania, 1 from Slovenia and 1 from Greece); number of valid questionnaires processed: 2942 (1925 from Romania, 442 from Slovenia and 575 from Greece). Tools: SurveyMonkey®, Excel.
Research stage 2 – AI Campaign for creating ethical behaviour. Objectives: students' opinions and perceptions about the created resources of the AI Campaign. Methods: survey based on a feedback questionnaire (10 questions); distribution to 4 university student populations (2 from Romania, 1 from Slovenia and 1 from Greece); number of valid questionnaires processed: 312 (206 from Romania, 38 from Slovenia and 68 from Greece). Tools: Google Forms, Excel.
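The questionnaire items were grouped by the TAM constructs shown in Figure 1 and analysed with descriptive statistics in Microsoft Excel. Purely as an illustrative sketch of this kind of construct-level aggregation (not the authors' actual workflow), the item names, the item-to-construct mapping, and the sample responses below are hypothetical:

```python
# Illustrative sketch only: aggregating 5-point Likert items by TAM construct.
# Item names, construct mapping, and sample data are hypothetical; the study's
# actual analysis used descriptive statistics in Microsoft Excel.
from statistics import mean

construct_items = {                      # hypothetical item-to-construct mapping
    "perceived_usefulness":  ["q1", "q2", "q3"],
    "perceived_ease_of_use": ["q4", "q5"],
    "attitude_towards_use":  ["q6", "q7"],
    "intention_to_use":      ["q8", "q9"],
}

responses = [                            # hypothetical respondents (values 1-5)
    {"q1": 4, "q2": 5, "q3": 4, "q4": 3, "q5": 4, "q6": 4, "q7": 3, "q8": 5, "q9": 4},
    {"q1": 2, "q2": 3, "q3": 3, "q4": 4, "q5": 5, "q6": 3, "q7": 2, "q8": 2, "q9": 3},
]

for construct, items in construct_items.items():
    # average the items per respondent, then average across respondents
    scores = [mean(r[item] for item in items) for r in responses]
    print(f"{construct}: mean = {mean(scores):.2f}")
```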
Research Results – First Survey

The research sample demography shows a gender distribution of 70% female and 29% male students. Students from all university specialisations were targeted (e.g., engineering, medicine, law, economics, communication and public relations, digital media, psychology, music, etc.), which led to a balanced research sample structure. In addition, the distribution of respondents by years of study was also balanced (52% were students from Bachelor's study programmes, 34% from Master's programmes, and 14% from PhD programmes). Most students in the sample (65%) were between 20 and 25 years old, 25% were between 18 and 20, and only 10% were older than 25. Table 2 presents a synthesis of the most relevant research results obtained, together with the debate over them, conclusions, and implications (only the questions from the questionnaire that contributed the most to the development of the AI Campaign were selected).

Table 2: Investigation of students' opinions on using AI ChatGPT – research results.

Research results: Although 46.08% of respondents agree that using AI ChatGPT for academic purposes is a good idea, they believe that this application's replies could be more accurate and relevant; 19.80% of respondents are happy and think they will get answers to their questions quickly when using the AI ChatGPT application, while 7.17% believe it is improper to use it in an academic setting, and 26.96% are unsure. Debates and conclusions: Research has shown that AI ChatGPT is popular among students, but they doubt the query results.

Research results: According to 45.92% of the student participants, human connection is still essential for learning in some situations, even if AI ChatGPT can be helpful in some cases; 39.12% of respondents think that, to achieve a deeper comprehension and improve communication skills, human connection with teachers cannot be substituted by the AI ChatGPT application; merely 5.78% of participants believed that AI ChatGPT suffices to provide them with all the necessary knowledge without the need for a coach or teacher. Debates and conclusions: Research has shown that AI ChatGPT is popular among students.

Research results: 37.37% of respondents say that AI ChatGPT could be enhanced by offering more precise and pertinent responses to challenging questions; in comparison, 49.13% of respondents think it should be improved by offering more recommendations and outside sources to expand the themes; just 10.03% of users would like to see more multimedia options added, including voice. Debates and conclusions: Research has shown that AI ChatGPT is popular among students, and they know how to use its functionalities.

Research results: 39.12% of students from the different universities think that regulations should be placed on AI ChatGPT use in the university, while 24.15% disagree; 36.73% of respondents, a sizable portion, are unsure whether a rule would be required. Debates and conclusions: Research shows a lack of ethical information about using AI applications in universities.

Research results: According to 56.12% of respondents, if AI ChatGPT is not utilised excessively, then using it in an academic setting can be deemed ethical; 10.20% of respondents think that using the AI application is unethical no matter how frequently it is used; 10.88% think that using it ethically "is never a bad idea"; 22.79% were undecided. Debates and conclusions: Research shows a lack of ethical information about using AI applications in universities.

Research results: Most survey participants (70.41%) think that the ethical usage of AI ChatGPT and related technologies requires training; a mere 12.59% of respondents think this training would not be required. Debates and conclusions: Research shows the strong need for the RespectNET-related training programme (Use of AI in media and media bias, https://elearningproject.eu/lessons/use-of-ai-in-media-and-media-bias/) and the created AI Campaign (https://elearningproject.eu/lessons/artificial-inteligence-ai-in-university-communication/).

Research results: There is a divide among students on the necessity of a university policy addressing the usage of AI ChatGPT in the classroom; while 22.53% of students disagreed, 26.28% said they were unsure. Debates and conclusions: Research shows a gap in regulating the ethical use of AI applications.

Research results: 34.13% of students do not think it is important to disclose that they have used AI ChatGPT in their work; 42.32% of all students think it is appropriate to do so and to explain how they employed the AI software application. Debates and conclusions: Research shows a gap in regulating the ethical use of AI applications.
Results on AI Campaign Perception – Second Survey

The Process of the AI Transmedia Campaign Development

The first-stage results of the research conducted among students, together with other research conducted with teaching staff and universities (Draghici et al., 2023; Robescu et al., 2023), have shown that AI applications are becoming more and more popular. They usually operationalise activities and eliminate routine documentation or information tasks (terminological, geographical, methodological, etc.). In addition, the extended study of Robescu et al. (2023) shows the gap between the skills needed and the skills possessed by university trainers with regard to media communication and respectful communication, considering the AI applications and social media used. The problem of the ethical use of AI applications in the university academic space was the focus of the activities carried out in implementing the Erasmus+ project RespectNET – Respectful Communication through Media Education Network (2021-1-IT02-KA220-HED-000027578, https://respectnet.eu/). Project partners have created different resources that are exploited on a large scale in different higher education institutions thanks to the project members' promotion and dissemination campaigns.
Among these resources are the RespectNET-related training programme (Use of AI in media and media bias, https://elearningproject.eu/lessons/use-of-ai-in-media-and-media-bias/) and the created AI Campaign (https://elearningproject.eu/lessons/artificial-inteligence-ai-in-university-communication/). Different communication channels were used for dissemination: the learning platform and the RespectNET project web page, the YouTube channel (https://www.youtube.com/playlist?list=PL7Ij3-xTzrJJzL-cV7-AH-y5_GSVPje7p), the Facebook page (https://www.facebook.com/RespectNET.eu), and the Spotify (https://open.spotify.com/show/4d4lPIVXzj2YTDEknWGZiU) and Spreaker by iHeart (https://www.spreaker.com/search?query=RespectNET) channels, the latter two mainly used for podcasts.

Feedback about the AI Campaign

The educational materials created to support the AI Campaign in the university were also distributed to the students enrolled in different study programmes at the four universities that participated in the first survey. They viewed and evaluated the materials, considering them useful in generating ethical behaviour towards AI application use. Participants were asked to assess the usefulness of the created materials and of the AI Campaign. The results of the collected feedback are presented in the following; details about the research methodology are presented in Table 1. The target population of students was the same as in the first stage of the research, and the research sample demography had almost the same profile. The questionnaire was distributed online (using the Google Forms application) via the virtual space created for the transmedia campaigns (https://elearningproject.eu/courses/respectnet-campaigns/).

The first set of questions invited participants to assess the campaign against stated criteria on a Likert scale (5 = very much; 4 = rather much; 3 = to some extent; 2 = a little; 1 = not at all); the results are shown in Figure 2. As Figure 2 shows, the respondents highly appreciated the AI Campaign and the information made available in different forms and through different communication channels. Furthermore, all respondents (100%) declared they would highly recommend the transmedia campaign to peers and colleagues. This result was not expected, because today's students (Millennials and Generation Z, who were dominant in the sample) learn quickly from each other and quickly discover the advantages of using technology. On the other hand, the result demonstrates that students need confirmation of their behaviour when using AI applications in university activities, and the Campaign offers them many examples of how to act responsibly.

Figure 2: Results of the evaluation of the AI Campaign in the university against stated criteria.
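The criterion-by-criterion summary shown in Figure 2 was obtained by tabulating the share of respondents selecting each Likert level. Purely as an illustrative sketch of such a tabulation (the criterion labels and ratings below are hypothetical, not the collected data; the study's tabulation was done in Excel):

```python
# Illustrative sketch only: share of respondents per Likert level (1-5) for each
# evaluation criterion. Criterion labels and ratings are hypothetical.
from collections import Counter

feedback = [  # one dict per respondent: criterion -> rating
    {"Usefulness of the materials": 5, "Relevance of the topic": 5, "Variety of media": 4},
    {"Usefulness of the materials": 4, "Relevance of the topic": 5, "Variety of media": 5},
    {"Usefulness of the materials": 5, "Relevance of the topic": 4, "Variety of media": 5},
]

n = len(feedback)
for criterion in feedback[0]:
    counts = Counter(r[criterion] for r in feedback)
    distribution = ", ".join(
        f"{level}: {100 * counts.get(level, 0) / n:.0f}%" for level in range(5, 0, -1)
    )
    print(f"{criterion} -> {distribution}")
```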
The next question required an open answer: "What was particularly good about the transmedia campaign AI in university?" For this article, we have considered only the relevant answers, as expressed by some of the students, which reflect their preference for acquiring new knowledge (selected answers):
• "The video content;
• Presented are cases of opportunities, challenges, and threats of AI in universities. AI is a reality; we shall speak about it, considering it as a tool for which users need to know how to use it properly. Several media are used in campaigns. I think it is very good, and I would recommend such a campaign for universities, also as a dialogue with students and with civil society;
• The topic and the contents are hot subjects now, and the materials may be a perfect starting point for further discussion among relevant stakeholders;
• It is presented with different digital media and on different channels; the topic is crucial nowadays. I would recommend widely promoting it; also, some tips for universities on how to do similar campaigns harmonised with their culture;
• Engaging campaigns, crucial in these times, very rich resources presented with different media - videos, articles, infographics;
• The transmedia campaign explores a highly relevant theme, providing a great starting point for further learning. The well-crafted execution enhances its effectiveness, making it an engaging and informative experience. The campaign not only addresses the core theme effectively but also opens avenues for in-depth exploration and continued education;
• Key issues and challenges of AI in higher education and elsewhere are indicated and explained; the materials serve as a good motivation to dig deeper into the matter."

Overall, the participants praised the crucial relevance of the AI Campaign in the university. Respondents recommend and encourage universities to produce similar campaigns on the use of AI in specific educational fields, e.g. medicine, law, economics, communication and public relations, digital media, psychology, music, etc. Thus, the collected feedback indicated a great need to understand ethics and ethical behaviour when using AI applications. Another recommendation was to continue and extend the discussions started by the AI Campaign materials by considering the cultural and local conditions and by adding new topics (using AI to create vibrant videos, multimedia materials, pictures, and non-realistic messages to promote the university, etc.).

Regarding the following open question, "What improvements do you suggest for the transmedia AI Campaign in university communication?", the most relevant answers were:
• "Better visuals in video and website;
• I think it is good enough; it can serve as a learning tool also for NGOs;
• No improvements are needed; perhaps such a campaign shall be promoted widely;
• Translations into the national language would be beneficial;
• Incorporate interactive elements such as quizzes, polls, or immersive experiences to engage your audience. Allow them to participate in the narrative and make choices that impact the storyline. This fosters a deeper connection with your AI campaign, making it more memorable and engaging for users;
• The key issue is disseminating the contents among the stakeholders and initiating the discussion;
• I think only strong promotion of these campaigns is needed, and some useful tips on how to do such campaigns would be nice to be delivered in the learning process."

In this case, the recommendations for improvement highlighted the clarity and usefulness of the AI Campaign in the university. Some participants recommended promoting the AI Campaign and disseminating the content among stakeholders and NGOs, considering the university's third mission of supporting open science and community learning. Another important recommendation was to improve some visual aspects of the videos on the website (mainly those that had been developed using AI).
The AI Campaign will have a better impact if the materials are translated into the national languages, thus making them accessible to a larger audience. Considering the learning process associated with the AI Campaign, many students suggested incorporating interactive elements such as quizzes, polls, or immersive experiences after a set of materials or a multimedia item, so they could check whether they had correctly understood the expected ethical behaviour.

The secondary role of the second research stage, collecting feedback about the AI Campaign, was to gauge students' acceptance of such communication methods. To our surprise, all the respondents appreciated the variety of demonstrative materials regarding the use of AI in the university. After this campaign, they did not hide that they had intensively used applications such as AI ChatGPT (the most popular!) to obtain information and documentation or to write stories related to essays they had as homework. Important expectations of ethical behaviour were clarified, and a best practice is the case of Politehnica University of Timisoara, Romania, where the Senate adopted a decision on reporting the results generated by students (as well as by teaching and research staff) when using different AI applications. Other universities also adopted this example of good practice. In addition, a large part of the created infographics was used intensively on the notice boards of university faculties and departments so that students and teaching staff would know how to use AI applications.

Conclusions and Final Remarks

According to the collected data regarding their opinions, most of the students who took part in the first research stage were aware of the AI ChatGPT application. Specialised research indicates that AI can help universities by customising feedback, producing evaluations, and adjusting the learning process to meet the demands of various learner types. The results of the first survey are similar to the findings and conclusions of other authors (Bhattacharya et al., 2023; Li et al., 2023; Crompton & Burke, 2023). According to the research results, students in the sample think that AI applications may be used to search for materials on a specific topic and can produce articles or papers that meet the assigned standards; refinement of academic and scientific discourse is a dominant option when using AI ChatGPT. In addition, students find that AI may assist them in problem-solving, inspire new ideas, and help them by producing programming code for assignments. Although students think AI ChatGPT is a useful tool for academic purposes, they also think that human connection is still important and cannot be replaced, either from the standpoint of the trainer/coach/mentor or of the study group. According to the data analysis, less than half of the respondents had used the AI ChatGPT application during their university training in the previous year.

Most student participants think that the ethical usage of AI ChatGPT and other related technology applications requires training (e.g., demonstrations and tutorials). Furthermore, nearly half of the students think that the use of AI ChatGPT in the classroom requires university policy regulations that should express the main lines of ethical behaviour. After surveying students on their perceptions of AI's potential impact on the future, the research results show that over 50% of them felt favourably about the subject.
Students think AI can help us break through an impasse, adjust to unanticipated circumstances, and simplify research tasks. A minority of students believe that artificial intelligence (AI) will harm society in the future because it might impede the growth and preservation of individual creativity and critical thinking, thereby restricting human freedom and the opportunities for information transmission. These students think that, as AI advances, human intellect will sharply decline and humans will become too comfortable to conduct research.

Regarding the second stage of the research, carried out in the context of the Erasmus+ RespectNET project implementation, the results underlined the excellent acceptance of the AI Campaign (the materials created for the ethical use of AI) by the university communities. Students appreciate that the campaign materials are very good and useful, given the early but already extensive use of this software in universities. The feedback on the way AI applications are used in universities highlighted a pressing need to explain and promote ethical behaviour among students. The running of the AI Campaign in the universities included in the research caused debates among students, professors and university decision-makers (rectors, deans, directors, colleagues from the ethics committees, and other prestigious researchers). Thus, numerous initiatives are envisioned to expand the campaign with specific elements for different areas of education, as well as for scientific research.

According to the opinions of the research sample, AI applications are expected to continue democratising and personalising education. The research results underlined that AI-enhanced classrooms, virtual reality learning, and adaptive learning tools are expected. Students were optimistic about AI in university activities but stressed the necessity of appropriate adoption and of ongoing study of its effects on students and educators.

The regulation of AI applications used in universities is based on the programmatic documents, recommendations and guidelines issued at the international level, namely:
• Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC;
• Presidency Conclusions on the Charter of Fundamental Rights in the context of artificial intelligence and digital change (Doc. 11481/20 of 21 October 2020);
• European Parliament resolution of 20 October 2020 with recommendations to the Commission on the civil liability regime for artificial intelligence (2020/2014(INL));
• European Parliament resolution of 20 October 2020 on intellectual property rights for the development of artificial intelligence technologies (2020/2015(INI));
• Ethical Guidelines on the Use of Artificial Intelligence (AI) and Data in Teaching and Learning for Educators, developed and published in 2022 by the European Commission, Directorate-General for Education, Youth, Sport and Culture;
• AI and Education: Guidance for Policymakers, published by UNESCO in 2021;
• Quick Start Guide to ChatGPT and Artificial Intelligence in Higher Education, published by UNESCO in April 2023.
In the university, the use of AI tools (e.g., AI ChatGPT, Bard, Llama, Character.ai, Ernie Bot, etc.) should not be prohibited, provided that the rules of ethics and deontology specific to the educational and research process are respected by users and the legal provisions at national/international level regarding the legitimacy of their use are not violated. Furthermore, AI offers a "toolbox of ways" to boost students' engagement and interaction in the education/training process. Here are a few ideas, identified based on the recommendations found in the literature:
• Interactive learning with AI tools:
  o Chatbots and virtual TAs - AI-powered chatbots can act like virtual tutors, answering student questions 24/7 and providing personalised explanations. This can boost confidence and encourage students to participate more in class discussions (Labadze et al., 2023);
  o Gamification - educational games with AI can make learning fun and interactive. AI can personalise the difficulty level and track student progress, keeping students motivated (Chen et al., 2020);
• Immersive learning with AI using virtual reality (VR) and augmented reality (AR) - VR and AR powered by AI can create immersive learning experiences that boost engagement (Lampropoulos, 2023);
• AI-powered personalisation via adaptive learning - by analysing students' strengths and weaknesses, AI can recommend targeted practice problems or adjust the difficulty of learning materials. This can empower students to take charge of their learning and improve their performance (Rane et al., 2023); a toy illustration of this idea is sketched after this list;
• Enhancing classroom interaction supported by real-time feedback - AI can analyse student responses and interactions, providing immediate feedback. This can help students identify areas needing improvement (Santosh, 2023).
Finally, it should be remembered that AI is a tool to empower teachers, not replace them. By incorporating these interactive AI elements, teachers can create a more engaging and effective learning environment for all students.
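As a toy illustration of the adaptive-learning idea mentioned in the list above, the sketch below adjusts the difficulty of the next practice item based on a student's recent answers. It is not taken from any of the cited tools; the function name, difficulty levels, and accuracy thresholds are assumptions made purely for illustration.

```python
# Toy illustration of rule-based adaptive difficulty (not from any cited tool).
# The difficulty levels, accuracy thresholds, and function name are assumptions.
def next_difficulty(current_level: int, recent_correct: list[bool],
                    min_level: int = 1, max_level: int = 5) -> int:
    """Suggest the next difficulty level based on a student's recent answers."""
    if not recent_correct:
        return current_level                      # no history yet: keep the level
    accuracy = sum(recent_correct) / len(recent_correct)
    if accuracy >= 0.8:                           # doing well: offer harder material
        return min(current_level + 1, max_level)
    if accuracy <= 0.4:                           # struggling: offer easier material
        return max(current_level - 1, min_level)
    return current_level                          # otherwise keep the current level

# Example: a student at level 3 who answered 4 of the last 5 items correctly
print(next_difficulty(3, [True, True, False, True, True]))  # -> 4
```

Real adaptive systems rely on richer learner models (e.g., knowledge tracing), but this simple rule captures the basic feedback loop described above.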
In future research, we will expand the geographical area of the investigations by including other universities and by using the AI Campaign materials translated into the national languages, so that the efficiency and effectiveness of the public relations process will be maximised.

Acknowledgement

The AI Transmedia Campaign was designed and spread in different higher education institutions with the support of the RespectNET project – "Respectful Communication through Media Education Network" (2021-1-IT02-KA220-HED-000027578, https://respectnet.eu/), co-financed by the European Commission (2021-2024). This paper and the communication reflect the views only of the authors, and the Commission cannot be held responsible for any use that may be made of the information contained therein.

References

Al-Adwan, A. S., Li, N., Al-Adwan, A., Abbasi, G. A., Albelbisi, N. A., & Habibi, A. (2023). Extending the technology acceptance model (TAM) to predict university students' intentions to use metaverse-based learning platforms. Education and Information Technologies, 28(11), 15381-15413.
Alqahtani, T., Badreldin, H. A., Alrashed, M., Alshaya, A. I., Alghamdi, S. S., bin Saleh, K., ... & Albekairy, A. M. (2023). The emergent role of artificial intelligence, natural language processing, and large language models in higher education and research. Research in Social and Administrative Pharmacy.
Arslan, S. (2023). Exploring the potential of Chat GPT in personalised obesity treatment. Annals of Biomedical Engineering, 51(9), 1887-1888.
Bailey, D. R., Almusharraf, N., & Almusharraf, A. (2022). Video conferencing in the e-learning context: Explaining learning outcome with the technology acceptance model. Education and Information Technologies, 27(6), 7679-7698.
Bhattacharya, K., Bhattacharya, A. S., Bhattacharya, N., Yagnik, V. D., Garg, P., & Kumar, S. (2023). ChatGPT in surgical practice—a new kid on the block. Indian Journal of Surgery, 85(6), 1346-1349.
Chen, L., Chen, P., & Lin, Z. (2020). Artificial intelligence in education: A review. IEEE Access, 8, 75264-75278.
Crompton, H., & Burke, D. (2023). Artificial intelligence in higher education: The state of the field. International Journal of Educational Technology in Higher Education, 20(1), 22.
Draghici, A., Repanovici, A., & Ng, P. K. (2023). Digital challenges to empower universities' implication in the community. Human Systems Management, 42(2), 113-119.
Gozalo-Brizuela, R., & Garrido-Merchan, E. C. (2023). ChatGPT is not all you need: A state of the art review of large generative AI models. arXiv preprint arXiv:2301.04655.
Greitemeyer, T., & Kastenmüller, A. (2023). HEXACO, the Dark Triad, and Chat GPT: Who is willing to commit academic cheating? Heliyon, 9(9).
Jiang, Y., Li, X., Luo, H., Yin, S., & Kaynak, O. (2022). Quo vadis artificial intelligence? Discover Artificial Intelligence, 2(1), 4.
Labadze, L., Grigolia, M., & Machaidze, L. (2023). Role of AI chatbots in education: Systematic literature review. International Journal of Educational Technology in Higher Education, 20(1), 56.
Lampropoulos, G. (2023). Augmented reality and artificial intelligence in education: Toward immersive intelligent tutoring systems. In Augmented reality and artificial intelligence: The fusion of advanced technologies (pp. 137-146). Cham: Springer Nature Switzerland.
Li, W., Zhang, Y., & Chen, F. (2023). ChatGPT in colorectal surgery: A promising tool or a passing fad? Annals of Biomedical Engineering, 51(9), 1892-1897.
Park, S. Y. (2009). An analysis of the technology acceptance model in understanding university students' behavioral intention to use e-learning. Journal of Educational Technology & Society, 12(3), 150-162.
Rane, N., Choudhary, S., & Rane, J. (2023). Education 4.0 and 5.0: Integrating artificial intelligence (AI) for personalised and adaptive learning. Available at SSRN 4638365.
RespectNET (2021). Respectful Communication through Media Education Network, Erasmus+ KA2 Strategic Partnership project, 2021-1-IT02-KA220-HED-000027578. Retrieved from https://respectnet.eu/ (accessed 31-03-2024).
Robescu, D., Reiner, S., Trunk, A., & Draghici, A. (2023). Toward a matrix of competences for respectful communication in the university-civil society context. In EDULEARN23 Proceedings (pp. 486-495). IATED.
Santosh, G. (2023). The future of education: How artificial intelligence is transforming learning. Retrieved from https://www.linkedin.com/pulse/future-education-how-artificial-intelligence-transforming-santosh-g-uqamc (accessed 12-04-2024).
Takagi, S., Watari, T., Erabi, A., & Sakaguchi, K. (2023). GPT-3.5 and GPT-4 performance on the Japanese Medical Licensing Examination: Comparison study. JMIR Medical Education, 9(1), e48002.
Tao, D., Fu, P., Wang, Y., Zhang, T., & Qu, X. (2022). Key characteristics in designing massive open online courses (MOOCs) for user acceptance: An application of the extended technology acceptance model. Interactive Learning Environments, 30(5), 882-895.
Yu, H. (2023). Reflection on whether Chat GPT should be banned by academia from the perspective of education and teaching. Frontiers in Psychology, 14, 1181712.