Ensuring Professionalism of the External Evaluation Commission: The Slovenian Case Study

Karmen Rodman
Nada Trunk Širca

In 2006-2007, the Slovenian higher education (he) system took the first steps toward building a national model of institutional external evaluation (iee) that would be comparable with other European models. In the first part of the article, the authors discuss the main tendencies within the European he area. This is followed by an outline of the developments in the field of quality assurance within Slovenian he, stressing the years 2006 and 2007. The scientific contribution of the article lies in the evaluation outcomes of the national pilot iees, with a focus on the professional competences of the External evaluation commission (eec) members. Observation results stress the importance of proper training of eec members. The authors propose that a systematic follow-up on the eec's work be established and a code of ethics drawn up, highlighting the preferred values and principles of eec members.

Key Words: quality assurance, higher education, external evaluation, institutional evaluation, external evaluation commission

jel Classification: M42, i28

Karmen Rodman is a Lecturer at the Faculty of Management Koper, University of Primorska, Slovenia. Dr Nada Trunk Širca is Director of the Euro-Mediterranean University, Slovenia.

Managing Global Transitions 6 (3): 301-315

Introduction

The present article is the result of research carried out during the national pilot iees in the period 2006-2007. The developing Slovenian he quality assurance system uses as its reference points the guidelines provided by various European institutions for quality assurance in he. In these documents, however, little attention is paid to the training of eec members. It must be underlined that the professional competences of the eec members have a direct impact on the hei representatives' perception of iee. The role of the eec in iee, especially during a visit to the hei, is the key to the successful understanding, experiencing and acceptance of the iee process and its outputs by the representatives of the evaluated heis.

This was one of the reasons why the evaluation case study of pilot national iees in four evaluated heis was carried out. Using observation as their research method of choice, the research team sought the answer to the question: What were the fields, stages and tools that should have been improved in the pilot iees? Before the observation began, an observation checklist had been drafted to ensure a coordinated approach of all the observers involved. By triangulating three methods (observation, questionnaire survey and experience presentations) we tried to ensure a correct understanding of the conclusions made by the observers and to sustain the findings.

The authors have come to the conclusion that, since the main interest of the pilot iees lay in the assessment of national iee tools, the professional competences of the eec members were partly neglected. Observation results thus stress the importance of proper training of eec members in research methodology, he activities and the related national legislation, in addition to the iee procedures. The authors propose that a systematic follow-up on the eec's work be established and a code of ethics drawn up, highlighting the preferred values and principles of eec members. Even so, the development, improvement and comparability of the iees and the eec members remain threatened as long as the national evaluation body is not affiliated with the European Association for Quality Assurance in Higher Education (enqa) and is not listed in the European Quality Assurance Register (eqar). Potential future research directions are presented in the last section.
Theoretical Framework

quality assurance in european higher education

European he systems are facing many challenges arising from their national or international environments (Faganel, Trunk and Dolinšek 2005, 317; Srikanthan and Dalrymple 2003, 126): liberalisation and the lifting of boundaries in the labour market, employment and education, marketisation (Logaj and Trnavcevic 2006, 79-80) and the fading of the divide separating the public sector from the private, the transition of heis from their status as elite institutions to mass institutions, and the adoption of life-long learning. The emergence of new educational programmes and heis is creating confusion and uncertainty for different he stakeholders. Simultaneously, governments provide fewer funds (per capita) to meet those challenges.

The increasingly competitive world requires the development of a knowledge society, and one of the Bologna process objectives is to provide guidelines for quality assurance to national quality assurance systems as well as to individual institutions. In addition to the Bologna declaration (1999), a more significant quality assurance orientation in the main European documents can be noted. Following the Rectors' Meeting in Salamanca, the Prague Communique, Berlin Communique, Bergen Communique and London Communique (see http://www.ond.vlaanderen.be/hogeronderwijs/bologna/documents/), quality in he has become the core notion in establishing a coordinated European he area.

Western Europe is trying to upgrade (self-)evaluations with institutional and programme accreditation; this trend is mirrored in more managerial and leadership-oriented approaches to running the heis. In Eastern (transition) Europe, institutional and programme accreditation guarantees quality, although the heis and their environment are not satisfied enough with it. The heis need support in taking up these challenges.
They do not demand judgments on teaching and learning quality, but ask for help to develop and improve the heis' strategic and quality management. Support of institutional and programme development is the main reason for the implementation of (external) evaluations in these countries. In 'Bologna' countries the target is more the establishment of a European quality network, such as enqa and the agencies' network, than the improvement of the existing national systems of accreditation and evaluation. In some European countries quality assurance is an internal responsibility of each hei and is based on an internal evaluation of the institution's programmes. In other countries the quality assurance system incorporates an external evaluation or accreditation. In the first case, external peers evaluate programmes and institutions, while, in the second case, an external independent agency grants a specific 'quality label' to programmes and institutions which have met a set of pre-defined requirements (Orsingher 2006, 1).

Bearing in mind the autonomy of heis and considering the demand of the environment for transparency and accountability, the education sector has recognized a need for developing shared criteria and a common methodology in quality assurance. At the same time it is important to leave enough room for innovation and diversity, because of the national policy differences. Consequently, the Standards and guidelines for quality assurance in the European higher education area emerged, formulated by the enqa in cooperation with the European University Association (eua), the European Association of Institutions in Higher Education (eurashe) and the European Student Information Bureau (esib, today the European Students' Union). Their development was mandated in the Berlin Communique (2003), and the standards were subsequently adopted at the Bergen ministerial meeting in 2005.
'There is no uniform model of he quality assessment in the eu [European Union], which would be agreed-upon, and it cannot be expected either - because this is the domain of national affairs, even though they do maintain a degree of international comparability' (Dolinšek, Trunk Širca, and Faganel 2005, 20). While many differences in implementation practices remain, various movements toward the development of national and international quality assurance systems can be noticed, showing agreement on some common principles. These take the form of guidelines drafted by European institutions for quality assurance. The documents are discussed in this article under two assumptions. First, these guidelines are an optimal product of the experiences concerning good practice. Second, these institutions and national agencies act as independent bodies, so their work is primarily targeted at creating iee strategies for achieving a maximum of positive effects in each hei as well as its wider environment.

external evaluation commission in european higher education

Seen as a tool for quality assurance, iee has many purposes. First, the iee demonstrates accountability towards various stakeholders. Accountability is interpreted not just towards the funding authorities looking for a good return on investment - as value for money - but as an understanding of quality as the transformation of the participants, potentially capable of considering the concerns of all stakeholder groups (Srikanthan and Dalrymple 2003, 128). Unfortunately, this interpretation has been missing from all approaches to quality in he so far (Harvey 1998; Srikanthan and Dalrymple 2003, 128). A very important purpose of iee is to improve the capability of single heis to define their targets and choose the most suitable strategies, while having regard to the stakeholders' opinions and requirements.
iee facilitates the improvement cycle of single heis, because it serves as a diagnostic instrument and a means of producing guidelines or planning inputs. This can in turn become a strategic tool of hei operation. As many authors suppose, the introduction of the iee process and methodology enables transparency within European higher education and improves international credibility (Dolinšek, Trunk Širca, and Faganel 2005, 20), whilst it also provides the basis for funds allocation (Cuš 2006). The purpose is also to reinforce institutional development by disseminating examples of good practice in the areas of internal quality management and strategic change (eua 2005, 4).

The enqa's standards and guidelines see the site-visit as one of the widely-used elements of external review processes (2007, 20-21). The role of the eec in the iee, especially during a visit to the hei, is the key to a successful understanding, experiencing and acceptance of the iee process and its outputs by the hei representatives. The professional competences of the eec members have a direct impact on the hei representatives' perception of iee.

The eua tries to ensure the quality of its eec members by requiring that each member be a former rector or prorector who has successfully passed the examination (Kralj 2006, 4). The eua model thereby disregards the invaluable insight of other stakeholders, which can significantly contribute to a comprehensive view of hei quality. In addition to general criteria for the selection of eec members (finheec 2006, 19; enqa 2007, 20), many institutions for quality assurance in higher education invest in training the eec members for their tasks, considering the differences in their backgrounds, which range from the corporate sector to the non-profit sector.
In addition, the majority of eec members are students with limited experience and stakeholders with varying degrees of knowledge of national he legislation. Rossi's claim (2004, 27) that 'ideally, every evaluator should be familiar with the full repertoire of social research methods' provides only a vague outline of the qualifications of eec members. finheec (2006, 20) arranges special training for the eec members, with a focus on the objectives and different phases of the iee process, the responsibilities of the eec and iee methods. At the same time, the eec members become acquainted with the state of quality assurance in Finland and abroad. A long-term member of evaluation commissions within the Quality Assurance Agency in Higher Education (qaa) underlined in the description of her experience (Broady-Preston 2002, 3) that 'the training emphasized the importance of teamwork rather than individual effort. For some academics, the process of open and free sharing of information was a difficult one to grasp'.

The experience of the Inspectorates of Education in Europe, while partly consistent in its content with the work of quality assurance agencies, shows a multitude of models and criteria used for inspector recruiting and training among different European countries (Standaert 2000, 34-6): past experience in education, interviews, training, written tests on various themes, in-service training, 'in-house' refresher training, supervision by a mentor, tutor support, close observation, etc. Generally speaking, a limited number of countries have a well-defined approach to the training of beginner staff, but little attention is paid to the systematic, continuous on-the-job training of inspectors.

quality assurance in Slovenian higher education

Quality assurance in Slovenian heis has two branches: accreditation and evaluation. Accreditation is needed before new heis are founded or new educational programmes are offered, and is renewed every seven years.
Complying with the amendments to the 2004 Higher Education Act (zvis-d), article 80, these subsequent accreditation processes take into account the findings of institutional self-evaluation and external evaluation reports. These requirements additionally encouraged the he area to establish a national system of quality assurance, especially systemically coordinated iees, which would provide the heis with the desired comparability and credibility within Europe.

Consequently, in 2004, the national Criteria for monitoring, assessment and assurance of quality in the higher education institutions, study programmes, science and research, artistic and professional work (the Criteria) were adopted (Merila za spremljanje [...] 2004). The Criteria were based upon standards prescribed by enqa, eua and unesco. The other important landmark was the Act Amending the Higher Education Act (zvis-e) of September 2006, which grants the Council for Higher Education of the Republic of Slovenia independence, thereby meeting the necessary condition for the Council's independent and unbiased decision-making in the processes of assessing and assuring quality.

The Criteria only represent the initial stage in introducing a comparable quality assurance system. In 2006, the Slovenian national commission for quality in higher education (ncqhe) was entrusted with elaborating iee procedures and testing the Criteria in practice. To this end, the ncqhe had to perform pilot iees. The project was financed by the Ministry of higher education, science and technology. Once the testing phase is over, the iee procedure will be paid for by the hei itself, or rather - the iee will constitute the hei's investment. By investing in the culture of quality now, the hei expects to limit the cost of quality assurance in the future (Wagenaar 2006).
Observing Visits at Evaluated Higher Education Institutions

The enqa's guidelines (2007, 25) suggest that an institution providing quality assurance structures should have in place internal quality assurance procedures which include an internal feedback mechanism, an internal reflection mechanism and an external feedback mechanism, in order to inform and underpin its own development and improvement. The research group (hereinafter referred to as 'we') followed this recommendation to gather responses on the quality of the eec's work, as is apparent from the empirical section of this article.

description of the methodology

Due to the need for an all-encompassing research approach, in which the researcher not only obtains the answers to his/her questions but can also fully grasp the existing social situation and get a full picture of the group, organization or relationship (Flere 2000, 81), we opted for the systematic observation of visits to the evaluated heis. Standaert (2000, 49) suggests that the 'real [quality] assessment can only be done by observing the inspector at work'. Since site-visits are a key part of the iee process, we decided to observe the entire course of site-visits at individual heis, which meant a two-day observation of on-site activity. All four heis participating in the pilot iee project were included in the observation processes and are hereinafter referred to as individual study cases.

We undertook - by means of independent, in-depth insight into the activities of the iee participants in the pilot project - the task of identifying the fields, stages or tools to be improved and of proposing corrective measures. This article, however, focuses only on one part of the findings, relating to the professional competences of eec members. This research is an evaluation case study, using a qualitative research approach. The observation plan was laid out in advance.
An observation checklist was drawn up as the most important coordination tool of the four observers. It was previously submitted for peer review to all the participating observers. During the eec training workshop, where a methodological approach to hei visits was simulated, one observer was also tasked with verifying the applicability of the observation checklist.

Observers were independent in the sense that they had not, either previously or at the time of the study, been involved in the pilot project or any other forms of ncqhe activity. Our trust in their professionalism was based on their current research work in the field of he, quality or evaluation. Immediately before the beginning of the study, all observers studied research methodology and took part in the eec training. Four observers were included, so as to minimize the influence of the potential shortcomings of the observers' personalities (Milic 1965, 382-6, in Flere 2000) and to cover all study cases, which, in three instances, took place simultaneously. The observers were separated from the groups that were evaluated and watched the process without partaking in the interviews.

The evaluated heis were notified in advance about the observers' attendance. Moreover, the attendance and role of the observers were likewise confirmed by the ncqhe. The eec or the observers themselves introduced the observer and explained their role to each of the interviewed hei groups. Observers were informed in advance about the subject observed, the particular observation process and the key rules of unobtrusive observation. Even so, some situations arose during the observation process (especially outside the interviews) that saw the observer transform from unobtrusive to participant, as the participants would often perceive their observer as a 'social stranger' (Flere 2000, 89) and initiate interaction with him.
With the help of their laptop computers, our observers would record their impressions and participant statements, or add them by hand to the observation checklists intended for that particular interview. After the observation was complete, each observer wrote an observation report. The observation coordinator qualitatively processed the reports received. The analysis results were then presented to all at the final ncqhe project meeting.

During the final meeting of the ncqhe pilot project, one more survey was carried out (in addition to the observation) among the representatives of the evaluated heis and the eec members that attended. The survey was conducted through two survey questionnaires, one for the eec members and the other for the representatives of heis. All persons meeting the prerequisite condition were included (i. e. the presence of eec members or hei representatives at site-visits). Of the 12 eec members present, 11 respondents answered the questionnaire. Additionally, all 7 hei representatives present answered the survey.

At the final meeting, the eec Chairs and the representatives of the evaluated heis were also given an opportunity to explain their own suggestions and observations regarding the methodology of the pilot iee to all present. By the triangulation of these methods (observation, questionnaire survey and experience presentations) we tried to ensure a correct understanding of the conclusions made by the observers and to sustain the findings.

research limitations

Observation is based on a more or less subjective appreciation of the situation, which is also the main limitation of the research. Although the feasibility of unobtrusive observation may be questioned, as the observed individuals may differ in their attitude towards the observer's presence and consequently react to it, this type of observer role ensures the highest possible credibility of all the different observation methods (Flere 2000, 88).
A further limitation of observation lies in the fact that these research techniques can provide an extremely detailed image of what is going on and how long it has been happening, but they do not enable an all-embracing description of why things happen (Easterby-Smith, Thorpe and Lowe 2005, 144). Thus it is necessary to analyze these perceptions; in other words: 'What is recorded must be meaningfully processed in accordance with our understanding of the observation focus' (Flere 2000, 81). The answers to the 'why' questions were hence obtained through the surveys and with the help of the participants presenting their experience at the final meeting.

Empirical Findings

The eec members used the interview as their main method of data collection during their visits to the heis. This method 'requires a high level of skill in the interviewer, who needs to be knowledgeable about the interview topic and familiar with the methodological options available, as well as have a grasp of the conceptual issues of producing knowledge through conversation. Interview research is a craft that, if well carried out, can become art' (Kvale 1996, 13).

Observers' findings, such as 'emphasis on personal opinion and experience . . . expressing own notions, opinions and views . . . transition to a friendly approach . . . providing advice and instruction . . . the presence of judgments and instant suggestions . . . patronizing . . . a very inquisitive approach of the commission', all mark the professionalism of the eec's work. In one studied case, 'the group interviews transformed into plenary sessions which, quite often - in terms of their content - disregarded the evaluation questions'. This raises the question of what could be regarded as an optimal methodological approach of the interviewer, and results in the following, largely incompatible, hypothetical possibilities.
The first possibility is for the eec to empathize with the hei representatives, as this is likely to encourage trust and a greater openness from hei representatives in sharing information. However, the levels of objectivity, dispassion and suggestiveness in the communication on the part of the eec members are then questionable. The second possibility builds on the presupposition that the eec must act dispassionately, thus distancing themselves from the representatives of the evaluated heis and refraining from revealing their own perspective, personal experience, etc. In this case, the eec can expect a more pronounced reticence in communication from the hei representatives.

Based on their experience, enqa (2007, 12) also presented some fairly substantial discrepancies as to what ought to be the appropriate relationship governing the interaction between heis and eec members. Some professionals, mainly from agencies accrediting programmes or institutions, take the view that external quality assurance is essentially a matter of 'consumer protection', requiring a clear distance to be established between the quality assurance agency and the heis whose work they evaluate. Meanwhile other agencies understand the main purpose of external quality assurance to be the provision of advice and guidance in the pursuit of improving the standards and quality of study programmes and related qualifications. The effort to establish a balance between accountability and improvement has emerged as a key responsibility of the eec members.

In connection with the paragraphs above, the study case confirmed the thoughts of Kvale (1996, 101), who argues that the interaction among interview participants leads to emotional statements about the topics being discussed.
This was due to the fact that 'the evaluator's role was too often mistaken for that of a counsellor' and, in one case, 'a verbal argument provoked a defensive reaction among faculty representatives - silence', in other words, a communication hurdle with some of the hei representatives. In spite of this, the eec's work does comply with the eua guidelines (2005, 22), which emphasize that the eec 'does not judge the quality of teaching and learning or that of research, nor does it rank or compare one university against others' and that 'it should be emphasized that the main preoccupation of the team is to be helpful and constructive rather than threatening or punitive'.

Even the questions asked by eec members, which were supposed to be mostly open-ended, were not implemented optimally. 'Questions were formulated too explicatively ... it started out fine, but there was a lot of extra information, directing subsequent answers ... questions were aimed at obtaining interviewees' opinions instead of the data that would enable the evaluation of work within the hei ... questions were suggestive, rooted in unfounded or vague conclusions and judgments' were the findings made by three observers, potentially pointing out some problems in acquiring unbiased information. One of the key characteristics underlying open-ended questions, as carefully analyzed by Foddy (1993, 128), is that they should not imply answers. One of the observers, however, did note that 'the questions [asked by eec members] did eventually become more structured and goal-oriented', which implies that some self-evaluation or self-correction measures were applied by eec members in one study case.

The triangulation of methods, as an approach combining several independent methods and measurements (Easterby-Smith, Thorpe, and Lowe 2002, 181), proved optimal within the eec's work in two of the study cases under observation.
This can be concluded from the following feedback: 'the obtained information was verified in different target groups and questions were asked about the issues that remained unclear from the previously acquired materials' or 'the commission turned to their questionnaire and the documentation for support'. In the two other instances, the triangulation of methods was misinterpreted and implemented incorrectly. The interviewees were sometimes asked 'unnecessary questions, where answers were already apparent from the questionnaire' and 'questions seeking opinions of faculty representatives, instead of information which would enable work evaluation', or the eec provided 'information or drew conclusions from the interviews that had not been validated [in other evaluated groups]'.

Standaert (2000, 47) argues that less attention should be paid to members' familiarity with legislation, as this is already sufficiently represented in administration. But the observation findings indicate a lower credibility of those eec members (coming from different fields of activity) who displayed only superficial knowledge of the legislation regulating he or hei work in general. The survey questionnaire dealt with this study topic. The eec members rated the claim that 'commission members had adequate knowledge of higher education' with an average of 4.18, while the representatives of heis gave it an average rating of 4.14 (on a scale of 1 to 5, with 5 being 'agree completely' and 1 being 'completely disagree'; all average values mentioned in this article represent the regular arithmetic mean). Despite the satisfactory (more or less rational) results that the iee participants gave, the observers marked the spontaneous responses revealing weak legislative knowledge as critical (the hei representatives were astonished when they identified this gap, as could also be noticed from their tone of voice).
In the survey questionnaire for eec members, the statement that 'commission members had adequate qualifications to carry out external evaluation' received an average of 3.82 (on a scale of 1 to 5, with 5 being 'agree completely' and 1 'completely disagree'), while this same claim, using the same scale, received an average grade of 4.71 from the representatives of the evaluated heis. In the presentations at the ncqhe final meeting, the necessity to upgrade the evaluators' competences was also pointed out more by the eec members (3 statements) than by the hei representatives (just 1 statement, regarding the methodology in general). In response to the questions 'In your opinion, should there be additional topics included in the training of eec members? If "yes", which?', all 11 eec members who answered the survey questionnaire responded affirmatively. Their suggestions mostly mirror the need to learn about examples of good iee practice in the he systems of other countries, and for more research methodology training. This training should be organized as workshops, ranging from interactive participants' work to partial simulations of the iee process.

Implications

With these pilot iees, the Slovenian he system took a historic step toward establishing a national quality assurance system. While the main focus of these pilot iees was on testing the evaluation tools, the quality of the other, accompanying elements defining iee success, i. e. the professional competences of eec members, stayed in the background. This is the main conclusion. The presented study provides a starting point for self-improvement and proposes some guidelines for future measures at the national level. The most crucial among these is the introduction of more intensive training of eec members, as emphasized by the eec members themselves.
It seems sensible that the qaa's evaluators' training programme, which focuses mainly on research methodology (2006, 19-20), should be expanded with an educational module on higher-education legislation. It is also necessary that a Code of ethics be formed and considered by the iee participants. The document ought to emphasize the preferred values and principles of the eec members' work, thereby facilitating the correct choice of approach towards establishing interaction with the evaluated hei. Together with the iee Manual, the Code of ethics, the national higher-education legislation and research methodology form the framework for the eecs' training. At this stage it would be indispensable to determine, at the national level, which body would be responsible for the eecs' training.

The next precondition for acquiring quality eec members is to define, besides the necessary training, the necessary qualifications or criteria for appointing members. It is indispensable to link these activities to a potential national act, which would define the body (and its formation) that will grant the licences and assess the quality of eec members. This body would be responsible for creating and developing the National evaluators register. In this way the evaluators' mobility would be stimulated and the necessary number of evaluators would be easier to provide.

The main limitation for now is the non-affiliation of the Slovenian evaluation body with the enqa and eqar, which has been possible since the summer of 2008 (see http://www.eqar.eu/). A negative impact is expected not just on the evaluators' mobility and knowledge exchange, but on the development, improvement and comparability of the iees as well. In spite of the Council for higher education of the Republic of Slovenia being granted independence (zvis-e 2006), the necessary conditions that would allow for affiliation with the enqa and eqar are not fulfilled.
Finally, we wish to emphasize the need for systematically organized feedback on the eec's work. This could become an important source of information for self-improvement and would require only a minimal financial input. Besides this internal feedback mechanism, it would be worth supporting future research focused on the contribution of foreign evaluators to the quality of the eecs (in the pilot project the eecs comprised only national members), looking both from the eec members' perspective and from the perspective of the evaluated hei representatives. A further line of research that we propose is the alignment of objectives regarding hei quality, comparing the eec's external view of hei quality with the internal view of the hei Quality Commission. We have to emphasize that the main aim of evaluation is not just to assure accountability toward the stakeholders, but mainly to sustain the hei's improvement.

This paper presents a unique methodological approach to setting up the external feedback mechanism in the iee: complementing observation with a questionnaire survey and experience presentations. This many-sided view of the eec's work contributes to the reliability of the findings.

Acronyms

eec - External Evaluation Commission
enqa - European Association for Quality Assurance in Higher Education
eua - European University Association
he - Higher Education
hei(s) - Higher Education Institution(s)
iee(s) - Institutional External Evaluation(s)
Criteria - Criteria for monitoring, assessment and assurance of quality in the higher education institutions, study programmes, science and research, artistic and professional work
ncqhe - National Commission for Quality in Higher Education
qaa - Quality Assurance Agency in Higher Education
unesco - United Nations Educational, Scientific and Cultural Organization

References

Broady-Preston, J. 2002.
The Quality Assurance Agency (qaa) and subject review: The viewpoint of the assessor. Libri 52:195-8.
Cuš, F. 2006. Vzpostavljanje nacionalnega evalvacijskega sistema. Handouts presented at the training Posvet in skupne priprave na pilotske institucionalne zunanje evalvacije nkkvš, Brdo pri Kranju.
Dolinšek, S., N. Trunk Širca, and A. Faganel. 2005. Vpeljava sistemov vodenja kakovosti v visokošolske zavode. Kakovost, no. 1:20-23.
Easterby-Smith, M., R. Thorpe, and A. Lowe. 2002. Raziskovanje v managementu. Koper: Fakulteta za management Koper.
enqa. 2007. Standards and guidelines for quality assurance in the European higher education area. Helsinki: European Association for Quality Assurance in Higher Education.
eua. 2005. Institutional evaluation programme guidelines: Self-evaluation and site visits 2005. Http://eua.cu.edu.tr/files/euakdpsureciyapilacakisler.pdf.
Faganel, A., N. Trunk Širca, and S. Dolinšek. 2005. Managing diversity and moving towards quality assurance in Slovenian higher education. Paper presented at the inqaahe conference, Wellington.
finheec. 2006. Audits of quality assurance system of Finnish higher education institutions: Audit manual for 2005-2007. Http://www.kka.fi/pdf/julkaisut/KKA_406.pdf.
Flere, S. 2000. Sociološka metodologija. Maribor: Pedagoška fakulteta.
Foddy, W. H. 1993. Constructing questions for interviews and questionnaires: Theory and practice in social research. Cambridge: Cambridge University Press.
Kralj, A. 2006. Institucionalna evalvacija: 10-letne izkušnje. Paper presented at the symposium Vzpostavljanje kakovosti na Univerzi na Primorskem in mednarodni vidiki kakovosti na univerzah, Ankaran.
Kvale, S. 1996. InterViews: An introduction to qualitative research interviewing. Thousand Oaks, ca: Sage Publications.
Logaj, V., and A. Trnavcevic. 2006. Internal marketing and schools: The Slovenian case study. Managing Global Transitions 4 (1): 79-96.
Merila za spremljanje, ugotavljanje in zagotavljanje kakovosti visokošolskih zavodov, študijskih programov ter znanstvenoraziskovalnega, umetniškega in strokovnega dela. 2004. Uradni list Republike Slovenije, no. 124/2004.
Orsingher, C. 2006. Assessing quality in European higher education institutions: Dissemination, methods and procedures. Heidelberg: Physica.
qaa. 2006. Handbook for institutional audit: England and Northern Ireland 2006. Http://www.qaa.ac.uk/reviews/institutionalAudit/handbook2006/handbookComments.pdf.
Rossi, P. H., M. W. Lipsey, and H. E. Freeman. 2004. Evaluation: A systematic approach. 7th ed. Thousand Oaks, ca: Sage.
Srikanthan, G., and J. Dalrymple. 2003. Developing alternative perspectives for quality in higher education. International Journal of Educational Management 17 (3): 126-36.
Standaert, R. 2000. Inspectorates of education in Europe: A critical analysis. Utrecht: Ministry of Education Flanders.
Wagenaar, R. 2006. Creating a culture of quality: Quality assurance at the University of Groningen in the Netherlands. In Assessing quality in European higher education institutions: Dissemination, methods and procedures, ed. C. Orsingher, 71-92. Heidelberg: Physica.
zvis-D. 2004. Zakon o spremembah in dopolnitvah Zakona o visokem šolstvu. Uradni list Republike Slovenije, no. 63/2004.
zvis-E. 2006. Zakon o spremembah in dopolnitvah Zakona o visokem šolstvu. Uradni list Republike Slovenije, no. 94/2006.