Measuring Impacts of Science and Research on the Society: Development, Issues and Solutions

Dušan Lesjak
University of Primorska and International School of Social and Business Studies, Slovenia
dusan.lesjak@guest.arnes.si

In the last three hundred years, the role of research in technological progress has been undeniable. Successful societies have mechanisms for high-quality transfer of knowledge into the economy and society. Scientific and research activities are not ends in themselves; therefore scientific and research results, as well as the socio-economic impacts of those results, matter too. The paper analyses the socio-economic impacts of research, which can be divided into economic, political/social, educational and other impacts. A literature review demonstrates the great importance of the socio-economic impacts of public funding of science and research. There are a number of developed and successful methods for maximizing the socio-economic effects of research and development and, consequently, numerous documented cases of good practice around the world. This allows for good management of research projects, from preparation and implementation to completion, and for the later dissemination of results and their transfer to the economy and the non-economic sector. In the paper we first discuss the history of measuring the impacts of the results of science and research, then the assessment of the socio-economic effects of research and the issues related to it. International guideposts for planning and monitoring research effects are presented as well. Finally, some suggested solutions for dealing with the measurement of the impacts of research results are presented.

Key words: science, research, results, socio-economic impacts, measurement
https://doi.org/10.26493/1854-4231.14.219-236

Introduction

For over fifty years, governments have been funding science, research and development for the influence and impact they have - or at least are believed to have or to promise - on society. Although science policy was once guided by a philosophy or ideology of 'policy for science,' policy makers have never doubted that the ultimate goals of financing science and technology were socio-economic, namely national security, economic development, social well-being and the environment.

The problem actually begins with the very definition of the 'social impact of research.' A number of different concepts or notions have emerged: 'third-stream activities' (Molas-Gallart et al. 2002), 'social benefits' or 'social quality' (van der Meulen, Nedeva, and Braun 2005), 'utility' (Department of Education, Science and Training 2005), 'public values' (Bozeman and Sarewitz 2011), 'knowledge transfer' (van Vught and Ziegele 2011) and 'social meaning' (ERiC 2010; Holbrook and Frodeman 2011; Demšar et al. 2017). However, each of these concepts ultimately concerns the measurement of the social, cultural, environmental and economic contributions of publicly funded research, whether products or ideas. In this context, 'societal benefits' refer to the contribution of research to the social capital of a nation, for example by promoting new approaches to social issues or by informing public debate and policy-making. 'Cultural benefits' are those that add to the cultural capital of a nation, for example by examining how we relate to other societies and cultures, by understanding our history better, and by contributing to the preservation and enrichment of cultures.
'Environmental benefits' add to the nation's natural capital, by reducing waste and pollution and by increasing nature conservation or biodiversity. Finally, 'economic benefits' enhance a country's economic capital by improving its capabilities and capacity and by raising its productivity (Donovan 2008).

Given the variability and complexity of evaluating the social impact of research, van der Meulen, Nedeva, and Braun (2005) point out that 'it is not clear how to evaluate social benefit, especially for basic and strategic research.' There is no accepted framework with relevant databases comparable to, for example, Thomson Reuters Web of Science, which enables the calculation of bibliometric values such as the h index (Bornmann 2009) or the impact factor (Bornmann 2012). In addition, there are no criteria or methods that could be used to evaluate the impact on society, while conventional research and development indicators provide little insight, with the exception of patent data. In fact, in many studies the social impact of research is predicted rather than proven (Niederkrotenthaler, Dorner, and Maier 2011). For Godin and Doré (2005), 'systematic measurements and indicators (of influence) on social, cultural, political and organizational dimensions are almost completely absent from the literature.' In addition, they point out that most of the research in this area is primarily concerned with the economic influence or impact of science and research results, while other societal fields of potential impact of science and research are largely neglected and overlooked.

This is why in this paper we first focus on the history of measuring and assessing the impacts of science and research, and then highlight some of the problems and dilemmas of such measurement and evaluation. Further on, we present the characteristics of the economic and the social measurement or evaluation of research effects. We conclude the paper with a proposal for a comprehensive overview of the aspects of measurement and the societal domains of the potential impact of science and research.

History of Measuring Impacts of Science and Research

Across the countries of the Organization for Economic Co-Operation and Development (OECD), annual gross domestic expenditure on research and development exceeds 2% of gross domestic product (GDP): in 2018 it averaged 2.372% (minimum Chile at 0.355%, maximum South Korea at 4.553%), while the EU averaged 1.974% of GDP (minimum Romania at 0.504%, maximum Sweden at 3.397%) (see https://data.oecd.org/rd/gross-domestic-spending-on-r-d.htm).

Even before World War II, individual governments invested public funds in scientific research, expecting that military, economic, health and other benefits would result from it. This trend continued during the war and during the Cold War, with increasing investment of public money in science and research. Nuclear physics was the main beneficiary, but other fields of science and research were also financially supported, especially those with military and, increasingly, commercial applications, as their potential for economic and social development became more and more apparent. In addition, research in itself has always, and increasingly, been regarded as a valuable effort and endeavour, mainly because of the newly created knowledge and its value, even if this knowledge could not be directly and immediately used.
Many states believed in the principle that 'science is a spirit that will keep the country competitive, but the spirit must also be nourished' (Stephan 2012). In the US, Bush (1945) argued that any investment in science is inherently good for society. Until the 1970s, policy makers had no doubt that public investment in research and development had a positive impact on areas such as communication, the way we work and live, our clothing and food, our modes of transportation, and even the length and quality of our lives (Burke, Bergman, and Asimov 1985). However, over the following decades the growth and scope of scientific research exceeded the available public funding, which increasingly 'forced' science to test its achievements through so-called peer review and through the development of indicators for measuring scientific results and research impact. For a long time, the only aspect of interest in measuring the impact of science was the impact of research on academic and scientific knowledge. The assumption was that the more demanding a scientific activity, the greater the social benefit.

Since the 1990s, there has been a trend of moving away from automatic confidence in the presumption that science is always beneficial to society (Martin 2011). Questions therefore arose: what are the results of public investment in research from which society actually expects benefits (European Commission 2010)? Today we expect measurements of the impact of science on human lives and health, on the organizational capabilities of companies, on institutional and team behaviour, on the environment, etc. (Godin and Doré 2005). A company can only exploit the benefits of successful research if the results are translated into marketable products (e.g. medicines, diagnostic devices, machines and appliances) or services (Lamm 2006). This has caused more and more problems for research-funding policies and agencies, which were confronted with the question of how limited public resources can most effectively be shared among researchers and research projects. This challenge - defining promising research in advance - has led to the development of criteria for assessing the quality of scientific research itself and for determining the social impact of research. Although the so-called first set of criteria was relatively successful and is still widely used to determine the quality of journals, research projects and research teams (one such indicator, the h index, is sketched at the end of this section), it has been much more difficult to develop reliable methods for assessing the social impact of research. The impact of applied research, such as the development of medicines or of information technology, is obvious, but the benefits of basic research, which are more difficult to evaluate, have also been increasingly studied since 1990 (Salter and Martin 2001). In fact, there is no immediate or direct link between the scientific quality of a research project and its social value. As pointed out by Paul Nightingale and Alister Scott (2007), 'research that is highly cited or published in top journals may be good for the scientific community but not for society.' In addition, it can take years or even decades until a certain amount of knowledge contributes to new products or services that affect society.
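As a point of reference for how tractable this first set of criteria is compared with social impact, the h index mentioned in the introduction can be computed from nothing more than a list of citation counts. The following minimal sketch in Python is illustrative only and is not part of the original paper:

```python
def h_index(citations):
    """Largest h such that at least h publications have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 4 and 3 times yield an h index of 4,
# because four of them have at least four citations each.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

No comparably simple, data-driven indicator exists for the social, cultural or environmental dimensions discussed below, which is precisely the gap addressed in the rest of this paper.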
Problems of Measuring the Social Impact of Research

The socio-economic objectives of public funding were so pronounced that scientists and researchers, as well as the statistical offices measuring science and technology, began early on to discuss how to measure the results and impact of scientific research and how to develop further indicators to that end. For example, we now have historical series of patent performance indicators, the technology balance of payments and high-technology trade. We also have several studies that link science and technology with productivity and economic growth. The OECD countries have also adopted a standard classification for measuring and breaking down public research and development expenditure by socio-economic objective.

Ben Martin of the Science and Technology Policy Research unit at the University of Sussex, UK, lists four common problems that emerge when applying criteria of the social impact of research (Martin 2007):

1. The problem of causality - it is not clear which impact can be attributed to which cause or, in other words, which effect or impact stems from a specific piece of research and its results.
2. The problem of attribution, which arises because the effect can be diffuse, complex and contingent, so that it is unclear what is attributable to an individual piece of research and what to other inputs.
3. The problem of internationalization, which arises from the international nature of research and development, as well as of innovation, and makes it very difficult, if not impossible, to identify the impacts of a particular piece of research.
4. Last but not least, the timing aspect, as measuring the impact of research prematurely can emphasize effects that give only short-term benefits and fail to take into account its potential long-term effects.

In addition, there are four other problems:

1. It is difficult to find experts to evaluate the social impact of peer-reviewed research. As noted by Holbrook and Frodeman at the University of North Texas in the USA, 'scientists generally do not like to think about the impact and evaluation of research in terms of its social impact, as this too often places scientists outside their scientific discipline borders' (Holbrook and Frodeman 2011).
2. Given that the scientific work of a natural scientist has a different kind of impact from that of a sociologist or a historian, it would be difficult to have a single assessment mechanism (Molas-Gallart et al. 2002; Martin 2011).
3. When measuring social impact, it should be taken into account that there is not only one model of a successful research institution. The assessment therefore needs to be tailored to the particular strengths of each individual research institution in teaching and research, the cultural context in which it exists and, of course, national standards.
4. Finally, the social impact of research is not always desirable or positive. For example, Rymer wrote that environmental research leading to the cessation of fishing may have an immediate negative economic impact, although it would conserve a resource that could become available for use again over a longer period. The fishing industry and nature conservationists may hold very different views on the nature of the original impact, views that may partly depend on how they judge the excellence and disinterestedness of the research (Rymer 2011).
Despite these efforts, we do not know much about the impacts of science. Firstly, most studies and indicators address the economic impact. Although economic impacts are important and certainly not negligible, they represent only a part of the overall impact, which extends to the social, organizational and cultural spheres of society. As Cozzens (2002) put it: 'Most (measurement efforts) have looked at the process of innovation rather than its results. Traditional studies on innovation continue to focus on producing new things in new ways rather than on whether new things are needed or desired, let alone their implications for jobs and wages.' Secondly, even those few discussions and measurements that go beyond the economic dimension focus on indirect rather than final effects. Even forty years after the first calls for impact indicators, we still rely on peer review and case studies, which capture the non-economic dimensions of research effects only very incompletely and insufficiently. Until a reliable and robust method for assessing the social impact of research is developed, it is appropriate to use expert panels to assess the social relevance of research qualitatively. Rymer (2011) states that 'just as peer review can be useful in assessing the quality of academic work (research work) in an academic context, expert panels with relevant experience from panellists in different fields can be useful in assessing the differences caused by research.'

The Economic Dimension of Measuring Research Effects

In the literature, most, if not all, of the measurable effects of science have in one way or another been related to the economic dimension. In the 1950s, economists began incorporating science and technology into their models and focused on the impact of research and development on economic growth and productivity. The Solow (1957) model became the dominant methodology for linking research and development to productivity. It was the first to formalize growth accounting, decomposing output growth into the contributions of capital and labour and attributing the residual of the equation to science and technology, even though that residual captures more than science and technology alone (the underlying identity is restated below for reference). Denison (1962) and Jorgenson and Griliches (1967), among others, later refined the Solow approach. Following Solow's initial work, numerous cost-benefit analyses were conducted and econometric models were developed in order to try to measure what the economy owes to science. A number of studies have focused on assessing the rate of return on research and development investment in its two basic forms - the return on publicly funded and on privately funded research and development. Since then, studies on the economic impact of science have focused on two topics (Godin and Doré 2004):

• productivity and
• the so-called spillovers from university and government funding of research, across sectors and industries.

One topic that also deserved early attention was the impact of science on international trade. As early as the 1960s, economists began incorporating science into models of international trade (Posner 1961; Vernon 1966). These authors, who use research and development as a factor to interpret international trade patterns, discussed why some countries led in trade while others lagged behind. The literature on the economic impact of science as such, however, is much less extensive.
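For reference, the growth-accounting identity underlying Solow's approach can be stated in standard textbook notation (the notation here is ours, not the paper's). For a constant-returns production function \(Y = A\,K^{\alpha}L^{1-\alpha}\), output growth decomposes as

\[
\frac{\dot{Y}}{Y} \;=\; \frac{\dot{A}}{A} \;+\; \alpha\,\frac{\dot{K}}{K} \;+\; (1-\alpha)\,\frac{\dot{L}}{L},
\qquad\text{so that}\qquad
\frac{\dot{A}}{A} \;=\; \frac{\dot{Y}}{Y} \;-\; \alpha\,\frac{\dot{K}}{K} \;-\; (1-\alpha)\,\frac{\dot{L}}{L},
\]

where \(Y\) is output, \(K\) capital, \(L\) labour and \(\alpha\) the capital share of income. The residual \(\dot{A}/A\) (the 'Solow residual,' or total factor productivity growth) is the part of growth not explained by measured inputs; it is this residual that was largely attributed to technical change and that the later refinements cited above sought to measure more carefully.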
The impact of science on science itself is the most researched in the literature. The number of citations has been used for many decades to measure the impact of scientific publications on other researchers. Of particular note are the contributions of the Science Policy Research Unit (SPRU) in Sussex, England, and of CWTS (the Centre for Science and Technology Studies) in Leiden, the Netherlands. The impact of research and science on technological innovation has also received much attention from researchers (Gibbons and Johnston 1974; Mansfield 1991; Rosenberg and Nelson 1996). For example, several authors, including Mansfield, have illustrated the importance of academic research for the advancement of industrial innovation. They argued that a large proportion of companies would not have developed their products and processes without academic research.

Several factors have contributed to the economic dimension of science being the focus of statistics and indicators, notably official statistics. One relates to the mission of the first organization to systematically engage in the measurement of science, namely the OECD. Most of the OECD's work has dealt with indicators of an economic nature since, from the beginning of its Scientific Research Committee, the purpose has been 'to place high emphasis on the economic aspects of scientific research and technology in the future programme.' The OECD has had a great influence on national statistical offices with regard to the methodology of measuring science, and its philosophy has significantly shaped the statistics collected and the indicators developed (Godin 2002). Secondly, economists were the main producers and users of statistics and science indicators, so they made up the majority of national and OECD consultants since, until recently, they were the only analysts who worked with statistical data systematically. R. Nelson (1977) once claimed: 'One would think that political science, not economics, would be the home discipline of policy analysis. According to some, the reason it never happened was that the normative structure of political science was usually appalling, while economics had a strongly articulated structure in order to think about what politics should look like.' The third reason for focusing on economics was that the economic dimension of reality was the easiest to measure. Much of the production and impact of science is intangible and diffuse, and often appears with a significant delay. Although difficult to measure as well, the economic dimension of science and technology is still the easiest to measure.

The Social Dimension of Research Effects

The social or socio-economic impact of research is much harder to measure than the scientific impact, and there are no indicators that can be used across all disciplines and institutions, nor comparable databases (Martin 2011). Social influence often takes years to become apparent, and 'the ways in which research can influence certain behaviours or inform social policy are often very diffuse' (Martin 2007). We can find some empirical studies on the impact of new technologies (computers) on jobs and the division of work, or measures of the return on investment in health research in terms of disease burden - frequency, prevalence, hospital days, mortality, lost years of life (Comroe and Dripps 1976; Hanney, Davies, and Buxton 1999; Gross, Anderson, and Powe 1999; Grant 1999).
Several assessments of individual public programmes dealing with socio-economic impacts can also be found, for example at the level of the European Commission. However, most of the literature deals with identifying the right approach to use when assessing impact, or simply with describing the available methods (e.g. Garrett-Jones 2000; van der Meulen and Rip 2000; Roessner 2000; Caulil, Mombers, and van den Beemt 1996; Kostoff 1994). Many authors have acknowledged the difficulty of measuring impact, firstly because it is indirect rather than direct and secondly because it is scattered across time and space. For many, the prospect of measuring non-economic impact depends on a better knowledge of research transfer mechanisms. Several models proposing analytical frameworks for these transmission mechanisms can be found in the literature (Hanney, Davies, and Buxton 1999; Caulil, Mombers, and van den Beemt 1996; Cozzens 1996).

Recognizing the limitations listed so far, some researchers have in recent years explored new ways in which science and, above all, basic research has an impact on society. These include B. Martin and A. Salter (building on earlier work with K. Pavitt), who argue that econometric studies provide only limited clues as to the true economic benefits of publicly funded (basic) research. These studies use models that face too many methodological constraints to capture the full benefits of basic research; they lack reliable indicators and do not explain the link between research and economic performance (Salter and Martin 2001). Most studies that have evaluated the social impact of research so far have focused on the economic dimension. As early as the 1950s, economists began to integrate science and technology into their models, examining the impact of research and development on economic growth and productivity (Godin and Doré 2005). Compared to other dimensions (e.g. the cultural dimension), the economic dimension is certainly the easiest to measure (notwithstanding that reliable indicators have not yet been developed). Salter and Martin (2001) identified six ways in which publicly funded research stimulates economic growth:

• expanding the knowledge on which companies draw for their technological activities;
• well-educated graduates become employed in business;
• scientists develop new equipment, laboratory techniques and analytical methods that become available for use outside the academic world;
• government-funded research is often the entry point into networks of expertise;
• when faced with complex problems, industry relies on and is supported by publicly funded research; and
• new businesses are formed out of scientific projects.

Recently, interest in evaluating non-economic social outcomes has increased significantly. In most cases, initiatives to measure the social outcomes of science and technology come from the high-level political sphere. Thus, the European Commission's (2014) Horizon 2020 research and innovation programme explicitly focuses on social outcomes in the 'Science with and for Society' section as well as in other chapters. In the US, the so-called Broader Impacts criterion of the National Science Foundation (NSF), i.e. the criterion related to socio-economic impacts, derives from the National Science Board (NSB), the governing and advisory body of the NSF.
According to a 2011 report (National Science Board 2011), the criteria for evaluating research proposals must cover not only scientific quality but also how proposals 'contribute to achievement of broader societal goals.' Of particular importance for present purposes is the NSB's warning that the 'assessment and valuation' of NSF projects should be based on appropriate measurements, taking into account the likely relationship between the scale of the broader impacts and the resources provided for project implementation.

Because interest in the social effects of research is fairly recent, there is not yet a large number of useful, valid techniques available for evaluating these effects. One reason is simply that too little time has passed: economic approaches to research evaluation can draw on at least fifty years of development, and bibliometric approaches on at least thirty. The other reason is the simple fact that the social effects of research or science are much more difficult to measure. In the case of bibliometric approaches, causal pathways are rarely the focus. Focusing on patents, publications or citations, bibliometric studies may sometimes register results that coincide with socio-economic effects, but they do not disclose the mechanisms leading to these effects. In economic studies, the commodification and monetization of results are almost always the focus. In some cases this may actually detract from an understanding of a result and its effect (since even some important economic results are not well captured by monetary indicators), but in most cases the precision of economic data, considered together with the assumptions of economic theory, at least allows some robust causal hypotheses about research effects.

How to Deal with Measuring the Impacts of Research Results?

To summarize the discussion and findings on determining the socio-economic impacts of the results of research work, it appears that most efforts to determine the impact of science are primarily concerned with economic consequences, such as economic growth, productivity, profit, job creation, market share, spin-offs, etc., with very few indicators directly linking science to economic benefits. What is even more characteristic (and worrying) is that systematic measurements and indicators of influence on the social, cultural, political and organizational dimensions are almost completely absent from the literature. For this reason, the approach taken by Godin and Doré (2004) in identifying the areas of science impact, funded by the Quebec Department of Research, Science and Technology, is all the more interesting. They conducted a series of interviews with researchers (some of whom were also institute directors) from 17 publicly funded research institutes (10 in science and technology, 4 in health sciences and 3 in social sciences and humanities), and with actual and potential users of research results within 11 social and economic entities. The interviews had two main goals. Firstly, they discussed the different types of research done by researchers: strategic, fundamental and useful. Secondly, they sought to identify the full range of potential research impacts by gathering information on the results of research activities that have at least a potential use. The interviews were conducted using a short questionnaire that served as a guide for the interviewer. They were semi-structured in nature and thus offered the freedom to explore topics.
On this basis, the authors constructed a typology with eleven dimensions, corresponding to as many categories of the impact of science on society (table 1).

table 1  Science Impact Dimensions
Science: knowledge, research activities, training
Technology: products and processes, services, know-how
Economy: production, financing, investment, commercialization, budget
Culture: knowledge, know-how, relations, values
Society: well-being, discourses and activities of groups
Policy: decision makers, citizens, public programmes, national security
Organization: planning, organization of work, administration, human resources
Health: public health, healthcare system
Environment: management of natural resources and the environment, climate and meteorology
Symbolic: legitimacy/credibility/recognition, general knowledge
Training: curricula, educational aids, professional qualifications, graduates, labour market entry, matching work and training, careers, use of acquired knowledge
notes  Adapted from Godin and Doré 2004.

The first dimension - science - is the most direct and obvious. It refers to the direct scientific impact: the results of a particular piece of research influence the later advancement of knowledge - theories, methodologies, models and facts - the creation and development of new disciplines, and the development of research activities themselves (interdisciplinarity, intersectorality, internationalization, etc.) and, ultimately, training.

The second dimension relates to technological impact. Manufacturing, process and service innovations, as well as technical knowledge, are the kinds of influence for which research activity deserves at least part of the credit. However, apart from patents, there are actually very few indicators with which to assess this dimension properly. An innovation survey, for example, measures innovation activities, not the results and impact of the underlying research.

The third dimension is the best known - the economic one. It refers to the impact on the financial position of an organization (operating costs, product sales prices, revenues, profits), its sources of financing (equity capital, venture capital, contracts, etc.), investments (human capital - hiring and training; physical capital - infrastructure and materials, operation and expansion), manufacturing activities (types of goods and services produced) and market development (diversification and exports).

The next eight dimensions are new, at least for statistics, as they are often less tangible. The impact on culture refers to what people often call the public understanding of science, but above all to the four types of knowledge: know-what, know-why, know-how and know-who (Lundvall and Johnson 1994). More specifically, it refers to the impact on an individual's knowledge and understanding of ideas and realities. It also includes intellectual and practical skills, attitudes and interests (in science in general, scientific institutions, scientific and technological controversies, scientific news and culture in general), as well as values and beliefs (Godin and Gingras 2000).

The impact on society refers to the impact that knowledge has on the well-being, behaviours, practices and activities of people, groups and communities. For individuals, social impact is about welfare and quality of life. It also refers to life habits (consumption, work, food, sport and sexuality).
For groups and communities, new knowledge can contribute to changing the discourse and concepts of society, or it can help to 'modernize' the way we 'behave and act.' The impact on policy relates to the way knowledge affects policy makers and politicians themselves: the interest and attitudes of politicians, administrators and citizens towards issues of public interest, including science and technology; public action (legal practice, ethics, policies, regulations, norms, standards); and citizen participation in scientific and technological decisions. The impact on organizations is the impact on the activities of institutions and organizations, such as planning (the goals and activities of organizations), the organization of work (the division and quality of tasks, automation, etc.), management and administration (management, marketing, distribution, procurement, accounting, etc.) and human resources (workforce, employee qualifications, working conditions, training, etc.).

The health dimension refers to the effects of research on public health (life expectancy, prevention and containment of the spread of disease, etc.) and on the healthcare system (health care and its costs, healthcare professionals, health infrastructure and equipment, and products - medicines, treatments, etc.). The environmental dimension refers to the impact of science on the environment and its management, in particular on the management of natural resources (conservation of biodiversity) and of environmental pollution (mechanisms and instruments for controlling pollution and its sources). It also covers the impact of research on climate and meteorology (climate control methods and models for climate and meteorological forecasting). Indicators of environmental status and health status already exist in several organizations and countries. However, as with economic growth and productivity, the problem lies in linking these effects to research activity and its results.

The last two dimensions deserve a special comment. The symbolic effect is a significant one, identified by the users of research results who were interviewed. Companies involved in research, for example through their research and development departments, often gain credibility for leading research and development or for liaising and collaborating with university centres and academics. For many businesses, this is often just as important as an economic impact - and it is likely to have an economic impact as well. Unfortunately, this effect has so far not been systematically examined and measured. The last dimension of impact - training - could also be placed under the first, scientific, dimension. It is addressed separately here because of its importance in relation to the mission of universities and other higher education institutions. It refers to curricula (study programmes and training programmes), educational manuals, qualifications (competences acquired through research), entry into the labour market, the match between training and work, career paths and the use of acquired knowledge.

This typology provides a checklist, as Godin and Doré (2004) put it, which reminds statisticians that research and its results affect many dimensions of reality in addition to those that are usually defined and measured in the literature (a simple illustration of such a checklist is sketched below).
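Purely as an illustration of how such a checklist could be used in practice - this is our sketch, not part of Godin and Doré's work, and the names in it are hypothetical - the eleven dimensions of table 1 can be encoded as a simple screening record that an evaluator or project manager fills in for each project:

```python
from dataclasses import dataclass, field

# The eleven impact dimensions of table 1 (adapted from Godin and Doré 2004).
IMPACT_DIMENSIONS = (
    "science", "technology", "economy", "culture", "society",
    "policy", "organization", "health", "environment", "symbolic", "training",
)

@dataclass
class ImpactChecklist:
    """Hypothetical screening record for the expected or realized impacts of one project."""
    project: str
    # Free-text note per dimension; an empty string means 'no impact identified (yet)'.
    notes: dict = field(default_factory=lambda: {d: "" for d in IMPACT_DIMENSIONS})

    def record(self, dimension: str, note: str) -> None:
        if dimension not in IMPACT_DIMENSIONS:
            raise ValueError(f"Unknown impact dimension: {dimension}")
        self.notes[dimension] = note

    def covered(self) -> list:
        """Dimensions for which at least a potential impact has been described."""
        return [d for d, note in self.notes.items() if note]


# Example use (hypothetical project and entries)
checklist = ImpactChecklist("Hypothetical applied research project")
checklist.record("technology", "New measurement process licensed to an industrial partner")
checklist.record("training", "Two doctoral students trained in methods transferable to industry")
print(checklist.covered())  # -> ['technology', 'training']
```

A record like this does not measure anything by itself, but it makes explicit which of the eleven dimensions have been considered and where potential impacts remain undescribed.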
As we can see, Godin and Doré tried to comprehensively and systematically analyse and substantiate the potential effects of scientific research on society in all the aspects identified in their study. This can be a very useful approach both in assessing and evaluating the effects of research that has already been completed and in assessing the intended effects of future research, particularly of applied research projects.

Conclusions

The history of research evaluation is diverse, with a focus on processes, results and, occasionally, effects. With respect to the effects of research, most work so far has focused on economic or knowledge outcomes. For the former, a wide range of economic approaches has been developed, including input-output analysis, simulations, case studies and, in particular, cost-benefit analysis. Very different approaches have been used to evaluate knowledge outcomes. While peer review, whether open or structured, remains an important approach to assessing the quality of knowledge outcomes, rapidly evolving bibliometric techniques have increasingly been used by policy makers and researchers in recent decades.

Whether scientists like it or not, the social impact of their research is an increasingly important factor in gaining public funding and support for basic research. This has always been the case, but new studies of measuring instruments that could assess the social impact of research would provide better qualitative and quantitative data on which funding agencies and politicians could base their decisions.

The fact is that there is a great need for quantitative studies of, and indicators for, the socio-economic effects of science. This need does not come solely from individual governments that want or need to evaluate the success of their investment in science, but also from researchers themselves, who want to understand and know the extent of the impact of their research on society and the mechanisms through which the effects of their research are transmitted to it. As noted above, most quantitative studies of the impact of science on society are based on econometric models that link research and development expenditure to economic variables such as economic growth or GDP. In addition, many researchers still agree with the NSF's old claim that 'returns (of science) are so great that it is almost unnecessary to justify or evaluate investment in it' (National Science Foundation 1957). Determining and measuring the impact of research results or of science is, according to Godin and Doré (2004), at a similar stage to where the measurement of research and development and its results was in the early 1960s. There are, of course, many challenges, so appropriate solutions need to be developed in order to address the methodological issues properly. Cozzens (2002) was right to suggest: 'We need to be more involved with fundamental social problems and issues, rather than narrowly focusing on the direct benefits of a particular research program or research activity.'

References

Bornmann, L. 2009. 'The State of h Index Research: Is the h Index the Ideal Way to Measure Research Performance?' EMBO Reports 10 (1): 2-6.
-. 2012. 'Measuring the Societal Impact of Research.' EMBO Reports 13 (8): 673-76.
Bozeman, B., and D. Sarewitz. 2011. 'Public Value Mapping and Science Policy Evaluation.' Minerva 49 (1): 1-23.
Burke, J., J. Bergman, and I. Asimov. 1985.
The Impact of Science on Society. Washington, DC: National Aeronautics and Space Administration.
Bush, V. 1945. Science: The Endless Frontier; A Report to President Truman Outlining His Proposal for Post-War U.S. Science and Technology Policy. Washington, DC: United States Government Printing Office.
Caulil, G. F., C. A. M. Mombers, and F. C. H. D. van den Beemt. 1996. 'Quantifying the Utilization of Research: The Difficulties and Two Models to Evaluate the Utilization of Research Results.' Scientometrics 37 (3): 433-44.
Comroe, J. H., and R. D. Dripps. 1976. 'Scientific Basis for the Support of Biomedical Science.' Science 192 (4235): 105-11.
Cozzens, S. E. 1996. 'Quality of Life Returns from Basic Research.' In Technology, R&D, and the Economy, edited by C. E. Barfield and B. L. R. Smith, 199-200. Washington, DC: The Brookings Institution.
Cozzens, S. 2002. 'Evaluating the Distributional Consequences of Science and Technology Policies and Programs.' Research Evaluation 11 (2): 101-7.
Demšar, F., D. Lesjak, F. Kohun, and R. Skovira. 2017. 'Socio-Economic Impacts of the Science and Research Systems.' In Management Challenges in a Network Economy: Proceedings of the MakeLearn and TIIM International Conference, edited by V. Dermol and M. Smrkolj, 427-32. Bangkok, Celje, and Lublin: ToKnowPress.
Denison, E. F. 1962. 'The Sources of Economic Growth in the United States and the Alternatives before Us.' Supplementary Paper 13, The Committee for Economic Development, New York.
Donovan, C. 2008. 'The Australian Research Quality Framework: A Live Experiment in Capturing the Social, Economic, Environmental, and Cultural Returns of Publicly Funded Research.' New Directions for Evaluation 118:47-60.
ERiC. 2010. Evaluating the Societal Relevance of Academic Research: A Guide. The Hague: Rathenau Institute.
European Commission. 2010. Assessing Europe's University-Based Research: Expert Group on Assessment of University-Based Research. Luxembourg: Publications Office of the European Union.
-. 2014. Horizon 2020: The EU Framework Programme for Research & Innovation. Brussels: European Commission.
Garrett-Jones, S. 2000. 'International Trends in Evaluating University Research Outcomes: What Lessons for Australia?' Research Evaluation 8 (2): 115-24.
Gibbons, M., and R. Johnston. 1974. 'The Roles of Science in Technological Innovation.' Research Policy 3 (3): 220-42.
Godin, B. 2002. 'Measuring Output: When Economics Drive Science and Technology Measurements.' http://www.csiic.ca/PDF/Godin_14.pdf
Godin, B., and C. Doré. 2004. 'Measuring the Impacts of Science: Beyond the Economic Dimension.' http://www.csiic.ca/PDF/Godin_Dore_Impacts.pdf
-. 2005. 'Measuring the Impacts of Science: Beyond the Economic Dimension.' INRS Urbanisation, Culture et Société / Helsinki Institute for Science and Technology Studies, Helsinki.
Godin, B., and Y. Gingras. 2000. 'What Is Scientific Culture and How to Measure It: A Multidimensional Model.' Public Understanding of Science 9 (1): 43-58.
Grant, J. 1999. 'Evaluating the Outcomes of Biomedical Research on Healthcare.' Research Evaluation 8 (1): 33-38.
Gross, C. P., G. F. Anderson, and N. R. Powe. 1999. 'The Relation between Funding by the National Institutes of Health and the Burden of Disease.' The New England Journal of Medicine 340 (24): 1881-87.
Hanney, S., A. Davies, and M. Buxton. 1999.
'Assessing Benefits from Health Research Projects: Can We Use Questionnaires Instead of Case Studies?' Research Evaluation 8 (3): 188-99.
Holbrook, J. B., and R. Frodeman. 2011. 'Peer Review and the Ex Ante Assessment of Societal Impacts.' Research Evaluation 20 (3): 239-46.
Jorgenson, D. W., and Z. Griliches. 1967. 'The Explanation of Productivity Change.' Review of Economic Studies 34 (3): 249-83.
Kostoff, R. N. 1994. 'Federal Research Impact Assessment: State-of-the-Art.' Journal of the American Society for Information Science 45 (6): 428-40.
Lamm, G. M. 2006. 'Innovation Works: A Case Study of an Integrated Pan-European Technology Transfer Model.' BIF Futura 21 (2): 86-90.
Mansfield, E. 1991. 'Academic Research and Industrial Innovation.' Research Policy 20 (1): 1-12.
Martin, B. R. 2007. 'Assessing the Impact of Basic Research on Society and the Economy.' Paper presented at the FWF-ESF International Conference on Science Impact: Rethinking the Impact of Basic Research on Society and Economy, Vienna, 11 May.
-. 2011. 'The Research Excellence Framework and the "Impact Agenda": Are We Creating a Frankenstein Monster?' Research Evaluation 20 (3): 247-54.
Molas-Gallart, J., A. Salter, P. Patel, A. Scott, and X. Duran. 2002. 'Measuring Third Stream Activities.' Final Report to the Russell Group of Universities, Science and Technology Policy Research Unit (SPRU), Brighton.
National Science Board. 2011. National Science Foundation's Merit Review Criteria: Review and Revisions. Washington: National Science Board.
National Science Foundation. 1957. The Seventh Annual Report of the National Science Foundation. Alexandria, VA: National Science Foundation.
Nelson, R. R. 1977. The Moon and the Ghetto. New York: Norton.
Niederkrotenthaler, T., T. E. Dorner, and M. Maier. 2011. 'Development of a Practical Tool to Measure the Impact of Publications on the Society Based on Focus Group Discussions with Scientists.' BMC Public Health 11 (1): 588.
Nightingale, P., and A. Scott. 2007. 'Peer Review and the Relevance Gap: Ten Suggestions for Policymakers.' Science and Public Policy 34 (8): 543-53.
Posner, M. V. 1961. 'International Trade and Technological Change.' Oxford Economic Papers 13 (3): 323-41.
Roessner, D. J. 2000. 'Outcome Measurement in the USA: State of the Art.' Research Evaluation 11 (2): 85-93.
Rosenberg, N., and R. R. Nelson. 1996. 'The Roles of Universities in the Advance of Industrial Technology.' In Engines of Innovation: US Industrial Research at the End of an Era, edited by R. S. Rosenbloom and W. J. Spencer, 87-109. Boston: Harvard Business School Press.
Rymer, L. 2011. Measuring the Impact of Research: The Context for Metric Development. Turner: The Group of Eight.
Salter, A. J., and B. R. Martin. 2001. 'The Economic Benefits of Publicly Funded Basic Research: A Critical Review.' Research Policy 30 (3): 509-32.
Solow, R. 1957. 'Technical Change and the Aggregate Production Function.' The Review of Economics and Statistics 39 (3): 312-20.
Stephan, P. 2012. How Economics Shapes Science. Cambridge, MA: Harvard University Press.
van der Meulen, B., and A. Rip. 2000. 'Evaluation of Societal Quality of Public Sector Research in the Netherlands.' Research Evaluation 9 (1): 11-25.
van der Meulen, B., M. Nedeva, and D. Braun. 2005. 'Intermediary Organisations and Processes: Theory and Research Issues.' Position Paper for the PRIME Workshop, Enschede, 6-7 October.
van Vught, F., and F. Ziegele, eds. 2011.
'Design and Testing the Feasibility of a Multidimensional Global University Ranking.' Final Report, Consortium for Higher Education and Research Performance Assessment, CHERPA-Network, Brussels.
Vernon, R. 1966. 'International Investment and International Trade in the Product Cycle.' The Quarterly Journal of Economics 80 (2): 190-207.

This paper is published under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License (http://creativecommons.org/licenses/by-nc-nd/4.0/).