With the United States on the brink of defaulting on its national debt, global media reports, online chat rooms and social conversation alike are once again dominated by discussion about the world’s economic future. Familiar concepts, such as Gross Domestic Product (GDP), Consumer Price Index (CPI), inflation rate, trade figures and unemployment figures — all well-known and well-worn economic indicators — inevitably enter the discourse and debate.
Yet what exactly are statistical indicators? How do they develop, change behaviours and intersect with broader social change? And, in the context of the UNU’s work, how does this apply to science, technology and innovation indicators? UNU-MERIT Professorial Fellow Fred Gault explains.
Statistical indicators can change behaviour, ways of thinking and communicating. That is why understanding their development and use is important. But indicators can be both used and abused, and a better understanding of how they are developed and applied may help avoid problems.
Technologies and practices also change behaviour, as users of mobile telephones and mobile services, such as banking, know well. In this context, indicators can be seen as acting in exactly the same way; that view can help in comprehending how indicators evolve and are applied.
Underpinning this discussion is the understanding that an indicator is a technology, a product, that governs behaviour, is modified by users (outside of the producer community), and develops in response to user needs. A widely known economic example is the Gross Domestic Product (GDP) indicator, which signifies the size of the economy, its change over time and economic growth or decline. GDP can be combined with other indicators, such as population, to give GDP/Capita.
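The arithmetic behind such a composite indicator is simple division. As a minimal sketch (the figures below are illustrative, not real statistics):

```python
def gdp_per_capita(gdp: float, population: float) -> float:
    """Combine two indicators: total GDP divided by population."""
    return gdp / population

# Illustrative figures only, not real data
gdp = 1_500_000_000_000   # total GDP, in dollars
population = 50_000_000   # total population

print(gdp_per_capita(gdp, population))  # 30000.0 dollars per person
```

The point of the combination is comparability: two economies of very different sizes can be compared on a per-person basis.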
Indicators, then, are statistics, or combinations of statistics, that are populated by data. Social and economic data can come from a variety of sources, including surveys, administrative data, registers and case studies. Indicators support monitoring, benchmarking, foresight studies and research into the further development of indicators. Indicators suggest, or indicate, a characteristic of a system. But like any statistic, indicators can mislead.
A recent UNU-MERIT working paper looks at the social impacts of the development of science, technology and innovation indicators. It shows that statistical indicators change behaviour at two levels: first, while they are being developed and, second, once they are used.
It takes time to develop an indicator. First, of course, there must be some interest in what is being indicated, such as the magnitude and the change of GDP, which indicates when an economy is doing well (magnitude) and when it is growing or in recession (change), as well as how it compares with other economies.
The need for comparison with other countries means that there must be agreed standards for producing the indicator so that it is internationally comparable. This requires discussion, negotiation and agreement, and is not always easy.
Part of developing an indicator requires experimentation with ways of measuring the activity to be described. In the 1980s, for example, numerous experiments were conducted into how to measure innovation. These experiments were discussed at the Organisation for Economic Co-operation and Development (OECD) Working Party of National Experts on Science and Technology Indicators (NESTI).
As the tacit knowledge of the delegates to NESTI converged through discussion, two things happened.
The first was a decision to write down that tacit knowledge, or to codify it by producing a manual to guide the development of the measurement of innovation.
The second thing that happened, linked to the first, was the evolution of a language that included a definition of innovation for statistical purposes, and a description of how that definition should be applied. This resulted in a common language and a way of using it — a grammar — that made communication easier within the Expert Group that produced the manual.
“Whether for good or ill, the use of indicators changes behaviour in society…. Developing indicators, therefore, is a serious matter.”
From the perspective of behavioural change, the experts — in moving from country-specific experiments to agreement on what could be measured and reported for international comparison — had to make concessions and build trust among members of the group. Without the willingness to compromise, and the belief that the outcome (the production of the manual) would serve the greater good, this achievement could not have been obtained. The experts developed a language, learned to use it, built a community of practice based on trust, and then codified what was known tacitly in the first of the series of Oslo Manuals (the leading international source of guidelines for collecting and interpreting innovation data).
That raises another point. Just as societies and economies change, so do the manuals that support the production of the indicators to describe the societies and economies, and their international comparisons. The Oslo Manual is now in its third edition (and more will follow). For statistics on research and development (R&D), the Frascati Manual, first published in 1963, is now in its sixth edition.
Consider what happens when the indicators are used.
Some are well known, such as the indicators used in the System of National Accounts, for example the GDP indicator and the Consumer Price Index (CPI). If GDP declines for two consecutive quarters, the economy is seen to be in recession, and fiscal and monetary measures may be applied to restore growth. Similarly, if CPI rises, collective bargaining agreements can trigger wage increases.
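The recession rule described above is mechanical enough to sketch in a few lines of code. This is a simplified illustration of the rule-of-thumb definition (two consecutive quarters of declining GDP), using made-up quarterly figures:

```python
def in_recession(quarterly_gdp: list[float]) -> bool:
    """Return True if the two most recent quarters both show GDP decline,
    the common rule-of-thumb definition of a recession."""
    if len(quarterly_gdp) < 3:
        return False  # not enough history to see two consecutive declines
    a, b, c = quarterly_gdp[-3:]
    return b < a and c < b

# Illustrative quarterly GDP series (arbitrary units, not real data)
print(in_recession([100, 102, 101, 99]))   # True: two consecutive declines
print(in_recession([100, 99, 101, 102]))   # False: economy growing again
```

In practice, official recession dating (for example by the NBER in the United States) weighs many indicators, not GDP alone; the two-quarter rule is the popular shorthand.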
Example: the System of National Accounts (SNA), a model of the development, use and impacts of indicators.
While these are clear applications of indicators that change people's lives, the impacts are less clear when it comes to indicators of science, technology and innovation (STI). STI indicators are a much more recent development and are not yet part of day-to-day conversation. Because they have not been around as long as the indicators in the System of National Accounts, there is no commonly agreed and understood language for discussing them, which makes it harder for people, including policy makers, to talk about STI indicators.
For R&D, there are clear applications of the Gross Domestic Expenditure on R&D (GERD) indicator. The ratio of GERD to GDP is used as a target by governments and institutions around the world: for example, the Lisbon Target ratio of three percent in the European Union, President Obama’s target of greater than three percent in the United States, and the African Union target of one percent.
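A GERD/GDP target of this kind reduces to a simple ratio check. As a sketch, with illustrative numbers only (the function names and figures are assumptions for the example, not official methodology):

```python
def gerd_ratio(gerd: float, gdp: float) -> float:
    """GERD expressed as a percentage of GDP."""
    return 100.0 * gerd / gdp

def meets_target(gerd: float, gdp: float, target_pct: float) -> bool:
    """Check an R&D expenditure target like the EU's three percent."""
    return gerd_ratio(gerd, gdp) >= target_pct

# Illustrative: an economy spending 25 units on R&D out of a GDP of 1000
print(gerd_ratio(25.0, 1000.0))            # 2.5 (percent)
print(meets_target(25.0, 1000.0, 3.0))     # False: below the 3% target
```

Once a government adopts such a target, behaviour in the measured population can shift toward whatever raises the ratio, which is exactly the point made below about indicators changing behaviour for good or ill.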
Political leaders use indicators to set targets and, as a result, behaviour changes in the affected population. This can have both positive and negative outcomes. Incentives might give rise to more R&D performance (the formal creation of new knowledge that can be brought to market and increase growth) or policies could promote more basic research when the immediate needs of the economy are for applied research or innovation.
Whether for good or ill, the use of indicators changes behaviour in society, and this would not be possible were the indicators not developed in the first place. Developing indicators, therefore, is a serious matter.
Significantly, such serious matters as indicator development are not always left to the international experts.
As with technologies, users can make a difference, and they can do so in three ways. First, they can use the indicators and suggest improvements to their coverage, survey methodology and analysis. Second, they can develop related indicators, initially for domestic use, and then invite the international experts to include them in the next revision of the manual. Third, they can simply produce and publish the indicators domestically and see what happens.
Users are an important part of indicator development, which is why the OECD’s NESTI is made up of data producers and policy users, to ensure that good, policy-relevant suggestions for indicators from users are incorporated and that suggestions from the policy community are feasible.
Once a manual is produced, it provides boundaries for the topic covered. But pushing these boundaries is part of indicator development. As an example, innovation, as defined in the Oslo Manual, is connected to the market. While public sector organizations can do all of the activities that lead to innovation, such as R&D, capital expenditure, purchase of knowledge, design, training, etc., the outcome, according to the Oslo Manual, is not innovation.
Similarly, consumers can change goods or services to make them easier to use. This will appear in innovation statistics if they give the prototype back to the producer and ask for a better good or service next time, or if they start their own business. It will not, however, if they simply share the knowledge of how to make the change with a community of practice or a peer group. Dealing with this is a different and evolving story.
To conclude, indicators change the practices of the people who produce them, and using indicators changes the lives of the people affected by them. Knowing more about how these changes occur is an important subject for the social, behavioural and economic sciences.