

From: Georgia Institute of Technology Research News

Measuring Scientific Research: New Accountability Requirements Pose Tough Questions For Researchers And Funding Agencies

How do you measure the impact of basic research on society? What does "quality" mean when applied to scientific research activities?

These are among the questions faced by the research community as it deals with new government-mandated requirements for accountability. These often-controversial efforts to increase accountability are hampered by the difficulty in measuring creative activities like research, concerns about a growing burden of reporting, and a lack of standard measuring tools.

"Research organizations around the world are feeling new pressures for accountability," said Dr. Susan Cozzens, director of the School of Public Policy at the Georgia Institute of Technology. "There are new reporting requirements and new questions about the effectiveness of programs -- driven by a desire to demonstrate to the public how it benefits from the investments being made in research."

Cozzens will describe "best practices" in research assessment at a session to be held Saturday, January 23, 1999, at the annual meeting of the American Association for the Advancement of Science (AAAS) in Anaheim, CA. Over the past 20 years, she has helped set policy at the U.S. National Science Foundation and other organizations. Most recently, she conducted an international study of research assessment with the Center for Research Policy at the University of Wollongong in Australia.

"One of the toughest question is measuring socioeconomic benefits from specific investments in basic research, because basic research gets embodied in capabilities for a society," she said. "The capabilities get used in a lot of different directions and make their impact at unpredictable times in the future."

While the impacts of research can be measured on a large scale, it is difficult to measure the specific results of particular projects. That makes it hard to demonstrate the kinds of cause-and-effect relationships that funding agencies would like to be able to show.

"There are some very tough methodological questions, and there are no breakthrough methods out there," Cozzens added.

The Australian study reviewed the best assessment methods available, and looked at what techniques managers actually use to make decisions about research programs.

Predominantly, decisions are made on the basis of a modified "peer review" process in which panels of experts offer their evaluations. Over the past ten years, Cozzens found, the traditional peer review process has broadened to include input from potential users of the research -- such as industrial companies.

By including "customers" of research, these "mixed-panel" reviews follow the trends pioneered by other communities that seek input from persons outside the enterprise being evaluated.

Though the specific assessment techniques may be changing, evaluation of projects being considered for funding has always been important and highly competitive in the U.S. research enterprise. However, Cozzens' study found a growing interest in ongoing monitoring of these projects once they receive funding.

The new monitoring efforts rely on advances in information technology to regularly gather and analyze relatively simple indicators such as the level of student involvement, amount of funds invested and number of publications produced. Such quality control analysis can pinpoint research programs that are in trouble -- though it does not offer much help in making truly difficult decisions.

"These tend to be very crude systems, and may help managers sort out the bottom five percent of programs that are really problematic," Cozzens explained. "But they are not going to provide much input on the more complicated priority-setting tasks because they do not capture information about quality or advances made."

The managers examined in the study preferred simple assessment techniques -- such as reviews by experts -- over more complex measurement tools. But whether that preference results in good decisions cannot really be determined, because of yet another difficulty in assessing the scientific endeavor.

"The problem for the research community is that we really do not have any acceptable quantitative measures of quality," Cozzens noted. "Scientific research is on the cutting edge. It is not about producing standardized widgets on a production line. Measuring research activity does not answer the central question about research policy -- how to choose the most exciting areas to explore."

The imposition of accountability systems has caused controversy because of concerns that they could adversely affect the creativity and autonomy of basic research: "People fear too much political interference with their research direction," she said. "Researchers need autonomy to be creative and explore new directions. They do not like it when people put a lot of emphasis on outside influences."

The reporting requirements involved in assessments can also prove a burden on researchers. Unless government agencies work to minimize that burden, the new accountability requirements could divert researchers from their true goals.

Placing too much emphasis on easily measured indicators, such as the number of research papers published, can also skew the scientific enterprise so that it -- for example -- produces publications just to satisfy the reporting process.

Despite the potential dangers, Cozzens believes accountability requirements can have very positive impacts.

"If done wisely, this kind of movement can make researchers into better strategic thinkers in ways that do not interfere with autonomy," she said. "If done right, accountability can also strengthen the relationship between research and society."

In the United States, accountability pressure comes from the Government Performance and Results Act (GPRA), passed in 1993. That law, which requires strategic plans as well as annual performance plans and reports, is now being implemented, though no clear consensus has developed among federal agencies on how to meet its requirements.

Cozzens predicts that Congress, the Office of Management & Budget and the research agencies will ultimately agree on some practical solutions, though she expects "there will continue to be diversity and a range of enthusiasm among the agencies about using these tools."

CONTACT:
Jane Sanders 404-894-2214; E-mail: [email protected]

Technical Contact:
Dr. Susan Cozzens 404-894-6822;
E-mail: [email protected]



