Research finds college ranks flawed
The peer assessment portion of the annual U.S. News & World Report college rankings uses imprecise methods, according to new research released in the February edition of the American Journal of Education.
The report ranked Syracuse University No. 58 among the top 100 national universities in August.
The peer assessment is the largest single factor in the ranking formula, at 25 percent. Top academics, such as presidents and provosts, complete the survey to judge a school’s undergraduate academic excellence, said Michael Bastedo, one of the study’s researchers and an associate professor in the Center for the Study of Higher and Postsecondary Education at the University of Michigan.
When the rankings are studied from year to year, the peer assessment score is influenced by the reputation and ranking from the previous year, Bastedo said. The new assessment and the results from the year before are barely independent of each other, he said.
Eric Spina, vice chancellor and provost of SU, said that the U.S. News & World Report college rankings fail to reflect the tremendous developments SU has made over the last five to seven years.
‘As far as we’re concerned, such measures fail to capture what we believe is the essence of SU’s traditional strengths as a place of access and affordability,’ he said in an e-mail.
Fawn Bertram, a senior advertising major, said she does not check college rankings often, but SU was ranked near 60 when she applied, while other schools she was interested in were in the 20s.
‘In the end, it came down to financial aid,’ she said. ‘And even though Syracuse was ranked lower, it was still a good school because it also depends on the program.’
Bastedo said he recommends that staff and faculty who complete the survey avoid conflicts of interest. High-level department chairs are more likely to be honest about institutions that are not their rivals and that do not have overlapping interests, he said.
Representatives completing the survey also need to know the institutions well, Bastedo said. Instead of giving the survey to presidents or provosts of the universities, faculty members or deans should rank the programs with which they are familiar, he said.
The remaining 75 percent of the rankings measure objective factors, such as graduation and retention rates, faculty resources and student selectivity, said Robert Morse, the data research director at U.S. News & World Report.
Peer evaluations have always been relatively stable from year to year, Morse said. They are important because colleges have an outside reputation in the broader society, he said.
It is hard to design a survey without problems, Bastedo said. The way reputation is currently measured, through peer assessment, is not a bad idea, but it can be improved, he said.
Rankings are valued highly because they are among the only indicators of which schools other people consider good, said Nicholas Bowman, a postdoctoral research associate in the Center for Social Concerns at the University of Notre Dame and one of the researchers featured in the American Journal of Education.
But students are better off examining the individual factors that determine the results, rather than relying solely on a school’s overall ranking, Bastedo said.
‘(Rankings are) one piece of information in a range of information,’ he said.
Published on February 2, 2010 at 12:00 pm