Ranking the Scientific Journals
Scientists live by a well-known motto: "Publish or perish." But they don't just want to publish. They want to publish in the best scientific journals. How do they know which journals are the best?
Currently, the most popular ranking system is the Impact Factor. It scores a journal as the ratio of citations its recent papers receive to the number of papers it published over a fixed two-year window. For example, as the authors of a recent PLoS ONE paper explain, "the 2010 Impact Factor for the journal CA: A Cancer Journal for Clinicians was 94.33, the highest among all scientific journals. This number is calculated by noting that 19 source items were published in 2008 and 23 items in 2009 and in turn the journal's 2008 and 2009 material was cited a total of 3,962 times in 2010 (3,962/42 = 94.33)."
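That calculation is simple enough to sketch in a few lines of Python; the function name and inputs below are illustrative, not part of the paper:

```python
def impact_factor(citations_this_year, items_prev_two_years):
    """Impact Factor for year Y: citations in Y to material published
    in years Y-1 and Y-2, divided by the number of items published
    in those two years."""
    return citations_this_year / sum(items_prev_two_years)

# CA: A Cancer Journal for Clinicians, 2010:
# 19 items in 2008 + 23 in 2009 = 42 items, cited 3,962 times in 2010.
print(round(impact_factor(3962, [19, 23]), 2))  # → 94.33
```

Note how sensitive the ratio is to a small denominator: a journal publishing only a few dozen items a year can post an enormous score.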
But this system has plenty of shortcomings. Most notably, it penalizes journals that publish many good but less frequently cited papers, as well as journals that publish high-quality but low-publicity science. Furthermore, the algorithm seems obsolete now that Google and other search engines exist. Thus, to remedy these problems, the authors propose a new system: Content Factor. (See the chart below for the most popular biomedical journals re-ranked using their new system.)
The authors applied their new system to orthopedic surgery journals, too, then surveyed experts to learn which journals they considered most important in the field. The experts' answers correlated poorly with the traditional Impact Factor (Pearson correlation of 0.08) but much better with the Content Factor (Pearson correlation of 0.56).
Of course, no ranking system is perfect, but the authors' results suggest any change would be an improvement over the current system.
Source: Bernstein J, Gray CF (2012) Content Factor: A Measure of a Journal’s Contribution to Knowledge. PLoS ONE 7(7): e41554. doi:10.1371/journal.pone.0041554