Metrics for Evaluating Journals

Journal evaluation metrics measure the importance and impact of journals based on the number of articles and the citations those articles receive. These metrics enable comparisons between journals to help choose where to publish, assess journal performance, and identify research trends.

Sources for Journal Evaluation Metrics:

Journal Citation Reports

Journal Citation Reports (JCR) is a subscription-based database from Clarivate, built on the Web of Science Core Collection, a curated set of academic journals in the natural sciences, social sciences, and humanities. The database can be searched by journal name, ISSN, and research field, and provides access to several types of metrics.

  • Journal Impact Factor (JIF) - A quantitative metric representing the average number of citations received by a journal's publications over a defined period. A journal's JIF for a given year is the ratio between the number of citations received that year by items the journal published in the previous two years and the number of citable items published in those two years. The metric allows comparison between journals in the same category: a higher value indicates a journal whose articles have drawn significant attention (many citations in other articles) and that has therefore had greater influence on the research community in its field. Starting from the 2023 edition, all journals indexed in the Web of Science Core Collection have a JIF value:
    • Science Citation Index Expanded (SCIE)
    • Social Sciences Citation Index (SSCI)
    • Emerging Sources Citation Index (ESCI)
    • Arts & Humanities Citation Index (AHCI)

Note: Since the JIF is influenced by several factors, such as publication year, research field, article types, and changes in the journal's format, title, or language, it should not be used as the sole metric for determining journal quality. It is important to also consider the journal's ranking in its category and its quartile.
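As a sketch of the arithmetic behind the definition above (all numbers invented for illustration, not taken from any real journal):

```python
# Hypothetical 2024 JIF calculation (all figures invented for illustration).
# JIF = citations received in 2024 to items published in 2022-2023,
#       divided by the number of citable items published in 2022-2023.
citations_in_2024 = 450   # citations in 2024 to the journal's 2022-2023 items
citable_items = 150       # articles and reviews published in 2022-2023
jif = citations_in_2024 / citable_items
print(f"JIF: {jif:.1f}")  # JIF: 3.0
```

So a journal whose 150 citable items from the previous two years drew 450 citations this year has a JIF of 3.0 for this year's edition.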

  • Rank (Rank by Journal Impact Factor) - The position of the journal in the category to which it belongs, according to its JIF value. Journal rankings in different categories are sorted in descending order by JIF. A separate ranking is presented for each category in which the journal is listed in the JCR.
     
  • Quartile - A quantitative metric used to rank the quality of academic journals by categorizing them into quartiles. In the JCR, these quartiles are labeled Q1, Q2, Q3, and Q4. A journal ranked in the first quartile (Q1) is among the top 25% of journals in its research field, i.e. the most cited and prestigious ones.
     
  • Percentile - The journal's percentile position within its category, determined by its JIF value.
     
  • Journal Citation Indicator (JCI) - Measures a journal's citation impact normalized for research field, accounting for the different publication and citation rates of different fields; this normalization makes journals comparable across fields. The metric is calculated for all journals in the Web of Science Core Collection. A value of 1.0 represents average impact; values above 1.0 indicate above-average citation impact (2.0 is double the average) and values below 1.0 indicate below-average impact.
     
  • Rank (Rank by Journal Citation Indicator [JCI]) - The position of the journal in the category to which it belongs, according to its JCI value. Journal rankings in different categories are presented in descending order by JCI. A separate ranking is presented for each category in which the journal is listed in the JCR.
     
  • Eigenfactor - A metric that measures a journal's overall influence based on the citations its articles receive in the five years after publication. It is based on a weighted calculation that gives higher weight to citations originating from more influential journals.
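A minimal sketch of how a journal's rank within its category maps to the quartile and percentile metrics described above. The quartile cutoffs follow the common top-25%/50%/75% convention, and the percentile formula follows the (N − R + 0.5) / N form used for the JCR's JIF percentile; both are stated here as assumptions, not as Clarivate's exact implementation:

```python
def quartile(rank: int, total: int) -> str:
    """Map a journal's rank in its category (1 = best) to a quartile label."""
    frac = rank / total
    if frac <= 0.25:
        return "Q1"
    if frac <= 0.50:
        return "Q2"
    if frac <= 0.75:
        return "Q3"
    return "Q4"

def jif_percentile(rank: int, total: int) -> float:
    """Percentile position within the category (higher = better),
    assuming the (N - R + 0.5) / N convention."""
    return (total - rank + 0.5) / total * 100

# A hypothetical journal ranked 10th of 80 in its category:
print(quartile(10, 80))        # Q1
print(jif_percentile(10, 80))  # 88.125
```

A journal listed in several JCR categories would get a separate rank, quartile, and percentile from each category's total.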

Scopus

Scopus is a subscription-based abstract and citation database from Elsevier. Through its Sources tab, the database allows searching for journals by research field, journal name, publisher, and ISSN, and it provides access to several metrics.

  • CiteScore - The average number of citations received per document published in a journal, calculated over a four-year window. It is computed only for journals indexed in the Scopus database.
     
  • Source Normalized Impact per Paper (SNIP) - A metric expressing the ratio between a source's average citations and the "citation potential" of its subject field, i.e. the number of citations a journal in that field is expected to receive. The metric thereby corrects for differences in citation patterns between fields and is calculated over a three-year period.
     
  • SCImago Journal Rank (SJR) - A metric for the scientific impact of journals that takes into account both the number of citations received by the journal and the importance or prestige of the journals from which the citations originate. Calculated over a three-year period.
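For comparison with the JIF example above, the CiteScore arithmetic (again with invented figures) uses the same four-year window for both citations and documents:

```python
# Hypothetical CiteScore calculation (all figures invented for illustration).
# CiteScore = citations received in a four-year window to documents
#             published in that window, divided by the document count.
citations_4yr = 2600   # citations in the four-year window
documents_4yr = 650    # documents published in the same window
citescore = citations_4yr / documents_4yr
print(f"CiteScore: {citescore:.1f}")  # CiteScore: 4.0
```

Note that CiteScore counts all published items, whereas the JIF numerator and denominator are restricted to citable items, which is one reason the two metrics can diverge for the same journal.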

SCImago Journal & Country Rank

SCImago Journal & Country Rank is a free multidisciplinary database based on Scopus data, providing SJR values and information about the quartiles to which journals belong.

 

Advantages and Disadvantages of Using Journal Evaluation Metrics

Advantages:

  • Quick Comparison: They allow researchers to quickly compare journals in their field and decide where to publish their articles for maximum exposure in the research field. This is particularly useful for young researchers who are still learning to navigate the academic landscape in their field.
  • Decision-Making Tool: They provide a helpful tool for academic institutions and libraries in making decisions about journal subscriptions, especially when budget resources are limited.
  • Objective Evaluation: They offer a relatively "objective" way to assess the quality and prestige of journals, based on quantitative data rather than just subjective opinions.
  • Trend Identification: They help identify trends in academic publishing over time and enable tracking the development and impact of different research fields.

Disadvantages:

  • Article Variability: The metrics obscure significant variability between individual articles published in the same journal. An article can be published in a high-ranking journal but receive few citations, and vice versa.
  • Field Comparison Difficulty: There is difficulty in comparing different fields of study due to differences in citation and publication patterns. For example, life sciences tend to accumulate more citations than humanities.
  • Self-Citations and Citation Cartels: The metrics can be influenced by self-citations and "citation cartels" of groups of researchers citing each other.
  • Distorted Publishing Behavior: Overemphasis on these metrics can lead to distortions in academic publishing behavior, such as splitting a single study into multiple smaller articles.
  • Quality vs. Popularity: The metrics do not necessarily reflect the quality or innovation of the research, but mainly its popularity in the academic community.
  • Database Differences: There are differences in metrics between different databases depending on their calculation methods and coverage (including the languages indexed in the databases).
  • Structural Factors: Ranking methods based on citation counts are influenced by structural factors that do not necessarily reflect research quality: older journals have a significant advantage because they have had more time to accumulate citations; the metrics do not capture dynamic changes in a journal's quality over time; and there is a bias towards journals that publish more articles more frequently, since this increases their chance of accumulating citations.

 
