CONTEMPORARY ISSUE
Year : 2016  |  Volume : 9  |  Issue : 2  |  Page : 164-166  

Quality and impact of journals and authors


Department of Pathology, Dr. D.Y. Patil Medical College, Hospital and Research Centre, Pune, Maharashtra, India

Date of Web Publication: 1-Mar-2016

Correspondence Address:
Banyameen Mohammad Iqbal
Department of Pathology, Dr. D.Y. Patil Medical College, Hospital and Research Centre, Pune, Maharashtra
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/0975-2870.177651


How to cite this article:
Iqbal BM. Quality and impact of journals and authors. Med J DY Patil Univ 2016;9:164-6

How to cite this URL:
Iqbal BM. Quality and impact of journals and authors. Med J DY Patil Univ [serial online] 2016 [cited 2024 Mar 28];9:164-6. Available from: https://journals.lww.com/mjdy/pages/default.aspx/text.asp?2016/9/2/164/177651

Times have changed. At present, it is not just about publishing an article but about publishing it in an indexed journal of some standing, so that the article gets cited, improving the journal's impact and the author's "h" and "i10" indices. All these terms seem confusing not only to budding authors but also to senior authors. Even on a Google search, these terms are difficult to comprehend. The aim of this editorial is to present these terms in a simplified manner. We can divide these metrics into "journal-related" and "author-related." Some metrics apply to journals (Impact Factor [IF], Scimago Journal Rank [SJR], Science Citation Index [SCI] rank, etc.) and others to authors (citations, Hirsch index [H-index], i10-index, etc.). These quality assessment tools and metrics are explained in the subsequent paragraphs.

To a certain extent, the quality of a journal is reflected in its indexing. Indexed journals are considered more authentic in their scientific content than nonindexed journals. All this started in 1879, when Index Medicus began indexing medical journal articles to maintain their quality. Over the years, many more indexation services have emerged, some of them very popular and well known, such as PubMed Central, SCOPUS, EBSCO, SCIRUS, and EMBASE, among others. Moreover, many regional versions of Index Medicus have emerged over time, such as the African Index Medicus, and the list goes on. With so many indexation services available, the questions arise: which one is the best and most valid, and which one should be selected and why? These questions are important at present because the demand for indexed publications is increasing in academic institutions, and the Medical Council of India guidelines also make indexed publications compulsory for teaching faculty in medical colleges. This is, in fact, a very difficult decision because there is no clarity on the issue. To complicate it further, several other indexing services have recently come up, such as Caspur, DOAJ, ASAP, Hinari, Index Copernicus, Open J-Gate, Primo Central, ProQuest, and SCOLOAR. Are these indexation services equally relevant? Would a journal indexed with any of these databases be considered "indexed"? There is no clarity on this issue, and it warrants discussion from all the stakeholders, especially associations of editors of medical journals such as the International Committee of Medical Journal Editors. As of now, there are no clear-cut protocols about the hierarchy or even the authenticity of these journal indexing services. Let us hope we get a more transparent picture in the future.

An equally controversial and confusing issue is that of the IF. Conceptually developed in the 1960s, the IF has gained acceptance as a quantitative measure of journal quality. [1] The IF of a journal denotes the frequency with which the journal's articles are cited in the literature. The IF is awarded to journals indexed in the Thomson Reuters Journal Citation Reports. The IF has been criticized for manipulation and incorrect application. [2] There are multiple factors that could bias its calculation. [3] These include the coverage and language preference of the database, the procedures used to collect citations, the algorithm used to calculate the IF, the online availability of publications, negative citations, etc. [4]

The IF is derived by dividing the number of citations in the third year to any items published in the journal in years 1 and 2 by the number of articles published in that journal in years 1 and 2. For instance, the 2012 IF for Journal Z is calculated by dividing the total number of citations during 2012 to items appearing in Journal Z during 2010 and 2011 by the number of articles published in Journal Z in 2010 and 2011. Thus, an IF of 1.0 means that, on average, the articles published 1 or 2 years ago have been cited one time; an IF of 2.5 means that, on average, they have been cited two and a half times. The IFs of some important medical journals: NEJM 55, The Lancet 45, and Nature 41.
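To make the arithmetic concrete, here is a minimal sketch in Python of the two-year calculation; the "Journal Z" numbers below are invented for illustration, not taken from any real journal.

```python
def impact_factor(citations_in_year_3, articles_year_1, articles_year_2):
    """Two-year impact factor: citations received in year 3 to items
    published in years 1 and 2, divided by the number of articles
    published in years 1 and 2."""
    return citations_in_year_3 / (articles_year_1 + articles_year_2)

# Hypothetical "Journal Z": 250 citations received in 2012 to items from
# 2010-2011, during which it published 120 + 80 = 200 articles.
print(impact_factor(250, 120, 80))  # 1.25
```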

Jevin West and Carl Bergstrom at the University of Washington proposed the Eigenfactor, a rating of the total importance of a scientific journal. [5] The Eigenfactor score is intended to measure the importance of a journal to the scientific community by considering the origin of the incoming citations, and is thought to reflect how frequently an average researcher would access content from that journal. Originally, Eigenfactor scores were measures of a journal's importance; they have since been extended to the author level as well and can be used in combination with the H-index (discussed later in the text) to evaluate the work of individual scientists. Journals are rated according to the number of incoming citations, with citations from highly ranked journals weighted to make a larger contribution to the Eigenfactor than those from poorly ranked journals.
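To make the weighting idea concrete, here is a minimal PageRank-style sketch in Python. It is not the official Eigenfactor algorithm (which, among other things, uses a five-year citation window, discards journal self-citations, and normalizes by article counts); the journal names and citation counts below are invented for illustration.

```python
# cites[a][b] = number of times journal a cites journal b (invented data)
cites = {
    "A": {"B": 30, "C": 10},
    "B": {"A": 20, "C": 20},
    "C": {"A": 5,  "B": 5},
}
journals = list(cites)
score = {j: 1.0 / len(journals) for j in journals}

for _ in range(100):  # power iteration until the scores stabilize
    new = {j: 0.0 for j in journals}
    for src, targets in cites.items():
        out_total = sum(targets.values())
        for dst, n in targets.items():
            # Each journal distributes its current score along its outgoing
            # citations, so a citation from an influential journal carries
            # more weight than one from a weakly cited journal.
            new[dst] += score[src] * n / out_total
    score = new

print(score)  # journals cited by influential journals score higher
```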

SJR is a measure of the scientific influence of scholarly journals that accounts for both the number of citations received by a journal and the importance or prestige of the journals from which such citations come. The SJR indicator, which is based on data from Scopus, is a free journal metric that uses an algorithm similar to PageRank and provides an alternative to the IF. Average citations per document in a 2-year period, abbreviated as "Cites per Doc. (2y)," is another index that measures the scientific impact of an average article published in the journal; it is computed using the same formula as the journal IF.

These topics need a much deeper study to fully understand them and to be able to calculate them manually. The description of some journal-related metrics given above will sensitize the reader to the basics of these terms and may stimulate further study and understanding. In the remaining paragraphs, some author-related metrics are discussed.

A citation index is a kind of bibliographic database (a bibliographic database is an organized digital collection of references to published literature, including journal and newspaper articles, conference proceedings, reports, government and legal publications, patents, books, etc.); it is an index of citations between publications, allowing the user to easily establish which later documents cite which earlier documents. In 1960, Eugene Garfield's Institute for Scientific Information (ISI) introduced the first citation index for papers published in academic journals, first the SCI and later the Social Sciences Citation Index and the Arts and Humanities Citation Index. [1] The first automated citation indexing was done by CiteSeer in 1997. Other sources for such data include Google Scholar and Elsevier's Scopus.

General-purpose academic citation indexes include:

  • ISI (now part of Thomson Reuters) publishes the ISI citation indexes. They are now generally accessed through the web under the name Web of Science, which is, in turn, part of the group of databases in the Web of Knowledge.
  • Elsevier publishes Scopus, available online, which similarly combines subject searching with citation browsing and tracking in the sciences and social sciences.
  • Indian Citation Index is an online citation database that covers peer-reviewed journals published from India. It covers major subject areas such as scientific, technical, medical, and social sciences and includes arts and humanities. The citation database is the first of its kind in India.


The H-index, sometimes called the Hirsch index or Hirsch number, was suggested in 2005 by Jorge E. Hirsch, a physicist, as a tool for determining theoretical physicists' relative quality. [6] The index is based on the distribution of citations received by a given researcher's publications. Hirsch writes:

A scientist has index h if h of his/her N_p papers have at least h citations each, and the other (N_p − h) papers have no more than h citations each. To simplify, a scholar with an index of h has published h papers, each of which has been cited in other papers at least h times. Thus, the H-index reflects both the number of publications and the number of citations per publication. [6] For example, an H-index of 6 means that an author has published at least 6 papers that have each received at least 6 citations.
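A minimal sketch of how the H-index can be computed from a list of per-paper citation counts; the counts below are invented for illustration.

```python
def h_index(citations):
    """Largest h such that the author has h papers with at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have at least `rank` citations
        else:
            break
    return h

# Hypothetical author with five papers cited 10, 8, 5, 4, and 3 times.
print(h_index([10, 8, 5, 4, 3]))  # 4
```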

The G-index was proposed by Leo Egghe in 2006 as an improvement on the H-index. It is calculated by arranging all the published articles in decreasing order of the number of citations they have received; the G-index is then the (unique) largest number g such that the top g articles together received at least g² citations.
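The same idea in a short sketch, reusing the invented citation counts from the H-index example above; the highly cited top paper pulls the cumulative total up, so here the G-index exceeds the H-index.

```python
def g_index(citations):
    """Largest g such that the top g papers together have at least g*g citations."""
    ranked = sorted(citations, reverse=True)
    running_total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        running_total += cites
        if running_total >= rank * rank:
            g = rank
    return g

# Same hypothetical author as above: H-index 4, but G-index 5.
print(g_index([10, 8, 5, 4, 3]))  # 5
```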

The i10-index is yet another index, created by Google Scholar and used in Google's "My Citations" feature. The i10-index is the total number of publications with at least 10 citations each. This very simple citation measure is another way to help gauge the productivity of a scholar. Its advantages are that it is very simple and straightforward to calculate, and that My Citations in Google Scholar is free and easy to use; its disadvantage is that it is used only in Google Scholar.
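A one-line sketch, again with invented citation counts:

```python
def i10_index(citations):
    """Number of papers with at least 10 citations each."""
    return sum(1 for cites in citations if cites >= 10)

# Only two of these hypothetical papers have reached 10 citations.
print(i10_index([25, 14, 8, 5, 3]))  # 2
```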

A self-citation is a reference an author provides in a document to other papers written by himself or herself. Self-references may result from the cumulative nature of individual research, the need for personal gratification, or the value of self-citation as a rhetorical and tactical tool in the struggle for visibility and scientific authority. Fowler and Aksnes [7] concluded that "self-citation advertises not only the article in question but the authors in question." In practical terms, this could mean that the author who manages to fit in even one more self-citation per article will be much better off years down the road than a colleague who does not.

Self-citation, or advertising oneself to promote one's published work, if done with good intent and within proper limits, is in no way unethical. Some scholars, however, see it as a case of "blowing your own trumpet," while others may argue, "if I don't cite my work, no one else will." Citation data may be adjusted to exclude excessive self-citations. In the final analysis, context is important in judging whether self-citation is deliberate or justified.

The above account is by no means comprehensive. The descriptions have been simplified to make these terms appear less obscure to readers confronting them for the first time. There is an ongoing endeavor by the academic community to make the assessment of researchers and journals more objective and free from bias. Views and additional inputs from readers will facilitate debate on these important issues.

 
References

1. Garfield E. The impact factor. Curr Contents 1994;25:3-7.
2. Not-so-deep impact. Nature 2005;435:1003-4.
3. Malathi M, Thappa DM. The intricacies of impact factor and mid-term review of editorship. Indian J Dermatol Venereol Leprol 2012;78:1-4.
4. Balhara YP. Publication: An essential step in research. Lung India 2011;28:324-5.
5. Bergstrom CT, West JD, Wiseman MA. The Eigenfactor metrics. J Neurosci 2008;28:11433-4.
6. Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci U S A 2005;102:16569-72.
7. Fowler JH, Aksnes DW. Does self-citation pay? Scientometrics 2007;72:427-37.




 
