
From Signtometrics to Scientometrics: A Cautionary Tale of Our Times

Journal of Information Science Theory and Practice (P-ISSN 2287-9099; E-ISSN 2287-4577)
2013, v.1 no.4, pp.6-11
https://doi.org/10.1633/JISTaP.2013.1.4.1
Blaise Cronin (School of Informatics & Computing, Indiana University)

Abstract

It is but a short journey from citation indexing to citation analysis and thence to evaluative bibliometrics. This paper outlines the path and describes how the time-honored practice of affixing bibliographic references to scholarly articles has paved the way for a culture of accounting to establish itself in contemporary academia.

Keywords
Citation indexing, bibliometrics, citation analysis, scientometrics, bibliographic references, evaluation, research assessment

In 1955 Eugene Garfield published his seminal (he prefers the adjective “primordial” [Garfield, 2009, p. 173]) paper, “Citation indexes for science” in, appropriately enough, the journal Science (Garfield, 1955). His proposed bibliographic tool would allow scientists to more easily and effectively access the proliferating literature of science. The Science Citation Index (SCI) differed from other secondary publication services (e.g., Chemical Abstracts) in that it enabled scientists to chain backwards and forwards in time through the literature, identifying influential papers, and by extension influential authors and ideas, whether inside or outside their home discipline, based on the references authors themselves attached to their papers. Garfield expressed the concept with admirable clarity and succinctness (1955, p. 110): “every time an author makes a reference he is in effect indexing that work from his point of view”. Fast-forward to the present and think for a moment of social tagging, where users rather than professional indexers or automatic indexing software assign index terms/tags to a document. One might thus think of the totality of references attached to an author’s oeuvre as the equivalent of a ‘docsonomy’ (the cluster of tags around any given document). But I digress. In any case, with the advent of the SCI, the humble bibliographic reference had finally come of age. Cinderella, much to everyone’s surprise, would soon be going to the Evaluators’ Ball.

From an historical perspective it is significant that Garfield’s early supporters included a number of eminent scholars, most notably the Nobel Prize-winning geneticist Joshua Lederberg, the sociologist Robert Merton, and the undisputed ‘father of scientometrics’ Derek de Solla Price, the last of whom, a veritable polymath, memorably described how he was “inoculated with Citation Fever” in the 1960s after meeting Garfield at Yale University (Price, 1980, p. vii). The SCI didn’t simply allow scientists to locate potentially relevant research—to reference is to deem relevant—by chaining through the literature; it enabled them to see in general terms whose work was exercising greater or lesser influence on any given epistemic community at any given time. The scholarly journal article’s paratext was gradually moving center stage, a point well grasped by Fuller (2005, p. 131; in this context, see also Cronin, 1995, on the acknowledgment, another paratextual device for bestowing credit), who wryly observed: “Academic texts are usually more interesting for their footnotes than their main argument—that is, for what they consume, rather than what they produce” (italics added). In addition, the SCI allowed historians of science to track the development and diffusion of ideas within and across disciplines and made it possible for sociologists and others to visualize heretofore dimly perceived networks, both national and international, of socio-cognitive interaction and institutional collaboration (Cronin & Atkins, 2000; De Bellis, 2009; Price, 1965; Small, 1973).

Of course, like any system, a citation index is only as good, only as comprehensive, as the data upon which it is based. If your work was brilliant but inexplicably overlooked, or if it happened to receive only delayed recognition (“Sleeping Beauties,” as such papers have been termed by van Raan [2004]), or if you happened to be cited in journals (of which there are many) not covered by the SCI and its sister products, then you were down on your citational luck. Uniquely, though, the SCI provided scientists with what Garfield aptly termed a “clipping service” (1955, p. 109): not only a way of tracking their own visibility within their peer communities but also an admittedly crude means of quantifying the impact of their work. The privileging of that particular function (self-monitoring/self-evaluation) over information retrieval, along with the subsequent reification of (citation) count data by the scientific community at large, was not far off.

It is important, however, not to lose sight of the fact that the Science Citation Index was conceived of originally as a search and retrieval tool; such was its intended purpose, as Garfield himself repeatedly emphasized over the years (Garfield, 1979). The widespread, systematic use of the SCI and its successor products (today embodied in Web of Science [WoS]) for the purposes of impact assessment and bibliometric evaluation came somewhat later (for up-to-date overviews of the many associated reliability, validity, and ethical issues, see Cronin & Sugimoto, 2014a, b). With hindsight that development probably was inevitable. If science is about quantification and measurement, should there not be, one might reasonably ask, a science of science—a guiding meta-science—devoted to the measurement of the inputs, processes, outputs, and effects, broadly construed, of scientific research? The general sentiment would appear to be ‘yes,’ if the establishment of, to take but a few examples, (a) the journal Scientometrics in 1979, (b) the International Society for Scientometrics and Informetrics in 1993/94, and (c) the Journal of Informetrics in 2007 is anything to go by. Furthermore, if a scientometrician (the hapless Dr. Norman Wilfred in Michael Frayn’s Skios) can be the central character in a critically acclaimed satirical novel, then it’s probably safe to assume that the field has indeed come of age (Frayn, 2012; see also Sharp, 2010, for an indication of growing public interest in, and awareness of, the application of metrics to the conduct of science).

At the individual level, most researchers and scholars quite naturally want to know what kind of attention (be it positive or negative, holistic or particularistic) their published work is attracting and in what quarters. What simpler way to do this than by checking to see who has publicly acknowledged one’s work? And what a pleasant way, at the same time, of having one’s ego boosted. Needless to say, it did not take long for the SCI to become the magic mirror on the wall telling us who was ‘the fairest of them all.’ The index’s popularity rose inexorably as online access gradually replaced the use of the unwieldy printed volumes with their microscopic print that we associate with the early days of the SCI. By way of an aside, Google Scholar’s ‘My Citations’ offers a quick and dirty alternative to both Web of Science and Scopus (see Meho & Yang, 2007, for a comparative assessment) for those who need to know how their intellectual stock is faring at any given moment, though caution is warranted (López-Cózar, Robinson-García, & Torres-Salinas, 2014). Bibliographic references could now be tallied with a few keystrokes and their distributions plotted with ease; they were, after all, ‘objective’ in nature, being in effect ‘votes’ (mostly, but by no means always, positive, there being such a thing as negative citations), to use one of many prevalent metaphors, cast by scientists for other scientists. Before long, reference counts (aggregate endorsements, if you will) were being used routinely to identify, inter alia, high-impact publications, influential authors, and productive institutions, even though authors’ motivations for referencing the work of others were inherently complex and anything but clear (e.g., Brooks, 1985; MacRoberts & MacRoberts, 1989). Validity and reliability concerns notwithstanding, the institutionalization of bibliometric indicators was proving to be irresistible.

At the institutional level, universities were not slow to recognize the practical utility of bibliometrically derived impact measures (e.g., the Journal Impact Factor [JIF], Jorge Hirsch’s [2005] h-index, and most recently the Eigenfactor [West & Vilhena, 2014]) in assessing the performance of academic departments, programs, and, indeed, individuals (specifically in the context of promotion and tenure reviews). At the science policy level, national research councils are continually looking for reliable data to inform resource allocation decisions and determine funding priorities, while national governments—the UK’s 2014 Research Excellence Framework (REF), a refinement of the rolling Research Assessment Exercises (RAE) begun in the mid-1980s, is a good illustration of the trend—are increasingly making use of bibliometric indicators, albeit in conjunction with established forms of peer review, in evaluating national research strengths, weaknesses, and opportunities (Sugimoto & Cronin, 2014; Owens, 2013). After all, data don’t lie.
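Of these measures, Hirsch’s (2005) h-index is the simplest to state: a researcher has index h if h of his or her papers have each been cited at least h times. A minimal sketch in Python (the function name and the sample figures are purely illustrative, not drawn from this paper or from any real dataset) shows the arithmetic:

def h_index(citation_counts):
    # Rank papers from most to least cited, then find the largest rank r
    # such that the r-th ranked paper has at least r citations.
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 4, and 3 times give an h-index of 4.
print(h_index([10, 8, 5, 4, 3]))  # -> 4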

Garfield’s idea (a citation index for science) spawned a successful business (the Institute for Scientific Information [ISI], subsequently acquired by Thomson Reuters), the flagship product of which (Web of Science) has become the dataset of choice for use in large-scale and longitudinal research evaluation exercises, though it faces stiff competition in the marketplace from, amongst others, Elsevier’s Scopus. The bibliometric indicators derived from the WoS database are a foundational component of a growing number of institutional ranking and rating systems (e.g., the Leiden Ranking, the Shanghai Ranking). These annual listings of the world’s ‘best universities’ can all too easily influence both public perceptions and, just as important, managerial practice within academia; that is to say, their promulgation has direct, real-world consequences, as universities take note of the variables and weighting mechanisms that determine their overall scores, which, as we shall see, in turn materially affects the behavior of the professorate and, ultimately, alters the ethos of the academy (Burrows, 2012; Weingart, 2005). In similar vein, Thomson Reuters’ Journal Citation Reports (JCR) can be used to provide an ‘objective’ evaluation of the world’s leading scientific journals based on an analysis of cited references. Despite widespread recognition of its many shortcomings (e.g., Seglen, 1997; Lozano, Larivière, & Gingras, 2012), the JIF has become a commonly used expression of a scholarly journal’s presumptive quality or influence and as such shapes authors’ submitting behaviors and also the perceptions of academic review bodies. Many in the scientific community are unhappy with the use of bibliometric indicators to assess authors or journals in such fashion, as can be seen in the recent spate of editorial and opinion pieces condemning their inappropriate and ill-informed use (e.g., Brumback, 2008; and see the recent DORA manifesto, the San Francisco Declaration on Research Assessment: http://am.ascb.org/dora/, for a discussion of concerns, criticisms, and potential remedial actions).
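For orientation, the two-year impact factor reported in the JCR is, in essence, a simple ratio (stated here in generic terms; the JCR’s detailed rules about which items count as ‘citable’ are beyond the scope of this paper):

\[
\mathrm{JIF}_{Y}(J) = \frac{C_{Y}(J, Y{-}1) + C_{Y}(J, Y{-}2)}{N(J, Y{-}1) + N(J, Y{-}2)}
\]

where C_Y(J, y) denotes the citations received in year Y by items that journal J published in year y, and N(J, y) the number of citable items J published in year y.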

With hindsight, it is fascinating to see how a superficially mundane, more or less normatively governed authorial practice—the affixing of bibliographic references to a scholarly text—has, unwittingly, helped create the conditions necessary for a culture of accounting, most compellingly instantiated in the RAE/REF, to take root in the world of universities (Burrows, 2012; Cronin, 2005). To properly understand how this came about we need to look a little more closely at the way in which a reference is transmuted into a citation, and the ramifications of that silent metamorphosis. Essentially, a bibliographic reference is a sign pointing to a specific published work, its referent (or extensional reference). For Small (1978), references can in certain cases function as concept symbols; referencing a particular paper is thus equivalent to invoking a specific concept, method, idea, process, etc. A citation, however, is a different kind of sign in that, in the context of a citation database such as WoS, while it points at a disembodied paper it is also being pointed to by all those later publications that invoked it. A reference can thus be thought of, in directional terms, as recognition given and a citation as recognition received. The reciprocal relationship always existed, of course, but prior to the development of commercial citation indexes its importance was little appreciated. Garfield’s invention altered that; a novel sign system was born.

One of the first to illuminate the subtle distinction between the reference and the citation was Paul Wouters. He described the citation as “the mirror image of the reference” (Wouters, 1999, p. 562) and went on to make the simple but nonetheless insightful point that the purpose of commercial citation databases was “to turn an enormous amount of lists of references upside down” (Wouters, 1998, p. 232—for more on the semiotics of referencing and citing, see Cronin, 2000). This inverting of the reference changes its character, transmuting it from a relatively insignificant paratextual element into a potentially highly significant form of symbolic capital, with which academic reputations are built. At the risk of slipping into hyperbole, the SCI turned the dross of literary convention into career gold: no wonder Wouters spoke of “Garfield as alchemist” (Wouters, 2000, p. 65). Today, many scholars not only track their citation scores as a matter of course but unabashedly include raw citation counts and their h-index on their curricula vitae (CVs), for good measure often adding the JIF alongside the journals in which they have published. The message is simple: I count, therefore I am. The hegemony of the sign is complete: signtometrics has begotten scientometrics—a case of homophones with quite different meanings.
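Wouters’s “upside down” image has a quite literal computational counterpart: a citation index is simply the inverse of the mapping from papers to their reference lists, recognition given turned into recognition received. A minimal sketch in Python (the paper identifiers and data are invented purely for illustration) makes the point:

from collections import defaultdict

# Each paper maps to the works it references (recognition given).
references = {
    "paper_A": ["paper_X", "paper_Y"],
    "paper_B": ["paper_X"],
    "paper_C": ["paper_Y", "paper_X"],
}

# Inverting that mapping yields, for each cited work, the papers that cite it
# (recognition received): the raw material of a citation index.
citations = defaultdict(list)
for citing, refs in references.items():
    for cited in refs:
        citations[cited].append(citing)

print(dict(citations))
# {'paper_X': ['paper_A', 'paper_B', 'paper_C'], 'paper_Y': ['paper_A', 'paper_C']}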

Human behavior being what it is, this kind of signaling will soon be widely imitated, and before long the inclusion of such ‘objective’ indicators, along with so-called alternative indicators of social presence and influence (Piwowar & Priem, 2013), will become a badge of honor to be worn on one’s sleeve, or CV: a clear sign of one’s true market value. This, I suspect, is what Day (2014, p. 73) had in mind when he spoke of the “self-commodification of the scholar” in today’s neo-liberalist society. Indeed, such is the power of peer pressure that even those who are fully cognizant of the limitations of both the h-index and the JIF, and who are by nature disinclined to engage in blatant self-promotion, may find it hard not to follow suit, particularly as assessment bodies, both inside and outside academe, increase their reliance upon standardized metrics of one kind or another. This mutual reinforcement is creating “a regime of permanent self-monitoring” (Wouters, 2014, p. 50) that engenders systematic displacement activity (Osterloh & Frey, 2009).

The emerging culture of accountability within and around academia is directing researchers’ focus away from purely intellectual concerns to extra-scientific considerations such as the career implications of problem choice, the fashionableness or ‘hotness’ of a potential research topic, channel selection for the dissemination of research results, and ways to maximize the attention of one’s peers and thereby one’s citation count (and now also download statistics, since citations are not only lagged but also capture only a portion of total readership [Haustein, 2014]). That, of course, is not to say that scientists and scholars are expected to be shrinking violets, unaccountable to those who fund them, or cavalier in the ways they communicate the findings of their research. Far from it, but these basically second-order considerations should not be allowed to dictate scientists’ research agendas, determine their work styles, or consume a disproportionate amount of their productive time. The inversion of the bibliographic reference is hardly grounds for inverting the time-honored goals of scholarly enquiry. After all, to quote the title of Thomas Sebeok’s (1991) book, a sign is just a sign.

Acknowledgement

I am grateful to Cassidy Sugimoto for comments.

References

Brooks, T. A. (1985). Private acts and public objects: An investigation of citer motivations. JASIS, 36(4), 223-229.

Brumback, R. A. (2008). Editorial. Worshipping false idols: The impact factor. Journal of Child Neurology, 23(4), 365-367.

Burrows, R. (2012). Living with the h-index? Metric assemblages in the contemporary academy. Sociological Review, 60(2), 355-372.

Cronin, B. (1995). The scholar’s courtesy: The role of acknowledgement in the primary communication process. London: Taylor Graham.

Cronin, B. (2000). Semiotics and evaluative bibliometrics. Journal of Documentation, 56(4), 440-453.

Cronin, B. (2005). The hand of science: Academic writing and its rewards. Lanham, MD: Scarecrow Press.

Cronin, B., & Atkins, H. B. (Eds.). (2000). The web of knowledge: A Festschrift in honor of Eugene Garfield. Medford, NJ: Information Today Inc. & The American Society for Information Science.

Cronin, B., & Sugimoto, C. R. (Eds.). (2014a). Beyond bibliometrics: Metrics-based evaluation of research. Cambridge, MA: MIT Press.

Cronin, B., & Sugimoto, C. R. (Eds.). (2014b). Metrics under the microscope: From citation analysis to academic auditing. Medford, NJ: Information Today Inc. & The Association for Information Science & Technology.

Day, R. E. (2014). “The data—It is Me!” (“Les données—c’est Moi!”). In B. Cronin & C. R. Sugimoto (Eds.), Beyond bibliometrics: Metrics-based evaluation of research (pp. 67-84). Cambridge, MA: MIT Press.

De Bellis, N. (2009). Bibliometrics and citation analysis: From the Science Citation Index to cybermetrics. Lanham, MD: Scarecrow Press.

Frayn, M. (2012). Skios. New York: Picador.

Fuller, S. (2005). The intellectual. Cambridge, UK: Icon Books.

Garfield, E. (1955). Citation indexes for science: A new dimension in documentation through association of ideas. Science, 122(3159), 108-111.

Garfield, E. (1979). Citation indexing: Its theory and application in science, technology, and the humanities. Philadelphia, PA: ISI Press.

Garfield, E. (2009). From the science of science to Scientometrics: Visualizing the history of science with HistCite software. Journal of Informetrics, 3(3), 173-179.

Haustein, S. (2014). Readership metrics. In B. Cronin & C. R. Sugimoto (Eds.), Beyond bibliometrics: Metrics-based evaluation of research (pp. 327-344). Cambridge, MA: MIT Press.

Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102(46), 16569-16572.

López-Cózar, E. D., Robinson-García, N., & Torres-Salinas, D. (2014). The Google Scholar experiment: How to index false papers and manipulate bibliometric indicators. JASIST (in press).

Lozano, G. A., Larivière, V., & Gingras, Y. (2012). The weakening relationship between the impact factor and papers' citations in the digital age. JASIST, 63(11), 2140-2145.

MacRoberts, M. H., & MacRoberts, B. R. (1989). Problems of citation analysis: A critical review. JASIS, 40(5), 342-349.

Meho, L. I., & Yang, K. (2007). A new era in citation and bibliometric analyses: Web of Science, Scopus, and Google Scholar. JASIST, 58(13), 2105-2125.

Osterloh, M., & Frey, B. S. (2009). Research governance in academia: Are there alternatives to academic rankings? Working paper no. 423. Institute for Empirical Research in Economics, University of Zurich.

Owens, B. (2013, 16 October). Research assessments: Judgement day. Nature, 502(7471).

Piwowar, H., & Priem, J. (2013). The power of altmetrics on a CV. Bulletin of the Association for Information Science & Technology, 39(4), 10-13.

Price, D. J. de Solla (1965). Networks of scientific papers. Science, 149(3683), 510-515.

Price, D. J. de Solla (1980). Foreword. In E. Garfield, Essays of an information scientist. Vol. 3, 1977-1978 (pp. v-ix). Philadelphia, PA: ISI Press.

Sebeok, T. A. (1991). A sign is just a sign. Bloomington, IN: Indiana University Press.

Seglen, P. O. (1997). Why the impact factor of journals should not be used for evaluating research. BMJ, 314, 498-502.

Sharp, R. (2010, 16 August). In their element: The science of science. The Independent. Retrieved from http://www.independent.co.uk/news/science/intheir-element-the-science-of-science-2053374.html

Small, H. G. (1973). Co-citation in the scientific literature: A new measure of the relationship between two documents. JASIS, 24(4), 265-269.

Small, H. G. (1978). Cited documents as concept symbols. Social Studies of Science, 8, 327-340.

Sugimoto, C. R., & Cronin, B. (in press). Accounting for science. In B. Cronin & C. R. Sugimoto (Eds.), Metrics under the microscope: From citation analysis to academic auditing. Medford, NJ: Information Today Inc. & The Association for Information Science & Technology.

Van Raan, A. F. J. (2004). Sleeping Beauties in science. Scientometrics, 59(3), 461-466.

Weingart, P. (2005). Impact of bibliometrics upon the science system: Inadvertent consequences? Scientometrics, 62(1), 117-131.

West, J. D., & Vilhena, D. A. (2014). A network approach to scholarly evaluation. In B. Cronin & C. R. Sugimoto (Eds.), Beyond bibliometrics: Metrics-based evaluation of research (pp. 151-165). Cambridge, MA: MIT Press.

Wouters, P. (2014). The citation: From culture to infrastructure. In B. Cronin & C. R. Sugimoto (Eds.), Beyond bibliometrics: Metrics-based evaluation of research. Cambridge, MA: MIT Press.

Wouters, P. (2000). Garfield as alchemist. In B. Cronin & H. B. Atkins (Eds.), The web of knowledge: A Festschrift in honor of Eugene Garfield (pp. 65-71). Medford, NJ: Information Today Inc. & The American Society for Information Science.

Wouters, P. (1999). Beyond the holy grail: From citation theory to indicator theories. Scientometrics, 44(3), 561-580.

Wouters, P. (1998). The signs of science. Scientometrics, 41(1-2), 225-241.


Submission Date: 2013-11-14
Accepted Date: 2013-11-25
