Things 15 and 16: Open Access and Bibliometrics

Last week was a rest week – at least as far as #23ThingsSurrey/#23ThingsforResearch is concerned.  If you were here anyway, you’ll have seen my post on #AprilA2Z/#AtoZchallenge, which is the next challenge on the horizon.  This is beginning to shape up over at Fiction Can Be Fun.  April is getting ever closer, and we’d like to be further ahead with our month-long story, but we have at least made a start and we have a plan.  I’m cautiously optimistic: whilst my professional writing is usually carried out with others, this is the first time that I’m collaborating with someone else on a writing project where it is truly a 50:50 split.

And the question that you are probably asking yourself now is “what on Earth has any of this got to do with Open Access, Bibliometrics, and the publication of research?”.  Simples: I don’t know about other blogging platforms, but WordPress has a nice package in the management section that lets you track how readers have found you, whether through WordPress itself, a link, or via a search engine.  One of our most popular articles on FCBF is not a story, but rather a reflection on the book, the film, and the truth behind “The Eagle Has Landed”.  I was surprised to see a spike of several hundred hits over Christmas, and then I realised that this coincided with a showing of the film.

But back to the main feature…

Open Access:  There is a whole body of discussion on the industry of publishing academic literature – and much still left to be said – but that’s not the purpose of this post.  How accessible are my papers?  Over the last 15 years or so, I’ve presented at conferences with a range of ways of dealing with proceedings, and I’ve published in a number of different journals.  I’ve had some proactive students who have secured funding for Gold Open Access – i.e. anyone can read the article because we have paid the publisher to make it open access.  Everything that I have written – conference proceedings and journal articles – is Green Open Access, available via SRI.  Most of what I have written is also available via ResearchGate, but as I wrote in the post on the Professional Network, I’ve lapsed a little because I lost confidence in the legal case for publishing accepted versions of papers.  The Sherpa Romeo database that we were advised to look at for this Thing looks incredibly helpful.  It’s not something I can sort out immediately, but I’m going to add interrogating the database for all my publications to my “to do” list and catch up on uploading to ResearchGate and Academia.edu.  (As a note, I’m still less than impressed with Academia.edu – everything that might actually be helpful is locked behind a paywall, yet I keep getting emails inviting me to use those very functions.)
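If, like me, you have a longish list of journals to check, it might be quicker to script the interrogation than to search the database one title at a time.  The sketch below is a rough idea of how that could look against the Sherpa Romeo v2 REST API; the endpoint, parameter names, response shape and the ISSN used are all assumptions on my part, so check the current documentation (and get your own free API key) before trusting it.

```python
# Rough sketch: look up a journal's self-archiving policy by ISSN.
# Assumes the Sherpa Romeo v2 API -- endpoint, parameters and response
# structure are assumptions; check the current documentation.
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR-API-KEY"  # free key from the Sherpa Romeo site (assumption)
BASE_URL = "https://v2.sherpa.ac.uk/cgi/retrieve"

def journal_policy(issn: str) -> dict:
    """Return the raw policy record for a journal, looked up by ISSN."""
    query = {
        "item-type": "publication",
        "api-key": API_KEY,
        "format": "Json",
        "filter": json.dumps([["issn", "equals", issn]]),
    }
    url = BASE_URL + "?" + urllib.parse.urlencode(query)
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))

if __name__ == "__main__":
    record = journal_policy("0000-0000")  # hypothetical ISSN, for illustration only
    print(json.dumps(record, indent=2)[:1000])
```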

Bibliometrics: One of the things that we were asked to do was to choose a paper and look at how many times it has been cited according to different information sources.  This is something that I’ve already noticed – some sources indicate more citations than others.  Given that progression in an academic career is at least partially dependent on publication and the success of those publications, it’s frustrating that there isn’t a way of capturing every single citation reliably.  Google Scholar usually lists the most citations, but I did do a comparison with a couple of other sources once, and found that in every case there were citations unique to each list.  But the key thing we’ve been asked to consider this week is the longevity of bibliometrics: given that I couldn’t find the Altmetric donut on the paper we were asked to look at, it does give some pause for thought.  Various indices, usually based around citations, are used in the sector, but I’ve yet to meet anyone working at the coal face who places much faith in them.  These things are used to manage research funding allocations, assess the success of projects and so on, but there is comparatively little consideration of the variability across different areas.  An early metric was the ‘h’ index: the largest number h such that you have h papers each cited at least h times.  You might have a hundred papers under your belt, but if they are not cited frequently, you might have an ‘h’ of one or two, perhaps even zero.  Depending upon which source you look at, I have an ‘h’ of six or seven, which is about average for my area, but perhaps a bit low for someone at my stage of career across the sector.  This is despite having a review paper that’s been cited more than 50 times, and a couple of research papers that are comfortably in the double digits.  ‘h’ doesn’t take a lot of factors into account though, and it is a non-normalised metric, so it seems to have fallen out of favour.
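For anyone who likes to see the arithmetic spelled out, here is a minimal sketch of how an ‘h’ index falls out of a list of citation counts.  The citation numbers are made up purely for illustration (loosely echoing the shape of my own record), not real data.

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that at least h papers have at least h citations."""
    # Sort citation counts from highest to lowest, then walk down the list:
    # h is the last rank at which the citation count still covers the rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Made-up citation counts for a ten-paper record: one well-cited review,
# a couple in double digits, and a tail of lightly cited papers.
example = [52, 18, 12, 7, 7, 6, 3, 2, 1, 0]
print(h_index(example))  # -> 6: six papers have six or more citations
```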

Bibliometrics is here to stay: it’s going to evolve, and the stakes are such that it will probably continue to evolve quite quickly.  It’s likely to grow, and we might even see distinct forms of bibliometrics being used.  What is going to be important is programmes such as this one providing researchers with the skills not only to interrogate the data, but also to make meaningful decisions on the basis of it.
