I'm back in Germany (for the very successful doctoral exams of Tomasz Berezniak (good luck in your postdoc in Munich!), Mithun Biswas (good luck in Frankfurt!) and Mai Zahran (good luck in New York!)). When I was younger I used to even enjoy long flights to distant lands, but flying across the Atlantic 48 times in the last six years (!) has been a right royal pain in the butt. However, a slight benefit has been that I have racked up Frequent Flyer miles.
Now, I have always considered 'miles' programs as close-to-worthless, ephemeral, slippery corporate traps. (However, I admit I have occasionally used them to upgrade to business class in a usually vain effort to get some sleep on the West-to-East overnight leg.) So it was amusing to read in the latest issue of Der Spiegel that there is a class of person so hell-bent on earning top miles status that they will go to almost any lengths. A Spiegel reporter accompanied a group of six people pointlessly flying round-trip from Frankfurt to Innsbruck on a specially chartered Lufthansa plane just to get the miles - the plane touched down in the Alps for a few seconds, then took off again to fly back. Another strategy is that of Wolfgang Reigert, who sits up all night in front of his computer looking for the cheapest ticket with the most miles - e.g., Frankfurt to New York via Amsterdam, Dubai, Rio and Panama - then sits in the plane for two days. It seems that the break-even price is 13 Euros per 1000 miles: any more and it's not worth it.
Apparently hardcore miles-grabbing 'cartels' have cropped up - one hired a female student to check in at the Lufthansa machines with a pile of frequent flyer cards, the owners pocketing the miles while never even leaving their sofas.
Of course, it all ends in tears. One miles-hunter succinctly expressed his dilemma in a frequent flyer forum : "I was so determined to become Platinum that I'm now deeply in debt and can't afford to buy any flights. And, as far as I can see, most Platinum benefits can only be claimed by people actually flying. Seems somehow stupid, doesn't it?"
This is Jeremy Smith's blog about life in Tennessee, local science and other topics of interest. It is not endorsed by and does not, of course, represent the opinion of UT, ORNL or any other official entity.
Wednesday, January 18, 2012
Wednesday, January 4, 2012
How to Judge Scientists
Well, my 300th scientific article was just accepted for publication (N. Smolin, R. Biehl, G.R. Kneller, D. Richter and J.C. Smith, "Functional Domain Motions in Proteins on the ~1-100ns Timescale: Comparison of Neutron Spin Echo Spectroscopy of Phosphoglycerate Kinase with Molecular Dynamics Simulation", Biophysical Journal - good job, Nikolai!) and there will be a few beers in the Union Jack pub later in the week. However, this kind of artificial milestone leads one to reflect on how really to judge scientists.
Clearly, although a large number of publications points to some aspect of productivity, such as, possibly, getting involved in a lot of projects and helping bring them to fruition, it is a very one-dimensional metric and misses important elements of scientific life. Numbers of citations, h-factors and the like also have their problems (just as an anecdote, for example, a very famous physicist working at Saclay when I was there once said that one of his most cited articles was one he got wrong - his rivals loved pointing this out in their own publications!).
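As an aside, the h-factor mentioned above has a simple definition: a scientist has index h if h of their papers have at least h citations each. A minimal sketch in Python (the function name and sample citation counts are my own, for illustration only):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# Five papers with 10, 8, 5, 4 and 3 citations: four papers have
# at least 4 citations each, but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```

Note how crude the metric is: the paper with 10 citations and one with 100 would give the same h here, which is one reason such single numbers miss so much.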
So how can one judge scientists? Well, increasingly, discoveries result from the voluntary sharing and development of knowledge through collaboration, rather than from individuals working alone, and so an intriguing recent article by Azoulay et al tries to quantitatively track the effect that the ideas a scientist creates have on their collaborations. The concept is that a scientist influences the people with whom they work, forming an "invisible college" of ideas. To quantify this influence, the authors tracked the publication productivity of faculty-level collaborators of eminent scientists in the life sciences. They found that if an eminent scientist suddenly and tragically died before the end of their career (mostly of heart attacks, but in the sample studied three were actually murdered!), then the publication productivity of their collaborators subsequently suffered a lasting decline of 8% on average.
The authors concluded that the effects of what they call "superstar extinction" appear to be driven by the loss of an irreplaceable source of scientific ideas. My own opinion is that while this may indeed account for some of the observed effect, the collaborative nature of science means that success depends not only on the exchange of scientific ideas, but also, inevitably, on social aspects such as friendship, motivation, drive and team spirit. When sources of these are not replaced, productivity will decrease.
Labels: citations, h factors, judging scientists, science productivity