
Thursday, December 31, 2009

Happy New Year!

The last few years I've made a calendar from photos taken through the year, to furnish homemade gifts to my close family. My brother clearly values this gift, and has it on proud display in his downstairs lavatory! This year, the power of blogging enables me to display such quality gifts to so many more people - a good two or three, at least.

The Inelegant Gardener (HappyMouffetard) is responsible for some of these, but I can't remember which - probably some of the ones of flowers.


Cover image - it's an amaryllis.


January - a brass marmot in the mountains above Obergurgl


February - a dwarf iris


March - a rice paper butterfly (at Chester Zoo)


April - more snowy mountains, this time Orelle in France


May - spring greenery in North Wales


June - calendula


July - Blea Tarn, above Great Langdale


August - the Japanese garden at Tatton Park


September - autumn colours in North Wales


October - jack o'lantern by HappyMouffetard, photography by me


November - fireworks in Val Thorens (cheating, because this was April)


December - frosted nigella seed head

Thank you for visiting the blog in the few short months since I started writing it - it's been great fun and I've enjoyed reading your comments.

Happy New Year to you all!

Tuesday, December 29, 2009

What kind of scientist am I? (audio version)

My earlier "What kind of scientist am I?" post is now available as a podcast: http://bit.ly/6EA17H - Posterous allows the easy posting of audio. I'm not sure I'll do it again but it was fun to try. I used a basic Logitech headset microphone, Audacity to do the capture and editing with the Lame plugin for MP3 export.

Monday, December 28, 2009

A walk along the Shropshire Union Canal

Yesterday the weather was so evil, cold and wet, that we scarcely left the house all day - cabin fever set in. This morning things were looking rather better, cold and frosty, but a little hazy - so we set off for a walk along the Shropshire Union Canal which passes close by. In the summer I cycled along the canal for a week, on my way to work, and saw a couple of kingfishers. No kingfishers today, but we saw a heron, a fox leaping through undergrowth and heard the distant roar of a lion at Chester Zoo.

Some photos:

Sunday, December 27, 2009

Grant Applications II

This post is probably not for you, unless you're interested in grant applications!

I touched on grant applications a few posts ago with reference to the THES debate on blue-skies research; I mentioned my abysmal grant application record, the generally low success rate and the pain involved for all concerned. Here I intend to add a few additional comments arising, in part, from my experience in industry.

It's worth stating what I believe the grant application process is for: on the face of it, it is a method by which discretionary funding is provided to researchers to provide resources for research; that is to say equipment, consumables and personnel. However, in addition to this it has a hidden purpose, in that it is felt by many to be part of a rating process for researchers. Researchers believe that the more grant applications they win, the higher their ranking. Therefore top-down attempts to limit the number of applications a researcher can make cause consternation, because they impact on the perceived worth of that researcher. This additional function is not explicit, and in a way it arises from the lack of any better measure of apparent researcher worth.

I believe this perception arises because university departments don't do a very good job of career management for academics. As an employee of a very large company, I have regular discussions about where my career within the company is going - indeed in my first year I spent about an hour and a half talking about just this subject - whilst in academia I *never*, in 8 years of post-doctoral employment, had a formal discussion about my career development. This applies both to those who have successfully made it to permanent lecturing positions, and to the many post-doctoral research assistants who aspire to a limited number of permanent posts.

The grant application process takes no account of any attempt to create a wider research programme. Grant applications are made to acquire a specific piece of equipment and/or someone to carry out the research proposed. Typically the equipment will be used long after the end of the grant, and there will be no formal mechanism for its replacement.

I am still involved in writing internal research proposals; these differ in two ways from grant applications. Firstly, they are much shorter than grant applications - a couple of sides of A4; secondly, they are much more concerned with all the things 'around' the core of the proposal rather than an explicit description of the research to be done. Funding and allocation of resources are made at the level of projects comprising of order 10 or more people, rather than at the 1 or 2 researcher level at which the typical grant application aims. Furthermore, there is a longer cascade in the resource allocation process: rather than each 'end user' approaching the holder of a central pot, resources are allocated at a higher level. This reduces the number of people in the grant application business and means that rounds of allocation are smaller affairs.

The winning of grants appears to contain a large element of lottery, that is to say the outcome depends to a moderate degree on chance. To improve your chances of winning a lottery, you buy more tickets. This has caused the EPSRC, at least, problems: although the amount available for grants has increased, the amount applied for has increased more rapidly.

There are two solutions to the problem of researcher disillusionment through the low success rate of grant applications: one is to increase the amount of cash available (which is unlikely to happen in the current economic climate), the other is to reduce the number of grant applications made - here the problem is how to do this in an equitable fashion. Part of the problem is that the number of potential researchers is governed by the number of people required to teach the undergraduate population, rather than by a judgement on the number of people required to consume the research allocation pot.

So what does this suggest for the grant application process:
1. Better career management for academics, in order that the grant application process is not used as a rating tool for academics;
2. Devolution of spending to a lower level;
3. More thought paid to providing continuity.

I guess in my ideal world an academic would develop a coherent, over-arching research plan which is executed in pieces by application to research funds at something like the university scale. The success of such applications would depend largely on past performance, and on the coherence or otherwise of the over-arching research plan, rather than on an attempt to evaluate the quality of a particular piece of research, or idea, in advance.

It's worth noting that academic research is seriously difficult, in that your ideas should be globally competitive - you should be developing thoughts about how nature operates that are unique. Your competition is thousands of other, very clever researchers spread across the world. Compared to this, my job as an industrial researcher is easier - I need to communicate the answer to the question at hand to the appropriate person, and if the answer already exists then that's fine. Also I get to do more research with my own hands than I would in an equivalent position as an academic.

Wednesday, December 23, 2009

What kind of scientist am I?

Following on from my earlier blog post on the tree of life, this post is about the taxonomy of my area of science: physics. I should point out now that I'm not too keen on the division of science in this way. These divisions are relatively recent; as an example, the Cavendish Laboratory, the department of physics at Cambridge University, was only founded in 1874.

I am an experimental soft-matter physicist.

So taking the first word: experimental. This is one of the three great kingdoms of physics, the others being computer simulation and theory. "Experimental" means I spend a large part of my time trying to do actual experiments on objects in the real world; this may involve substantial computational work to process the output data and should generally involve some comparison to theory when published, although serious development of theory tends to end up in the hands of specialists. Computer simulation is distinct from theory: simulation is like doing an experiment in a computer - give a set of entities some rules to live by, set them at it, and measure the results after some time. Theory on the other hand attempts to model the measurements without the fuss of explicitly modelling each entity in the collection.
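
(To make the 'experiment in a computer' idea concrete, here is a minimal sketch - a toy example of my own rather than anything from a real research code: a crowd of random walkers on a line, given a simple rule to live by, with a 'measurement' taken at the end that can be compared with what theory predicts.)

    import random

    # A toy "experiment in a computer": N random walkers on a line.
    # Rule: each timestep, every walker hops one step left or right at random.
    # Measurement: the mean squared displacement after T steps.
    # Theory for a 1D random walk with unit steps says this should be close to T.

    N, T = 1000, 500
    positions = [0] * N

    for _ in range(T):
        positions = [x + random.choice((-1, 1)) for x in positions]

    msd = sum(x * x for x in positions) / N
    print(f"measured mean squared displacement: {msd:.1f}, theory predicts {T}")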

Next to the physicist bit: in a sense theory is the essence of what physics is about - building an accurate model of the world. The important thing with physics is abstraction; to take an example, I'm interested in granular materials: from a physics point of view this means I'm looking for a model that covers piles of ball bearings, avalanches, sand dunes, grain in silos, cereals in a box and possibly even mayonnaise, all in a single framework.

And so to the final division: soft-matter. Physical Review Letters, which is the global house journal for physics, has the following subdivisions:
  • General Physics: Statistical and Quantum Mechanics, Quantum Information, etc; Domain of Schrödinger's cat, Alice and Bob exchanging secure messages, and Bose-Einstein condensates.
  • Gravitation and Astrophysics; Physicists go large. Stephen Hawking lives here - black holes, the big bang.
  • Elementary Particles and Fields; Down to the bottom, with things very small studied by things very large (like the Large Hadron Collider at CERN). Here be Prof Brian Cox.
  • Nuclear Physics; The properties of the atomic nucleus, including radioactivity, fission and fusion. This is Jim Al-Khalili's field. 
  • Atomic, Molecular, and Optical Physics; Stuff where single atoms and molecules are important, things like spectroscopy, fluorescence and luminescence go here.
  • Nonlinear Dynamics, Fluid Dynamics, Classical Optics, etc; Pendulums attached to pendulums, splashes and invisibility cloaks!
  • Plasma and Beam Physics; Matter in extreme conditions of temperature: fusion power goes here.
  • Condensed Matter: Structure, etc; Condensed matter is stuff which isn't a gas - i.e. liquids and solids, and is acting in a reasonable size lump. 
  • Condensed Matter: Electronic Properties, etc; This is where your semiconductors, from which computer chips are made, live. 
  • Soft Matter, Biological, and Interdisciplinary Physics; Soft-matter refers to various squishy things, plastics, big stringy molecules in solution (polymers), little particles (colloids, like emulsion paint or mayonnaise), liquid crystals, and also granular materials (gravel, grain, sand and so forth).
So there I am in the last division, studying squishy things.

Since I've provided a means to wind up most sorts of scientist in previous blog posts, I thought I could provide a few here for me. Theoreticians can wind me up by assuming that experiments, and the analysis of the resulting data, are trivially easy to do, and that if the results don't fit their theory then I need to try again. Simulators I have a bit more sympathy with - simulations are experiments on a computer - however, when you're writing a paper perhaps you should say in the title that you ran a simulation, rather than did a proper experiment like a real man ;-)

Update: I made this post into a podcast: http://bit.ly/6EA17H - it's on Posterous because uploading of audio is easier. I used a basic Logitech headset microphone, Audacity to do the capture and editing with the Lame plugin for MP3 export.  I'm not sure I'll do it again but it was fun to try!

Sunday, December 20, 2009

Happy Christmas



 I hope you enjoy a happy week of eating, drinking, playing with friends and family, and looking at pretty lights.

Thursday, December 17, 2009

The Professionals

Lecturing is a tough business, and half the job is largely ignored.

This post is stimulated, in part, by an article in Physics World on the training of physicists for lecturing, and how they really don't like it. It turns out it is rather timely, since Times Higher Education has also published on the subject, in this case highlighting how universities place little emphasis on the importance of good teaching in promotion.

I taught physics at Cambridge University: small group tutorials and lab classes - I was a little short of being a lecturer. I also taught physics as a lecturer at UMIST. I should point out that the following comments are general; I think they would apply equally to any of the older universities.

Mrs SomeBeans is a lecturer in further and higher education, the difference between the two of us is that she had to do a PGCE qualification whereas I was let loose on students with close to zero training.

I did spend an interesting day in lecturer training at Cambridge: a small group of new lecturers, and similar, spent a fairly pleasant day chatting and being video'd presenting short chunks of lectures. I learnt several things on that day:
1. Philosophy lecturers use hardly any overheads.
2. Most of us found lecturing pretty nerve-wracking, one of our number wrote out her lectures in full in longhand to cope.
3. Drinking as a cure for pre-lecture nerves doesn't work well
4. I spoke like a yokel and was slightly tubbier than I thought!

Round two at my next employer was a bit more involved. I can't remember much from the two-day event, but many of the points from the Physics World post came out. Scientists are typically taught how to lecture together as a group, and their point of view is somewhat in collision with that of educationalists, who seem to be able to throw out three mutually incompatible theories before breakfast and not be interested in testing any of them.

I have an insight which may help scientists in these situations: outside science the idea of a "theory" has quite a different meaning from that inside science. The same mismatch is also found in management training. Non-scientists use a "theory" as a device to structure thought and discussion, not as a testable hypothesis. Therefore multiple contradictory, or apparently incompatible, theories can be presented together without the speaker's head exploding. They're not generally tested in any sense a scientist would understand; very few people attempt to quantify teaching Method A against teaching Method B. The thing is not to get hung up on the details of the theory - the important bit is being brought together to talk about teaching.

I enjoyed parts of teaching: physics tutorials for second years at Cambridge were something of a steeplechase, with the not particularly experienced me hotly pursued by rather cleverer undergraduate students over problems for which the lecturers did not deign to supply model answers. Exceedingly educational for all concerned. Practical classes were also fun: the first time a student presents you with a bird's nest of wires on a circuit board it takes about 15 minutes to work out what the problem is; the second time you immediately spot the power isn't connected to the chip - and students think Dr Hopkinson is a genius.

Lecturing I found pretty grim, except on the odd good day when I got an interesting demonstration working. I was faced with 80 or so students, many of an unresponsive kind. I ploughed through lecture notes on PowerPoint which I found interesting when I was writing them, but painfully long-winded in the lecture theatre. Lecturing is the most nerve-wracking sort of public speaking I've done, and I suspect many lecturers find it the same. I remember one of my undergraduate lecturers was clearly a bag of nerves even in front of the small and friendly course to which I belonged (and I'm not good at picking up such things).

In a sense lecturing is a throwback: there are so many other ways to learn, and I fear we only teach via lecturing because that's what we've always done. Nowadays it's easy, although time-consuming, to produce a beautiful set of printed lecture notes and distribute the overheads you use: but is it really a good use of time to go through those overheads (which I am sure is what nearly everyone does)? Nowadays I learn by reading, processing and writing - a blog post or a program.

There's another thing in the Physics World article:
"At universities the task is often performed by academics who are much more interested in research and therefore regard teaching as a chore."
This is absolutely true, in my experience. I've worked in three universities post-undergraduate, and I've been interviewed for lectureships in a further six or so. And in every one the priority has been research, not teaching, which is odd because if you look at funding from the Department for Innovation, Universities and Skills something like £12 billion is directed at teaching and something like £5 billion at research.

So why did I write this post? Perhaps it's a reflection of opportunities missed and time spent chasing the wrong goals. If I did it all again there seem to be so many more ways to talk to other lecturers about teaching - on Twitter, in blogs.

Sunday, December 13, 2009

Drowning by numbers

I am not a mathematician, but physics and maths are intimately entwined. I suspect I stumble on a deep philosophical question when I ponder whether maths exists that has no physical meaning.

On a global scale I am moderately good at maths: I have two A levels* in the subject (maths and further maths), and long years of training in physics have introduced me to a bit more. However, beyond this point I realised I was manipulating symbols to achieve correct results rather than really knowing what was going on. A lot of my work involves carrying out calculations, but that's not maths.

I did intend decorating this post with equations, but didn't in the end, wary of a couple of things: firstly, the statement by Stephen Hawking that every equation would halve sales; secondly, I discovered that putting equations into Blogger is non-trivial. Equations - statements in mathematical notation - are the core of maths, and much of my journey in maths has been in translating equations into an internal language I understand.

So here's a pretty bit of maths, the Mandelbrot set. The amazing thing about the Mandelbrot set is how easy it is to generate such a complex structure. We can zoom into any part of the structure below and see more and more detail. Mathematics is the study of why such a thing is as it is, rather than just how to make such a thing.


Image by Wolfgang Beyer
I remember playing with Mandelbrot sets as a child, before I understood complex numbers; to me they were a problem in programming and a source of wonder as I plunged ever deeper into a pattern that just kept developing. Have a look yourself with this applet... *time passes as I re-acquaint myself with an old friend*. There is something of this towards the end of Carl Sagan's novel Contact, where the protagonists discover a message hidden deep within the digits of π.
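
(For anyone who fancies reliving that bit of childhood programming, here is a minimal sketch of how little code it takes - a throwaway text-mode rendering rather than anything as lovely as Wolfgang Beyer's image above; the grid size and escape cut-off are just illustrative choices.)

    # Minimal Mandelbrot sketch: for each point c on a grid, iterate z -> z*z + c
    # starting from z = 0 and count how long |z| stays small. Points that never
    # escape belong to the set; all the interesting detail lives at the boundary.

    def escape_time(c, max_iter=50):
        z = 0j
        for n in range(max_iter):
            z = z * z + c
            if abs(z) > 2:       # once |z| exceeds 2 it is guaranteed to diverge
                return n
        return max_iter          # treated as "inside the set"

    chars = " .:-=+*#%@"         # crude grey scale for a text-mode picture
    for row in range(24):
        y = 1.2 - row * 0.1
        line = ""
        for col in range(78):
            x = -2.1 + col * 0.04
            n = escape_time(complex(x, y))
            line += "@" if n == 50 else chars[min(n, len(chars) - 1)]
        print(line)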

I fiddle with numbers when I see them, and I suspect mathematicians do too. So my Girovend card showed 17.29 recently which, without the decimal point, is 1729 = 1³ + 12³ = 10³ + 9³, the smallest number that has the property of being the sum of two different pairs of positive cubes; the story of Hardy, Ramanujan and the taxicab numbered 1729 is a very well-worn piece of mathematical lore. The numbering of the chapters of "The Curious Incident of the Dog in the Night-time" by Mark Haddon with consecutive prime numbers also appeals to me.
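
(If you'd rather not take the 1729 claim on trust, a few lines of brute force will confirm it - this is just a throwaway check, with the search limit picked by hand.)

    from itertools import combinations

    # Brute-force check: find the smallest number that is the sum of two
    # different pairs of positive cubes (the Hardy-Ramanujan taxicab number).

    limit = 20                   # cubes of 1..19 are more than enough here
    sums = {}
    for a, b in combinations(range(1, limit), 2):
        sums.setdefault(a**3 + b**3, []).append((a, b))

    taxicab = min(n for n, pairs in sums.items() if len(pairs) > 1)
    print(taxicab, sums[taxicab])    # prints: 1729 [(1, 12), (9, 10)]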

It's become a tradition that I find ways to annoy the people I visit, and there's no escape for mathematicians here. It seems like the best way to annoy a mathematician is to assume they can do useful arithmetic, like calculating a shared restaurant bill. Interestingly though, this may be a poor example, since fair division methods for important things, like cake, are an area of mathematical research.

It's true that some mathematicians are a bit odd, but then so are some physicists and to be honest if reality TV has taught us anything, it's that the world is full of very odd people in every walk of life. So if you meet a mathematician, don't be afraid!

*A levels are the qualification for 18 year olds in the UK, when I was a student you would study for 3 or 4 A levels for 2 years.

Update: Since writing this I've discovered a couple more interesting sites for fractals, and for want of a better place to put them I record them: here you can find a pretty rendering of the quaternion Julia set, and here is an in depth exploration of the Julia and Mandelbrot sets (1/1/10).

Sunday, December 06, 2009

A jigsaw not a House of Cards

This is the blog post I was never going to write; it's about climate change.

This post will contain pretty much no science, if you're interested in that then I've put some references at the end. Instead this is a post about my personal journey with anthropogenic global warming (AGW).

So why did I decide to write this now? A couple of reasons really: the Copenhagen Climate Summit is next week, but mainly climategate - the publication of leaked e-mails and data from the University of East Anglia Climate Research Unit. I was listening to the Today programme (Radio 4 in the UK) in the gym; I was so frustrated by their climategate piece I had to turn the radio off and start to compose this post in my head. It's the mindless spouting of John Humphrys about something of which he clearly has no understanding which is frustrating.

My journey began in 2007 when "The Great Global Warming Swindle" (TGGWS) was broadcast. Ben Goldacre wrote about it on his blog. People I knew started talking about it and the balanced view it presented: yet it was utterly at variance with what I knew about global warming. I still haven't seen "The Great Global Warming Swindle", nor have I seen its antithesis, Al Gore's "An Inconvenient Truth". Subsequently OFCOM ruled that several scientists had been treated unfairly by TGGWS, and I decided that I was going to treat any further science broadcast by Channel 4 (the broadcaster) with overwhelming skepticism.

I say my journey began in 2007, but actually it began long before then. I've been reading New Scientist, a popular science news magazine, for the last 20 years; I've also been reading Nature for quite a few years. Nature is a general scientific journal, with science news - getting a paper in Nature is like winning a gold medal at a national sporting event for a scientist. So by 2007 I had absorbed the conventional scientific view on AGW via stories in the scientific press, in much the same way as I have conventional opinions on continental drift, evolution, the big bang, and mass extinction.

"The Great Global Warming Swindle" stimulated me to action, here was purportedly a scientific question which should have a scientific answer. First stop was the Intergovernmental Panel on Climate Change (IPCC) 3rd Assessment report published in 2001 succeeded closely by the 4th Assessment report published 2007. I focused on the Physical Science Basis (WG1): the Summary for Policymakers for this report is short (18 pages) and written in a pretty accessible style, the following chapters filling out the detail and referencing the primary peer-reviewed literature. This is the coal-face of science, it's where I publish my professional work. Ultimately I also got myself an undergraduate textbook on atmospheric physics and Spencer Weart's book on the history of climate science.

So how does a scientist get on outside their field? Well, the IPCC reports are no problem for me to understand. I can get on fine with understanding the abstracts of most of the papers in the primary literature, but I would really struggle to contribute to this literature and I would not be able to pick out subtle errors, or judge between opposing expert views. This is unsurprising, because these papers will have passed peer-review and this means in most cases anything obviously wrong would have been weeded out.

I also plunged into the blogosphere, reading both contrarian and conventional sites. The only one I'll cite here is realclimate.org, a blog by climate scientists which reports on new papers in the literature - I still read this. It was in the blogosphere that I came to my conclusions on the contrarian view: it's wrong. I'll expand on that slightly: there's a huge bunch of stuff that's just rubbish from the scientific point of view; there's a smallish bunch of stuff where people from other areas of science have published work on climate science that may well be accurate for their field but where the climate science looks a bit wobbly; there are a very few academic climate scientists who think that IPCC AR4 exaggerates the problem, but on the other hand there are quite a few academic climate scientists who think AR4 was overly optimistic.

Thus armed with knowledge I set out to argue the scientific case for the existence of anthropogenic global warming... I think I gave up on arguing about AGW when, in an exchange on a discussion forum, I suggested reading an undergraduate text in atmospheric physics was a useful thing to do in understanding climate science and the response was "Just goes to show what you know", it seemed knowledge was counting against me - blind prejudice was what was required. I had mistakenly believed I was arguing a scientific case, when in fact most other people were having a political argument dressed up as a scientific one. Which is odd really because it never occurred to me that global warming was a political question.



It seems to be a characteristic of contrarians that they view climate science as a house of cards, that if they disprove the contents of a single scientific paper then the whole edifice will fall. Hence there are very long arguments about single papers such as the original hockey stick controversy. However, science is a jigsaw, not a house of cards: to test a piece of science you look at the pieces of science around it to see how they fit, rather than staring very hard at the one piece. This view of science also makes it less personal, nothing in science depends solely on the work of one person. If they didn't exist someone else would independently come up with the same result before too long.


People are naturally political, democratic animals. They like to consider both sides of a dispute and come up with what seems like a balanced solution. Science is not democratic; there is not a middle ground. If we vote on science, nature will not accommodate itself to match the outcome.


References

Intergovernmental Panel on Climate Change 4th Assessment Report (known as IPCC AR4, published 2007)
Copenhagen Diagnosis (an update to the IPCC AR4)
Physics of Atmospheres by John Houghton
The Discovery of Global Warming by Spencer Weart

Thursday, December 03, 2009

Confocal microscopy


Back to stuff I should know, in theory, at least.

I had my first microscope as a child, it was a small one but I had a great time looking at little creatures that lived in dirty pond-water. I remember spending a long afternoon trying to see transparent single celled animals, and finally getting the lighting just right to see an amoeba. I also remember trying to immobilise a tiny worm with white spirit - it exploded, but I like to think it died happy.

This week I shall mostly be talking about 'confocal microscopy'; this is a type of light microscopy (as opposed to electron, infra-red, x-ray, scanning tunnelling, or atomic force microscopy). The smallest thing you can see with a light microscope is about 1 micron across - that's a thousandth of a millimetre; a human hair is about 80 microns in diameter. A normal light microscope gives you a nice focused picture of a slice of your sample at the "focal plane", but it also lets in loads of light from parts of your sample away from the focal plane, which leaves you, overall, with a bit of a blurry picture. Microscopists get around this problem by slicing their samples up very thinly - hence no bits to be blurry - but this is a fiddly procedure and leaves your sample very dead even if it started alive.
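
(For the curious, the 1 micron figure comes from the diffraction limit: very roughly, the smallest resolvable detail is d ≈ λ / (2 × NA), where λ is the wavelength of the light and NA is the numerical aperture of the objective. For green light, with λ around 0.5 micron, and a very good objective with NA around 1.3, that works out at about 0.2 micron; with more ordinary lenses and real samples, 'about 1 micron' is a fair rule of thumb.)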

Confocal microscopy is a technique by which the slicing of the sample happens virtually: you can put a big fat sample in the microscope and, by the use of cunning optics, you only get an image from the focal plane, which is lovely and sharp. You can build up a 3D picture of the sample by moving it up and down in front of the lens. Marvin Minsky was the original inventor of the confocal microscope in about 1955, but was somewhat held back by the lack of lasers, computers and stuff. Things picked up again in the 1980s as these things became readily available. Oops, I think that might have been some cod history ;-)

An interesting feature of the confocal microscope is that if there's nothing in the focal plane, you don't see anything (unlike a conventional light microscope, where you can always see a big bright something, even if it's blurry). This can be disconcerting for the learner - you can't find your sample!

Every microscopy technique needs a contrast mechanism, a way of separating one thing from another. In confocal microscopy by far the most popular contrast mechanism is fluorescence, via the use of a fluorescent dye to label bits of your sample. If you illuminate a fluorescent dye with light of one colour it emits light of another colour (making it stand out particularly well). If you ask an organism nicely (okay - genetically engineer it), you can get it to make Green Fluorescent Protein (GFP), which is a protein that fluoresces green (duh!). All that remains is to find a way of sticking the fluorescent dye to the thing in which you're interested.

In each post about science I like to add a little fact to help you wind up / avoid winding up practitioners in that field. So to wind up a microscopist: project an image onto a screen for a presentation and claim "x800" magnification (or whatever). The problem is: to what does "x800" magnification apply? Is it what the microscope told you when you looked through the eyepiece? Is it the magnification on the printed page, the computer screen or on the wall? We really doubt you know. It's scale bars all the way.

For several years I was the proud keeper of a confocal microscope. I, and my students, had great fun with the microscope and it had fun with us. The pointy end of the microscope is the objective lens, the bit closest to the sample. A fancy microscope like our Zeiss LSM 510 had 5 or more objectives mounted on a turret (see the image at the top of the post), each objective giving a different magnification. The Zeiss LSM 510 was fully motorised, and too clever by half. It would assume that you wanted to stay focused on the same part of the sample when you changed objectives (or it changed them for you, with its motors). Now the problem is that for a x10 objective the focal plane is about 1cm from the front of the objective lens, and for a x40 objective lens it could be only a tenth of a millimetre. Now imagine I've just focused deeply inside my sample using the x10 objective, and I switch to the x40 objective on the computer..... and the microscope mashes the x40 objective lens into the sample, blithely ignoring the sound of a £6000 lens smashing the glass coverslip and covering itself in sticky sample!

In later posts I'll show some of the results from the confocal microscope in non-mashy-lens-into-sample mode.

Here are some images; these are all slices through solid objects. I didn't really think this through in terms of explaining what's in these first three images: roughly, they're what you get if you add a small amount of salt-water to Fairy liquid (although I would prefer you to use Persil washing-up liquid). First up is a cross-section through an "onion-type micelle":



And these are the structures you see in a similar system but with a different concentration of water:





This is a false colour image, a bit lurid - I don't know what I was thinking at the time. These are known as "myelin":



Pollen-grains are always popular - I stole this one from here. Each of the images is a slice, and the inset bottom right is the result of adding all the slices together.

Wednesday, December 02, 2009

There may be blue skies ahead

You have to feel sorry for Lord Drayson. At a time when he is doing his best to stand up for science and the funding of science in what are very difficult economic circumstances, where every department in government must show its worth, scientists appear to be trying to hack his legs off by balking at the proposal that they should explain their impact on society and insisting that they should be left to do 'blue skies' research. I refer to the recent debate hosted by the Times Higher Education Supplement (THES): "Blue skies ahead? The prospects for UK science", a discussion which centred on the impact of science on society and how you might increase that impact, how impact is evaluated, the role of blue skies research and the crisis at the Science and Technology Facilities Council (STFC).

In this post I'll try to explain why scientists are so impassioned about the subject of impact assessment and make a couple of suggestions as to how we might collectively do this better.

Impact means several things in this context: there is the HEFCE Impact pilot exercise, which is retrospective and in its pilot phase at the moment, and there are impact statements in grant applications, introduced this year, which should provide a prediction of the economic and societal impact of the work proposed. I suspect it is the impact statements in grant applications which are causing the real concern here, and I also suspect that Lord Drayson was talking primarily about the former, while the audience and panel were talking about the latter, at the THES event.

Before I go on I will provide a short autobiography to give context for my comments: I'm currently a research scientist in a large company, where I've been for 5 years, but until the age of 35 I was an academic scientist, rising to the position of tenured lecturer in the physics department of what is now Manchester University. The opinions I present here are entirely my own.

Putting aside the question of how good a scientist I am, I can tell you one thing for certain: I'm incredibly bad at writing successful grant applications! Really bad, awful, abysmal. I wrote about 8 over my relatively short period as a grant writer and they all failed (and not even by a small margin). I don't think I'm alone in this.

Exactly what grant applications mean varies from subject to subject, but in my field - experimental laboratory-scale soft matter physics - they're important: in order to do research you need people and equipment. Typically grant applications are written to get 2-3 years of a postdoctoral research assistant, or a PhD student, and some equipment. Your department will rate itself on the grant funding it obtains, and you on your contribution to that figure. You will definitely feel that winning grants is something you have to do to succeed in your job. You can eke out an existence without grants - funding students from other sources, helping colleagues with successful grant applications, throwing yourself into teaching - but it doesn't feel like the way you're supposed to do it.

I find it difficult to put into words how much I loathed the whole grant application process. A grant application requires you to describe the research you're going to do over the next few years, and how the results will be world-class. The average success rate is about 20% (disputed) in the field in which I was applying. Once written, the grant applications are sent to reviewers - actually that means someone just like you - it's peer review. Reviewers know full well that if they rate an application "excellent" as opposed to "outstanding" in any one of several areas they are damning the application to failure. Reviews go forward to a panel who plough through huge numbers of these things (about 5 times as many as they are going to fund), then rank them. Funding is given to those at the top of the list, working down until the available cash is exhausted. There are serious questions as to whether we can rank schools accurately; here we try to do it for world-class research. It's a grim process for all involved.

It's in this context that the new impact statements are introduced; potentially these contribute 25% to the grant decision. As a grant application writer I need this like I need another hole in the head! Writing the science part of the grant application is a work of pure fiction (I think it might have helped if I'd appreciated that when I was doing it); writing the impact statement - "The demonstrable contribution that excellent research makes to society and the economy" - is going beyond that. This is forward looking: you're being asked to quantify the future economic value of that bit of world-class research you're going to do.

The problem is that university science can take a long time to trickle out: in a case I highlighted yesterday, in what is a pretty applied area, research was having a very direct, specific impact 40 years after it was done. Prior to my grant-seeking days my work as a PhD student, postdoc and ADR was funded, at least in part, by three separate companies - little word of the economic impact has made it back to me from these companies.

So this brings me to some concrete points:
Point 1: For many areas of research, societal and economic impacts are diffuse and long-term, and actually the academic proposing the research is not in the best position to determine those impacts. As an industrial researcher I've written business cases for doing external research; this is a very revealing exercise which I know I couldn't have done as an academic, because I simply wouldn't have had the required information to hand. Impact statements should not be required on a "one per application" basis; they should be for whole subject fields and written in consultation with people with actual data on the economic and societal impacts.

Moving on to a second point, Lord Drayson very generously praised scientists in Britain for the quality of the science they do, but said the problem was in how this expertise is translated into wider society:
Point 2: Impact statements are purely about scientists, applying for grants in universities. The onus seems to be on scientists to fix a problem which has two sides. What are we doing about how society and industry interact with science?

In the end impact is about communication: it's about understanding the preconceptions that other people bring to the party and addressing these preconceptions in the way you communicate. Although I described myself as a research scientist, I'm actually a science communicator in an industrial environment.

This whole blog is really about communicating what it's like to be a scientist, how it feels, the little details of the tribe. The people I follow on twitter are all very clever; they do lots of different things, and from a combination of tweets and blogs you learn about their lives. This is my contribution to that discussion - it's a societal impact.

Wordless Wednesday


Tuesday, December 01, 2009

A short story about scientific impact

I work for a company, I won't tell you who.

40 years ago a man wrote a paper, published in a scientific journal, you won't have heard of it, it wasn't important.

A couple of years ago I read the paper, and I carried out the calculations described in the paper. I showed the results to the team I worked with. They showed that what we were doing wasn't going to work. So we stopped the project.

There were 10 people on the team, let's say they each cost my company £100,000 a year. Let's be harsh and say that my presentation only accounted for 10% of the decision to stop.

That man saved us £100,000.

I wish I could tell the man who wrote the paper, so he can put it in his next impact statement.

Update: Thanks to @dr_andy_russell for pointing out my rather significant typo!