My blog has moved!

You will be automatically redirected to the new address; all posts have been transferred from this blog, and you can use the site search to find them. If the redirect does not occur, visit
http://www.ianhopkinson.org.uk
and update your bookmarks.

Thursday, December 31, 2009

Happy New Year!

For the last few years I've made a calendar from photos taken through the year, to furnish homemade gifts for my close family. My brother clearly values this gift, and has it on proud display in his downstairs lavatory! This year, the power of blogging enables me to display such quality gifts to so many more people - a good two or three, at least.

The Inelegant Gardener (HappyMouffetard) is responsible for some of these, but I can't remember which - probably some of the ones of flowers.


Cover image - it's an amaryllis.


January - a brass marmot in the mountains above Obergurgl


February - a dwarf iris


March - a rice paper butterfly (at Chester Zoo)


April - more snowy mountains, this time Orelle in France


May - spring greenery in North Wales


June - calendula


July - Blea Tarn, above Great Langdale


August - the Japanese garden at Tatton Park


September - autumn colours in North Wales


October - jack o'lantern by HappyMouffetard, photography by me


November - fireworks in Val Thorens (cheating, because this was April)


December - frosted nigella seed head

Thank you for visiting the blog in the few short months since I started writing it; it's been great fun and I've enjoyed reading your comments.

Happy New Year to you all!

Tuesday, December 29, 2009

What kind of scientist am I? (audio version)

My earlier "What kind of scientist am I?" post is now available as a podcast: http://bit.ly/6EA17H - Posterous allows the easy posting of audio. I'm not sure I'll do it again but it was fun to try. I used a basic Logitech headset microphone, Audacity to do the capture and editing with the Lame plugin for MP3 export.

Monday, December 28, 2009

A walk along the Shropshire Union Canal

Yesterday the weather was so evil, cold and wet, that we scarcely left the house all day - cabin fever set in. This morning things were looking rather better, cold and frosty, but a little hazy - so we set off for a walk along the Shropshire Union Canal which passes close by. In the summer I cycled along the canal for a week, on my way to work, and saw a couple of kingfishers. No kingfishers today, but we saw a heron, a fox leaping through undergrowth and heard the distant roar of a lion at Chester Zoo.

Some photos:


Sunday, December 27, 2009

Grant Applications II

This post is probably not for you, unless you're interested in grant applications!

I touched on grant applications a few posts ago with reference to the THES debate on blue-skies research, where I mentioned my abysmal grant application record, the generally low success rate and the pain involved for all concerned. Here I intend to add a few additional comments arising, in part, from my experience in industry.

It's worth stating what I believe the grant application process is for: on the face of it, it is a method by which discretionary funding is provided to researchers to provide resources for research; that is to say equipment, consumables and personnel. However, in addition to this it has a hidden purpose, in that it is felt by many to be part of a rating process for researchers. Researchers believe that the more grant applications they win, the higher their ranking. Therefore top-down attempts to limit the number of applications a researcher can make cause consternation, because they impact on the perceived worth of that researcher. This additional function is not explicit, and in a way it arises from the lack of any better measure of researcher worth.

I believe this perception arises because university departments don't do a very good job of career management for academics. As an employee of a very large company, I have regular discussions about where my career within the company is going - indeed in my first year I spent about an hour and a half talking about just this subject, whilst in academia I *never*, in 8 years of post-doctoral employment, had a formal discussion about my career development. This applies both to those who have successfully made it to permanent lecturing positions, and the many post-doctoral research assistants who aspire to a limited number of permanent posts.

The grant application process takes no account of an attempt to create a wider research program. Grant applications are made to acquire a specific piece of equipment and/or someone to carry out the research proposed. Typically the equipment will be used long after the end of the grant, and there will be no formal mechanism of replacement.

I am still involved in writing internal research proposals; these differ in two ways from grant applications. Firstly, they are much shorter than grant applications - a couple of sides of A4; secondly, they are much more concerned with all the things 'around' the core of the proposal rather than an explicit description of the research to be done. Funding and allocation of resources is made at the level of projects comprising of order 10 or more people, rather than at the 1 or 2 researcher level at which the typical grant application aims. Furthermore there is a longer cascade in the resource allocation process: rather than each 'end user' approaching the holder of a central pot, resources are allocated at a higher level. This reduces the number of people in the grant application business and means that rounds of allocation are smaller affairs.

The winning of grants appears to contain a large element of lottery, that is to say the outcome depends to a moderate degree on chance. To improve your chances of winning a lottery, you buy more tickets. This has caused the EPSRC, at least, problems since although the amount available for grants has increased, the amount applied for has increased more rapidly.

There are two solutions to the problem of researcher disillusionment through the low success rate of grant applications: one is to increase the amount of cash available (which is unlikely to happen in the current economic climate), the other is to reduce the number of grant applications made - here the problem is how to do this in an equitable fashion. Part of the problem is that the number of potential researchers is governed by the number of people required to teach the undergraduate population, rather than by a judgement on the number of people required to consume the research allocation pot.

So what does this suggest for the grant application process?
1. Better career management for academics, in order that the grant application process is not used as a rating tool for academics;
2. Devolution of spending to a lower level;
3. More thought paid to providing continuity.

I guess in my ideal world an academic would develop a coherent, over-arching research plan which is executed in pieces by application to research funds at something like the university scale. The success of such applications would depend largely on past performance, and on the coherence or otherwise of the over-arching research plan, rather than on an attempt to evaluate the quality of a particular piece of research, or idea, in advance.

It's worth noting that academic research is seriously difficult, in that your ideas should be globally competitive - you should be developing thoughts about how nature operates that are unique. Your competition is thousands of other, very clever researchers spread across the world. Compared to this, my job as an industrial researcher is easier - I need to communicate the answer to the question at hand to the appropriate person, and if the answer already exists then that's fine. Also I get to do more research with my own hands than I would in an equivalent position as an academic.

Wednesday, December 23, 2009

What kind of scientist am I?

Following on from my earlier blog post on the tree of life, this post is about the taxonomy of my area of science: physics. I should point out now that I'm not too keen on dividing science in this way. These divisions are relatively recent; as an example, the Cavendish Laboratory, the department of physics at Cambridge University, was only founded in 1874.

I am an experimental soft-matter physicist.

So taking the first word: experimental. This is one of the three great kingdoms of physics, the others being computer simulation and theory. "Experimental" means I spend a large part of my time trying to do actual experiments on objects in the real world; this may involve substantial computational work to process the output data, and should generally involve some comparison to theory when published, although serious development of theory tends to end up in the hands of specialists. Computer simulation is distinct from theory: simulation is like doing an experiment in a computer - give a set of entities some rules to live by, set them at it, and measure the results after some time. Theory, on the other hand, attempts to model the measurements without the fuss of explicitly modelling each entity in the collection.

Next to the physicist bit: in a sense theory is the essence of what physics is about: building an accurate model of the world. The important thing with physics is abstraction. To take an example, I'm interested in granular materials; from a physics point of view this means I'm looking for a model that covers piles of ball bearings, avalanches, sand dunes, grain in silos, cereals in a box and possibly even mayonnaise, all in a single framework.

And so to the final division: soft-matter. Physical Review Letters, which is the global house journal for physics, has the following subdivisions (my comment follows each semicolon):
  • General Physics: Statistical and Quantum Mechanics, Quantum Information, etc; Domain of Schrödinger's cat, Alice and Bob exchanging secure messages, and Bose-Einstein condensates.
  • Gravitation and Astrophysics; Physicists go large. Stephen Hawking lives here - black holes, the big bang.
  • Elementary Particles and Fields; down to the bottom, with things very small studied by things very large (like the Large Hadron Collider at CERN). Here be Prof Brian Cox.
  • Nuclear Physics; The properties of the atomic nucleus, including radioactivity, fission and fusion. This is Jim Al-Khalili's field. 
  • Atomic, Molecular, and Optical Physics; Stuff where single atoms and molecules are important, things like spectroscopy, fluorescence and luminescence go here.
  • Nonlinear Dynamics, Fluid Dynamics, Classical Optics, etc; Pendulums attached to pendulums, splashes and invisibility cloaks!
  • Plasma and Beam Physics; Matter in extreme conditions of temperature: fusion power goes here.
  • Condensed Matter: Structure, etc; Condensed matter is stuff which isn't a gas - i.e. liquids and solids - acting in a reasonable-sized lump.
  • Condensed Matter: Electronic Properties, etc; This is where your semiconductors, from which computer chips are made, live. 
  • Soft Matter, Biological, and Interdisciplinary Physics; Soft-matter refers to various squishy things, plastics, big stringy molecules in solution (polymers), little particles (colloids, like emulsion paint or mayonnaise), liquid crystals, and also granular materials (gravel, grain, sand and so forth).
So there I am in the last division, studying squishy things.

Since I've provided a means to wind up most sorts of scientist in previous blog posts, I thought I could provide a few here for me. Theoreticians can wind me up by assuming that experiments, and the analysis of the resulting data, are trivially easy to do, and that if the results don't fit their theory then I need to try again. Simulators I have a bit more sympathy with - simulations are experiments on a computer - however when you're writing a paper perhaps you should say in the title that you ran a simulation, rather than did a proper experiment like a real man ;-)

Update: I made this post into a podcast: http://bit.ly/6EA17H - it's on Posterous because uploading of audio is easier. I used a basic Logitech headset microphone, Audacity to do the capture and editing with the Lame plugin for MP3 export.  I'm not sure I'll do it again but it was fun to try!

Sunday, December 20, 2009

Happy Christmas

I hope you enjoy a happy week of eating, drinking, playing with friends and family, and looking at pretty lights.

Thursday, December 17, 2009

The Professionals

Lecturing is a tough business, and half the job is largely ignored.

This post is stimulated, in part, by an article in Physics World on the training of physicists for lecturing, and how they really don't like it. It turns out to be rather timely, since Times Higher Education has also published on the subject, in this case highlighting how universities place little emphasis on the importance of good teaching in promotion.

I taught physics at Cambridge University: small group tutorials and lab classes - I was a little short of a lecturer. I also taught physics as a lecturer at UMIST. I should point out that the following comments are general; I think they would apply equally to any of the older universities.

Mrs SomeBeans is a lecturer in further and higher education; the difference between the two of us is that she had to do a PGCE qualification, whereas I was let loose on students with close to zero training.

I did spend an interesting day in lecturer training at Cambridge, a small group of new lecturers, and similar, spent a fairly pleasant day chatting and being video'd presenting short chunks of lectures. I learnt several things on that day:
1. Philosophy lecturers use hardly any overheads.
2. Most of us found lecturing pretty nerve-wracking, one of our number wrote out her lectures in full in longhand to cope.
3. Drinking as a cure for pre-lecture nerves doesn't work well.
4. I spoke like a yokel and was slightly tubbier than I thought!

Round two, at my next employer, was a bit more involved. I can't remember much from the two-day event, but many of the points from the Physics World post came out. Scientists are typically taught how to lecture together as a group, and their point of view is somewhat in collision with that of educationalists, who seem to be able to throw out three mutually incompatible theories before breakfast and not be interested in testing any of them.

I have an insight which may help scientists in these situations: outside science, the idea of a "theory" has quite a different meaning from that inside science. This difference is also found in management training. Non-scientists use a "theory" as a device to structure thought and discussion, not as a testable hypothesis. Therefore multiple contradictory, or apparently incompatible, theories can be presented together without the speaker's head exploding. They're not generally tested in any sense a scientist would understand; very few people attempt to quantify teaching Method A against teaching Method B. The thing is not to get hung up on the details of the theory; the important bit is being brought together to talk about teaching.

I enjoyed parts of teaching: physics tutorials for second years at Cambridge were something of a steeplechase, with the not particularly experienced me hotly pursued by rather cleverer undergraduate students over problems for which the lecturers did not deign to supply model answers. Exceedingly educational for all concerned. Practical classes were also fun: the first time a student presents you with a bird's nest of wires on a circuit board it takes about 15 minutes to work out what the problem is; the second time you immediately spot the power isn't connected to the chip - and students think Dr Hopkinson is a genius.

Lecturing I found pretty grim, except on the odd good day when I got an interesting demonstration working. I was faced with 80 or so students, many of an unresponsive kind. I ploughed through lecture notes on PowerPoint which I found interesting when I was writing but in the lecture theatre I found painfully long winded. Lecturing is the most nerve-wracking sort of public speaking I've done, and I suspect many lecturers find it the same. I remember one of my undergraduate lecturers was clearly a bag of nerves even in front of the small and friendly course to which I belonged (and I'm not good at picking up such things).

In a sense lecturing is a throwback, there are so many other ways to learn - and I fear we only teach via lecturing because that's what we've always done. Nowadays it's easy, although time consuming, to produce a beautiful set of printed lecture notes and distribute the overheads you use: but is it really a good use of time to go through those overheads (which I am sure is what nearly everyone does)? Nowadays I learn by reading, processing and writing (a blog post) or a program.

There's another thing in the Physics World article:
"At universities the task is often performed by academics who are much more interested in research and therefore regard teaching as a chore."
This is absolutely true, in my experience. I've worked in three universities post-undergraduate, and I've been interviewed for lectureships in a further six or so. In every one the priority has been research, not teaching - which is odd, because if you look at funding from the Department for Innovation, Universities and Skills, something like £12 billion is directed at teaching and something like £5 billion at research.

So why did I write this post? Perhaps it's a reflection of opportunities missed and time spent chasing the wrong goals. If I did it all again, there seem to be so many more ways to talk to other lecturers about teaching: on Twitter, in blogs.

Sunday, December 13, 2009

Drowning by numbers

I am not a mathematician, but physics and maths are intimately entwined. I suspect I stumble on a deep philosophical question when I ponder whether maths exists that has no physical meaning.

On a global scale I am moderately good at maths: I have two A levels* in the subject (maths and further maths), and long years of training in physics have introduced me to a bit more. However, beyond this point I realised I was manipulating symbols to achieve correct results rather than really knowing what was going on. A lot of my work involves carrying out calculations, but that's not maths.

I did intend to decorate this post with equations, but I didn't in the end, wary of a couple of things: firstly, the statement by Stephen Hawking that every equation would halve sales; secondly, I discovered that putting equations into Blogger is non-trivial. Equations - statements in mathematical notation - are the core of maths, and much of my journey in maths has been in translating equations into an internal language I understand.

So here's a pretty bit of maths, the Mandelbrot set; the amazing thing about the Mandelbrot set is how easy it is to generate such a complex structure. We can zoom into any part of the structure below and see more and more detail. Mathematics is the study of why such a thing is as it is, rather than just how to make such a thing.


Image by Wolfgang Beyer
I remember playing with Mandelbrot sets as a child, before I understood complex numbers, to me they were a problem in programming and a source of wonder as I plunged ever deeper into a pattern that just kept developing. Have a look yourself with this applet... *time passes as I re-acquaint myself with an old friend*. There is something of this towards the end of Carl Sagan's novel Contact, where the protagonists discover a message hidden deep within the digits of π.
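The recipe really is short: iterate z → z² + c starting from z = 0, and a point c belongs to the set if the iteration stays bounded. A minimal sketch in Python (my own illustration, not code from the original post) that draws a rough ASCII rendering of the set:

```python
def mandelbrot_escape(c, max_iter=50):
    """Return the iteration count at which |z| first exceeds 2,
    or max_iter if the orbit of z -> z*z + c stays bounded."""
    z = 0
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

def ascii_mandelbrot(width=60, height=24, max_iter=50):
    """Sample the rectangle [-2, 1] x [-1.2, 1.2] on a character grid,
    marking points that never escape with '#'."""
    rows = []
    for j in range(height):
        y = 1.2 - 2.4 * j / (height - 1)
        row = ""
        for i in range(width):
            x = -2.0 + 3.0 * i / (width - 1)
            row += "#" if mandelbrot_escape(complex(x, y), max_iter) == max_iter else " "
        rows.append(row)
    return "\n".join(rows)

print(ascii_mandelbrot())  # prints a rough ASCII silhouette of the set
```

Zooming in is just a matter of shrinking the sampled rectangle; the complexity comes entirely from that one-line iteration rule.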

I fiddle with numbers when I see them, and I suspect mathematicians do too. So my Girovend card showed 17.29 recently which, without the decimal point, is 1729 = 1³ + 12³ = 10³ + 9³, the smallest number that has the property of being the sum of two different pairs of positive cubes; it's also a famous piece of mathematical lore. The numbering of the chapters of "The Curious Incident of the Dog in the Night-time" by Mark Haddon with consecutive prime numbers also appeals to me.
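That "smallest such number" claim is easy to check by brute force; a quick sketch in Python (my own illustration, not from the original post):

```python
def cube_pair_count(n):
    """Count unordered pairs (a, b) with 1 <= a <= b and a**3 + b**3 == n."""
    count = 0
    a = 1
    while 2 * a ** 3 <= n:
        # Candidate partner for a: the rounded cube root of the remainder.
        b = round((n - a ** 3) ** (1 / 3))
        if b >= a and a ** 3 + b ** 3 == n:
            count += 1
        a += 1
    return count

# Search upward for the first number with two distinct cube-pair decompositions.
smallest = next(n for n in range(1, 2000) if cube_pair_count(n) >= 2)
print(smallest)  # 1729, since 1**3 + 12**3 == 10**3 + 9**3 == 1729
```

The rounded cube root is safe here because the numbers involved are tiny; for large n you'd want an exact integer cube root instead of floating point.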

It's become a tradition that I find ways to annoy the people I visit, and there's no escape for mathematicians here. It seems the best way to annoy a mathematician is to assume they can do useful arithmetic, like calculating a shared restaurant bill. Interestingly, though, this may be a poor example, since fair division methods for important things, like cake, are an area of mathematical research.

It's true that some mathematicians are a bit odd, but then so are some physicists and to be honest if reality TV has taught us anything, it's that the world is full of very odd people in every walk of life. So if you meet a mathematician, don't be afraid!

*A levels are the qualification for 18 year olds in the UK, when I was a student you would study for 3 or 4 A levels for 2 years.

Update: Since writing this I've discovered a couple more interesting sites for fractals, and for want of a better place to put them I record them: here you can find a pretty rendering of the quaternion Julia set, and here is an in depth exploration of the Julia and Mandelbrot sets (1/1/10).

Sunday, December 06, 2009

A jigsaw not a House of Cards

This is the blog post I was never going to write; it's about climate change.

This post will contain pretty much no science, if you're interested in that then I've put some references at the end. Instead this is a post about my personal journey with anthropogenic global warming (AGW).

So why did I decide to write this now? A couple of reasons really: the Copenhagen Climate Summit is next week, but mainly climategate - the publication of leaked e-mails and data from the University of East Anglia Climatic Research Unit. I was listening to the Today programme (Radio 4 in the UK) in the gym, and I was so frustrated by their climategate piece I had to turn the radio off and start to compose this post in my head. It's the mindless spouting of John Humphrys about something of which he clearly has no understanding that is frustrating.

My journey began in 2007 when "The Great Global Warming Swindle" (TGGWS) was broadcast. Ben Goldacre wrote about it on his blog. People I knew started talking about it and the balanced view it presented: yet it was utterly at variance with what I knew about global warming. I still haven't seen "The Great Global Warming Swindle", nor have I seen its antithesis, Al Gore's "An Inconvenient Truth". Subsequently OFCOM ruled that several scientists had been treated unfairly by TGGWS, and I decided that I was going to treat any further science broadcast by Channel 4 (the broadcaster) with overwhelming skepticism.

I say my journey began in 2007, but actually it began long before then. I've been reading New Scientist for the last 20 years; New Scientist is a popular science news magazine. I've also been reading Nature for quite a few years. Nature is a general scientific journal, with science news - getting a paper in Nature is like winning a gold medal at a national sporting event for a scientist. So by 2007 I had absorbed the conventional scientific view on AGW via stories in the scientific press, in much the same way as I hold conventional opinions on continental drift, evolution, the big bang, and mass extinction.

"The Great Global Warming Swindle" stimulated me to action, here was purportedly a scientific question which should have a scientific answer. First stop was the Intergovernmental Panel on Climate Change (IPCC) 3rd Assessment report published in 2001 succeeded closely by the 4th Assessment report published 2007. I focused on the Physical Science Basis (WG1): the Summary for Policymakers for this report is short (18 pages) and written in a pretty accessible style, the following chapters filling out the detail and referencing the primary peer-reviewed literature. This is the coal-face of science, it's where I publish my professional work. Ultimately I also got myself an undergraduate textbook on atmospheric physics and Spencer Weart's book on the history of climate science.

So how does a scientist get on outside their field? Well, the IPCC reports are no problem for me to understand. I can get on fine with understanding the abstracts of most of the papers in the primary literature, but I would really struggle to contribute to this literature, and I would not be able to pick out subtle errors or judge between opposing expert views. This is unsurprising, because these papers will have passed peer review, which means in most cases anything obviously wrong would have been weeded out.

I also plunged into the blogosphere, reading both contrarian and conventional sites. The only one I'll cite here is realclimate.org, a blog by climate scientists which reports on new papers in the literature - I still read this. It was in the blogosphere that I came to my conclusions on the contrarian view: it's wrong. I'll expand on that slightly: there's a huge bunch of stuff that's just rubbish from the scientific point of view; there's a smallish bunch of stuff where people from other areas of science have published work on climate science that may well be accurate for their field, but where the climate science looks a bit wobbly; there are a very few academic climate scientists who think that IPCC AR4 exaggerates the problem, but on the other hand there are quite a few academic climate scientists who think AR4 was overly optimistic.

Thus armed with knowledge, I set out to argue the scientific case for the existence of anthropogenic global warming... I think I gave up on arguing about AGW when, in an exchange on a discussion forum, I suggested that reading an undergraduate text in atmospheric physics was a useful thing to do in understanding climate science, and the response was "Just goes to show what you know". It seemed knowledge was counting against me - blind prejudice was what was required. I had mistakenly believed I was arguing a scientific case, when in fact most other people were having a political argument dressed up as a scientific one. Which is odd, really, because it never occurred to me that global warming was a political question.



It seems to be a characteristic of contrarians that they view climate science as a house of cards, that if they disprove the contents of a single scientific paper then the whole edifice will fall. Hence there are very long arguments about single papers such as the original hockey stick controversy. However, science is a jigsaw, not a house of cards: to test a piece of science you look at the pieces of science around it to see how they fit, rather than staring very hard at the one piece. This view of science also makes it less personal, nothing in science depends solely on the work of one person. If they didn't exist someone else would independently come up with the same result before too long.


People are naturally political, democratic animals. They like to consider both sides of a dispute and come up with what seems like a balanced solution. Science is not democratic; there is not a middle ground. If we vote on science, nature will not accommodate itself to the outcome.


References

Intergovernmental Panel on Climate Change 4th Assessment Report (known as IPCC AR4, published 2007)
Copenhagen Diagnosis (an update to IPCC AR4)
Physics of Atmospheres by John Houghton
The Discovery of Global Warming by Spencer Weart

Thursday, December 03, 2009

Confocal microscopy


Back to stuff I should know, in theory, at least.

I had my first microscope as a child, it was a small one but I had a great time looking at little creatures that lived in dirty pond-water. I remember spending a long afternoon trying to see transparent single celled animals, and finally getting the lighting just right to see an amoeba. I also remember trying to immobilise a tiny worm with white spirit - it exploded, but I like to think it died happy.

This week I shall mostly be talking about 'confocal microscopy'; this is a type of light microscopy (as opposed to electron, infra-red, x-ray, scanning tunnelling, or atomic force microscopy). The smallest thing you can see with a light microscope is about 1 micron across - that's a thousandth of a millimetre; a human hair is about 80 microns in diameter. A normal light microscope gives you a nice focused picture of a slice of your sample at the "focal plane", but it also lets in loads of light from parts of your sample away from the focal plane, which leaves you, overall, with a bit of a blurry picture. Microscopists get around this problem by slicing their samples up very thinly - hence no bits to be blurry - but this is a fiddly procedure and leaves your sample very dead, even if it started alive.

Confocal microscopy is a technique by which the slicing of the sample happens virtually: you can put a big fat sample in the microscope and, by the use of cunning optics, you only get an image from the focal plane, which is lovely and sharp. You can build up a 3D picture of the sample by moving it up and down in front of the lens. Marvin Minsky was the original inventor of the confocal microscope in about 1955, but was somewhat held back by the lack of lasers, computers and stuff. Things picked up again in the 1980s as these things became readily available. Oops, I think that might have been some cod history ;-)

An interesting feature of the confocal microscope is that if there's nothing in the focal plane, you don't see anything (unlike a conventional light microscope, where you can always see a big bright something, even if it's blurry). This can be disconcerting for the learner - you can't find your sample!

Every microscopy needs a contrast mechanism, a way of separating one thing from another. In confocal microscopy by far the most popular contrast mechanism is fluorescence, via the use of a fluorescent dye to label bits of your sample. If you illuminate a fluorescent dye with light of one colour it emits light of another colour (making it stand out particularly well). If you ask an organism nicely (okay - genetically engineer it), you can get it to make Green Fluorescent Protein (GFP), which is a protein that fluoresces green (duh!). All that remains is to find a way of sticking the fluorescent dye to the thing in which you're interested.

In each post about science I like to add a little fact to help you wind up / avoid winding up practitioners in that field. So to wind up a microscopist: project an image onto a screen for a presentation and claim "x800" magnification (or whatever). The problem is: to what does "x800" magnification apply? Is it what the microscope told you when you looked through the eyepiece? Is it the magnification on the printed page, on the computer screen, or on the wall? We really doubt you know. It's scale bars all the way.

For several years I was the proud keeper of a confocal microscope. I, and my students, had great fun with the microscope, and it had fun with us. The pointy end of the microscope is the objective lens, the bit closest to the sample. A fancy microscope like our Zeiss LSM 510 had 5 or more objectives mounted on a turret (see the image at the top of the post); each objective gives a different magnification. The Zeiss LSM 510 was fully motorised, and too clever by half. It would assume that you wanted to stay focused on the same part of the sample when you changed objectives (or it changed them for you, with its motors). Now the problem is that for a x10 objective the focal plane is about 1cm from the front of the objective lens, while for a x40 objective it could be only a tenth of a millimetre. Imagine I've just focused deeply inside my sample using the x10 objective, and I switch to the x40 objective on the computer... the microscope mashes the x40 objective lens into the sample, blithely ignoring the sound of a £6000 lens smashing the glass coverslip and covering itself in sticky sample!

In later posts I'll show some of the results from the confocal microscope in non-mashy-lens-into-sample mode.

Here are some images; these are all slices through solid objects. I didn't really plan how to explain what's in the first three images, but roughly they're what you get if you add a small amount of salt water to Fairy Liquid (although I would prefer you to use Persil washing-up liquid). First up is a cross-section through an "onion-type micelle":



And these are the structures you see in a similar system but with a different concentration of water:





This is a false-colour image, a bit lurid - I don't know what I was thinking at the time. These structures are known as "myelin":



Pollen grains are always popular - I stole this one from here. Each of the images is a slice, and the inset at bottom right is the result of adding all the slices together.

Wednesday, December 02, 2009

There may be blue skies ahead

You have to feel sorry for Lord Drayson. At a time when he is doing his best to stand up for science and the funding of science in very difficult economic circumstances, when every department in government must show its worth, scientists appear to be trying to hack his legs off by balking at the proposal that they should explain their impact on society and insisting that they should be left to do 'blue skies' research. I refer to the recent debate hosted by the Times Higher Education Supplement (THES), "Blue skies ahead? The prospects for UK science", a discussion which centred on the impact of science on society and how you might increase that impact, how impact is evaluated, the role of blue skies research, and the crisis at the Science and Technology Facilities Council (STFC).

In this post I'll try to explain why scientists are so impassioned about the subject of impact assessment and make a couple of suggestions as to how we might collectively do this better.

Impact means several things in this context: there is the HEFCE impact exercise, which is retrospective and currently in its pilot phase, and there are the impact statements in grant applications, introduced this year, which should provide a prediction of the economic and societal impact of the work proposed. I suspect it is the impact statements in grant applications which are causing the real concern here, and I also suspect that at the THES event Lord Drayson was talking primarily about the former, while the audience and panel were talking about the latter.

Before I go on, a short autobiography to provide context for my comments: I'm currently a research scientist in a large company, where I've been for 5 years, but until the age of 35 I was an academic scientist, rising to the position of tenured lecturer in the physics department of what is now Manchester University. The opinions I present here are entirely my own.

Putting aside the question of how good a scientist I am, I can tell you one thing for certain: I'm incredibly bad at writing successful grant applications! Really bad, awful, abysmal. I wrote about 8 over my relatively short period as a grant writer and they all failed (and not even by a small margin). I don't think I'm alone in this.

Exactly what grant applications mean varies from subject to subject, but in my field - experimental, laboratory-scale soft matter physics - they're important: in order to do research you need people and equipment. Typically grant applications are written to fund 2-3 years of a postdoctoral research assistant's time, or a PhD student, and some equipment. Your department will rate itself on the grant funding it obtains, and you on your contribution to that figure. You will definitely feel that winning grants is something you have to do to succeed in your job. You can eke out an existence without grants - funding students from other sources, helping colleagues with successful grant applications, throwing yourself into teaching - but it doesn't feel like the way you're supposed to do it.

I find it difficult to put into words how much I loathed the whole grant application process. A grant application requires you to describe the research you're going to do over the next few years, and how the results will be world-class. The average success rate in the field in which I was applying was about 20% (disputed). Once written, grant applications are sent to reviewers - that means people just like you: it's peer review. Reviewers know full well that if they rate an application "excellent" rather than "outstanding" in any one of several areas they are damning it to failure. Reviews go forward to a panel who plough through huge numbers of applications (about five times as many as they are going to fund), then rank them. Funding is given to those at the top of the list, working down until the available cash is exhausted. There are serious questions as to whether we can rank schools accurately; here we try to do it for world-class research. It's a grim process for all involved.

It's in this context that the new impact statements are introduced; potentially they contribute 25% to the grant decision. As a grant application writer I need this like I need another hole in the head! Writing the science part of a grant application is a work of pure fiction (I think it might have helped if I'd appreciated that when I was doing it); writing the impact statement - "the demonstrable contribution that excellent research makes to society and the economy" - goes beyond that. It is forward-looking: you're being asked to quantify the future economic value of that bit of world-class research you're going to do.

The problem is that university science can take a long time to trickle out. In a case I highlighted yesterday, in what is a pretty applied area, research was having a very direct, specific impact 40 years after it was done. Prior to my grant-seeking days my work as a PhD student, postdoc and ADR was funded, at least in part, by three separate companies - little word of the economic impact has made it back to me from those companies.

So this brings me to some concrete points:
Point 1: For many areas of research, societal and economic impacts are diffuse and long term, and the academic proposing the research is not actually in the best position to determine those impacts. As an industrial researcher I've written business cases for doing external research; this is a very revealing exercise which I know I couldn't have done as an academic, because I simply wouldn't have had the required information to hand. Impact statements should not be required on a "one per application" basis: they should cover whole subject fields and be written in consultation with people who have actual data on the economic and societal impacts.

Moving on to a second point: Lord Drayson very generously praised scientists in Britain for the quality of the science they do, but said the problem lay in how this expertise is translated into wider society:
Point 2: Impact statements are purely about scientists applying for grants in universities. The onus seems to be on scientists to fix a problem which has two sides. What are we doing about how society and industry interact with science?

In the end impact is about communication: it's about understanding the preconceptions that other people bring to the party and addressing those preconceptions in the way you communicate. Although I described myself as a research scientist, I'm actually a science communicator in an industrial environment.

This whole blog is really about communicating what it's like to be a scientist, how it feels, the little details of the tribe. The people I follow on twitter are all very clever, they do lots of different things and from a combination of tweets and blogs you learn about their lives. This is my contribution to that discussion, it's a societal impact.

Wordless Wednesday


Tuesday, December 01, 2009

A short story about scientific impact

I work for a company, I won't tell you who.

40 years ago a man wrote a paper, published in a scientific journal, you won't have heard of it, it wasn't important.

A couple of years ago I read the paper, and I carried out the calculations described in it. I showed the results to the team I worked with. The results showed that what we were doing wasn't going to work. So we stopped the project.

There were 10 people on the team, let's say they each cost my company £100,000 a year. Let's be harsh and say that my presentation only accounted for 10% of the decision to stop.

That man saved us £100,000.

I wish I could tell the man that wrote the paper, so he can put it in his next impact statement.

Update: Thanks to @dr_andy_russell for pointing out my rather significant typo!

Thursday, November 26, 2009

Wonderful Life


You might have noticed I'm happy to go off into areas of science which are not my own with gleeful abandon, applying a physicist's mind to what I find. This week I'm off to see the biologists, Mrs SomeBeans is a zoologist - so in a sense, I sleep with the enemy. Twitter puts me in touch with so many more of the biological persuasion.

Animals are very well covered in TV documentaries but their view is somewhat partial, favouring the furry and cute, but there's so much more.

I think my appreciation for the wonders of the tree of life was first stimulated by "Wonderful Life" by Stephen Jay Gould. The book describes the fossils uncovered in the Burgess Shale, a collection from almost the very earliest animal life in the fossil record, around 500 million years ago. Gould makes two key points: first, that the so-called Cambrian explosion threw up a very diverse range of body plans; second, that the body plans which survived did so almost by chance - there wasn't anything obviously superior about them. Richard Dawkins's book "The Ancestor's Tale" is also well worth a read.

I should explain "body plan": this is the overall layout of the animal. So for the tetrapods (including mammals, reptiles, amphibians and birds) you get four appendages. For insects, the plan is six legs, three body parts and an exoskeleton. I'm reliably informed that snakes are tetrapods, although I'm struggling with this, particularly since one of my advisers previously tried to persuade me that mohair came from mo's. The basic rule seems to be that you can lose bones or fuse bones, but not gain them.

Once you appreciate this body plan stuff you start to get offended by representations of mythical beasts like angels and centaurs: they are clearly mammalian, so angels can either have wings or they can have arms, they can't have both! Similarly, centaurs can have two pairs of horsey legs and no arms, or one pair of horsey legs and a pair of arms; what they can't have is two pairs of horsey legs and two arms. The only reason I'm letting the fairies off is that I suspect they might be insects.

As a physicist, I really like this approach. Physicists basically have poor memories, which shapes their approach to science: they like nice simple rules that encapsulate as much information as possible. You will also spot them looking for "the simplest possible" model. So being freed from the requirement to learn lots of animal names, by the simple expedient of calling them all 'tetrapods', is great. It's true that the tree of life is more complicated than that, but the principle is there. If you want to go into this in more depth, the technical name for the study is phylogenetics... *time passes as I get completely distracted*

At this point I originally made my usual error of referring to other living animals as being further down the tree of life: they're not, we're all leaves on the surface of the tree. A good way to wind up a biologist is to refer to another currently living species as 'primitive'. They don't like this because, as they point out, it has been evolving for just as long as us! So, rather than referring to them as 'primitive', here are a couple of distant leaves. Hagfish are the only living animals to have a skull but no vertebral column; they evade capture by covering themselves in slime. And as for tunicates, they start with a notochord (a precursor to a spinal cord) as larvae but give it up as adults, which indicates a certain bloody-mindedness that I admire. (It's a tunicate that decorates this post, at the top.)

Rather more interesting than yet another furry animal...

Thursday, November 19, 2009

Not Waving but Drowning

I thought I'd write a little post about Google Wave, and more generally the uptake of new software.

Wave is a recent innovation from Google, currently in restricted beta. It's a combination of e-mail, instant messaging and wiki. So if you ever feel the need to use e-mail, instant messaging and wiki functionality simultaneously then this is the app for you. Actually, that's a little unfair.

This is what it looks like:


So, broadly it looks like an e-mail client. I won't describe the details here but you can get a better idea from the very fine Complete Wave Guide.

At the moment the problem with Google Wave is that a relatively small number of people have been given the equivalent of e-mail addresses and told to randomly e-mail each other. Unsurprisingly, they're spending a lot of time "e-mailing" each other about Wave. However, there is serious potential in Wave. Aside from the traditional Waving about Wave, I got lucky and managed to have a useful non-Wave discussion on Wave. It started with a thing that looked like an e-mail (I had a bunch of questions); my interlocutor started answering the questions in real time. Wave allows you to launch a chat at any point in an e-mail; not only that, it allows you to see your colleague typing character by character. In earlier forays this had been irritating, but for a 'real' question it was actually rather useful - I could see my colleague getting the wrong end of the stick and put him right promptly. We managed to flit usefully between several conversations, adding things as they occurred to us, and the end result is a nice record of a branching conversation.

As a wiki / document preparation system I'm less convinced. The core formatting available in Google Wave is fairly basic, although it can be extended significantly using gadgets and robots, but I can't see it being a comfortable way of working together on anything other than quite a short document.

I can see this being a great replacement for interminable e-mail threads between multiple participants, and even a way of writing minutes for a bunch of people sitting in a room with each other. If Google Wave were ubiquitous and people were willing to use it, then it would be a significant improvement over e-mail. Ubiquity may be attained in the future for the domestic user - I could imagine it becoming available as an option in Google Mail. To be honest I see far more applications for the business user and there, for larger organisations, uptake will be much slower.

This leads to another issue: even if they have access, will people use such a tool? I'm dubious about this. I work in the research arm of a large commercial organisation, so you'd expect this group to be more tech-savvy than average, but uptake of new software is pretty slow: people have taken to instant messaging but not to wikis, reference management software or revision control - all things you might expect them to find useful. Once you get past the standard business suite of Word, Excel, PowerPoint and Outlook, enthusiasm peters out, and arguably mastery of even these applications is limited. This isn't a criticism of my colleagues; it highlights how difficult it is to gain traction with new software. When it comes down to it, remembering how to use software is hard and unnatural, so not surprisingly we don't do it very well.

Ultimately Google Wave is another tool, in a toolbox that is overflowing.

Sunday, November 15, 2009

The past is a foreign country


I've been hanging out with historians recently (both online and in real life), so it got me thinking about how scientists treat history. The 150th anniversary of the publication of "On the Origin of Species" is coming up too, so it seemed like a good time to write this post.

My impression is that historians are concerned with the reading of contemporary material, and drawing conclusions from that material; a realisation I came to in writing this is that historians seem to have the same sense of wonder and passion for historical minutiae as I have for nature and science. I remember talking to a historian of science who was working on an original manuscript of some important scientific work; it quickly became clear that this was much more exciting for her than for me. To me the exciting thing was the theory presented in its modern form; I wasn't very interested in the original.

In science it isn't the original presentation that's important: I haven't read Newton's Philosophiæ Naturalis Principia Mathematica, Maxwell's A Treatise on Electricity and Magnetism, any of Einstein's four "Annus Mirabilis" papers, Galileo's Dialogue Concerning the Two Chief World Systems, Darwin's On the Origin of Species, the list goes on...

And that's not to mention the real contemporary material: correspondence, notes and lab books. I have a sequence of about 20 lab books in the loft from 15 years of research, supplemented by a hoard of files and e-mails stored on my computer covering the same period. I'm not sure I even want to try to reconstruct what I was thinking over that period - let alone try it on someone else's records! It's not that I'm remiss as a scientist; we just don't read original material.

The original presentation of an idea may not be the clearest, and it may well make more sense later to present it as part of a larger whole; and, to be honest, scientists can be a bit hit and miss: Newton's physics is great but his alchemy was bonkers. Science comes in bits - these days the bits are the size of a journal article - and it's only when you're doing active research at the cutting edge that you need to keep track of the bits.

Mathematical notation is an issue for original publications. For example, Maxwell's equations, which describe electromagnetism (radio waves, electricity, light...), are a monster in his original presentation but can be squished down to four short lines in modern notation (actually a notation introduced not long after his original paper). There's a rule of thumb that each equation in an article halves the number of readers, so instead I link you to Maxwell's 1865 version on page 2 of this document, with the modern version at the bottom of page 6... impressive, no?
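For the curious, the four short lines in question are Maxwell's equations in the modern vector notation (largely due to Heaviside), here in SI units with E and B the electric and magnetic fields, ρ the charge density and J the current density:

```latex
% Maxwell's equations, modern vector notation, SI units
\nabla \cdot  \mathbf{E} = \frac{\rho}{\varepsilon_0} \qquad
\nabla \cdot  \mathbf{B} = 0 \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J}
  + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
```

Compare that with the twenty-odd component equations of the 1865 paper and you can see why the notation caught on.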

A bit of history is introduced into the teaching of science, but it's either anecdotal - the apple falling on Newton's head, Galileo dropping things off towers, Sadi Carnot and his wacky exercises - or a quick historical recap as we introduce a subject. But to be honest it's really all window dressing: the function of this history is to provide a little colour and give students the opportunity to do some exercises which are tractable.

Are scientists losing out as a result of this historical blindness? History should certainly inform us of our place in society, and our future place in society (okay - I'm talking about cash here!). I'm less sure that it has something to teach us about the 'craft' of science - that comes from professional training - though perhaps it would help if we were not presented with such caricatures of our scientific heroes.

So that's my view, how wrong can I be?

Monday, November 09, 2009

Pretty molecular models

And now I leap off into a topic in which I am not properly trained: molecular biology!

You sometimes get the impression that scientists lead dull lives because they over-analyse things, that they've lost their sense of wonder. The thing is: the more you know, the more you wonder.

One step up from atoms you find molecules: atoms bound together. Starting simple, here's caffeine:


As every chemist knows, carbon (C) atoms are black, nitrogen (N) atoms are blue, oxygen (O) atoms are red and hydrogen (H) atoms are white. (Not really, but those are their traditional colours in molecular models.) Isn't it beautiful? You can play with an interactive version here. In real life chemistry is messier than this, which is why I'm a physicist rather than a chemist.

The caffeine molecule is about 1 nanometre across, 1 (US) billionth of a metre. To give you a feel for the size of a nanometre, think of a grain of rice - about 1mm across - and now imagine a kilometre. Walk your kilometre with the grain of rice: I walk a kilometre in about ten minutes and it takes me past two roundabouts, a gym and a postbox. Now look at your grain of rice again. To a caffeine molecule, a grain of rice is a kilometre wide.
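That factor-of-a-million-twice analogy can be checked in a couple of lines (using the rough sizes quoted above):

```python
import math

# Rough length scales in metres, as quoted in the post.
caffeine = 1e-9    # a caffeine molecule, ~1 nanometre
rice = 1e-3        # a grain of rice, ~1 millimetre
kilometre = 1e3    # the ten-minute walk

# Both steps are the same jump: a factor of a million,
# so rice-to-caffeine really is the same ratio as kilometre-to-rice.
print(math.isclose(rice / caffeine, 1e6))   # True
print(math.isclose(kilometre / rice, 1e6))  # True
```

(The `math.isclose` is just there to forgive floating-point rounding in the division.)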

Molecular models of this sort are a representation of reality. The things they miss out are: (1) in real life molecules are not static - they're jiggling away furiously through the action of thermal energy; (2) generally they're surrounded by solvent molecules (often water), which are also zipping and wiggling around; (3) they're sort of soft, fuzzy and deformable, and different parts of the molecule will be sticky or slippery according to their chemical nature. Ten years ago a good question at any molecular modelling seminar was to ask about the solvent molecules; the usual answer was "there aren't any" - which usefully puts molecular modellers in their place, since we're rarely interested in molecules without solvent. Perhaps things have moved on since those days.

Life specialises in bigger molecules than caffeine, exquisitely crafted into little machines. And the incredible thing is that all of life (humans, mammals, reptiles, birds, snails, bees, tardigrades, sponges, plants, algae, bacteria, fungi, weird bacteria that live in hot underwater vents) shares the same 4-letter DNA code, which codes for the same set of 21 amino acids which build all the proteins needed to make life. Many of the proteins themselves aren't hugely dissimilar across the plant and animal kingdoms, particularly those to do with the most basic operations (processing DNA, converting food to energy).

Proteins are strings of amino acids: each different type of protein has a different sequence of amino acids.
Protein molecules typically contain many (a hundred or more) amino acids. The amino acid sequence is known as the primary structure, next up is the secondary structure: alpha-helices and beta-sheets. Different amino acid sequences can produce alpha-helices and beta-sheets that look the same. These structures are represented using "ribbons":



This is a model of lysozyme: the alpha-helices are shown in red, the beta-sheets in yellow, and bits of "random coil" amino acid sequence in green. Lysozyme is about 5 nanometres from one end to the other. You can play with an interactive version here. The amazing thing about proteins is that their 3D structure forms spontaneously and very rapidly when they are synthesised in the cell - a process known as 'folding'. Furthermore, the folded, or tertiary, structure of the protein is the same every time - it has to be, or the protein won't do its job. One of the great challenges in molecular biology is that, despite knowing the amino acid sequence of a protein from the DNA which encodes it, working out the 3D structure remains a question of measurement, or of comparison with other sequences of known folded structure.

Lysozyme is a physicist's protein: you can buy it in bottles by the gram. I've worked on lysozyme, looking to see how it unfolds on a surface when heated.

You can see more protein structures at http://proteopedia.org/; the lysozyme model above is 132L. I could play on there for hours...

References
Green, R.J., Hopkinson, I. & Jones, R.A.L. Unfolding and intermolecular association in globular proteins adsorbed at interfaces. Langmuir 15, (1999), 5102-5110.

Saturday, October 31, 2009

Bryn Alyn - an autumn walk

Off to Llanferres yesterday for an autumn walk through the woods, along the ridge and back again. You can see the route here:


View Bryn Alyn in a larger map

It's a variant on a route in "Walking in the Clwydian Range" by Carl Rogers. A tiny domestic detail: whilst walking, the Inelegant Gardener carries the guidebook, preferring to navigate via prose, and I carry the OS map, preferring maps. The track is from a Garmin GPS60; I don't use it for navigation but to tag my photos with location information, for which I've written a little program.
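I haven't seen the little program in question, but the core of any such geotagger is the same idea: match each photo's timestamp to the nearest point in the GPS track. A minimal sketch (the function name, field layout and coordinates below are mine, purely for illustration, not the author's):

```python
from datetime import datetime, timedelta

def nearest_trackpoint(photo_time, track):
    """Return the trackpoint closest in time to when the photo was taken.

    `track` is a list of (datetime, latitude, longitude) tuples, such as you
    might parse out of the GPX file a GPS receiver records.
    """
    return min(track, key=lambda point: abs(point[0] - photo_time))

# A toy track: three points a minute apart (illustrative coordinates only).
t0 = datetime(2009, 10, 30, 11, 0, 0)
track = [
    (t0,                        53.1375, -3.2290),
    (t0 + timedelta(minutes=1), 53.1380, -3.2275),
    (t0 + timedelta(minutes=2), 53.1386, -3.2260),
]

# A photo taken at 11:01:10 picks up the 11:01 trackpoint's coordinates.
photo_time = t0 + timedelta(minutes=1, seconds=10)
print(nearest_trackpoint(photo_time, track))
```

A real version would read the photo time from the JPEG's EXIF data and write the latitude and longitude back into it, but the matching step is the whole trick.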

We last did this walk in May, when I committed a terrible faux pas: the battery on my camera went flat, and for some reason I'd not brought a spare and had deliberately left behind both my second camera and my phone (which also has a camera, albeit a really crap one).

Beech trees were definitely the best for autumnal colours, although birch produces an attractive pointillist effect; sycamore seemed best for kicking through.




At the top of the initial climb there is a little bit of limestone pavement. It is most famously found above Malham Cove, but it's nice to find your own little patch.


Limestone pavement is formed when slightly acidic rain water dissolves the limestone producing deep fissures (grykes) between remaining blocks (clints). It looks like a fantastic place to break your ankle.

For reasons I can't explain I like the stray bits of ironwork left over from old fencing, parts of the Lake District are particularly good for this.


And to top it all off, a cow wearing a ginger wig:


This was one of many cows in a field we passed through; they were fine-looking beef cattle in a wide range of colours. We also had a "That's no cow, it's a bull" moment, but I was reassured by vaguely remembering someone saying that you're okay in a field with a load of cows and a bull, because the last thing on the bull's mind is going to be you. If this isn't actually true then I'd prefer to be left in ignorance, if you don't mind.

Friday, October 30, 2009

Professor Nutt and the classification of harm through the misuse of drugs

The sacking of Professor Nutt (now ex-head of the Advisory Council on the Misuse of Drugs) by the Home Secretary, Alan Johnson, has been in the news today. The immediate cause of his sacking appears to have been this recently published paper, originally presented as the 2009 Eve Saville Memorial Lecture at the Centre for Crime and Justice Studies at King's College in July 2009. The lecture appears to have been a policy discussion based in part on his classification of relative drug harm, first published in The Lancet in 2007:
Development of a rational scale to assess the harm of drugs of potential misuse, David Nutt, Leslie A King, William Saulsbury, Colin Blakemore, The Lancet, vol. 369, (2007), p1047-1053.
This classification of harm was based on assessment by two sets of experts: the first, 29 specialists in addiction from the Royal College of Psychiatrists' register; the second drawn from a wider community with members "ranging from chemistry, pharmacology, and forensic science, through psychiatry and other medical specialties, including epidemiology, as well as the legal and police services". The basic scheme was to ask these experts to assess the harm caused by a set of 20 substances (mainly illegal, but including alcohol and tobacco) on a set of 9 measures:


This is done iteratively using what is called a 'delphic process': the experts make their initial numerical assessments independently in a first round, but can then modify those assessments once they have seen and discussed the assessments made by others. Once some pre-determined finishing criterion is reached, the average scores for each measure are combined to produce an overall measure of harm. The experts are told of the substances in question in advance so they can read up on them. The rankings produced by the two separate groups appeared to be very much in agreement. The resulting mean harm scores for the twenty substances are shown in the following graph:



The interesting thing about this ranking is that tobacco and alcohol (which I'm currently enjoying in the form of a fine Chardonnay) are found in the middle of the range: below heroin and cocaine, but above cannabis and Ecstasy. A statement which in part has earnt Professor Nutt his dismissal.
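The averaging at the heart of the process is simple enough to sketch. Here is a minimal illustration, with made-up substance names and scores - the real study used nine measures and twenty substances, and these numbers are not the paper's data:

```python
# Each substance gets a panel-agreed score on each assessment measure
# (the Lancet study used nine measures); the mean over measures is the
# overall harm score, and substances are ranked by it.
# All names and numbers below are invented for illustration.
panel_scores = {
    "substance A": [2.8, 2.5, 3.0, 2.7],
    "substance B": [1.9, 2.2, 1.5, 1.8],
    "substance C": [1.0, 0.8, 1.2, 0.9],
}

mean_harm = {name: sum(s) / len(s) for name, s in panel_scores.items()}
ranking = sorted(mean_harm, key=mean_harm.get, reverse=True)
print(ranking)  # most harmful first: ['substance A', 'substance B', 'substance C']
```

The delphic part of the process is in how the scores are arrived at (independent rounds, then discussion and revision); the final ranking is just this mean.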

Now, you could argue that the Lancet paper is flawed, and Professor Nutt himself makes suggestions for improvements in methodology, but the thing is: there is no competition. The current classification of drugs into classes A, B and C is not made on an assessment of harm based on any published or transparent criteria. If Alan Johnson wants to argue that Professor Nutt is wrong in his evaluation of the relative harm of drugs, he should do so on the basis of a transparent evaluation process, not because he just doesn't like the advice he's been given.

Though I have not focussed on it in this post, the Eve Saville lecture includes this assessment of harm along with a discussion of other issues, including the media reporting of deaths through drug misuse. It also includes some support for elements of government policy on drugs; in particular he says:
One thing this government has done extremely well in the last ten years is to cut away much of the moral argument about drug treatments. They have moved in the direction of improving access to harm reduction treatments, an approach that, I think, is wholly endorsed by the scientific community and by the medical profession.
Update
1st November 2010: Professor Nutt has published an improved version of this study in The Lancet (pdf); the process used is a little different and an attempt has been made to improve the relative weighting given to different harms. The revised study finds heroin, crack cocaine and metamfetamine most harmful to individual users, and alcohol, heroin and crack cocaine most harmful to others.

Who Dr.?


After the big and shiny experience as an undergraduate I went off to do a PhD, to make me into a Dr. This was something I'd intended to do since a visit to the Campden and Chorleywood Food Research Centre as a school student; there we were shown around the labs and I was convinced that a career in science without a PhD was going to be a serious uphill struggle, involving the cleaning of much lab glassware.

The exact nature of a PhD varies from country to country and from subject to subject. In the UK a PhD in physical chemistry typically takes three years, and the supervisor will usually have a big say in what the student does.

I did my PhD at Durham University in the Interdisciplinary Research Centre for Polymer Science, supervised by Prof. Randal Richards. The prime motivation for this particular PhD was the cash: it was funded by Courtaulds plc and paid a research assistant's salary. It also got me back to more big and shiny science, in the form of the neutron source at the Rutherford Appleton Laboratory (RAL), with the added benefit that a very skilled technician made my polymers for me. This was good because I've never been "at one" with synthetic chemistry; the untidiness of the process didn't suit my temperament. Apocryphally, the start of polymer science was a bit slow because the early polymer synthesisers couldn't crystallise their materials, which led to much derision from other synthetic chemists, who made lovely crystals from their materials rather than the black sludge the polymer scientists produced. The molecular nature of polymers wasn't appreciated until the 1920s, which is really rather recent.

So for three years I slaved away: I prepared samples, spinning thin films onto lumps of shiny flat silicon; I went down to RAL for 48-hour experimental runs; I wrote FORTRAN programs to do data analysis; I read journal articles; I attended conferences, made posters and gave presentations. I observed, from a small distance, the activities of synthetic chemists.

The chap over the desk from me was a historical re-enactor; I watched as he made his own chain mail.

It was whilst I was writing my thesis, entitled "Surface composition profiles in some polymer mixtures", that I first met the elephant of despair. The elephant of despair lived in the library; he was made of a transparent material so you could scarcely see him, and he was only about six inches tall. He stood in the gaps between the journals, waiting for the day I would arrive to find an article and discover on the way a paper, published ten years earlier, that captured most of what I'd slaved over for the previous three years. His plaintive trumpeting has haunted me on and off through the years.

I think the day I decided I wasn't going to make an effort to get "Dr" onto all my paperwork was the day I was in the bank and the man in front of me was having a lengthy discussion with the cashier because the printed numbers in his savings book did not line up with the ruled lines. After he'd left, the cashier turned to her colleague and said: "He had to complain, he was a doctor". As it stands, the only people who call me "Dr Hopkinson" are my parents, one of my credit cards and the odd polite student.

For reasons I don't understand, medical doctors appear to refer to PhDs as "proper doctors", whilst I've always considered myself a bit of a fraud because I'm not a "proper doctor" - one who could potentially save your life. Perhaps they're just being polite.

And now I'm nearly a PhD grandfather: I've supervised three PhD students of my own, and one of them has a student who is about to do her viva. I don't have children, but I feel very 'parental' about my students - I'm immensely proud of them and their achievements.

Thursday, October 22, 2009

Talkin' about my generation

My generation have all been wallowing in nostalgia at the Electronic Revolution strand on BBC4, in particular Electric Dreams - reliving the '80s - and Micro Men - the story of Sinclair and Acorn computers. We grew up in a golden age for programming: the generation before us had no hardware, and the generation after us had no need to write their own software. We programmed because we had to.

I had a Commodore VIC20: cheaper than the BBC Micro, more classy and substantial looking than the Sinclair ZX81, and available slightly before the ZX Spectrum. All of these lovely old machines are available for your viewing pleasure at the Centre for Computing History, along with many others. Look around the internet and you can also find all manner of emulators and manuals for these early machines. We wrote our own programs, or we typed in games from magazines - often a rather lengthy and error-prone process.

I found the "VIC20 Programmer's Reference Guide" here, re-typed by Asbjorn Djupdal. Here's a snippet: a program which allows you to enter the scores in each quarter of an American football game and then prints them out on screen in a table:

100 DIM S(1,5), T$(1)
110 INPUT "TEAM NAMES";T$(0),T$(1)
120 FOR Q = 1 TO 5
130 FOR T = 0 TO 1
140 PRINT T$(T),"SCORE IN QUARTER" Q
150 INPUT S(T,Q)
160 S(T,0) = S(T,0) + S(T,Q)
170 NEXT T,Q
180 PRINT CHR$(147) "SCOREBOARD"
190 PRINT "QUARTER";
200 FOR Q = 1 TO 5
210 PRINT TAB(Q*2 + 9)Q;
220 NEXT
230 PRINT TAB(15)"TOTAL"
240 FOR T = 0 TO 1
250 PRINT T$(T)
260 FOR Q = 1 TO 5
270 PRINT TAB(Q*2 + 9) S(T,Q);
280 NEXT
290 PRINT TAB(15) S(T,0)
300 NEXT
Oh, this brings back memories!
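For anyone without a VIC20 to hand, here's roughly the same scoreboard logic sketched in Python - the team names and scores are invented, and the interactive INPUT loop is replaced by plain arguments:

```python
def scoreboard(team_names, quarter_scores):
    """quarter_scores[t] is a list of team t's scores in each of the five
    quarters the BASIC listing allows; returns the table as a list of lines."""
    lines = ["SCOREBOARD"]
    lines.append("QUARTER   " + "".join(f"{q + 1:>3}" for q in range(5)) + "  TOTAL")
    for name, scores in zip(team_names, quarter_scores):
        # One row per team: name, score per quarter, then the running total
        lines.append(f"{name:<10}" + "".join(f"{s:>3}" for s in scores) + f"{sum(scores):>7}")
    return lines

for line in scoreboard(["BEARS", "JETS"], [[7, 3, 0, 10, 0], [0, 14, 3, 0, 7]]):
    print(line)
```

Rather less charming than POKEing it in line by line, but at least the total comes out right first time.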

To me, programming and science (or at least physics) are intimately linked; almost the first programming I ever did was to visualise beat frequencies. To this day, if I want to really understand a scientific paper I'll implement the equations in a program; as often as not a few typos in the equations are revealed this way, and I'll have learnt exactly what the paper was on about. Teaching a student is a fantastic way to learn something, and teaching a computer is almost as good.
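That beat-frequency exercise is easy to recreate. Here's a minimal sketch in Python (the frequencies and sample rate are just illustrative) which also checks the textbook identity: two summed tones equal a carrier at the mean frequency modulated by an envelope at half the difference frequency.

```python
import math

def beats(f1, f2, t):
    """Sum of two pure tones of frequencies f1 and f2 (Hz) at time t (s)."""
    return math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)

def beats_identity(f1, f2, t):
    """The same signal via the trig identity sin a + sin b =
    2 sin((a+b)/2) cos((a-b)/2): a carrier at the mean frequency
    times an envelope oscillating at half the difference frequency."""
    return 2 * math.sin(math.pi * (f1 + f2) * t) * math.cos(math.pi * (f1 - f2) * t)

# Two tones 4 Hz apart produce an audible beat at 4 Hz; sample one second
samples = [beats(440, 444, n / 8000) for n in range(8000)]
```

Plot `samples` and the slow envelope is obvious - which is rather the point of doing it by computer.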

Most of the programming I do is of a workmanlike nature: it drives machines for measurements; it processes data; it analyses results; it computes equations. But there is scope in programming for a deep elegance, a pared-down beauty that's difficult to describe - it's like finding the answer to a cryptic crossword clue; perhaps for an artist it's like finding just the right line to give a character personality. It's an algorithm that does what it has to do with the least effort required. I still program a lot for my work (relatively small stuff that only I will use), and it's not unknown for me to waste an hour doing something elegantly rather than use the quick, dirty and obvious approach.

Programming is in my genes, in two ways really - my parents were both programmers from the sixties. We once found a leaflet advertising the Elliott 503 in our loft: 400 sq ft of '60s computer with substantially less processing power than the most lowly of today's devices - this is the computer on which my mum learnt to program. Dad started on an early Ferranti of some description in the late '50s.

Early programming, for me, pretty much amounted to shouting verbs at things - possibly because I used FORTRAN, which at the time was ALL IN CAPITALS. Programming today feels very different; it's more like visiting a library to get a book of spells to cast, or the singing of a choir. I still enjoy doing it - in fact I'm writing a Twitter client in C# just to see how it's done.

You might get the impression from all of this that programming is for the mathematically minded, but it isn't - it's really for the logically minded. For some applications maths is required, but otherwise it isn't.

I taught the basics of programming to first-year physics students a few years ago, and the thing that really shocked me was that, out of a class of fifty, only one had any real programming experience. There is hope, though - I suspect programming still holds a fascination. My single data point: a father and son sitting down to program the BBC Micro on Electric Dreams.

Tuesday, October 20, 2009

Twitter, rumours and physics

The Twittersphere avoided making a bit of a mistake this morning. Wikileaks had obtained a new version of the BNP membership list, which it released (the BNP claim the list is a fake). Prior to release it was claimed that a peer of the realm was on the list, and immediately post-release that peer was named. Only it turns out it wasn't him: someone who styled himself "Lord", with a very similar name, was the man on the list. Fortunately the released list was detailed enough that this could be checked, and someone had the wit to check before blindly repeating the name. Once they'd done this they started correcting the false rumour (in what looks like quite a vigorous manual effort). It's worth noting that the fact-checker appears to be a trained journalist.

But it could so easily have been very different. It could have been very difficult to establish that the rumour was false; the diligent fact-checker could have stopped to finish his cup of tea before tweeting his correction; the rumour could have been re-tweeted by someone with many followers. All of these things could have happened but didn't - will that be true next time?

On the plus side, Twitter rumours do appear to be traceable back to their source, and it's very easy to find the individual rumour-mongers and put them right. This is certainly true for non-malicious rumour-mongering (that's to say, where people have made no special effort to propagate a rumour, nor to hide their tracks).

There is a scientific link here: the modelling of all sorts of networks has long been a respectable scientific field. For example, there's Per Bak's forest fire model and the work that follows on from it. More recently there's been work focussing more explicitly on computer networks and social networks. To a physicist, Twitter is an example of a simple system which has nodes (with ingoing and outgoing links) and messages that are propagated between the nodes. The nodes could be trees in a forest and the thing passed could be fire; the nodes could be computers in a network with the message being network traffic; the nodes could be scientific papers with the messages being citations of other papers. The physics doesn't care about the detail of these things; it cares about a small number of parameters: how many links go in and out of a node? What's the probability of a message being transmitted from one node to the next?

So there's an interesting bit of network analysis to do here. How fast can a rumour propagate on Twitter? What fraction of people need to refrain from tweeting a false rumour to stop it propagating? What's the best way to squash a false rumour?
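To give a flavour of the sort of toy model this suggests - not anything from the literature, just a sketch with an invented follower graph and a made-up retweet probability - here's a minimal cascade simulation in Python, where each node that hears the rumour repeats it to each of its followers with a fixed probability:

```python
import random

def spread_rumour(followers, source, p_retweet, rng):
    """Simple cascade model: the source tweets the rumour, and each follower
    of a tweeting node repeats it with probability p_retweet (at most once).
    Returns the set of nodes that ended up tweeting the rumour."""
    tweeted = {source}
    frontier = [source]
    while frontier:
        node = frontier.pop()
        for f in followers.get(node, []):
            if f not in tweeted and rng.random() < p_retweet:
                tweeted.add(f)
                frontier.append(f)
    return tweeted

# An invented follower graph: node -> list of its followers
graph = {0: [1, 2, 3], 1: [4], 2: [4, 5], 3: [], 4: [6], 5: [6], 6: []}
reach = spread_rumour(graph, source=0, p_retweet=0.5, rng=random.Random(42))
```

Run it many times over a bigger graph while varying `p_retweet` and you get exactly the kind of question the physicists ask: below some threshold the rumour fizzles out, above it the rumour reaches most of the network.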

Having watched the no doubt frenzied activity involved in squashing today's rumour, I think one useful tool would be an automated rumour-quashing robot: it would search for tweets containing the rumour (probably based on a manually selected keyword) and tweet the originator with a rebuttal.

Think before you tweet!