BlueSci Issue 15 - Easter 2009

Page 1: BlueSci Issue 15 - Easter 2009

www.bluesci.org  Issue 15  Easter 2009

Green Living . Ice Sheets . Nasal Cycling . Pulsars . Wireless Communication . Fractals

Cambridge’s Science Magazine produced in association with

Credit Crunch: Misattributed scientific discoveries

Resisting Temptation: The science behind self-control

Unlocking the Brain: Understanding imaging techniques

Page 2: BlueSci Issue 15 - Easter 2009

...Write for BlueSci

Deadline for article submissions is 10 July 2009. Articles should be ~1200 words. Get in touch with potential ideas or send us the finished articles.

Previous issues:

Issue 1, Michaelmas 2004 – A New Science Magazine for Cambridge
The Science of Pain • World of the Nanoputians • For He’s a Jolly Old (Cambridge) Fellow • Designer Babies

Issue 2, Lent 2005
Hangover Hell: The morning after the night before • Einstein: 100 years of E=mc2 • Our Origins: The genes that make us human
Robots: the Next Generation? • Mobile Medicine • Climate Change • Forensic Science

Issue 3, Easter 2005
Looking Beyond: Crossing the great divide – the art of astronomy • Mars or Glory: A giant leap or a distant view?
Hollywood • Science & Subtext • Synaesthesia • Mobiles • Proteomics

Issue 4, Michaelmas 2005
New Parts For Old: The future of organ transplants • Risk & Rationality: When to trust your instincts • The Sound of Science: New perspectives on music
Artificial Intelligence • Obesity • Women In Science • Genetic Counselling

Issue 5, Lent 2006
Chocolate: Why do we love it? • Astrobiology: The search for alien life • AIDS: 25 Years On – Past, present and future
Grapefruit • Dr Hypothesis • Probiotics • Quantum Computers

Issue 6, Easter 2006
The Energy Crisis: What are our options? • Opinion: Views from Cambridge • Mobile Dangers: Are phones really a health risk?
Drugs in the Sewage • Quantum Cryptography • Time Truck • Gaia • Pharmacogenomics

Issue 7, Michaelmas 2006
The Future of Science: Foreseeing breakthroughs in research • Face Recognition: Mind-reading computers and brain biology • Stem Cells: What’s all the fuss about?
String Theory • Schizophrenia • Antarctica • Science and Film • Teleportation • Systems Biology

Issue 8, Lent 2007
The Future of Neuropsychiatry: Unraveling the biological basis of mental health • Darwinian Chemistry: Selection of the fittest molecules • Biological Warfare: Does biodefence research make us safer?
Poincaré Conjecture • Science Documentaries • Pharmaceuticals • Human Uniqueness • The Whipple Museum • RNAi • Stock Markets • Parliamentary Office of Science and Technology

Issue 9, Easter 2007
Biometrics: Big Brother is fingerprinting you • All For Shrimp: Conservation of marine environments • A Natural Collector: The story of a Victorian zoologist
Fair Trade with a Difference • Science and Comic Books • Proteins that Kill • Human Evolution • Enterprise in Cambridge

Issue 10, Michaelmas 2007
Mining the Moon: An unexpected fuel source • The Large Hadron Collider: Europe’s £5 billion experiment • Sea Monsters: In the wake of the giant squid
Ruby Hunting • Science Blogging • Extremes of Pain • The Mullard Observatory • The Government’s Chief Scientific Advisor

Issue 11, Lent 2008
Synthetic Biology: The Challenges of Engineering Life • Crowd Control: Physics of Human Behaviour • Brain Barometer: Saccades and Disease
African Rock Art • Intelligent Plants • Physics of Rainbows • Sci-fi • Human Nutrition Research • Fish Ecology

Issue 12, Easter 2008
Hydrogen Economy: The Future of Fuel • No Peppered Myth: Darwinian Evolution in Action
Saliva’s Secrets • Aubrey de Grey • Appetite Control • Biofuels • Science and the Web

Issue 13, Michaelmas 2008
Global Warming: First Predicted in 1895 • Cuckoo Trickery: Co-evolution in Action • Science in the Media: Influential Science Reporting
Scientists at Play • Space Travel • Scent Technology • Organ Donation • The Carving Power of Blood Flow

Issue 14, Lent 2009
Colour in Nature: Iridescence explained • Inside a Vacuum: More than we thought • Darwin’s Competitor: Alfred Russel Wallace
Randomness • Electronic Paper • Huntington’s Disease • Stories from CERN • Birds • Ultrasound Therapy

Contact [email protected] to get involved with editing, graphics or production

www.bluesci.org

Submitting Articles

Bored this summer?

Email: [email protected]

Page 3: BlueSci Issue 15 - Easter 2009

CONTENTS

Issue 15 Easter 2009

REGULARS

News: Scientific Soundbites .............. 3
Book Reviews: Ross Garnaut and Siegmund Brandt .............. 4
On the Cover: Sticky Feet .............. 5
Technology: Sensing our Surroundings .............. 16
A Day in the Life of... A Glaciologist .............. 24
Away from the Bench: Recharging Research .............. 25
Arts and Reviews: Rules of Repetition .............. 26
History: Credit Crunch .............. 28
The Pavilion: Jellyfish Burger .............. 30
Initiatives: Engineering the Weather .............. 31
Dr Hypothesis: Answers to Your Scientific Stumpers .............. 32

FEATURES

Exercising Self-Control: Adam Kessler justifies the need for self-indulgence during exam time .............. 6
Algae Living: Daniela Krug, Karuga Koinange and Chris Bowler look at the future of green living .............. 8
Nostril Nose Best: Cat Davies explores the ins and outs of nasal cycling .............. 10
Cosmic Lighthouses: Jamie Farnes describes the discovery of pulsars .............. 12
Faster and Faster: Francisco Monteiro looks at the remarkable achievements in error-free digital communications .............. 14
Kapitza and the Crocodile: Boris Jardine charts the history and inhabitants of the Mond Laboratory .............. 22

FOCUS: LIGHTING UP THE BRAIN .............. 17
This issue’s FOCUS examines the science and usefulness of the glossy brain pictures, now abundant in academia and the media.

Page 4: BlueSci Issue 15 - Easter 2009


EDITORIAL

Editor: Djuke Veldhuis
Managing Editor: Amy Chesterton

Business Manager: Michael Derringer
Sub-Editors: Jamie Farnes, Ian Fyfe, Jon Heras, Adam Kessler, Yinglin Liu, Silke Pichler, Dan Shanahan, Jo Smith, Arthur Turrell
Second Editors: Fran Conti-Ramsden, Harriet Dickinson, Moira Smith, Laura Soul, Katherine Thomas, Natalie Vokes, Jonathan Zwart
News Editor: Lucinda Lui
News Team: Thomas Kluyver, Lindsey Nield, Swetha Suresh
Focus Editor: Ian Fyfe
Dr Hypothesis: Mike Kenning
Production Manager: Chris Adriaanse
Production Team: Sonia Aguera, Amy Chesterton, Rose Spear, Silke Pichler
Pictures Editor: Ian Fyfe
Distribution Manager: Katherine Thomas
Publicity: Matt Child

ISSN 1748-6920

Varsity Publications Ltd
Old Examination Hall
Free School Lane
Cambridge, CB2 3RF
Tel: 01223 337575
www.varsity.co.uk
[email protected]

BlueSci is published by Varsity Publications Ltd and printed by Piggott Black Bear (Cambridge) Ltd. All copyright is the exclusive property of Varsity Publications Ltd. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, without the prior permission of the publisher.

COME EASTER term, we long for summer sun, warm evenings and plush English gardens. But even if the sun comes out this summer, the vacation is still one large hurdle away.

Before we are rewarded with any summer fun, we must first endure exam term. Whether you’re an undergraduate revising for exams, a postgraduate organising the final few supervisions or an academic gearing up to mark exam scripts, the entire University is on hold. It’ll all soon be over, but each day the apparent significance of exams multiplies in our minds.

Instead of stressing, why not take some advice from the experts. The article “Exercising Self-Control” explains the need for chocolate cake during prolonged concentration, and argues that the last thing we should be doing is resisting temptation! “Nostril Nose Best” provides insight into our thinking; getting rid of that sniffle or blocked nose might also be the key to a clear mind. For those writing coursework, “Faster and Faster” explains the need for error-free ‘ones’ and ‘zeros’. And for those of you on the brink of genius status, take heed, as “Credit Crunch” shows students are rarely recognised for their innovative thinking.

AT SOME point during your time at University you, like many others, are likely to have received an email asking you to volunteer for a brain scan. You might even have been asked to do some sort of behavioural task whilst being scanned; maybe you had to solve simple mathematical problems or press buttons in response to seeing certain shapes or words. Clearly, there is a bewildering array of brain research going on, but what exactly is the science behind those pretty pictures? Perhaps you do not care, but just saw an opportunity to get a copy of your brain image. Nevertheless, you might also have wondered what was going on inside that machine as it hummed and clicked away while you lay still inside the small cylinder. This issue’s FOCUS, “Lighting up the Brain”, explores the ins and outs, benefits, limitations and future of brain research technology.

If that is not enough, elsewhere you can learn about everything from ants, algae and astrophysics to how to successfully resist temptation, conduct fieldwork in the Arctic and use your mobile phone as a research station, to name but a few. So sit back, relax and have a read.

Djuke Veldhuis

Amy Chesterton

Issue 15: Easter 2009

Image credit: JEFF KUBINA

Page 5: BlueSci Issue 15 - Easter 2009


NEWS

Why is gambling addictive?

GAMBLING is an addictive form of entertainment with a global appeal. This might seem surprising given that even experienced gamblers know it’s the house that always wins. What, then, causes persistence amongst gamblers who return to the roulette table again and again? A team of researchers from the Behavioural and Clinical Neuroscience Institute at the University of Cambridge have tried to find out why near misses make people want to carry on gambling.

Subjects were asked to play a slot machine while their brains were scanned in order to determine active brain regions. If the reel reached a standstill one position from the payline, the outcome was treated as a near miss. If the payline was more than one position away, the result was a loss. Surprisingly, the brain scans showed that the same regions (the ventral striatum and the anterior insula) are associated with both winning money and near misses. There is evidence that the insula is also activated when cocaine is taken.

Published in Neuron, the study concludes that the brain responds to near-miss outcomes much as it does to wins. The next time you’re at the slot machines, either lose completely to minimise your debts, or win convincingly. SS

Up in smoke

EXPOSURE TO second hand smoke could increase the risk of developing cognitive impairment, including dementia, according to research led by Dr David Llewellyn of the University of Cambridge.

Previous studies identified active smoking as a risk factor for cognitive impairment, and other findings had suggested that exposure to second hand smoke could impair cognitive development in children and adolescents. However, this research is the first large-scale study to conclude that second hand smoke can lead to neurological problems in adults.

The study used saliva samples from nearly 5000 non-smokers over the age of 50. By measuring the level of cotinine (a by-product of nicotine) in their saliva and collecting a detailed smoking history from the participants, the level of exposure to second hand smoke was determined.

A number of neuropsychological tests were performed to assess cognitive functions such as memory, numeracy and verbal fluency. These results were then added to give a global score, and the lowest ten per cent were identified as suffering from cognitive impairment.

One possible explanation proposed is the increased risk of heart disease and stroke which, in turn, are known to increase the risk of cognitive impairment and dementia.

Llewellyn commented: “Our results suggest that inhaling other people’s smoke may damage the brain, impair cognitive functions such as memory, and make dementia more likely. Given that passive smoking is also linked to other serious health problems such as heart disease and strokes, smokers should avoid lighting up near non-smokers. Our findings also support calls to ban smoking in public places.” LN

Worth a GaNder

CURRENT RESEARCH conducted in the Department of Materials Science and Metallurgy will allow white light-emitting diodes (LEDs) to be produced more cheaply, opening the door to more efficient and durable lighting.

Over the next few years, conventional light bulbs, based on a white-hot filament of tungsten, will be phased out in favour of the more efficient fluorescent bulbs. And the next revolution in lighting is already on the horizon.

Dim, coloured LEDs have been in use for some forty years. White ones are now starting to be used in particular applications – you may well have LED bike lights – but are still too expensive to replace main indoor lighting.

A key reason for this is the cost of production. Most attempts to make white LEDs focus on gallium nitride (GaN), a semiconductor which can efficiently convert electrical power to light. Until recently, GaN was grown on expensive sapphire wafers. Research has shown that it can be grown on silicon wafers at about half the cost and twice the efficiency.

Besides promising even greater efficiency and lifespan, LED lighting should overcome two problems with fluorescent bulbs. Fluorescent lights contain tiny amounts of mercury, a toxic metal, and have been criticised for the quality of the light produced. LEDs contain no mercury and should be able to achieve a better balance of colour – research is even underway to use them to mimic sunlight for those suffering from seasonal affective disorder. TK

Image credits: AKIMBOMIDGET; JEFF KUBINA

Page 6: BlueSci Issue 15 - Easter 2009


BOOK REVIEWS

“Climate change is a diabolical policy problem,” writes Professor Ross Garnaut in his recent book The Garnaut Climate Change Review. He states that its uncertainties, insidious nature, long-term time frame and requirement of international cooperation make it “harder than any other issue that has come before our polity in living memory.” In response, he has written a no-nonsense, straight-talking book that outlines the facts in order to inform and aid policy makers, businesses and the public.

The book was commissioned by Kevin Rudd, the Prime Minister of Australia. It is written, therefore, from an Australian perspective, but don’t let this put you off. Garnaut clearly presents the facts behind the strategies, and they can be extrapolated to apply to other countries.

The clear explanations and easy-to-read layout make it an excellent reference for understanding climate change. A well-structured outline with chapter summaries of key points makes it easy to find individual topics or overviews. Clearly labelled diagrams and tables help the reader to get into the ‘nitty gritty’ of the science and statistics.

Beginning with a decision-making framework that weighs up the costs and benefits of mitigation, taking into account risk and uncertainty, the book goes on to explain the science behind climate change, man-made emissions and their environmental impacts. Garnaut then looks at our efforts to tackle climate change so far, and how we could move towards global co-operation and agreement. Finally, he looks at the Australian mitigation policy with a focus on the areas of energy, transport and land use.

As a student of climate change mitigation technologies myself, I highly recommend this book for anyone who wishes to get to grips with climate change and its complex policy issues.

Tamaryn Brown is a PhD Student in the Department of Chemical Engineering and Biotechnology

“The book explains the science behind man-made emissions and their environmental impacts”

The Harvest of a Century by Siegmund Brandt retraces remarkable achievements in physics over the last century in 100 episodes. Each episode gives an insight into the period of discovery, the influences and life of the scientist. The book starts with Roentgen’s accidental discovery of X-rays and passes through the achievements of Curie, Rutherford, Nernst, Einstein, Hubble and Fermi, culminating in Davis and Koshiba’s finding of the mass of a neutrino. Discovery of the atom, lasers, X-rays, optics, radioactivity, thermodynamic rules, quantum theories, the working of conductors, transistors, nuclear reactors and magnetic resonance all find a place here.

The book is richly illustrated with photographs of scientists, instruments and excerpts from scientific communications, as well as references to original papers at the end of each chapter. It moves along chronologically, so at times the episodes feel disconnected, since the early principles that form the basis of a discovery are often not implemented practically until much later. Non-physicists may also find it challenging to grasp the many theoretical concepts presented. Re-reading is definitely essential. A background of secondary school physics is also necessary. The author does introduce basic concepts, but at times they are in the middle of a chapter and too simplistic when compared to the rest of the text. Nevertheless, the reader’s attention is sustained, as each episode consists of only three to four pages.

The beauty of this book lies in the fact that, although volumes can be written about each major discovery, the author has identified and presented the most important concepts and experiments. It differs from a textbook in so far as every chapter is a story detailing how much was known at the time, what questions were asked and how these were answered. Overall, the book makes for good reading and, apart from being informative, it helps one appreciate the ingenuity and passion of great scientists.

Swetha Suresh is a PhD Student in the Department of Pharmacology

Ross Garnaut, The Garnaut Climate Change Review, CUP, 2008, £35 RRP

Siegmund Brandt, The Harvest of a Century, OUP, 2008, £35 RRP

Page 7: BlueSci Issue 15 - Easter 2009


Ants, we know, are hard workers. Perhaps none more so than weaver ants, which can easily carry more than one hundred times their own bodyweight. Running upside-down while carrying multiples of their own weight, the ants must resolve the conflicting needs of adhesion and agility. How they manage to do this has been the research of Thomas Endlein, first during his PhD in the Department of Zoology and now as a research assistant.

Thomas has been studying the ants using a variety of methods: high-speed video recordings, microscopy and force measurements to determine how the ants are able to hold on so tightly to surfaces. It’s not so much their ability to hold heavy weights, but that they can hold them even while stuck to the ceiling, defying gravity.

Some of the answer lies in a fluid secreted by the ants. It has long been known that insects stick by secreting a fluid in between their soft pads and the surface they’re on. However, this fluid only solves half the mystery: once stuck, how do they remove themselves?

Thomas’ research found that the ants use several clever mechanisms to precisely control their ‘stickiness’. The ants can alter the number of legs they use in contact with the surface. Weaver ants normally run with only three of their six legs, but when weights were attached to their bodies they increased the number of legs in ground contact. The ants also change the angle of their legs towards the surface, and can alter the size of the sticky pads themselves to fine-tune their stickiness.

Studying ants can be somewhat tricky as they have a tendency to escape. Their highly adhesive feet allow them to crawl up almost any surface. A special coating is used to contain the colony in Zoology, housed in a temperature- and humidity-controlled room – but they can still find a way out, as Thomas explains: “We often have escapees. You can come in early one morning and find them all over the lab. With other ants we use a vacuum cleaner to collect them but, because weaver ants are so sticky, we have to pick them up individually.”

The weaver ants’ sticky feet and weight-bearing abilities help them to build remarkable treetop nests. Tree leaves provide the basic building blocks, but it’s up to the ants to secure and seal the nest. Leaves are held together by the ants’ feet and jaws. Then, carrying a recently hatched larva in their mandibles, they stimulate the secretion of a silk thread with their antennae. “They use the larvae as living needles to stitch the leaves together while others are acting as ‘living clamps’ to hold the leaves in place, motionless for hours, which is all quite amazing.”

The set-up for the cover photo was surprisingly easy. Weaver ants are well known for their aggressive and territorial behaviour; they’ll snap at anything you put in front of their jaws. “I made use of the reflex they show when they are holding leaves together: once they grab it they won’t let go, and they stay there holding on motionless for hours. This is exactly what they did with the weight.” With the ant holding tight and staying still, Thomas had plenty of time to compose his photo.

For more of Thomas’ work see www.endlein.org


ON THE COVER

Sticky Feet
Chris Adriaanse interviews award-winning photographer Thomas Endlein about his striking cover image

Chris Adriaanse is a PhD student in the Department of Chemistry

Cover image: the Issue 15 front cover, photographed by THOMAS ENDLEIN

Page 8: BlueSci Issue 15 - Easter 2009


I want to prove that you are better than a rat. I know it sounds crazy, but stick with me. Imagine that you are holding a slice of warm, rich chocolate cake. It smells of fudge and lazy summer days. But, just as you lift it to your mouth, fingers tingling with desire, someone tells you that the cake is poisonous. It will taste good now, but in a few weeks you’ll keel over dead. If you believe them, you’ll be able to resist the temptation to gorge yourself on the deadly chocolate. You could put the cake down, and walk away.

Unlike a rat, which would have dived straight into the cake, you have the ability to abstain from current pleasure in order to avoid future discomfort. This restraint is not unique to humans, but it has evolved to an extremely complex level. We call it self-control, and it is one of the most important characteristics we have. Our ability to show restraint can have a whole range of implications. Poor self-control has been correlated with depression, OCD, aggression and crime, while good self-control has been linked to strong leadership skills, better relationships and greater academic and social success. Nobody really knows where our self-control comes from or why some people have more than others. By outlining current psychological theories about the role that self-regulation plays in our lives, I want to convince you that it is worth searching for.

Self-control operates a bit like a muscle. An hour in the weights room leaves you exhausted and weak. Your muscles get tired and you have to rest and re-energise. Similarly, if you exert self-control you deplete a limited store of mental ‘energy’; you are less able to exercise self-control in the near future. This can be demonstrated by giving participants two consecutive, unrelated tests of self-regulation. In one experiment participants watched a distressing film while refraining from showing any form of emotion. This required self-control as they struggled to repress their emotional reaction. This was followed by a test of physical self-control; the subject had to hold a handgrip exerciser tightly for as long as possible. A control group watched the same film, but were allowed to express any emotion they wanted. They performed far better on the subsequent self-control task. Restraining emotions impairs your ability to hold on to a handgrip exerciser: an odd result, given there is no obvious connection between the two. But this effect has been shown over and over again. People have been asked to drink sour juice, stick their hand into cold water, or not think about white bears. One task of self-control will always inhibit performance in a subsequent task.

Exercising self-control depletes a limited resource, which researchers call ‘self-regulation’. Self-regulation is needed not just for self-control, but for any task requiring us to regulate or change our mental processes. This includes making decisions, showing initiative, or giving presentations. In fact, most higher-order mental functions require self-regulation. Evidence for this comes from experiments where performing one self-regulatory task impairs performance on a second. For instance, a study by psychologist Brandon Schmeichel and colleagues looked at the effect of self-regulation on logical

Exercising Self-Control

Adam Kessler justifies the need for self-indulgence during exam time

“Nobody really knows where our self-control

comes from”

Image credit: IAN FYFE

Page 9: BlueSci Issue 15 - Easter 2009


reasoning. Participants were initially asked to watch a video while ignoring random words which appeared on the screen. This required self-regulation; the active control of mental processes. A control group watched the video, but without the words. The experimental group then performed worse on subsequent tasks of logical reasoning. Experiments like these indicate that many important mental functions rely, in some way, upon a single resource.

This has some practical consequences. This term, allow yourself to eat as much chocolate as you want. If you’ve got an important interview, ask someone else to choose which tie you should wear. Fortunately, it is also possible to increase self-regulation. One of the best ways is to make yourself happy. Experiments that induce happiness can reverse the decline in self-regulatory performance, giving you all the more reason to eat chocolate in exam term.

Another study argues that exerting autonomous self-control – making your own choices – does not deplete self-regulatory ability, while forced self-control does. The idea is that if a nasty experimenter tells you to do something, it depletes self-control. But if you do it of your own accord, then you’ll be fine. In support of this, one experiment asked participants not to eat cookies, in order to deplete their self-control. A questionnaire assessed their reasons for not eating the cookies. Some people didn’t eat cookies because they were told not to. Other people wouldn’t have eaten the cookies anyway, for more autonomous reasons; for instance, being on a diet. The latter group showed better self-control on a subsequent task, implying that it is only forced self-control that depletes your resource.

The research that has been done on self-regulation is fascinating, but by no means complete. An observant reader may have noticed that I have avoided defining self-control. The literature offers a bewilderingly heterogeneous range of definitions, and it is extremely difficult to extract anything sensible. A typical paper, “Self-regulatory failure” by Vohs and colleagues, conceptualises self-regulatory resources as an intrapsychic mechanism that controls desires, impulses and motivation. This is an almost impossibly broad definition. Saving for a pension, not thinking of white bears, and slamming on your brakes at a red light can all be seen as manifestations of self-control. These are all complex processes, and it seems unlikely that a single construct underlies them all. It is far more likely that what we call ‘self-control’ consists of multiple psychological systems. Many of the researchers that I have cited do not recognise this, and persist with a broad, sweeping definition. The experimental techniques they choose do not distinguish between different types of self-control. However, this should not diminish the value of the research. The results I have described have been reliably observed, and we are only just beginning to explore the implications. With more research and more rigour, we could come to understand one of the most important concepts in human psychology.

Adam Kessler is a Part II student in the Department of Physiology, Development and Neuroscience

Animal Behaviour

It’s not just humans which show self-control. Despite my initial scorn, rats can exert limited self-restraint. A simple experiment offered a rat two holes to poke its nose into. A ‘nosepoke’ into one hole was rewarded with an instant food pellet. A nosepoke into the other hole resulted in five food pellets, but only after a time delay. By varying the length of the delay, you can change the amount of self-control required. Most rats are good at anything up to about a hundred seconds. Primates, of course, can do far better than that. Chimpanzees and orangutans have been taught to use a straw to suck fruit juice out of a container. When presented with a choice between a piece of fruit and a straw, they picked the straw, even if no container was present. They seem to know that the straw would eventually be more useful than the fruit, and so ignore the possible short-term gain.

Could you resist this delicious chocolate cake if you knew it was poisonous? (Image: iStock)

Page 10: BlueSci Issue 15 - Easter 2009


Think of algae and one might imagine a murky pond, a neglected swimming pool, or perhaps a deserted stretch of coastline strewn with seaweed. In any case, it’s not normally associated with excitement or practical usefulness. However, this may be about to change.

Research worldwide is exploring the potential of algae as a clean, renewable energy source. It may provide a truly ‘green’ solution to the ongoing global energy crisis.

Algae differ from conventional biomass crops in that useful energy can be harnessed by different means. Like traditional crops, algae can be burnt to release energy. Uniquely, algae can also be used to produce hydrogen, a far cleaner and greener method of energy production.

Under certain conditions – namely the absence of sulphur – algae switch from the production of oxygen by photosynthesis to the production of hydrogen. Capturing this hydrogen and using it in conjunction with a fuel cell would open up the potential for totally CO2-free energy consumption.

It was this mode of energy release that inspired a recent multidisciplinary design project by Cambridge architects and engineers.

The team set out to investigate the potential for the micro-generation of hydrogen from algae within a domestic residential context through a process of experimental design. They were especially interested in exploring how the needs of algae cultivation and human comfort could be reconciled in a single architectural solution. From early on in the design development process it became clear that certain environmental constraints for successful algae cultivation – namely light and heat – were analogous to those required by humans.

Eukaryotic organisms such as algae generally thrive on exposure to high levels of light. However, the capture of gaseous hydrogen produced by the algae necessitated housing them in some form of sealed, transparent tank. Consultation with other researchers in the field of algae cultivation, who had completed mock-ups of such tanks, confirmed that they were highly prone to over-heating. Algae are killed at temperatures over 30ºC. Humans, of course, have a similar temperature threshold with regard to comfort.

Thus, it became clear that the potential existed for the algae and domestic spaces of our ‘Algae house’ to enter a symbiotic relationship, whereby one promotes the optimum environmental conditions for the other. The form of the ‘Algae house’ façade was developed as a direct consequence of this constraint.

The guiding objective in the design was that, whilst temperature stability was essential, it was also desirable to obtain the maximum amount of light from the sun. Multiple cylindrical tubes of small diameter were proposed to provide optimum surface area. A fixed glazing system, shaded by louvres (horizontal slats) and surrounded by a water pool, was developed that would


Image credits: NEON JA & RICHARD BARTZ

Daniela Krug, Karuga Koinange and Chris Bowler look at the future of green living

Algae Living

“Algae and people may not present themselves as obvious bedfellows”

“Algae technologies could play a significant role in our built environment”

Page 11: BlueSci Issue 15 - Easter 2009


independently control solar heat gain and light throughout the day as well as across the year. To allow the algae to function efficiently, and to reduce artificial lighting, they would need as much sunlight as possible without risking over-exposure. Therefore, through careful consideration of the algae tubes’ orientation to the sun, direct solar heat gain was allowed only during winter months and on spring and autumn mornings and evenings.

As the house plans illustrate, the shallow pool of water, or ‘moat’, that lies adjacent to the façade is intended to perform two basic functions. Firstly, the reflective properties of water are such that the amount of light reflected rises sharply as the angle to the surface of the water decreases. This means that the pool reflects low-angle sun up to the overhanging algae façade, whilst absorbing more of the higher energy, high-angle, midday summer sun. Secondly, water absorbs up to a hundred times more energy from infra-red light than from visible light. As heat energy is mostly transferred by infra-red light, the water should usefully absorb much of the heat from direct sunlight before reflecting it up to the algae. The amount of reflection was optimised by the addition of a reflective surface or coating to the pool floor.
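How strongly still water favours low-angle light follows from the standard Fresnel reflection equations. The sketch below is a back-of-envelope illustration only; the refractive index of water and the sample sun elevations are assumptions of mine, not figures from the design study.

```python
# Unpolarised Fresnel reflectance of sunlight off an air-water surface.
# Illustrative only: n = 1.33 for water and the sample elevations are assumed.
from math import sin, cos, asin, radians

def water_reflectance(elevation_deg, n=1.33):
    """Fraction of unpolarised light reflected by still water."""
    theta_i = radians(90 - elevation_deg)      # incidence angle from the normal
    theta_t = asin(sin(theta_i) / n)           # refraction angle (Snell's law)
    rs = ((cos(theta_i) - n * cos(theta_t)) / (cos(theta_i) + n * cos(theta_t))) ** 2
    rp = ((cos(theta_t) - n * cos(theta_i)) / (cos(theta_t) + n * cos(theta_i))) ** 2
    return (rs + rp) / 2                       # average of the two polarisations

for elevation in (10, 30, 60):                 # low winter sun vs high summer sun
    print(f"sun at {elevation:2d} deg elevation: {water_reflectance(elevation):.0%} reflected")
```

Run as written, this gives roughly 35% reflection for a sun 10 degrees above the horizon but only a few per cent for a high midday sun, which is the behaviour the design exploits.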

In the summer the pool also benefits the occupants of the house in providing cooling as air is drawn into the house after passing across the water. The movement of the reflected light playing across the green algae tubes and the living room ceiling would also create a visually interesting and unique living space.

The total amount of energy produced through hydrogen production was calculated assuming a 10% efficiency in the conversion of light energy to hydrogen. Based on this calculation, 75 square metres of algae is estimated to produce 6570 kilowatt-hours of hydrogen per year – enough to drive an electric MINI E car from London to Beijing and back three times. To make the most efficient use of this energy, the majority of it should be converted to electricity through a fuel cell with an efficiency of approximately 50%. The associated waste heat that is produced as an inevitable consequence of this technique could be recovered to satisfy house heating needs.
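As a rough cross-check, the quoted 6,570 kilowatt-hours is consistent with an annual solar input of about 875 kWh per square metre of façade, a plausible UK figure; the sketch below simply back-calculates the numbers, so treat the insolation value as my assumption rather than the team’s actual input.

```python
# Back-of-envelope reconstruction of the Algae house energy estimate.
# The sunlight figure is back-calculated to match the quoted 6,570 kWh,
# so it is an assumption, not the design team's input data.
area_m2 = 75                      # algae facade area quoted in the article
light_to_h2 = 0.10                # 10% light-to-hydrogen efficiency (article)
sunlight_kwh_per_m2 = 876         # kWh per m^2 per year (assumed, plausible for the UK)

hydrogen_kwh = area_m2 * light_to_h2 * sunlight_kwh_per_m2   # ~6,570 kWh per year
electricity_kwh = hydrogen_kwh * 0.50                        # ~50% efficient fuel cell
print(f"hydrogen: {hydrogen_kwh:.0f} kWh/yr, electricity: {electricity_kwh:.0f} kWh/yr")
```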

Algae and people may not present themselves as obvious bedfellows, but this project shows that the use of algae as an energy generator within a house is not only feasible, but that cohabitation can result in a self-sustainable symbiotic system which opens up many exciting architectural possibilities for ‘green living’.

This recently concluded project, developed as part of a course module, has awoken great interest and enthusiasm within our team. We feel that algae technologies could play a significant role in the future of our built environment. This conviction has motivated us to establish a web platform, www.algaetecture.com, in order to inspire fellow students, academics, and professionals to think of algae as a sustainable resource. We encourage you to get in touch if you have a general interest in algae or if you want to get involved in developing the algae living concept further.


Cross-section of the algae house designed by the Cambridge team

Daniela Krug, Karuga Koinange and Chris Bowler are MPhil students in the Department of Architecture

Image credits: NEON JA & RICHARD BARTZ; DANIELA KRUG, KARUGA KOINANGE & CHRIS BOWLER

“Research projects worldwide are exploring the potential of algae”

“Algae thrive on exposure to high levels of light”

Page 12: BlueSci Issue 15 - Easter 2009


NASAL CYCLING: not a new Olympic sport for 2012, but the alternating dominance of each nostril – a physical phenomenon present in 85% of mammals, which probably includes you. As we go about our day, one nostril is more open, allowing more air to flow through it than its resting partner. A few hours later, the open nostril rests and the other flares and takes control. Try it. Put a finger under your nose and you will feel a stronger, warmer sensation on one side. Remember to try it again in a couple of hours’ time and you may well find the opposite.

Unlike the test you may have just done, researchers have not been measuring nasal cycling by sitting in labs with their fingers under each other’s noses. It has been studied in a number of ways: hot-wire anemometers (ouch) should perhaps remain unexplained; the Zwaardemaker method relies on a calibrated cold mirror and condensation; and a more recent technique involves participants exhaling onto a piece of glass with red dye and then observing the resultant ink bloom. The wonky love hearts which are left behind reveal a striking manifestation of our nasal asymmetry.

This alternating vasodilation and vasoconstriction of the nostrils was first documented by Kayser, a German rhinologist, in 1895 and developed by Heetderks in 1927. It has since been embraced by yoga enthusiasts in the practice of Pranayama (controlled breathing as meditation). Research into nasal cycling was taken up with gusto by David Shannahoff-Khalsa at the University of California in the early 1990s, leading to a number of publications, and has more recently been investigated in relation to handedness, autism and early language impairment.

So why should cycling happen? To use an analogy from elsewhere in the body, lateralisation in the brain has been postulated to take place to make maximum use of neural tissue and avoid duplication of function. However, nostrils need not multitask and, moreover, don’t wear out unless they have been the unfortunate conduits to substances other than air. The intriguing claim is that the nasal cycle is linked to the rhythm of alternating brain hemispheric activity, and governed by the autonomic nervous system (ANS). Using neural imaging techniques, positive correlations have been found between hemispheric activity and dominance in the opposite nostril. Suddenly and surprisingly, the nose is called upon as an integral part of cognition!

We even do better in certain kinds of test when forced to breathe through the optimal nostril. Shannahoff-Khalsa and Susan Jella investigated performance in cognitive tests by forcing their undergraduates to breathe through either the left or the right nostril (crocodile clips, anyone?). When taking the right-brain based spatial tasks, the students did significantly better during left-nostril breathing, whilst on the verbal tasks, more closely associated with the left hemisphere, they scored higher during right-nostril breathing, but not significantly so. The asymmetry in significance in this case may be due to multiple brain regions


“The nasal cycle is linked to the rhythm of alternating brain hemispheric activity”

Cat Davies explores the ins and outs of nasal cycling

Nostril Nose Best

Image credits: DAVID SHANKBONE; PER PALMKVIST KNUDSEN

Page 13: BlueSci Issue 15 - Easter 2009


mediating the skills required in the specific types of tasks.

Dolphins have mastered the ability to let one half of their brains rest while the other side stays on the lookout for predators and reminds them to go to the surface to breathe. Recent evidence from nasal cycling research suggests that there may be some propensity for one side of the human brain to be more active whilst the other takes a back seat, regardless of the task at hand. Half-sleeping has been noted in other species too – note the common sight of the ‘one-legged’ flamingo, with ducks, geese, storks and herons also making like Maasai tribesmen for stretches of time. Various theories abound including the idea that these birds are resting only one hemisphere at a time; the resting leg corresponding to the contralateral sleeping hemisphere. The other side supports the body and maintains a degree of alertness when the bird is in a vulnerable state. Evolutionarily, the theory is persuasive.

So could nasal cycling be an underdeveloped form of the same phenomenon? A feasible, but untested, hypothesis is that each hemisphere is resting and recuperating in roughly two-hour cycles. However, wouldn’t it be more efficient if neural resources were activated as and when required, with the corresponding nostril following?

An implication of nasal cycling is that, if the dilated nostril is associated with greater activity in the opposite hemisphere, the less active side of the brain may compromise the systems it mediates. If we breathe on one side for too long, could certain abilities deviate from normal development? What about cases of nasal blockage or septum misalignment? Can the brain exploit its plasticity to overcome such serious implications of minor physical anomalies?

Mention nasal cycling and a common response is one of surprise. Nevertheless, the lateralisation of the brain and body is widely observed. Whenever we pick up a pen, put the phone to our ear, cross our legs, interlace our fingers or tilt our heads to be kissed, we are illustrating the body’s inherent lopsidedness. The popular media commonly cite left- and right-brained tendencies to illustrate individuals’ strengths and weaknesses, and lateralisation of the brain is now a major topic within the cognitive sciences; there is even a cross-disciplinary international journal focused exclusively on lateralisation in human and non-human species. Linguists have studied neural regions and brain lesions in relation to language ability since the time of Paul Broca in the late 19th century. Such research is well established and widely respected, and what it has in common across the sub-disciplines is the top-down nature of the brain governing the body. So does what I always understood to be a facial appendage, designed to warm the air we breathe, really have the capacity to influence brain function? I would be much more likely to accept this if the causal direction were the other way around, i.e. brain beats nostril. But considering that the ANS and the hypothalamus play president and vice-president in this system of government, it appears that nostril dominance originates from the brain itself, and then in turn affects cortical activity. The evidence seems to suggest that the ANS starts the race, the nose cycles and the brain follows behind.

So if the story of nasal cycling is true, how should we best harness it? Plug our left nostril during that presentation at work? Stick a finger in the right side during the driving test? Market a nose-flow detection kit for task/brain-optimisation? As it seems that achieving ambi-nasality is beyond us mere mortals, perhaps we’ve just got to embrace the times when we’re down with a cold, for that is when we are truly cerebrally balanced.


“Does our nose really have the capacity to influence brain function?”

Cat Davies is a PhD student at the Research Centre for English and Applied Linguistics

Dolphins rest one of their brain hemispheres at a time, keeping the other half of the brain awake, exerting control over vital functions.

It is thought that flamingos stand on one leg while resting the corresponding brain hemisphere

Image credits: DAVID SHANKBONE; PER PALMKVIST KNUDSEN; MARK INTERRANTE

Page 14: BlueSci Issue 15 - Easter 2009


Astrophysics is arguably the most difficult to visualise of all the physical sciences. Attempting to envisage the vast 93 million miles between the Earth and the Sun is exceptionally difficult, while comprehending the 13.6 billion light years to the edge of the observable universe is, perhaps, impossible. Yet despite these enormous scales, mankind has successfully developed highly sensitive technology capable of probing the mostly dark, empty outskirts of our universe. Since the 17th century, telescopes have uncovered evidence of astrophysical objects, including a host of exotic phenomena associated with dying stars, such as white dwarfs, supernovae and black holes.

Pulsars are one of the many possible cosmic leftovers from the explosive death of a star, known as a type II supernova. This occurs when a star contains enough matter that gravity eventually causes the core of the star to collapse, releasing vast amounts of energy and causing a rebound shock-wave that culminates in the outer layer of the star exploding in an enormous fireball. The core temperature of this explosion is around 100,000,000,000°C, with an energy release equivalent to 100 trillion trillion million thermonuclear weapons. This remarkable stellar death leaves a highly dense object, known as a neutron star, at the centre of the explosion. Neutron stars are effectively giant atomic nuclei which, when they rotate, become detectable and are known as pulsars.

Pulsars have an exceptionally strong magnetic field and a mass of approximately 1.5 times that of the Sun, contained within a radius of just 15 kilometres. This means that they are incredibly dense – a teaspoonful of material from a pulsar brought back to Earth would weigh about as much as 200 million African elephants! Pulsars rotate up to several hundred times a second and, through processes still not fully understood, emit radio waves in a fine beam. The emission of radio waves makes them detectable by telescopes on Earth, as the emitted beam of electromagnetic radiation sweeps across the Earth in a fashion similar to a lighthouse beam sweeping across the sea. This recurring sweeping motion results in a highly regular, periodic radio signal.
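The density claim can be sanity-checked from the figures given in the text; the teaspoon volume and elephant mass below are assumptions of mine, added only for the comparison.

```python
# Back-of-envelope check of the pulsar density figure, using the article's
# numbers (1.5 solar masses inside a 15 km radius). The teaspoon volume and
# elephant mass are assumed values for the comparison, not from the article.
from math import pi

M_SUN = 1.989e30                       # kg
mass = 1.5 * M_SUN                     # kg
radius = 15e3                          # m
volume = 4 / 3 * pi * radius ** 3      # m^3
density = mass / volume                # roughly 2e17 kg per cubic metre

teaspoon_volume = 5e-6                 # m^3 (about 5 millilitres, assumed)
elephant_mass = 5000.0                 # kg (a large African elephant, assumed)
elephants = density * teaspoon_volume / elephant_mass
print(f"density ~ {density:.1e} kg/m^3, teaspoon ~ {elephants / 1e6:.0f} million elephants")
```

Under those assumptions the teaspoon comes out at roughly two hundred million elephants, in line with the figure quoted above.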

The discovery of pulsars was made in 1967 by Professors Antony Hewish and Jocelyn Bell Burnell, using the Interplanetary Scintillation Array at the Lord’s Bridge site in West Cambridge. Bemused by the bizarre regularity of the detected radio pulses, the researchers initially considered the signal to be man-made and attempts were made to locate its source. However, working with the signal was not simple, as Hewish explains: “The signal wasn’t always there on the days when it should have been; it just simply wasn’t.”

It was soon realised that the signal moved across the sky at the same rate as the stars – a consequence of the Earth’s rotation. This did not rule out the possibility that the equipment of other astronomers was responsible. “If the pulses were being initiated on the ground and coming in via reflection from the ionosphere, then the signal had to be coming from


“The signal wasn’t always there on the days when it should have been”

Image credit: CASEY REED

Jamie Farnes describes the discovery of pulsars

Cosmic Lighthouses

Image credit: SKA PROJECT OFFICE & XILOSTUDIOS

Page 15: BlueSci Issue 15 - Easter 2009


somewhere down south, maybe in France. I had a colleague in the Royal Greenwich Observatory and telephoned him to ask if he could think of any astronomical observations that could be doing this, and he couldn’t. Ultimately, I began to think maybe there was something actually astronomical about it,” Hewish recalls.

Upon confirmation that this repeating radio pulse was indeed originating from space, they considered whether the signal could be a communication from intelligent extraterrestrials – a thought that led the researchers to jokingly dub the signal LGM-1 for ‘Little Green Men’. Interestingly this raised many ethical concerns: if you discover intelligent life elsewhere, is it safe to attempt to communicate? As Hewish explains, “If they were intelligent signals, perhaps they were waiting for a signal from us because they were on a planet like Earth, which is running into problems. Overcrowded planets were quite a possibility and perhaps they were launching a signal to see if there were any green fields out there that they could come to and dominate.”

If the signal was in fact sent by sentient beings on a planet, then there should have been an associated change in the frequency of the received radio signal due to the orbit of the home planet about its parent star, a phenomenon known as orbital Doppler shift. Hewish set about measuring this and no shift in frequency was found, ruling out a planet as the origin of the signal.
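To get a feel for the size of the effect Hewish was looking for, the sketch below assumes an Earth-like orbital speed and an observing frequency of the order used by the Cambridge array; both numbers are illustrative assumptions rather than figures from the article.

```python
# Rough size of the orbital Doppler shift a transmitter on an Earth-like
# planet would show. Orbital speed and observing frequency are assumed,
# illustrative values, not figures from the article.
v_orbit = 30e3          # m/s, roughly Earth's orbital speed around the Sun (assumed)
c = 3.0e8               # m/s, speed of light
f_observed = 81.5e6     # Hz, of the order used by the Cambridge array (assumed)

fractional_shift = v_orbit / c                # about 1e-4
swing_hz = fractional_shift * f_observed      # the frequency swing over half an orbit
print(f"fractional shift ~ {fractional_shift:.0e}, swing ~ {swing_hz / 1e3:.0f} kHz")
```

A swing of several kilohertz over months would have been measurable; its absence is what ruled out an orbiting planet.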

This confirmed the true nature of the signal as a newly discovered natural phenomenon, but the puzzle of what was causing the signal remained. For Hewish, the mystery was a very exciting one: “It was a wonderful time, a terrific time, but it certainly kept me awake at night!”

Further investigation showed that the peculiar pulsating object was less than 1000 kilometres in diameter and was roughly 100 light-years away. Meanwhile, additional pulsars were found, including one in the Crab Nebula – a remnant of a supernova which lit up the night sky in 1054. As pieces of the puzzle slotted into place, the correct interpretation of these signals as originating from rotating neutron stars formed in supernovae was finally proposed. For this remarkable discovery Antony Hewish was awarded the Nobel Prize for Physics in 1974.

Over 40 years since their discovery and with more than 1,800 pulsars now detected, these enigmatic objects keep providing new scientific information. Recent discoveries include pulsars that emit X-rays instead of radio waves, a pulsar with three orbiting planets and binary pulsar systems which consist of two pulsars orbiting each other.

With an estimated 70,000 observable pulsars in our own galaxy, the Milky Way, only a fraction have been found. As pulsar detection is inevitably limited by the sensitivity of modern radio telescopes, the problem demands bigger and better equipment. Thankfully, the calls for more advanced technology will be answered with the completion of the largest radio telescope ever built, the Square Kilometre Array (SKA).

The SKA is due to be constructed in either South Africa or Australia and will consist of around 5,000 dishes alongside additional observing stations up to 3,000 kilometres away. As a consequence, the SKA will have 50 times the sensitivity of any existing radio telescope and will be capable (in theory) of detecting extraterrestrials’ television signals from stars as far as 1,000 light years away. It is planned to be operational by 2016 and will cost an impressive US $1 billion.

The SKA will certainly revolutionise our understanding of pulsars, as it should be able to detect all of the 70,000 observable pulsars in our own galaxy and will also, for the first time, be able to detect pulsars in other nearby galaxies such as Andromeda. It is also hoped that the SKA may detect the first black hole and pulsar binary system (a pulsar orbiting a black hole).

Binary systems are particularly useful in that they allow physicists to make precise tests of Einstein’s theory of general relativity, which describes gravity as a geometrical distortion of space-time. Indeed, binary pulsars have already been used to provide indirect evidence for the existence of gravitational waves, a key prediction of Einstein’s theory.

Gravitational waves bend space-time and this subtly changes the distance between two points in space. Using the SKA as a precise timing array to time pulsars with a precision of 100 billionths of a second over 10 years, it will be possible to measure tiny distance fluctuations between us and the pulsars as a consequence of gravitational waves. This would further confirm Einstein’s theory and also provide an entirely new way of observing the universe - via gravitational waves instead of just the electromagnetic waves currently used.
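To put that timing precision in distance terms, 100 billionths of a second of light travel corresponds to only about 30 metres over paths of thousands of light years; a trivial sketch of the arithmetic, with the speed of light as the only input:

```python
# What a 100-nanosecond timing precision means as a light-travel distance.
# Purely illustrative back-of-envelope arithmetic.
c = 299_792_458             # m/s, speed of light
timing_precision = 100e-9   # seconds (100 billionths of a second, from the text)
print(f"~{c * timing_precision:.0f} metres of light-travel distance")   # ~30 m
```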

So do pulsars have any further surprises in store? Hewish believes they do: “Pulsar science is only just beginning, there is all sorts of science that you can do if you detect enough pulsars and with more of these binaries turning up we are starting to directly sample the stellar atmosphere of pulsars. If we can find a pulsar orbiting a black-hole, that would be a golden dream, and there’s no reason why we shouldn’t.”

As is always the case in science, who knows what serendipitous discoveries could be in store in the future?


The proposed SKA telescope

“It could be a communication from intelligent extraterrestrials”

Image credit: CXC/M. WEISS

Jamie Farnes is a PhD student in the Department of Physics

Image credit: CASEY REED

3D representation of pulsar J0108

Image credit: SKA PROJECT OFFICE & XILOSTUDIOS

Page 16: BlueSci Issue 15 - Easter 2009


We are the first generation that is able to contact friends on the other side of the world, from anywhere, at any time. Whether in the living room or in the middle of a park, we can use a tiny laptop, apparently connected to nothing except the air we breathe, to chat with friends on a webcam whilst a missed TV show streams in another window. We take this for granted, yet the development of error-free, wireless transmission is one of the most astonishing intellectual achievements of modern science.

Most of us know that any piece of music, painting or text can be represented by a combination of just two symbols, known as binary digits or bits (for simplicity, we call them zeros and ones), and we know that we want lots of them coming to us in a short time. But marketing tells us to ask for higher speeds, and this is misleading. Data can be received more quickly if more bits are transmitted per second, but the bits themselves do not travel any faster. So what marketing should really encourage us to ask for is a higher bit rate, not a higher speed.

These digital signals, in the form of zeros and ones, must be detected and decoded against corrupting background noise. For example, heat causes random movement of electrons in receivers, which disrupts the signal. Error-free transmission of binary digits under such conditions is not easy. Some ones may be mistaken for zeros and vice versa, and errors increase with faster bit rates. The challenge is to maximise the bit rate whilst minimising errors.

Is there a limit to the bit rate we can achieve whilst keeping the link free from errors? This was one of the questions that Claude Shannon asked in his seminal 1948 paper A Mathematical Theory of Communication. Shannon formulated the concept of a channel’s information capacity: the maximum achievable rate of error-free data transfer in a given channel (the Shannon limit). He showed that if we transmit below the capacity of a channel, some code should exist that would allow the correction of all the bits that have been corrupted. It is similar to a word processor suggesting corrections for misspelt words; more specifically, the proficiency with which it identifies the most likely correct word.
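
For the simple case of a channel disturbed only by thermal (Gaussian) noise, the capacity takes the closed form C = B log2(1 + S/N). The sketch below is illustrative only; the bandwidth and signal-to-noise figures are made-up values, not numbers from the article.

```python
import math

def shannon_capacity(bandwidth_hz: float, snr: float) -> float:
    """Maximum error-free bit rate, in bits per second, of a noisy channel."""
    return bandwidth_hz * math.log2(1 + snr)

# e.g. a 1 MHz channel carrying a signal 1,000 times stronger than the noise
print(f"{shannon_capacity(1e6, 1000) / 1e6:.2f} Mbps")   # about 9.97 Mbps
```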

Some of the brightest mathematicians, engineers, and computer scientists devoted themselves to the problem of finding such a feasible error correction code. However, by 1993, even the best codes were still performing far from the capacity limit.
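
As a toy example of what an error-correcting code does (nothing like the sophisticated codes discussed here, but it shows the basic idea of adding structured redundancy so that corrupted bits can be voted out), consider the three-fold repetition code, which corrects any single flipped bit per group of three:

```python
def encode(bits):
    """Send every bit three times."""
    return [b for b in bits for _ in range(3)]

def decode(received):
    """Take a majority vote over each group of three received bits."""
    groups = [received[i:i + 3] for i in range(0, len(received), 3)]
    return [1 if sum(g) >= 2 else 0 for g in groups]

sent = encode([1, 0, 1])   # [1, 1, 1, 0, 0, 0, 1, 1, 1]
sent[4] = 1                # noise corrupts one bit
print(decode(sent))        # [1, 0, 1] - the error has been corrected
```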

Then the unexpected happened. At a leading conference, a paper claiming to have a feasible family of codes (dubbed turbo-codes) that operated near the Shannon limit was presented by Claude Berrou and Alain Glavieux, two French engineering professors who were rather unknown at the time to the coding theory community.

“They got it wrong,” people mumbled at the end of the presentation. “They must have forgotten to divide by two somewhere!”


Francisco Monteiro looks at the remarkable achievements in error-free digital communications


Progress in error correction codes allowed error rates to approach the Shannon limit


Everybody rushed back to their labs and tried to replicate the results. They could not believe what they found: turbo-codes were performing just as claimed. However, it was unclear why they worked.

At around the same time, Cambridge Professor David MacKay, along with Radford Neal at the University of Toronto, was looking at the problem from a fresh perspective. In 1995, he devised codes operating even closer to the Shannon limit. For some time, his Low Density Parity Check Codes (LDPCs) made Cambridge the home of some computers that were running the best error-correcting codes in the world.

Interestingly, his research revealed that LDPCs had been devised by MIT professor Robert Gallager in his 1962 PhD thesis, but had been forgotten. This was probably because there was not enough computing power at the time to implement them, or because he did not include them in his textbook published in 1968.

MacKay’s papers triggered a boom of research and LDPCs were further refined by researchers in America and Switzerland. Currently, turbo-codes play a central role in the correct detection of the bits received by mobile broadband, and help to receive images from the probes on Mars. The patent-free LDPCs will take their place soon.

It had taken almost 50 years to reach the Shannon limit. But a further burst of research in the second half of the 1990s proved that the maximum possible bit rate within a fixed spectrum had not been reached. Shannon’s formula for typical electrical channels considered thermal noise only, not additional “perturbations” such as multiple reflections of the signal in the environment, as is the case in wireless communication. For many years, this type of “self-interference” was perceived as an additional obstacle for correct signal detection at the receiver.

However, it was later proven, mathematically and experimentally, that by considering space in addition to time when designing a code, the Shannon limit could be surpassed. With rather complex algebra and computing, we can artificially create several independent communication streams using so-called space-time coding on multiple-input multiple-output (MIMO) systems. In electronics, this translates to the use of multiple antennas on the outside and much more processing complexity on the inside.
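
A stripped-down sketch of the idea, under heavy simplifying assumptions (a perfectly known channel between two transmit and two receive antennas, and no noise, neither of which holds in practice): the multipath channel mixes the two transmitted streams, and knowledge of that mixing lets the receiver separate them again.

```python
import numpy as np

H = np.array([[0.9, 0.4],
              [0.3, 1.1]])      # made-up channel gains between antenna pairs

x = np.array([1.0, -1.0])       # two symbols sent at the same time, on the same frequency
y = H @ x                       # what the two receive antennas observe (no noise here)

x_hat = np.linalg.solve(H, y)   # undo the mixing ("zero-forcing" detection)
print(x_hat)                    # recovers [ 1. -1.]
```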

The same MIMO principles are now being used to take advantage of the different reflecting paths that light waves can take inside optical fibres. Even in bundles of landline cables, the mutual interference can be used in a similar way.

Soon, 3.5G mobiles will provide gross bit rates of up to 100 megabits per second (Mbps) and, inside the house, the next Wi-Fi standard will provide up to 600 Mbps. Later, 4G mobiles will reach rates of up to 1 gigabit per second. To put that in perspective, in 2008 the average download speed in the UK was 4 Mbps. At 2 Mbps it takes 47 minutes to download a typical film; at 10 Mbps this is already down to under ten minutes, so at 600 Mbps it will literally take seconds.
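
These times follow from nothing more than dividing the size of the file by the bit rate. A quick check of the arithmetic, assuming a film of roughly 700 megabytes (the article does not state a file size):

```python
film_bits = 700 * 8e6            # ~700 megabytes expressed in bits

for mbps in (2, 10, 600):
    seconds = film_bits / (mbps * 1e6)
    print(f"{mbps:>4} Mbps: {seconds / 60:5.1f} minutes")
# prints roughly 47 minutes, 9 minutes and 0.2 minutes respectively
```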

These are the plans for the next decade, but a revolution has recently started in academic circles: network coding theory and collaborative networks have all users in a network helping all other users to sustain the error-free bit flow. At this stage, the capacities for such networks are unknown, and a new Shannon is needed.


Francisco Monteiro is a PhD student in The Computer Laboratory and in the Engineering Department


MIMO space-time processing takes advantage of multiple reflections that act to artificially create independent communication streams


OUR WORLD is severely affected by a variety of environmental problems. Issues like global warming and ozone depletion make the news headlines every day. These problems are acute and global, and the majority of scientists agree that the future of our planet is at stake.

In order to keep track of some of these problems, several local councils monitor the levels of traffic pollution. However, this is usually done at a small number of sparsely distributed monitoring stations, so the resolution obtained is very low, even though pollution can vary dramatically from street to street.

Now imagine you had a myriad of cyclists and pedestrians carrying mobile environmental sensor devices each monitoring local pollution levels. These would create simple sensing networks that could cover an entire town. The pollution data gathered by each mobile sensor would be sent wirelessly to a central server, together with location information. This information would then be updated, in real time, and displayed as a high resolution pollution map on a public web site. This way, people would know what pollutants they are exposed to over the course of the day, allowing them to avoid areas with a high concentration of pollution. This could also trigger local and central government initiatives to reduce the concentration of pollutants in specific areas.
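
In software terms, each device only needs to package a reading with a time and a GPS fix and push it to the server. The sketch below is purely illustrative; the field names and the server address are invented, since the article does not describe the actual protocol.

```python
import json
import time
import urllib.request

reading = {
    "sensor_id": "bike-042",                  # hypothetical device identifier
    "timestamp": time.time(),
    "lat": 52.2053, "lon": 0.1218,            # GPS fix (central Cambridge)
    "pollutant": "CO", "ppm": 4.7,            # one pollutant measurement
}

req = urllib.request.Request(
    "https://example.org/pollution/readings",  # placeholder central server
    data=json.dumps(reading).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)   # uncomment to actually send the reading
```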

An important stimulus for the growth of sensing networks is an increasingly environmentally conscious public. The possibility of contributing this type of information without having to do anything more active than carry a device could interest many. Even without any scientific training, volunteers could perform these measurement tasks for the sake of promoting awareness of pollution problems in their communities and thus improve their society.

In Cambridge, these trends are already becoming reality. Bicycle couriers monitor the city air quality using mobile phones. Each bicycle carries a small wireless sensor that sends pollution data via Bluetooth to the courier’s mobile. The devices also incorporate an integrated GPS receiver to provide location information. The mobiles then assemble this data and send it to a central server. Developed by a team of researchers led by Eiman Kanjo from the University’s Computer Laboratory, this technology builds maps containing detailed, street-level information on the concentration of numerous pollutants affecting the city air.

One of the challenges of this technology is minimising the device size. The first prototypes are still big, roughly the size of a large remote control. The main culprit is the sensor. However, the Cambridge community is on the verge of revolutionising this area. Owlstone, a spin-off company from the University, develops ‘dime-sized’ detectors. One can envision them being integrated into small devices (including mobile phones), with every willing citizen contributing to the pollution mapping process.

Another interesting approach to this technology is the possibility of linking health problems to pollution data. It is known, for example, that asthma symptoms are linked to air pollution. Hence, by combining the mobile sensor technology described above with a separate device to measure lung function, it would be possible to correlate a patient’s symptoms with the air pollution around them. This data could then be sent automatically to the patient’s doctor.

In the future, before your morning jog, you might surf your favourite pollution map site to find the freshest air in town. Your watch would analyse the concentration of pollutants, with the information being uploaded to the website. If the concentration of pollutants were above a safe level, you would be directed to a less polluted route nearby. While on the move, your watch would also keep track of your heart rate and blood pressure. If something were wrong, you would receive a prophylactic text message telling you to stop running. Or, in a critical situation, an automatic message could be sent to the emergency services with information on your condition and location. Help would be on its way.
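
The suggested link between symptoms and pollution could be tested with nothing more exotic than a correlation. A sketch with invented numbers for a single patient, comparing daily pollution exposure against daily peak-flow readings:

```python
import numpy as np

exposure = np.array([12, 35, 18, 50, 44, 9, 27])            # e.g. mean NO2, ug/m3
peak_flow = np.array([470, 430, 455, 400, 410, 480, 440])   # lung function, litres/min

r = np.corrcoef(exposure, peak_flow)[0, 1]
print(f"correlation: {r:.2f}")   # strongly negative: worse air, worse lung function
```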

Imagine a myriad of cyclists and pedestrians with environmental sensor devices

Fernando Ramos is a PhD student in the Department of Engineering and in the Computer Laboratory

TECHNOLOGY

Sensing our Surroundings


Screenshot of pollution software ‘Airfresh’ on a Nokia N95 mobile phone


Lighting up the Brain

The origins of human thought, emotion and personality have been pursued throughout history. It was Franz Joseph Gall, at the beginning of the 19th century, who believed the brain was composed of separate ‘organs’ that each controlled a different aspect of character; he examined bumps on the surface of the skull and thought these provided insight into the workings of what lay beneath. In doing so, Gall was making the first attempt to determine how the structure of the brain gives rise to the mind.

Two hundred years on, we can now use the buzz of electrical activity and the ravenous burning of energy inside our heads to reveal the brain’s inner workings and watch it at work. Neuroimaging has become an indispensable tool in both research and medical diagnosis.

While Gall’s methodology now seems ludicrous, some of his ideas persist. Much modern research encourages us to consider the brain as a modular structure, with individual regions performing different functions. Yet it is becoming increasingly clear that communication between regions is at least equally important.

Modern neuroimaging techniques must be chosen carefully to maximise our understanding of how the brain works. Each is optimal for answering specific questions, but none is able to provide a complete view of brain function. Each technique has limitations and pitfalls that complicate its interpretation. This issue, FOCUS considers some of the current techniques used to study the brain and how they may best be used to advance our knowledge of the human psyche.

BACKGROUND IMAGE: EQUINOX GRAPHICS


Magnetic resonance imaging, or MRI, is a familiar term. Even if we have not had an MRI scan ourselves, or don’t know someone who has, familiarity comes from televised medical dramas showing impressive 3D images of internal organs. Although MRI can be used to image any structure in the body, it is best known for imaging the brain. Indeed, it has become essential for learning about the structure and function of the brain and what happens when it goes wrong.

MRI takes advantage of the behaviour of protons - found throughout the body as hydrogen nuclei in water - when they are subjected to a strong magnetic field. The surrounding environment determines a proton’s precise response to the applied field, making different tissue types distinguishable. MRI allows us to determine whether a proton at a specific location is sitting in, for example, fat tissue, cerebrospinal fluid, cell bodies or neural fibres.

The high-resolution structural images of the brain produced using MRI are routinely used in medicine to diagnose cancers and prepare for surgery, to detect lesions and other structural abnormalities, and to track the progress of neurodegenerative disorders such as Parkinson’s and Alzheimer’s diseases. However, in the last twenty years, MRI has also enjoyed a meteoric rise to fame in ‘functional’ neuroimaging research, which attempts to determine how our brains work when healthy - not just when things go wrong.

Functional MRI (fMRI) uses the same principles as ordinary MRI, but is used to detect differences in blood flow within the brain. More active brain areas require more oxygen than less active ones, so blood flow increases to these regions; this is known as the haemodynamic response. The increased level of oxygen changes the local environment of nearby protons enough to be detected by MRI, allowing us to distinguish oxygenated from deoxygenated blood. This change in blood oxygenation level is taken as an indirect measure of neural activity, so it can tell us which parts of the brain are most active.

Functional MRI has become so popular amongst researchers because of its non-invasive nature and high spatial resolution. Its main alternative uses a radioactive tracer added to the blood to measure metabolic activity. That process, known as positron emission tomography (PET), offers only around half the spatial resolution of fMRI. It is also less safe, since radioactive material must be injected. With fMRI, experimental subjects can be repeatedly scanned, allowing within-subject comparisons that cannot be done with PET, since they would require repeated exposure to radiation.

It is easy to become mesmerised by the detailed images of brain function that fMRI produces, but it is not without its own limitations. If research using fMRI is to produce meaningful data, careful experimental design is crucial.

Imagine having an fMRI scan for the first time. You are lying on a table with your head strapped down and a massive electromagnet that is not only inches away from your face, but also makes any number of reverberating drones, whines and crashes. You are anxious, perhaps claustrophobic, and alert, listening for instructions from the experimenter. Your brain will already be buzzing with sensory input, attentional mechanisms and emotional turmoil. Then you are shown pictures, about which you must make a decision, and convey your response by pressing a button.

Measuring the Mind

BlueSci looks at the science behind the pictures

Functional MRI has high spatial resolution, localising activity to within millimetres


Now your visual cortex is in overdrive and your motor cortex has joined in, as have the memory, emotional and language networks that were unintentionally triggered by the pictures. As the test continues, you may relax; perhaps your concentration lapses and you start to think about the shopping list or the film you watched last night. How can one specific cognitive process be detected amongst this bedlam of activity?

The experiment must be designed to detect a change in activity. Background activity recorded under control conditions is later subtracted from that seen under experimental conditions, leaving regions that are ‘lit up’ and can be assigned to specific cognitive tasks. Variation between experimental subjects, both in brain structure and in cognitive approach, means that results must be heavily averaged and smoothed over time.
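
At its simplest, that subtraction is just an average over many scans of each condition followed by a voxel-by-voxel difference. The toy sketch below illustrates only this logic; real fMRI analysis involves far more modelling, alignment and statistical correction.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (4, 4)                                          # a tiny "brain" of 16 voxels

control = rng.normal(100, 5, size=(20,) + shape)        # 20 scans at rest
task = rng.normal(100, 5, size=(20,) + shape)           # 20 scans during the task
task[:, 1, 2] += 15                                     # one voxel responds to the task

contrast = task.mean(axis=0) - control.mean(axis=0)     # average, then subtract
print(np.round(contrast, 1))   # roughly 15 at voxel (1, 2), close to zero elsewhere
```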

Interpretation of such data must be approached with caution, and there are numerous potential pitfalls. Functional MRI research previously identified a region that ‘lit up’ when subjects told a lie: the anterior cingulate cortex. It was proclaimed that this was the centre for lying and could be used to develop an advanced lie detector. But further research has shown that it is active during many other tasks that involve decision-making. Had a lie detector been developed on the basis of the original research, the consequences could have been severe.

Is the haemodynamic response even a reliable measure of neural activity? It seems reasonable to assume that blood flow increases to satisfy the oxygen demand of active neurons. But is it possible that increased blood supply can occur without any associated neural output at all? A recent paper, published in Nature earlier this year, suggests that this could be the case.

Yevgeniy Sirotin and Aniruddha Das, of Columbia University, trained rhesus monkeys to perform a cognitive task that involved periodically fixating on a visual stimulus. Simultaneously measuring neural activity and blood flow in the visual cortex, the researchers showed that the activity and flow increased periodically in time with the monkeys’ fixation. However, when they reduced the visual stimulation, the neural activity in the monkeys almost disappeared, but the blood flow still fluctuated in the same cycle. Even in the absence of neuronal activity, it seems there can be detectable changes in cerebral blood flow, bringing into question the fundamental assumptions of fMRI.

In addition, conventional fMRI only allows study of the brain in a “modular” fashion by localising different functions to unique parts of the brain. This idea has persisted for over two hundred years, since the time of phrenology when Franz Joseph Gall proposed that the brain was composed of 27 “organs”. In 1983, Jerry Fodor published his seminal book The Modularity of Mind and ever since, cognitive psychologists have been largely concerned with carving the mind up into functional modules, with fMRI results encouraging us to think in this way.

However, this model of brain function misses one vital concept: if it is a modular system, surely the modules must communicate. How does the brain talk to itself?

Imagine standing on the station platform, awaiting the arrival of your train to London. In the distance, a speck of light gradually nears and the train eases to a halt. As you watch it approach, you perceive just a train, slowing down. But that is not how your brain processes it: the shapes, orientation, colours and movement of the train are all separated along the pathway from the eye and processed individually in the visual cortex at the back of the head, where different cells respond to each aspect of the train’s form and movement. Meanwhile, auditory and other sensory information is collected and processed in completely disparate areas of the cortex, and finally we put it all together to complete the perception of a train.

How does the brain integrate all the components to give the final perception of a single moving object? This is known as the binding problem. It is one instance of the general problem of “connectivity”. Understanding how brain regions converse with each other to integrate information may provide insight into complex mental processes such as vision, memory, attention and even consciousness.

Whilst fMRI can tell us where the different components are processed, it can tell us little about connectivity in the brain. Recent research has begun to look elsewhere for answers, using electroencephalography (EEG) and magnetoencephalography (MEG) to measure brain activity and discover how sensory input can result in our internal representations of the world.

Golgi-stained pyramidal neuron in the hippocampus of an epileptic patient. 40 times magnification

EEG and MEG use external detectors, placed on or just above the scalp, to measure the electrical activity of firing neurons in the brain. The flow of ions in and out of cells during activity produces currents that can be measured using EEG. As with any electrical current, magnetic fields also result, and these are measured with MEG. Both techniques therefore give a direct measurement of neural activity, unlike the indirect measurement seen in fMRI. This also means that they are more accurate in determining when activity occurs, but spatial resolution is sacrificed; activity can only be localised to within centimetres rather than millimetres.

Traditionally, EEG and MEG have been used to measure whole brain activity during sleep and in epilepsy sufferers. However, more recently, they have been used to shed light upon how different regions of the brain may converse with each other.

One recent theory attempts to explain connectivity in terms of “neural synchrony”. It suggests that if neurons from different regions of the brain want to converse, their firing patterns need to be temporally correlated. When activity in one region is increasing or decreasing, activity in the other must be doing the same. This correlated pattern of firing is often constrained to particular frequencies. For example, neurons in quite separate areas of the brain may simultaneously peak in their activity three times per second. This cyclical, frequency-locked peaking in activity may be opening and closing a window of opportunity for communication between the two regions. EEG and MEG are ideal for measuring such frequency-specific activity relating to neural synchrony.
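
In signal terms, this kind of synchrony shows up as a high correlation between the activity traces of the two regions. A purely illustrative sketch with synthetic signals:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 2500)                          # 10 seconds sampled at 250 Hz

region_a = np.sin(2 * np.pi * 3 * t) + 0.3 * rng.standard_normal(t.size)
region_b = np.sin(2 * np.pi * 3 * t) + 0.3 * rng.standard_normal(t.size)  # also 3 Hz
region_c = np.sin(2 * np.pi * 7 * t) + 0.3 * rng.standard_normal(t.size)  # different rhythm

print(np.corrcoef(region_a, region_b)[0, 1])   # high: the two regions are in step
print(np.corrcoef(region_a, region_c)[0, 1])   # near zero: no shared rhythm
```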

Recent research suggests that in the visual system, activity within local neural networks (groups of neurons that work together to perform a function) synchronises at high frequencies, while activity between these networks synchronises at lower frequencies. This mechanism would allow the high frequency local networks to “bind” all the information about colour, for example, before it is combined with information about shape, orientation and movement to result in the unified perception that we experience. This may be the solution to the binding problem posed when watching the train approach.

Synchrony has also been implicated in more complex mental operations, such as working memory. This is the ability to keep immediately relevant information in mind for a short period of time. For example, if we are told a phone number, we are able to replay that in our mind for several seconds or until something distracts us. Successful retention requires communication between areas involved in sensory processing and frontal brain regions involved in conscious control of behaviour. This communication seems to be achieved through synchrony, with activity peaking in both regions three times per second. Successful short-term memory retention therefore seems to rely on synchronous activity between two distinct cortical regions.

It is only by direct measurement of neural activity with high temporal resolution that such discoveries can be made. Whilst fMRI has dominated human brain research, it can never provide us with the whole picture. Similarly, EEG and MEG are unable to give precise locations of activity. But if the limitations of each technique are understood, and different questions tackled with the appropriate tools, useful progress can be made.

In combination, such techniques are more powerful, and future research could provide us with great insight into the human mind. If we can understand how the brain processes the barrage of information it receives and puts it together to construct our unique internal representation of ourselves and the world around us, then perhaps we can discover the origins of our personalities, emotions and even our consciousness.

Vicky Cambridge is a PhD student in the Department of Psychiatry

Aidan Horner is a PhD student in the MRC Cognition and Brain Sciences Unit

Clinical MRI scanner


Ben Ravenhill discusses fluorescent proteins as an alternative to fMRI

Information in the brain is passed along axons. Axons are fibres that project from neuronal cell bodies, carry an electrical signal in a similar way to wires in an electrical circuit, and pass information to the cells they connect with. Whilst imaging techniques can be used to study the living human brain, animal research allows us to look directly at the paths that axons take and the brain regions that they connect, providing insight into the way in which neural networks are built. Scientists have been attempting to build “connectomic” neuronal maps for many years. But axons form bundles that resemble the chaos of cables under the average desk, and telling them apart is a challenge. Recent work by a group at Harvard University, led by Professor Jeff Lichtman and Dr Josh Sanes, has provided a colourful solution.

Green fluorescent protein (GFP) was originally found in the jellyfish Aequorea victoria. It is what it sounds like: a protein that glows green under blue light.

It can be inserted into the genome of almost any species and its expression can be controlled to allow easy visualisation and identification of individual cells. Modification of GFP has led to the development of many derivatives with different colours; blue, cyan, yellow and red fluorescent proteins are all widely used in the same way.

The group at Harvard have exploited the properties of fluorescent proteins and their expression to create the “brainbow”. The genes for fluorescent proteins are inserted into the DNA of mice so that they are expressed in neurons. Copies of the fluorescent protein DNA become inserted into the genome at multiple locations in each cell. At each location, only one of the fluorescent proteins is expressed; which one is determined by random enzymatic digestion of sequences in the DNA. Different numbers of insertions and different levels of expression for each protein result in a unique combination of expressed coloured proteins in each cell. In the same way that a TV creates all its colours from just three, this combination gives each cell a unique colour.
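
The combinatorics alone make the point. A toy sketch, with the biology stripped away entirely, of how a few random choices per cell yield many distinct hues:

```python
import random

def random_cell_colour(copies: int = 5):
    """Pick which fluorescent protein each inserted copy expresses, then
    return the resulting red/green/blue ratio as the cell's hue."""
    levels = {"R": 0, "G": 0, "B": 0}
    for _ in range(copies):
        levels[random.choice("RGB")] += 1
    total = sum(levels.values())
    return tuple(round(v / total, 2) for v in levels.values())

print([random_cell_colour() for _ in range(3)])   # three cells, usually three different hues
```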

How is this useful? Slices of brainbow tissue can be prepared on slides and analysed using a computer. Analysis has so far resolved over 150 different colours. Each neuron has a largely uniform colour throughout, which means that not only the cell body is coloured but also the axons, allowing them to be traced along their pathways to their connections with other cells.

Having successfully labelled neurons, the team at Harvard have since done the same with another major cell type found in the brain: the glial cells. These cells have long been thought to act as little more than support cells for neurons. However, evidence is beginning to show that this may not be the whole story, and that they may play more important roles in brain function. Although fluorescent human brains are an unlikely development, the brainbow may make a significant contribution to neuroscience. If it can be used to accurately determine the connections between specific brain regions, this information could enhance and direct research that uses fMRI and other imaging techniques to learn about the living brain.

Another cross-section displaying the brain’s connectivity.

The brainbow allows neuronal cell bodies and fibres to be individually identified and their pathways studied; here is a brainbow of neurons in the hippocampus.

Brainbow

Ben Ravenhill is a second year medic


Amongst the rabble of buildings that make up the New Museums Site is a diminutive but striking piece of modernist architecture: the Mond Laboratory. The Mond is now probably best known for one peculiar feature, a leaping crocodile carved into the brick by its main entrance. After many different occupants, the building now houses the Centre of African Studies, the Mongolia and Inner Asia Studies Unit, and a rag-bag of humanities students and visitors. However, for a brief period in the early 1930s, the Mond was considered to be one of the most advanced physics labs in the world. It was one of the first in England to be built in the ‘modern’ style and, were it not for the departure of its chief scientist, it might have led to yet more Nobel Prizes for the Cavendish Laboratory.

The Mond lies in the courtyard of the old Cavendish - a laboratory within a laboratory. It was built specifically for the magnetism experiments of the Russian physicist Pyotr Kapitza (1894-1984).

Kapitza had arrived in England in 1921, as part of a Soviet project to re-establish scientific contacts with the West. He had only intended to stay for the winter to complete an experimental training course. But Kapitza quickly impressed Rutherford, the head of the Cavendish, and was given a project tracking the paths of α-particles. The results of this famous study were published in 1922 and 1924, by which time Kapitza was established in Cambridge and had earned his PhD. He had also started conducting experiments on magnetism, and at this point his career took off. His ability to combine engineering skills with theorisation led Rutherford to take him on as his protégé.

Soon Kapitza took his first steps to independence from the Cavendish, setting up the ‘Magnetic Laboratory’ in an outbuilding of the Department of Chemistry. His work was by then almost exclusively concerned with the resistance of metals in high magnetic fields. The results of his first experiments, conducted at the temperature of liquid nitrogen, were published in 1928, by which time he was working on new methods for the liquefaction of hydrogen and helium. Gradually it became clear that the scale of the research was too large to continue in such a piecemeal fashion. In addition, the grant that had funded the laboratory was running out. Kapitza’s work was saved by a £15,000 grant from the bequest of the industrialist Ludwig Mond, given to the University to create a laboratory in his name.

With this astonishing degree of freedom and backing, Kapitza was able to construct a laboratory that catered exactly to his needs.


Boris Jardine charts the history and inhabitants of the Mond Laboratory

Kapitza and the Crocodile



In order to make the most of his funds, he seized upon the sparse functionalism of architectural modernism. The project was taken on by H.C. Hughes, one of the first graduates of Cambridge’s Department of Architecture. Though Hughes’ subsequent work shows that he was keen on the new style, Kapitza drew up the initial plans, and made modifications at every stage.

The main architectural challenge arose from the conflict between the generation of intense magnetic fields, by means of a short-circuited generator, and the delicate measurement of tiny alterations in the physical properties of metals. The former caused what Kapitza called a ‘minor earthquake’ and the latter used equipment highly sensitive to vibration. Kapitza’s solution to the problem was ingenious: because the magnetic field was only generated for 1/100th of a second, the seismic effects of the short circuit could be negated by placing the sensitive apparatus sufficiently far away. The total distance required was about 20 metres; the measurements would be completed before the far end of the room started shaking. In addition, the steel-framed building is in two barely-connected halves, so as to minimise the transmission of vibrations.

The sensitive piece of kit that all of this was intended to protect was an extensometer designed by Kapitza himself. It consisted of a small vessel, suspended in and containing oil, which was connected to a sample of metal at one end, with a diaphragm at the other. The diaphragm had a tiny hole, through which any oil displaced by changes in the sample would pass. This oil would then press against a tiny articulated mirror, which diverted a beam of light in such a way as to allow a reading to be taken; the alterations in the sample were to be magnified roughly 100,000 times. Needless to say, the extensometer was very sensitive. Indeed, as Kapitza suggested, one could turn this into a benefit and use it as a very precise seismograph. Unsurprisingly, the set-up was elaborate. In addition to placing the apparatus in a vibration-damped environment at one end of the building, Kapitza had to construct it on a “massive slate plate, suspended from the ceiling by means of four thin bronze wires”. The extensometer was literally built into the Mond; and the Mond, as the only possible setting for the instrument, was a part of the extensometer. Perhaps the most startling architectural features of all were the roofs of the liquefaction rooms, which were constructed in such a way that they would quickly disintegrate in case of an explosion. Kapitza would tell nervous visitors about a recent catastrophe in a German lab without such precautions, in which debris was spread for miles around.

In spite of this remarkable set-up, Kapitza completed only a handful of experiments in the Mond. In the summer of 1934 he visited Russia with his wife Anna, but when he tried to return in October he was refused permission. The Soviet government had decided that his work should be incorporated into the second five-year plan and he was offered funds to put together a replacement research institute on the outskirts of Moscow. In 1937, the equipment Kapitza had assembled in Cambridge was shipped over, and he returned to Cambridge only once more, over thirty years later.

After Kapitza’s untimely departure from Cambridge, the Mond had various uses.

In its current state - offices, seminar rooms and a library - there is little evidence of its previous life. In today’s climate of building preservation, which favours facades over interiors, the crocodile has become the building’s main emblem. It was designed by the artist and typographer Eric Gill.

The crocodile has bewildered its audience. Maybe it was supposed to be Rutherford, who himself was famously ‘snappy’? Even Kapitza seemed unsure. Sometimes he said that “in Russia the crocodile is the symbol for the father of the family and is also regarded with awe and admiration because it has a stiff neck and cannot turn back”. However, he also likened Rutherford to the crocodile in Peter Pan. The last word, I think, should be Gill’s own assessment, certainly the most mischievous of all. At the opening of the building, in February 1933, he delighted in telling the assembled reporters that the crocodile was not Rutherford at all, but stood for “science devouring culture”.


Boris Jardine is a PhD student in the Department of History and Philosophy of Science


Pyotr Leonidovich Kapitza (1894 – 1984)

The Mond Laboratory, New Museums Site, Cambridge.


Since his early work on a Norwegian glacier in the 1980s, Dr Ian Willis has worked in Canada, Switzerland, Alaska, New Zealand, Iceland and Svalbard, and will be visiting Greenland later this summer, trying to understand how our planet’s glaciers and ice sheets work, how they are changing, and how they might change in the future.

When Ian is not hiking up glaciers, he works in the Scott Polar Research Institute, researching and teaching undergraduates and Masters students.

What does your research involve?

I investigate the mass balance of the world’s land ice. Like many glaciologists, I am particularly interested in whether ice masses are growing or shrinking and what controls this. Using a combination of computer modelling, airborne remote sensing and ground-based instrument data, we are able to map the changing extent of glaciers and ice sheets and how they might change over the next few decades in response to climate change. I am also interested in the hydrology of ice masses and their dynamics; in other words, how water moves through them and the effects this has on their movement.

How much time do you spend between research and teaching?

About half and half. Of course, there is quite a lot of administration associated with both teaching and research.

What about fieldwork?

I typically spend a few weeks or months each year doing fieldwork for my research. For example, this year I have trips planned to Greenland and Svalbard in Arctic Norway. Teaching also involves fieldwork. In recent years I have been lucky enough to take undergraduate students to the Arolla Glacier in Switzerland. It is a great opportunity for the students to learn about the techniques glaciologists use to measure the mass balance, hydrology and dynamics of glaciers and to see first-hand the changing landscape of the Alps as the climate shifts and the glaciers, rivers and vegetation respond.

What does a field trip for you entail, and what special training do you need before you explore these inhospitable places?

Field trips are all quite different, depending on where I’m working. Next month I’ll be working on a glacier called Midre Lovénbreen in Svalbard. The glacier is close to a research base at an old mining settlement called Ny Ålesund, which, at around 79°N, is one of the world’s northernmost settlements. The set-up there is relatively comfortable because the infrastructure has been developed over many years to cater for the large scientific community who work there. It is not just glaciologists who find Ny Ålesund a perfect base for their research, but also oceanographers, biologists, and atmospheric and space scientists.

Usually we fly by jet to Longyearbyen via Oslo and from there, via a small twin-propeller aeroplane, we fly to Ny Ålesund. Here, there is nothing but a lot of ice and a few polar bears between you and the North Pole. Expeditions from here are by skidoo, pulling a sled that carries our scientific instruments. The only training we really need is to be able to drive a skidoo and fire a rifle, as there is always a chance of an unexpected encounter with a polar bear! The days on the glacier are always exhausting, as we have a finite time to collect all the data and, although the work is repetitive, we have a lot to get through. This means that at the end of a long day in the Arctic we don’t usually notice the 24 hours of daylight during the summer months, and we sleep very well back at the base.

Field trips are not always this cosy. In the late 1990s, I worked on the Arolla Glacier in Switzerland. Arolla is one of the highest traditional villages in the Alps, at an altitude of about 2000 m. The infrastructure there was not at all comparable to Svalbard’s, and we camped in fairly rough and basic conditions. We really embraced the bitterly cold wilderness up there, but when the weather was good there was nowhere in the world I wanted to be more.

For more information visit: www.spri.cam.ac.uk


Beth Ashbridge meets Ian Willis from the Scott Polar Research Institute


A DAY IN THE LIFE OF...

Beth Ashbridge is a PhD student in the Department of Chemistry


A Glaciologist


An internship had always seemed like the perfect opportunity to get some sunshine and take a couple of months out of my PhD while keeping my supervisor happy. Browsing company websites, I stumbled across some recent television advertisements by ExxonMobil. One in particular caught my eye, the subject of which was lithium-ion batteries for electric vehicles.

Dubbed “today’s ultimate battery”, lithium-ion batteries provide power to many technologies, including mobile phones and laptops. They are portable, have high energy densities and hold charge well when not in use, leading to their use in hybrid cars.

This sounded interesting - the biggest publicly traded oil company in the world was doing something green. I decided to be direct and sent off an email. They responded, and here I am.

My research lab is just outside of Houston, Texas, in amongst the towering oil refineries. I’m here for three months to investigate various aspects of battery separator films, the role of which is to prevent short-circuiting between electrodes (see box). Part of the reason I decided to do this internship was to experience how life might be if I take the industry route at the end of my PhD.

The research process works a little differently here. Researchers submit samples to highly qualified technicians who then perform the experiments. The researchers subsequently analyse the results, plan future experiments accordingly and finally report to managers, who decide which projects have promise and which should be dropped. At ExxonMobil most of the researchers have PhDs in chemistry or engineering – it is a chemical company after all – but there is one other physicist here so I am not completely alone.

Overall the experience has been a good one. I now have a lot more insight into how industry works and a much greater knowledge of battery separators! In terms of everyday life, it’s not all that different to being in the lab in Cambridge - except that the money is better.


Katherine Thomas decides to take some time away from Cambridge

Katherine Thomas is a PhD student in the Department of Physics

AWAY FROM THE BENCH

Recharging Research

Analysing images using optical microscopy


Lithium-ion batteries are made from four layers: the negative anode, the positive cathode and two polymer separators. The layers are pressed together, allowing lithium ions to be transferred between the anode and the cathode through a liquid electrolyte. The separators prevent electronic contact between the anode and the cathode, stopping the system from short-circuiting, while still allowing the ions to flow. The separator design is critical: if the battery overheated, we would want the pores of the separator to close, forming a barrier between cathode and anode, rather than allowing contact and potentially resulting in a fire. The polymer separators ExxonMobil have developed offer enhanced permeability, a higher meltdown temperature and better melt integrity. Thanks to the better permeability, the lithium ions can flow more easily, meaning that energy can be provided more quickly, while the higher meltdown temperature means that the thermal safety margin of the battery is increased. For electric cars both are very important. Nobody is going to drive an electric vehicle if they have to carry around a boot full of spare batteries.


Lithium-Ion Batteries

The structure of a lithium-ion battery: 1. Anode, 2. Polymer separator, 3. Cathode, 4. Polymer separator



Humans like order: regular patterns and straight lines. A quick glance around our homes and offices shows that most of what we see can be described using simple shapes - circles, triangles and squares - that are easily described and defined mathematically. However, objects in nature cannot be described so simply. From afar, a mountain might resemble a triangle, but as we look closer, it becomes apparent that the edges are not smooth. The rough, random details of the natural world were long thought too intricate and complex to be described accurately using mathematical formulas, until some pioneering work in the field was carried out.

In the 1970s, the work of the mathematician Benoît Mandelbrot pointed towards a solution. He realised that many natural objects show self-similarity. Zoom in on one of these complicated objects and a new picture emerges that is strikingly similar to the original. Zoom in again and the same is true. At first this seems like an odd concept, but a tree is a classic example. If you study a tree, you see the trunk with branches radiating from it. Each branch in turn is like a mini-tree, with protruding sub-branches. The sub-branches themselves have further sub-branches sprouting from them. If you look closely at any portion of the tree it looks remarkably like the original. This property of self-similarity is one defining characteristic of a group of objects known as fractals, a term Mandelbrot coined from the Latin fractus, meaning ‘broken’ or ‘fractured’, to describe these unique shapes.

The mathematics behind fractals actually began 100 years earlier with the discovery of ‘monsters’ – strange objects that have simple beginnings but quickly become too difficult to describe. The first was the Cantor set (see diagram), created by Georg Cantor in 1883. He took a straight line, broke it into thirds and removed the middle third, leaving two lines. He then repeated the process with those two lines, breaking them into thirds and removing the middle sections. He did this over and over again. This simple procedure creates an endlessly repeating pattern that reveals itself as you zoom in on any section of the Cantor set.
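
The construction translates directly into a few lines of code. A minimal sketch (not from the article), keeping the two outer thirds of every interval at each step:

```python
def cantor(intervals, depth):
    """Apply the middle-third removal 'depth' times to a list of (start, end) intervals."""
    for _ in range(depth):
        next_level = []
        for a, b in intervals:
            third = (b - a) / 3
            next_level += [(a, a + third), (b - third, b)]   # keep the outer thirds
        intervals = next_level
    return intervals

print(cantor([(0.0, 1.0)], 3))   # 8 intervals, each 1/27 of the original line
```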

A similar phenomenon is known as the Koch snowflake (see diagram), and presents something of a paradox. Created by Helge von Koch in 1904, the shape is described by a line that to the eye appears to be finite, but mathematically is of infinite length. To understand how this can happen we must look at how the snowflake is created. Starting with an equilateral triangle, each side is split into thirds, and the middle section is removed and replaced with two lines meeting at an apex. Each time you repeat this process, every straight segment is replaced by pieces that are longer in total than the original, and so the perimeter of the shape increases. If you repeat this an infinite number of times, the line becomes infinitely long.
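
Each step multiplies the number of segments by four and the total length by 4/3, so the perimeter grows without bound. A quick illustration, starting from a triangle with sides of length one:

```python
perimeter, segments = 3.0, 3            # an equilateral triangle with unit sides
for step in range(1, 6):
    segments *= 4                       # every segment becomes four smaller ones
    perimeter *= 4 / 3                  # ...whose total length is a third longer
    print(f"step {step}: {segments} segments, perimeter {perimeter:.3f}")
```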

The Koch snowflake’s infinite length helped to solve a problem that had been affecting the measurement of coastlines. In the 1940s, the British scientist Lewis Richardson collated various measurement data for a single coastline. He noticed that if you measure the coastline of Britain with a 100 metre scale, say from a boat, you get one answer. However, if you were to walk around the coastline using a metre rule, you would include more of the indentations in the land, resulting in a longer measurement. In summary, the more detail you incorporate, the longer the coastline.

Yet another of the monsters, developed by the French mathematician Gaston Julia, became the particular interest of Mandelbrot. Julia took a simple equation and added a feedback loop, so that each result it gave was fed back into the original equation to produce the next one. Julia tried to make sense of the output but could see no emerging patterns and was limited by the number of points he could generate.

Mandelbrot was working at IBM when he began studying the ‘Julia set’ and was able to do something not previously possible: he used computer technology to repeat the iteration millions of times and graph the result. He noticed a pattern begin to appear and decided to combine many Julia sets into one striking image. This image, known as the Mandelbrot set (see picture opposite), has become the emblem for fractal geometry.
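
The feedback loop itself is remarkably short to write down. A bare-bones escape-time test, of the kind used to draw the familiar image point by point:

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Feed z*z + c back into itself and see whether the value stays bounded."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:        # once |z| exceeds 2 it is guaranteed to escape
            return False
    return True

print(in_mandelbrot(complex(-1, 0)))     # True: -1 lies inside the set
print(in_mandelbrot(complex(1, 0)))      # False: 1 escapes almost immediately
```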


Lindsey Nield looks into the mathematics of repeating patterns


Rules of Repetition

ARTS AND REVIEWS

Lightning is an example of fractals in nature


Artists and designers all over the world welcomed the visual potential of fractals. They revolutionised the world of computer graphics and special effects, allowing the detail and realism previously missing to be incorporated with a little iteration. The epic final fight scene in Star Wars III would not be complete without jets of lava spurting up around the two battling heroes. The lava was given its realistic appearance by the application of fractal design. A swirl effect was applied to a lava jet, and was then repeatedly miniaturised and reapplied. When all the layers were added together, it gave the lava a texture that looked like the real thing.

The popularity of fractals was viewed with scepticism by some mathematicians, who thought they were just an artefact of computers. However, when Mandelbrot published his book The Fractal Geometry of Nature in 1982, he proved that fractals, in some form, are all around us.

Fractal-like patterns can be found throughout nature, from blood vessels in our bodies, to lightning in the sky. You may wonder what good it does to describe these beautiful natural objects with mathematics, but fractals have proven to be useful in many branches of science.

One example is the fractal antenna. The inventor, Nathan Cohen, heard Mandelbrot speak about fractals at a conference and wondered how the strange shapes might work as antennas.

So he made one in the shape of the Koch Snowflake. The antenna worked surprisingly well and enabled him to reduce its size dramatically. He discovered that fractal antennas can receive a greater range of frequencies than the norm, and found an application in mobile phones. Features such as Bluetooth and Wi-Fi each run on a separate frequency and without a fractal antenna, a phone would need at least two antennas. Since their discovery, fractal antennas have been implemented in telecommunications all over the world.

Fractals also have some promising uses in medicine. Ary Goldberger of the Harvard Medical School found that a healthy heartbeat has a fractal pattern when investigated over different time scales. This signature may help doctors to spot heart problems. At the University of Toronto, biophysicist Peter Burns is using fractals in tumour research. When a tumour first develops, a network of tiny blood vessels forms that conventional techniques are not powerful enough to image. Using fractal geometry Burns modelled the blood flow through normal, neatly bifurcating vessels and through chaotic, tangled tumour vessels and found a significant difference that may be valuable in tumour detection in the future.

Perhaps the most ambitious use of fractals is by a group from the University of Arizona who are trying to predict how much carbon dioxide an entire rain forest can remove from the atmosphere. They have found that the distribution of large and small trees in the forest closely resembles the distribution of large and small branches on a single tree.

By analysing these fractal patterns and measuring how much carbon dioxide a single leaf can take in, they can scale up to predict how much the whole forest can absorb.

Fractals, at first view, are complex, irregular patterns but they can start out with the simplest of processes. Mother Nature herself has used them repeatedly to create the world around us. Mathematics may at times seem abstract, but fractal geometry proves that it is integral to the beauty and workings of our planet.


Iterations of the Cantor set (top) and the Koch snowflake.


Lindsey Nield is a PhD student in the Department of Physics

Zooming in on the Mandelbrot set reveals repetition of the shapes


Nearly 2000 years ago, the famous Roman physician and anatomist Galen conducted a series of groundbreaking experiments on the human body. Galen was probably the most accomplished medical researcher of the Roman period and a man not overly troubled by self-doubt. He knew that his work would revolutionise medicine and he wanted everybody else to know it. Thus, upon presenting his seminal work to other physicians, he included repeated reminders that he had been the first to make such discoveries. He declared that his rivals were wholly incapable of doing the same, pronouncing them ‘lazy’ and ‘ignorant’ and declaring his work to be “as superfluous to them as a tale told to an ass.”

Galen may have used unusually cutting language, but he was right to worry about his intellectual legacy. The history of science is full of misattributed discoveries and stolen credit.

Take Carl Scheele’s particularly tragic tale of obscurity. An 18th-century Swedish chemist and pharmacist, Scheele discovered eight elements, including nitrogen, oxygen and chlorine, but received credit for none. In some cases his findings were simply overlooked (it seems that Swedish is not a language conducive to world renown). In others, he lost out to rival scientists who made independent discoveries and published first. A brilliant experimentalist with the unfortunate habit of tasting each chemical he encountered, Scheele accidentally brought his own tale to a premature end during his 43rd year, when his assistants found him dead at his workbench, surrounded by numerous toxic chemicals. A possibly apocryphal ending to Scheele’s ill-fated story is that he was to be ennobled by Gustavus III for his discoveries, but the honour was instead mistakenly given to an obscure soldier of the same name.

Scheele seems to have been unusually unlucky, but his story does share a commonality with other tales of misattribution. Credit disputes often arise when many scientists are independently working on a hot scientific problem. In Scheele’s case, a number of prominent scientists, including Antoine Lavoisier and Joseph Priestley, were investigating the increasingly controversial ‘phlogiston theory’, as a result of which both independently discovered oxygen. Their relations were amicable, but in other cases the competition has led to fierce battles. Robert Koch and Louis Pasteur fought bitterly for the credit for discovering the cause of anthrax, while a more recent struggle took place between Robert Gallo and Luc Montagnier over the discovery of the human immunodeficiency virus. Indeed, sometimes the investigative heat and hunger for recognition have led to cases of misattribution that involve as much malice as misfortune.

This was certainly the case during the early days of geology and paleontology, a field rife with misplaced recognition and lost fame. It wasn’t until the early 19th century, some thirty years after the first likely discovery of dinosaur bones, that geologists started to recognise that they were dealing with unique, prehistoric species. But when they did, the hunt was on.

One eager fossilist was a man named Gideon Algernon Mantell, a country physician with a passion for seeking and collecting fossils, who happened upon some large teeth he suspected were of prehistoric origin. However, the leading geologist of his time, Cuvier, dismissed them as rhinoceros teeth, and Mantell’s friend the Reverend William Buckland cautioned Mantell to publish only after he was certain (during which time, it should be noted, the same Reverend Buckland went ahead and published his own finding of another giant prehistoric creature, which he imaginatively named the ‘megalosaurus’).


Natalie Vokes exposes tales of misattributed scientific discoveries

HISTORY
Credit Crunch

“Geology and paleontology are rife with misplaced recognition and lost fame”

“Scheele had the unfortunate habit of tasting each chemical he encountered”

Antoine-Laurent Lavoisier (1743 - 1794)

Engraving from William Buckland’s “Notice on the Megalosaurus or great Fossil Lizard of Stonesfield”, 1824.



It should be noted that, during that time, the same Reverend Buckland went ahead and published his own finding of another giant prehistoric creature, which he imaginatively named the ‘megalosaurus’. Mantell therefore spent three years carefully gathering evidence and attempting, mostly unsuccessfully, to convince his peers that the teeth belonged to a previously undiscovered species from the Mesozoic era.

Though his interpretation was eventually confirmed and his species named the iguanodon, Mantell became so consumed by his hobby that he neglected his medical practice and was ultimately forced to sell off his large collection of fossils to avoid financial ruin. Even so, he continued to slide into destitution and his wife abandoned him in despair. Ruined and alone, Mantell then had the misfortune to fall from a moving carriage. He became entangled in the reins and was dragged behind the horse for some distance, leaving him with chronic, debilitating pain and a crooked spine.

Mantell’s misfortunes were heavy indeed, but at this point he was still recognised for his contributions to geology and for the discovery of several new species, especially the iguanodon. But geology was an extremely competitive field, and by that time a particularly fame-hungry, ruthless anatomist named Sir Richard Owen had been quietly and illicitly claiming what credit he could. In fact, Owen had famously opposed Mantell’s assertions that the iguanodon was a new reptilian species, just as he had tried to ruin the careers of other promising young scientists. Thus, when Mantell suffered his accident, Owen took advantage of Mantell’s weakened condition to expunge his contributions from the record, renaming and claiming the species Mantell had discovered. When Mantell died in 1852 from an opium overdose, Owen added insult to injury and had a section of Mantell’s twisted spine removed, pickled, and stored on a shelf of the Royal College of Surgeons.

Like Scheele, Mantell was unusually unlucky, and like Scheele, he worked in a highly competitive field. But he also had another card stacked against him: he was attempting to work outside the circle of accepted experts. Though science was far less institutionalised then than it is now, and amateur scientists were far more common, Mantell’s interpretations had to be confirmed by those in positions of authority, and that put him in a weaker position. Indeed, it is now widely recognised that prestige and power go a long way to securing an individual’s scientific reputation. A sociologist named Robert Merton coined the term ‘Matthew Effect’ to describe this phenomenon, whereby more prominent scientists receive more credit than their less established peers.

The Matthew Effect is especially prominent in contemporary science, where research is often carried out by a team of investigators. The injustice toward Rosalind Franklin is well known, but a similar example is that of pulsars, hailed as the most important astronomical discovery of the 20th century. Though the graduate student Jocelyn Bell actually carried out the immediate research, she did not receive the Nobel Prize along with her Cambridge supervisor Anthony Hewish. Bell has publicly stated that it would have been inappropriate for her to receive the award for work she did as a PhD student, but there has been considerable controversy nonetheless.

Of course, for the less powerful, there is one way to secure long-term recognition: write the history of the discovery. This tactic has helped many secure their place in posterity, as many of the most famous scientists were also excellent rhetoricians and writers of history. To be sure, a written history may not secure fame during one’s lifetime, but with an engaging history perhaps one can establish a more enduring notoriety. So to all you scientists out there – developing your writing may be as essential as your pipetting. And watch out for nefarious fossil collectors.


Natalie Vokes is a Part II student in the Faculty of Philosophy

“Mantell was attempting to work outside the circle of

accepted experts”

Carl Wilhelm Scheele (1742 – 1786)

Joseph Priestley (1733 - 1804)

In science, the credit goes to the man who convinces the world, not to the man to whom the idea first occurs.

- Sir Francis Darwin (1848-1925), son of Charles Darwin


THE PAVILION

JELLYFISH BURGER, 2009, Digital Composite. DAVE BECK, digital artist, www.davebeck.org, and JENNIFER JACQUET, marine scientist, www.scienceblogs.com/shiftingbaselines


Up and down the country there are hundreds of engineers working on a multitude of projects that must take climate change into account. Despite a relatively small pool of data sources in the UK, the data can be bewildering to the uninitiated. Even in the UK, where we have strong institutions, educated, committed professionals and a regulatory framework that is pushing for climate adaptation and preparedness, there is a gulf between climate science and the corresponding engineering challenges.

Predicting reservoir yields is important for keeping bills low and waste minimal. Mott MacDonald Ltd works with several UK water companies to help produce ‘Water Resource Management Plans’ that consider climate change. Mott MacDonald uses regional rainfall predictions to build detailed models and, by factoring in precipitation, river flow and groundwater levels, can estimate reservoir yields. However, uncertainties in rainfall prediction can give unreliable results. Underestimating yields results in unneeded storage provisions, paid for through increased water bills. On the other hand, if companies overestimate their yield, there is insufficient water to meet demand, causing water rationing and supply disruption. The key is accurate models that minimise these uncertainties.
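To make the yield question concrete, here is a minimal sketch of the kind of calculation involved: a toy monthly water balance driven by rainfall that is perturbed within an assumed plus-or-minus 30 per cent band. The rainfall figures, runoff coefficient, demand and reservoir size below are invented for illustration; the real planning models are far more detailed, factoring in river flow and groundwater as described above.

```python
import random

def simulate_supply(rainfall_mm, runoff_coeff=0.4, demand_mm=55.0, capacity_mm=400.0):
    """Toy monthly water balance: fraction of demand met over the rainfall series."""
    storage = capacity_mm
    supplied = 0.0
    for rain in rainfall_mm:
        storage = min(capacity_mm, storage + runoff_coeff * rain)  # catchment inflow
        take = min(demand_mm, storage)                             # abstract up to demand
        storage -= take
        supplied += take
    return supplied / (demand_mm * len(rainfall_mm))

# Invented baseline monthly rainfall (mm). Each Monte Carlo run scales the whole
# series by a single factor drawn from an assumed +/-30% uncertainty band.
baseline = [78, 55, 60, 48, 45, 50, 44, 52, 60, 70, 77, 80]
results = []
for _ in range(1000):
    factor = random.uniform(0.7, 1.3)   # one draw of the assumed rainfall uncertainty
    results.append(simulate_supply([factor * r for r in baseline]))
results.sort()

print(f"median fraction of demand met: {results[500]:.2f}; "
      f"5th-95th percentile: {results[50]:.2f}-{results[950]:.2f}")
```

Even this crude sketch shows how a wide rainfall band translates into a spread of yield estimates, which is exactly the spread a water company must plan (and charge) for.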

Although uncertainty in climate modelling is becoming more transparent, it is still present. The Intergovernmental Panel on Climate Change (IPCC) currently puts sea level rise by 2099 at somewhere between 18 and 59 centimetres. Put in context, for a family in the East of England this can make the difference between staying put and having to relocate away from rising seas. Similarly, by 2080 we can expect London summers to be like those in southern France today, and we have to ensure that buildings are able to cope with rising temperatures.

Engineers have a reputation for simplifying where possible: reducing systems to a black box and using what comes out. They have a tendency to treat climate data in the same way. When they indicate that Mediterranean summers will come to London by 2080, what this actually means is that this is true according to the 2002 UK Climate Impacts Programme medium-high emissions scenario for 2080 when run on the Hadley Centre version three model, and that the predicted temperature rise of six degrees is accurate only to within one and a half degrees Celsius. This is just one of four possible UK Climate Impacts Programme scenarios. The water companies’ precipitation predictions for the same period have an uncertainty margin of 30%, making planning water resources particularly challenging.

Current models are a good start, but IPCC calculations do not consider the release of greenhouse gases from thawing tundra, nor do their sea level calculations account for carbon cycle feedback (their temperature predictions do). Climate models are, of course, calibrated on current conditions and assume continuing validity, which makes it very difficult to account for tipping effects and future forcing mechanisms as our climate changes.

At a local level we must work on the regionalisation of models. Even basic dynamic downscaling of global models is currently computationally expensive. And as models become more regional, the results become population-specific and must account for local changes. These may include river geometry, local glacier area changes, increasing city sizes, changing land use and ecology, and the feedback interactions between these factors.

The engineering sector faces a huge challenge as it helps the world prepare for climate change. As a sector we need to educate our professionals on how to deal with climate change data. The models engineers use will need to consider changes in land cover, ecology, glaciation, population, industrialisation and many more. There is also a desperate need for models that express their uncertainty in a meaningful, regionalised way that can be presented to governments and private clients to inform policies and practices.
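As a purely illustrative sketch (the article does not prescribe any format), the kind of regionalised, uncertainty-aware record being called for might look something like the following, where a headline figure is never separated from its scenario, model and error band. The class and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ClimateProjection:
    """Hypothetical record keeping a headline figure tied to its provenance."""
    variable: str          # e.g. "summer mean temperature change (deg C)"
    region: str            # e.g. "London"
    horizon: int           # target year, e.g. 2080
    scenario: str          # e.g. "UKCIP02 medium-high emissions"
    model: str             # e.g. "HadCM3"
    central_estimate: float
    uncertainty: float     # half-width of the quoted band, same units

    def as_range(self) -> str:
        lo = self.central_estimate - self.uncertainty
        hi = self.central_estimate + self.uncertainty
        return (f"{self.variable} for {self.region} by {self.horizon}: "
                f"{lo:g} to {hi:g} ({self.scenario}, {self.model})")

# The 'Mediterranean summers by 2080' headline, restated with its caveats attached.
print(ClimateProjection("summer mean temperature change (deg C)", "London", 2080,
                        "UKCIP02 medium-high emissions", "HadCM3", 6.0, 1.5).as_range())
```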

Engineers dealing with climate change work in an exciting and engaging sector, one that requires both innovative engineering and rigorous, well-communicated science to help prepare us all for a more uncertain future.


“Engineers have a reputation for simplifying where possible”

Ian Ball explores the interplay between engineering and science

INITIATIVES
Engineering the Weather

Ian Ball recently graduated as a Part III in the Department of Engineering


Flooding in Horncastle, Lincolnshire, in 2007.

“The data can be bewildering to the uninitiated”


Dear Dr Hypothesis,
I’ve just spent the last of my phone credit trying to navigate a useless automatic call centre system. Is there no hope on the horizon for its improvement?
Creditless Colin

DR HYPOTHESIS SAYS:
The future you are waiting for lies in a technology called Artificial General Intelligence. This kind of program starts with a set of parameters that you or I take for granted, such as the sky being blue or basic mathematical logic. It then applies this knowledge, together with a contextual database of past choices, to build something you might identify as Artificial Intelligence.

One firm has recently launched the first commercial program of this kind, specifically targeted at providing a phone service. Known as SmartAction, the system can understand, for example, the implied difference between ‘he’, ‘she’ and ‘it’, the idea being that you can talk to it as you might a person. In the meantime, though, while the technology finds its feet, you’re still far better off talking to a human!

Dear Dr Hypothesis,
I’m a busy girl, and one thing I hate waiting around for is my phone and laptop to charge. Is there no faster way of doing it?
Chatty Caroline

DR HYPOTHESIS SAYS:
The lithium-ion batteries in your phone and laptop may soon be replaced by cheaper lithium iron phosphate (LiFePO4) batteries. The problem with these lies in the interaction between the lithium ions and the cathode, where the lithium must enter or leave via tiny pores. This lag slows the process of charging and discharging considerably.

However, three new technologies are here to help. The first is coating the cathode with carbon: the electron-cloud surface allows easier movement of the ions across the surface in search of a suitable pore. The second is another coating, this time of lithium phosphate, again allowing greater ease of movement. The last uses a cluster of nano-balls as the electrode, greatly increasing the surface area and thereby the number of pores available to the lithium ions. This wouldn’t just have ramifications for your mobile and laptop, which would be able to charge fully in seconds, but also for hybrid and electric cars, which would be able to charge in minutes.
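A quick piece of geometry (an illustrative calculation, not from the letter itself) shows why subdividing the electrode helps so much. For one sphere of radius $R$, the volume is $V = \tfrac{4}{3}\pi R^3$ and the surface area is $A = 4\pi R^2$. Splitting the same volume into $n$ smaller spheres gives each a radius $r = R/n^{1/3}$, so the total area becomes

$$A_n = n \cdot 4\pi r^2 = 4\pi R^2 \, n^{1/3}.$$

A thousand-fold subdivision therefore exposes roughly ten times the surface, and with it roughly ten times as many accessible pores.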

Dear Dr Hypothesis,
I’m a keen advocate of renewable energy, but my understanding is that a lot of the electrical energy we produce is wasted as heat when it’s transmitted. Is there no way of improving the system?
Sparky Simon

DR HYPOTHESIS SAYS:
There are several projects currently underway to improve power transmission. The one that instantly springs to mind is the use of superconducting cables. These are made of specific metal-ceramic mixtures which, when cooled to low temperatures, offer almost no resistance, reducing the energy lost as heat.

However, this can be improved yet further with the use of High Voltage Direct Current (HVDC). Although Thomas Edison’s direct current (DC) originally lost out to Tesla’s alternating current (AC) because down-transformers for DC didn’t exist, we now have that technology. Using DC over long distances reduces the capacitance effect seen with AC: the induction of small stray fields outside the cable, which is greatly increased when the cable runs in water or underground.
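A standard back-of-envelope relation (general circuit reasoning, not something specific to the cables described here) also shows why the ‘high voltage’ part matters. For a line of resistance $R$ delivering power $P$ at transmission voltage $V$,

$$P_{\text{loss}} = I^2 R = \left(\frac{P}{V}\right)^2 R,$$

so raising the voltage tenfold cuts the resistive heating a hundredfold; superconducting cables then attack what remains of $R$ itself.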

Better still, superconducting HVDC cables currently in testing (at Chubu University, Japan) can use their magnetic field as a power store, flattening out the current delivered, which would be ideal for the intermittent generation from renewable energy sources.


LETTERS
Dr Hypothesis

Email Dr H with all your scientific conundrums: [email protected]
