
Science & Technology Review
Lawrence Livermore National Laboratory
P.O. Box 808, L-664
Livermore, California 94551

October 1996

Assessing Global Climate Change

Also in this issue: Six New R&D 100 Awards

Nonprofit Org. U.S. Postage PAID, Livermore, CA, Permit No. 154. Printed on recycled paper.


About the Review

Lawrence Livermore National Laboratory is operated by the University of California for the Department of Energy. At Livermore, we focus science and technology on assuring our nation's security. We also apply that expertise to solve other important national problems in energy, bioscience, and the environment. Science & Technology Review is published ten times a year to communicate, to a broad audience, the Laboratory's scientific and technological accomplishments in fulfilling its primary missions. The publication's goal is to help readers understand these accomplishments and appreciate their value to the individual citizen, the nation, and the world.

About the Cover

The stylized map is taken from our feature article on global climate modeling. Figure 5a in the article shows the ten-year mean surface temperature variability in December, January, and February from 1979 to 1988. Using massively parallel processing computers at Livermore, the Laboratory performs calculations of this type in research coordinated with the international Atmospheric Model Intercomparison Project.

Please address any correspondence (including name and address changes) to S&TR, Mail Stop L-664, Lawrence Livermore National Laboratory, P.O. Box 808, Livermore, California 94551, or telephone (510) 422-8961. Our electronic mail address is [email protected].

Prepared by LLNL under contract No. W-7405-Eng-48

Electronic Access

S&TR is available on the Internet at http://www.llnl.gov/str. As references become available on the Internet, they will be interactively linked to the footnote references at the end of each article. If you desire more detailed information about an article, click on any reference that is in color at the end of the article, and you will connect automatically with the reference.

What Do You Think?

We want to know what you think of our publication. Please use the enclosed survey form to give us your feedback.

S&TR Staff

SCIENTIFIC EDITORS

Becky Failor and Ravi Upadhye

PUBLICATION EDITOR

Sue Stull

WRITERS

Arnie Heller, Dale Sprouse, Katie Walter, and Gloria Wilt

ART DIRECTOR

Kathryn Tinsley

DESIGNERS

Ray Marazzi and Kathryn Tinsley

GRAPHIC ARTIST

Treva Carey

COMPOSITOR

Louisa Cardoza

PROOFREADERS

Ellen Shorr and Al Miguel

S&TR is a Director's Office publication, produced by the Technical Information Department, under the direction of the Office of Policy, Planning, and Special Studies.

The Laboratory in the News

Patents

Commentary on the Importance of Climate Change

Feature
Assessing Humanity's Impact on Global Climate
Computation expertise developed at Livermore is being applied to the challenging task of understanding and predicting changes in global climate.

Research Highlights
Livermore Wins Six R&D 100 "Oscars"
Electronic Dipstick Signals New Measuring Era
Signal Speed Gets Boost from Tiny Optical Amplifier
SixDOF Sensor Improves Manufacturing Flexibility
A Simple, Reliable, Ultraviolet Laser: the Ce:LiSAF
Giant Results from Smaller, Ultrahigh-Density Sensor
Thinner is Better with Laser Interference Lithography

Abstract


Printed in the United States of America

Available from National Technical Information Service, U.S. Department of Commerce, 5285 Port Royal Road, Springfield, Virginia 22161

UCRL-52000-96-10. Distribution Category UC-700. October 1996.


The Laboratory in the News

Technique to hunt dark matter could search for planets

An astrophysics technique used to search for "dark matter" may prove valuable in finding planets orbiting suns near the center of our galaxy. That is the view of Laboratory scientists David Bennett and Sun Hong Rhie in a paper for the Astrophysical Journal.

Bennett is part of an international team that is searching for dark matter: nonvisible astrophysical objects—such as black holes, white dwarfs, brown dwarfs, and neutron stars—that are estimated to account for 90% of the gravitational mass in our galaxy. Rhie is an expert on theoretical aspects of gravitational lensing by double stars and planetary systems.

In hunting for dark matter, astrophysicists use a method known as microlensing, in which gravity from a large dark object passing in front of a distant star makes the light from that star appear brighter for a time. This change in brightness can be graphed in the form of a light curve, typically a bell shape.

If the same search technique is oriented toward the central "bulge" of our galaxy, Bennett and Rhie say, faint stars could be used to microlens more distant and brighter stars. A planet associated with the faint star could then be detected within the resulting light curve. The planet would appear as a brief modulation of the bell-shaped curve.
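To make the shape of such an event concrete, the sketch below evaluates the standard point-lens magnification A(u) = (u^2 + 2)/(u sqrt(u^2 + 4)), where u is the lens–source separation in Einstein radii. The event parameters are illustrative assumptions, not values from the Bennett–Rhie paper.

```python
# Point-lens microlensing light curve: a symmetric, bell-shaped
# brightening as the lens-source separation u(t) shrinks and grows.
import numpy as np

def magnification(u):
    """Standard point-lens magnification A(u)."""
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

t = np.linspace(-40.0, 40.0, 201)    # days relative to closest approach
u0, t_einstein = 0.3, 20.0           # illustrative impact parameter, timescale
u = np.sqrt(u0**2 + (t / t_einstein)**2)
light_curve = magnification(u)
print(round(light_curve.max(), 2))   # peak magnification ~3.45 for u0 = 0.3
```

A planet near the lensing star would appear as a short-lived departure from this smooth curve.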

The work by Livermore astrophysicists shows that an ambitious microlensing program could detect planets ranging from the mass of Jupiter down to less than ten Earth masses. While there are several other techniques for finding planets, microlensing appears to be the only ground-based technique that is sensitive to small planets.
Contact: David Bennett (510) 423-0656 ([email protected]) or Sun Hong Rhie (510) 423-0660 (sunhong@igpp.llnl.gov).

Livermore provides spectrometer to Poland

Polish border guards are in a better position to do on-the-spot analysis of suspicious materials entering or leaving their country, thanks to a portable gamma-ray spectrometer the Laboratory has provided by way of the Department of Energy and Department of State.

Before delivery of the device in June, the only way Polish border guards could analyze suspicious materials was by sending them off to the Polish Central Lab. At an international forensics conference last fall, the director of the Polish Central Lab voiced the need for a portable system, noting more than 100 border incidents involving suspicious materials in 1995.

The surplus spectrometer was provided by the Laboratory's Emergency Preparedness and Response Division, a unit of the Nonproliferation, Arms Control, and International Security directorate. In addition to tracking and analyzing materials at Polish border crossings, the spectrometer will be used for radiological monitoring.
Contact: Tom Smith (510) 422-8252 ([email protected]).

Portable treatment system promises cleanup savings

Automated, portable groundwater treatment facilities developed by Laboratory scientists promise to save time and millions of dollars in environmental cleanup costs.

Spurring development of the portable treatment units is cleanup of groundwater beneath the Livermore site. Groundwater contamination is primarily volatile organic compounds largely left over from the time when the site was a naval air training station. There are five stationary treatment facilities currently in operation at the Livermore site, treating water pumped from 27 extraction wells.

Through geologic and geophysical analysis of subsurface conditions and the use of computer models, scientists can estimate the optimum locations for extraction wells that connect to surface treatment facilities. Because those locations change over the course of cleanup operations, the versatility of the new portable units will allow Livermore scientists to attack specific areas of contamination as the cleanup proceeds—at lower costs for the facility, piping, and manpower (for more information, see S&TR, Jan./Feb. 1996 and May 1996).

Laboratory remediation experts expect to save more than $10 million with the new portable facilities, which will substitute for previously planned stationary treatment facilities. The portable treatment approach is also expected to speed cleanup by allowing remediation experts to move the treatment systems easily to different locations to most efficiently remove the pollution.
Contact: Ed Folsom (510) 422-0389 ([email protected]).

Lab achieves chip production breakthroughs

Two breakthroughs by Laboratory researchers could help U.S. manufacturers produce computer chips with 1,000 times more memory than today's chips—and do so ten times faster than current technology.

The advances appear to largely overcome two critical hurdles that could have blocked the use of extreme ultraviolet (EUV) light to make computer chips in a process called EUV lithography. EUV lithography would allow 21st century computer chip makers to work with light wavelengths 20 times shorter than those of today's technology, reducing line widths or feature sizes on chips from 0.35 to 0.1 micrometer and smaller.

Lawrence Livermore's advances came in two key areas:
• A critical 20- to 50-fold improvement in accuracy for measuring the surface shapes of optical components used in the lithography process.
• A 300,000-fold reduction in the number of defects in the multilayer-coated reflective masks used to transfer circuit patterns onto silicon wafers, or chips.

The reduction in mask defects springs from an ion beam sputter deposition system developed by the Laboratory and Veeco Instruments Inc., a Plainview, New York, semiconductor equipment company.
Contact: Andrew Hawryluk (510) 422-5885 ([email protected]).

Lab conducts test of airborne multisensor pod

Researchers from the Laboratory's Nonproliferation, Arms Control, and International Security directorate conducted airborne tests earlier this year of a multisensor unit they developed to remotely detect small quantities of chemicals and radionuclides. Designed primarily for weapons treaty verification, the unit has potential applications in environmental monitoring or in the event of an industrial accident or natural disaster.

The 5-m-long by 1-m-wide cylindrical unit, designed to attach to the underside of an aircraft wing, is called the Effluent Species Identification (ESI) pod. A miniature laboratory, the pod contains four effluent sensors: an ion mass spectrometer for identifying chemicals, a radionuclide analyzer to detect radioactivity, a krypton sampler, and an aerial atmosphere sampler. Sensor guidance is provided by a target tracking system located in a small revolving turret on the underside of the pod.

Laboratory researchers developed one of the sensors and integrated it and the three others into the ESI pod, which they also developed. Collaborating with the Laboratory were Pacific Northwest National Laboratory and the Savannah River Technology Center. The pod is one of several being developed as part of the Department of Energy's Airborne Multisensor Pod System, a nonproliferation program involving several DOE laboratories, the U.S. Navy, and private industry.
Contact: Joe Galkowski (510) 422-0602 ([email protected]).

Seeping gases can aid detection of nuclear tests

Tiny amounts of radioactive rare gases that seep to the surface from underground nuclear explosions could foil nations secretly trying to evade a proposed ban on nuclear tests. This finding by Lawrence Livermore scientists offers international agencies another possible tool for monitoring a nuclear weapons test ban.

In the August 8 issue of the British journal Nature, the scientists reported that gases produced by nuclear explosions and released along natural faults and cracks in the Earth can be used to detect clandestine nuclear tests.

The finding is based on an experiment conducted in 1993 at the Department of Energy's Nevada Test Site. In the experiment, the Lawrence Livermore team mixed small amounts of two nonradioactive gases, helium-3 and sulfur hexafluoride, into chemical explosives in a non-nuclear test that simulated a deeply buried underground nuclear explosion.

Says geophysicist Charles Carrigan, who led the Livermore team: "Our experiment shows that people who attempt to conduct a clandestine nuclear test will not have any guarantee they can hide it from detection during an on-site inspection."
Contact: Charles Carrigan (510) 422-3941 ([email protected]).


Patents

Each month in this space we report on the patents issued to and/or the awards received by Laboratory employees. Our goal is to showcase the distinguished scientific and technical achievements of our employees as well as to indicate the scale and scope of the work done at the Laboratory.

Ultra-Wideband Receiver
Thomas E. McEwan and George D. Craig
U.S. Patent 5,523,760; June 4, 1996
A single-ended ultra-wideband receiver with a self-regulating amplifier that maintains a predetermined average output voltage. An input channel is connected to one input of the amplifier, and a strobe generator is connected to the input channel. A single Schottky detector diode or a pair of series-connected Schottky detector diodes are placed in the input channel.

Electrorheological Crystallization of Proteins and Other Molecules
Bernhard Rupp
U.S. Patent 5,525,198; June 11, 1996
The process of producing an electrorheological crystalline mass of a molecule by dispersing the molecule in a fluid and subjecting the dispersion to a uniform electrical field for a period of time during which the electrorheological crystalline mass is formed. Crystallization is performed by maintaining the electric field after the electrorheological crystalline mass has formed, during which at least some of the molecules in the mass form a crystal lattice.

Fan-Fold Shielded Electrical Leads
Rajeev R. Rohatgi and Thomas E. Cowan
U.S. Patent 5,525,760; June 11, 1996
Electrical leads that are shielded, vacuum- and cryogenic-temperature-compatible, totally nonmagnetic, and bakable. They are suitable for multiple signal and/or multiple layer applications, and the leads are suitable for voltages in excess of 1,000 volts. The assembly is easily fabricated by simply etching away certain areas of a double-clad substrate to form electrical leads on one side.

The Portable Gas Chromatograph–Mass Spectrometer
Brian D. Andresen, Joel D. Eckels, James F. Kimmons, and David W. Myers
U.S. Patent 5,525,799; June 11, 1996
An organic chemical-analysis instrument that has the sensitivity and mass-resolution characteristics of laboratory benchtop units but is portable and has low electrical energy consumption. The instrument weighs less than 32 kilograms and uses less than 600 watts at peak power. It incorporates a modified commercial quadrupole mass spectrometer to achieve sensitivity and mass resolution comparable to larger laboratory instruments.

Phosphate Glass Useful in High Energy Lasers
T. Hayden, Stephen A. Payne, Joseph S. Hayden, John H. Campbell, Mary Kay Aston, and Melanie L. Elder
U.S. Patent 5,526,369; June 11, 1996
A laser system using phosphate laser glass components having an emission bandwidth of >26.29 nm and a coefficient of thermal expansion, from 20 to 300°C, of <145 × 10–7/K. The laser glass components consist of a multioxide composition, Al2O3 for chemical durability and thermal mechanical properties, and K2O. The system operates at an energy level of <0.1 MJ. The glass is desirable for laser operation and manufacturing.

High Energy Bursts from a Solid State Laser Operated in the Heat Capacity Limited Regime
Georg Albrecht, E. Victor George, William F. Krupke, Walter Sooy, and Steven B. Sutton
U.S. Patent 5,526,372; June 11, 1996
A heat-capacity laser operating in two modes: a firing cycle and a cooling cycle. In the firing cycle the solid-state laser is not cooled. In heat-capacity operation, as lasing proceeds, the active medium heats up until it reaches a maximum acceptable temperature. The waste heat is stored in the active medium itself. After lasing is complete, the active media are cooled to the starting temperature, and the laser is ready to fire again.

Carbon Foams for Energy Storage Devices
James L. Kaschmitter, Steven T. Mayer, and Richard W. Pekala
U.S. Patent 5,529,971; June 25, 1996
A double-layer capacitor of carbon foam electrodes. Several foams may be produced—including aerogels, xerogels, and aerogel–xerogel hybrids—that are high-density, electrically conductive, dimensionally stable, and machinable. The electrodes are formed from machinable, structurally stable carbon foams derived from the pyrolysis of organic foams. Integration to form the capacitor is achieved using lightweight components.

Commentary by Jay C. Davis

The Importance of Climate Change

One grand challenge facing the international scientific community is determining the record of the Earth's climate since the last ice age and assessing whether humans have significantly impacted the climate in recent years. If we conclude with confidence that human activities do indeed affect climate and that the consequences pose real dangers, responding to such dangers will present tremendous political and economic challenges to every nation. Working on such a problem is a worthy mission for a national laboratory; Livermore's multidisciplinary expertise enables us to contribute substantive solutions.

Our understanding of climate variability has increased greatly over the past few decades because the record of climate since the last glaciation has been developed through studying sediments from melting episodes in ice sheets and ice caps, and pollen, dust, and isotopic records in ice caps and lake and river sediments. We have come to appreciate that we live in a system that has experienced great temperature and precipitation swings in recent millennia.

A consequence of this insight is that interpreting global warming in simple terms, such as an average warming of a fraction of a degree Celsius over the Earth's surface, could greatly underestimate the actual effects. Change will have a regional pattern, causing warming in some regions and cooling in others, as global patterns of precipitation and other atmospheric variables shift. In particular, any perturbation of the ocean currents that make up the thermohaline cycle (believed to be stable for the past several thousand years) could produce dramatic regional thermal effects, such as cooling Europe by several degrees Celsius and drying the center of the North American continent.

We know from both the historical and archaeological records about events such as the Little Ice Age in Europe and hundred-year droughts in North and South America. If our actions produce similar climate change, the human consequences—ecological, political, and economic—could rival any natural disaster in the human record. But before worrying too much about disasters, we must have more information.

The attribution of climate change to human activities, coupled with increasing confidence (not yet available) in the ability to predict the effects of climate change, will lead to the need to evaluate the options, costs, and credibility of measures to mitigate the effects of such change. This evaluation will require an unprecedented combination of scientific, economic, and policy skills to reassure the political system if the consensus required to modify the economies of the developed world and the path of the developing world is to be achieved. Unprecedented confidence in the reliability of scientific assessments will be necessary.

The work described in this issue's feature article summarizes several LLNL research projects that have advanced our understanding of the climatic consequences of human activities and increased our confidence in our ability to detect climate change and determine whether it is linked to humans. We have shown that the burning of fossil fuel that increases atmospheric carbon dioxide and causes warming globally also injects sulfate aerosols that promote local cooling. This knowledge makes it possible to predict patterns of global warming and cooling from signatures that are associated with human activities.

In developing tools for testing climate models, we have statistically compared the past climate record with predictions based on fossil fuel consumption records. We found remarkably suggestive agreement in the geographic patterns of warming and cooling. While these results are currently the subject of controversy regarding the evaluation processes of the Intergovernmental Panel on Climate Change, there is strong consensus within the scientific community that the data suggest a human origin for the global warming that is currently being observed.

Livermore's strengths and role in climate studies come from several capabilities. We are conducting an international program to create standard data records and methodologies to test the credibility of climate models. Our scientists study inadequately characterized mechanisms of climate such as atmospheric chemistry, aerosol, and radiation effects, and they develop models that allow realistic coupling of atmospheric and ocean processes. Modeling activities grow with collaborations such as the Accelerated Strategic Computing Initiative (ASCI), which links us to new computational capabilities within the Department of Energy. Measurements of the carbon-14 record of the modern carbon cycle and the isotopic records of the paleoclimate are provided by projects conducted at our Center for Accelerator Mass Spectrometry.

Together, our powerful computational, modeling, and measurement capabilities give us confidence that Lawrence Livermore will continue to play a major role in responding to this critical scientific challenge.

■ Jay C. Davis is the Associate Director for Environmental Programs.



Assessing Humanity's Impact on Global Climate

Capitalizing on strengths in computer modeling, Livermore researchers are working to provide policy makers a quantitative picture of our changing global climate.


Earth returns the Sun's heat to space in the form of thermal infrared radiation. But atmospheric carbon dioxide (CO2) and trace gases help keep our planet warmer than it otherwise would be by absorbing some of this radiation, thus blocking its escape. Human activities, especially the burning of fossil fuels, can intensify this natural greenhouse effect by pumping increased levels of CO2 and other so-called "greenhouse gases" into the atmosphere.

Do these activities mean that our climate will become noticeably warmer, with a rate of warming (and accompanying changes to other climatic parameters like rainfall and sea level) great enough to harm human societies and natural ecosystems?

Other than waiting for the future to happen, the only means to answer this question is with computational modeling—specifically, with general circulation models (GCMs) that simulate weather and climate in detail around the world. For something as complex as the climate system, these models are typically complex as well. These elaborate computer programs require the utmost in machine performance because they incorporate other state-of-the-art models of key physical processes affecting climate.

At Lawrence Livermore National Laboratory, we are applying computation expertise—originally developed to simulate nuclear explosions—to the challenging task of climate modeling. We also make use of Livermore expertise in atmospheric science that grew out of efforts to model fallout from nuclear explosion testing. These model-building and simulation efforts in climate studies are synergistic with other Laboratory programs, in that they all advance sophisticated techniques for programming simulation models on state-of-the-art computers.

While the increase of atmospheric CO2 since the Industrial Revolution 200 years ago is apparent from geologic and instrumental records, it is not so obvious that a warmer climate has resulted (Figure 1). The Earth's surface has warmed slightly, on average, over the last century. So far, the increase is irregular and small, particularly when compared with GCM-based predictions of 21st century global warming, but it is not small compared to predictions of the warming expected to date. The data also show that human production of CO2 will not be the only factor in global temperature change.

Three Decades of Work

Global climate research has been a part of our work at Lawrence Livermore for three decades. (See Energy & Technology Review, September 1984, for a description of past work.) Today, we play a leading role in climate research, as is appropriate for a Department of Energy laboratory with missions that include studying the use of fossil fuels and their potential impact on global and regional environments.1

At Lawrence Livermore, our goal is to better understand global climate and humanity's impacts on it. Most of the Laboratory's global climate work is done in the Environmental Programs Directorate. The directorate's Atmospheric Sciences Division develops and applies climate models that represent key processes affecting the atmosphere, oceans, and biosphere. Using these complex models, we seek to improve scientific understanding of the mechanisms of global change in the environment and climate.

Our major climate research efforts are directed toward:
• Assessing the effects of aerosols.
• Modeling the carbon cycle.
• Applying advanced computing techniques.
• Finding the limits of climate predictability.

In these studies, climate researchers from other Laboratory areas are also involved, such as those in the Program for Climate Model Diagnosis and Intercomparison (PCMDI), who document climate model performance in order to reduce systematic errors (see the PCMDI box).

Assessing Aerosol Effects

In recent years, we have been addressing the apparent disparity between the GCM predictions of global warming and the observational record. According to the models, greenhouse gases such as CO2 should have raised average temperatures worldwide by 1°C during the past 100 years. Instead, temperatures climbed by only about half a degree, as shown in Figure 1.

One hypothesis to explain the disparity states that atmospheric sulfate aerosols might partially offset the effects of greenhouse gases. Suspended in the atmosphere, these micrometer-size particles tend to cool the Earth by scattering sunlight back into space. The aerosols result from photochemical reactions of sulfur dioxide emitted into the atmosphere through the combustion of fossil fuels.

To test that hypothesis, we developed the world's first global chemistry–climate model. This model involved combining three others: (1) the LLNL version of an atmospheric model developed by the National Center for Atmospheric Research for use by the global climate research community, (2) a simple ocean model that represents conditions of the ocean's upper layers (within 50 meters of the surface), and (3) the GRANTOUR tropospheric chemistry model developed at Livermore. GRANTOUR simulates the transport, transformation, and removal of various sulfur species in the troposphere (the lowest 10 to 20 kilometers of the atmosphere). It was needed for predicting the formation of sulfate aerosols from sulfur dioxide gas released into the atmosphere.

We used the chemistry–climate model in a series of experiments that were the first attempt to simulate how temperatures are affected by combinations of carbon dioxide and sulfate aerosols.4 Numerical integrations began with a control run using the pre-industrial CO2 level and no sulfur emissions. Next, we ran an experiment to simulate CO2 increased to the present-day carbon dioxide level and examined the difference in temperature compared to the control run (Figure 2a). The next run combined CO2 and sulfate aerosols, and again we


considered the difference compared to the control run (Figure 2b). These two sets of results can be compared to the observed temperature changes. Figure 2c depicts the difference between temperature data taken in 1948 and 1988. The run depicted in Figure 2b, which included both CO2 and sulfur emissions, predicted results much closer to the temperature-difference map, which is based on observations.

These results showed that the sulfate aerosols offset CO2-induced warming and could even produce net cooling in regions of the Northern Hemisphere where sulfur emissions are highest.4 Follow-up statistical studies found that the patterns of climate change resulting from both greenhouse gases and sulfate aerosols are a closer match to actual observed temperatures than patterns of change predicted by models that only include greenhouse gases.5,6

These Laboratory results are included in a United Nations report prepared by the Intergovernmental Panel on Climate Change.3 That report, written by dozens of internationally prominent scientists including several from Lawrence Livermore, contains the most recent model-generated predictions of temperature change to the year 2100 (an increase of between 1 and 3.5°C) and includes the presence of both sulfate aerosols and greenhouse gases. The sulfate aerosols counteract global warming to some extent; however, the potential warming that the report describes may still be significant enough to pose a threat to human economies and natural ecosystems. Also, it is important to note that greenhouse gases remain in the atmosphere far longer than sulfate aerosols, and thus their effects would dominate even more if present sulfur and greenhouse emission rates continue.



Figure 1. (a) On average, the Earth's surface has warmed slightly over the last century.2 (b) CO2 concentrations over the past 100 and 1,000 years from Antarctic ice-core records and (since 1958) the Mauna Loa, Hawaii, measurement site.3 [Data series: yearly average temperature with a five-year running mean; CO2 concentration in ppmv from the D57, D47, Siple, and South Pole ice cores and Mauna Loa, with a one-hundred-year running mean.]

Figure 2. Temperature-change maps show that observed patterns of near-surface temperatures are in better accord with predictions from models that consider CO2 and sulfur emissions than with models that consider CO2 only. Panels: (a) modeled near-surface temperature change, present-day CO2 levels minus pre-industrial CO2 levels; (b) modeled near-surface temperature change, present-day CO2 and sulfate aerosols minus pre-industrial levels; (c) observed near-surface temperature change, 1988 data minus 1948 data. All temperature changes are for September, October, and November, in °C; white areas in (c) indicate missing data.


Modeling the Carbon Cycle

Most of the carbon dioxide added to the atmosphere by human activities results from burning fossil fuels, although substantial amounts of CO2 (20%) result from reduced plant absorption due to deforestation. Only about half the CO2 that is released into the atmosphere remains there, however, and what happens to the CO2 that does not remain in the atmosphere is uncertain. As carbon dioxide comes in contact with the sea surface, some is absorbed into the ocean; as it comes in contact with the leaves of plants, some is absorbed and transformed into plant tissue. However, the amounts and rates at which the sea or plants can absorb CO2 are still poorly characterized. Hence, our models cannot adequately predict how much of the approximately 6 billion tons per year of CO2 that is released today from human activities will be found in the ocean, in plants, or in the atmosphere 10, 20, or 100 years from now.

We must narrow these uncertainties in order to make reliable predictions of the climatic consequences of fossil fuel burning and deforestation. To do this, we are developing a carbon-cycle model that includes transport of CO2 in the atmosphere, the consumption and respiration of CO2 by terrestrial ecosystems, and the absorption and emission of CO2 by the oceans. The model incorporates a treatment of carbon isotopes that is more detailed than can be found in any other global carbon-cycle model. Carbon isotope data from biomass and ice samples tested at facilities such as LLNL's Center for Accelerator Mass Spectrometry are contributing to our confidence in the model's predictive capability. Computer experiments using an initial version of this model show that simulations of changes in carbon storage over the past two centuries are consistent with our understanding of the history of deforestation and with observed changes (see Figure 3).

The oceanic portion of our carbon-cycle model incorporates models of ocean circulation, chemistry, isotopic processes, and biology. We use a state-of-the-art ocean GCM with a dynamic and thermodynamic sea-ice model that runs on massively parallel computers. This GCM shows how dissolved carbon dioxide and other chemicals impact the carbon cycle; it includes global distributions of natural and nuclear-explosive-produced radiocarbon. With this model, we have simulated oceanic absorption of carbon for the past few centuries. To our knowledge, this is the first completed ocean biogeochemistry model in use today.

The terrestrial ecosystem portion of our carbon-cycle model, still under development, is based on a detailed model of how a terrestrial ecosystem functions and on a detailed simulation of the biochemical processes that occur during photosynthesis. Already widely published, the model successfully simulates carbon fluxes at specific sites where detailed measurements have been made. As a consequence, the terrestrial portion is considered by many to be the model of choice for application to forest growth rates. The fact that this model is physically based and well tested gives us confidence that we will be able to incorporate it into the larger carbon-cycle model.

PCMDI: Reducing Systematic Model Errors

Diagnosing why climate models behave the way they do is a nontrivial task: as models have become more complex, the disagreement among them—as well as that between models and observations7—remains significant, yet poorly understood. The Laboratory established the Program for Climate Model Diagnosis and Intercomparison (PCMDI) in 1989 to develop improved methods and tools for evaluating global climate models.

As part of its mission, the PCMDI is coordinating the Atmospheric Model Intercomparison Project (AMIP) on behalf of the international World Climate Research Programme. In this project, virtually all of the world's 30 atmospheric modeling groups are simulating the climate of recent decades, using observed sea surface temperature as a boundary condition.

AMIP has already gained substantial insight into atmospheric models.8 For the first time, disagreement among models can be assessed precisely. For example, PCMDI researchers have found that the models generally agree well in their predictions of temperature and winds but disagree widely in their predictions of clouds. Systematic errors common to all models have also been revealed, e.g., a discrepancy between predicted and observed absorption of solar energy in clouds.

In addition to its work for the AMIP, the PCMDI has entered into a project with the World Climate Research Programme to compare the performance of various coupled ocean–atmosphere–sea-ice models. These more complete models are being used in forecasts of 21st century global temperatures.

The PCMDI also has provided tools and information to facilitate climate model analysis. These include model documentation, a database of observations for comparison with model output, and a visualization and computation system for both model-produced and observed climate data.9

Figure 3. Carbon dioxide fluxes into and out of the atmosphere. The red curve shows that the terrestrial biosphere (plants and soils) was a net absorber of carbon from the atmosphere until about 1950. The observed yearly change in the carbon content of the atmosphere (gray line) is equal to the measured fossil-fuel emissions (pink line) plus the modeled flux of carbon into or out of the ocean (black line) plus the residual flux into or out of the terrestrial biosphere (red line). The accuracy of this residual CO2 value depends on the accuracy of the measured or modeled data comprising the other terms. [Axes: year, 1800–2000, vs. CO2 fluxes, –1 to 7 gigatons per year.]
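The budget identity in the Figure 3 caption can be made concrete with a few lines of arithmetic. The sketch below solves the identity for the residual terrestrial flux; the sample values, in gigatons of carbon per year with fluxes signed positive into the atmosphere, are illustrative assumptions rather than the measured series.

```python
# Residual terrestrial-biosphere flux from the atmospheric CO2 budget:
#   d_atmosphere = fossil_emissions + ocean_flux + land_flux
# with all fluxes in GtC/yr, signed positive into the atmosphere.
def terrestrial_flux(d_atmosphere, fossil_emissions, ocean_flux):
    """Solve the budget identity for the residual land term."""
    return d_atmosphere - fossil_emissions - ocean_flux

# Illustrative late-20th-century magnitudes: ~5.5 GtC/yr emitted,
# ~3.3 GtC/yr staying in the air, ~2.0 GtC/yr taken up by the ocean.
print(terrestrial_flux(3.3, 5.5, -2.0))   # -0.2, i.e., a small land sink
```

Because the land term is computed as a residual, any error in the measured or modeled terms lands directly in it, which is why the caption stresses the accuracy of the other terms.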


Applying Advanced Computing Techniques

Typical atmospheric GCMs calculate temperature, pressure, wind velocity, and dozens of other variables at millions of points around the globe. Each calculation must be repeated to advance the simulated climate hour by hour. However, the cost of computational time severely limits the use of GCMs, even on the fastest of today's supercomputers.

To address this problem, the DOE established the Computer Hardware, Advanced Mathematics, and Model Physics (CHAMMP) Program. With support from CHAMMP, we modified an atmospheric GCM to run on the new-generation computers that promise significantly greater speed. Our modified GCM is specifically designed to run on massively parallel processing computers that simultaneously employ large numbers of arithmetic processors with memory distributed locally to each.

We have used a technique known as domain decomposition to distribute the calculation across many processors. As shown in Figure 4, the basic idea is to divide the grid points covering the planet into rectangular "tiles," or subdomains. Each of these subdomains is assigned


to a processor. A particular processor is responsible for advancing the solution only for those grid points contained within its subdomains. To do this, however, requires information about the state of the grid points just outside the subdomain. Interprocessor communication of the data surrounding the subdomain is accomplished on the computer's internal network via explicitly programmed message-passing techniques, as sketched below. Our challenge is to minimize this communication yet ensure that all available processors are assigned roughly equal amounts of work.

Figure 4. LLNL scientists use this two-dimensional domain decomposition of the globe to accomplish an efficient distribution of climate-model calculations to a massively parallel computing system with distributed memory. [Figure labels: longitude, latitude, depth, with one subdomain highlighted.]
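A minimal sketch of this ghost-cell (halo) exchange, assuming an mpi4py/NumPy environment and a one-dimensional ring of longitude tiles; it illustrates the message-passing pattern described above, not LLNL's production code.

```python
# Run with, e.g., "mpirun -n 4 python halo.py". Each rank owns one
# longitude tile padded with one ghost column per side; ranks swap
# edge columns with their east/west neighbors before each time step.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nlat, nlon_local = 64, 128                  # assumed tile dimensions
field = np.zeros((nlat, nlon_local + 2))    # columns 0 and -1 are ghosts
field[:, 1:-1] = rank                       # dummy interior data

west, east = (rank - 1) % size, (rank + 1) % size  # periodic in longitude

# Receive our west ghost column from the west neighbor while sending
# our east edge to the east neighbor, and vice versa.
field[:, 0] = comm.sendrecv(field[:, -2].copy(), dest=east, source=west)
field[:, -1] = comm.sendrecv(field[:, 1].copy(), dest=west, source=east)

# With ghost columns current, each rank can advance its interior
# points independently (e.g., a finite-difference update).
```

The exchange touches only the tile edges, so communication cost grows with the perimeter of a subdomain while useful work grows with its area, which is why equal-size tiles keep the processors balanced.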

We perform both atmospheric and oceanic GCM calculations very rapidly as a result of the availability of the Cray T3D and other massively parallel machines at Livermore. In the largest series of calculations to date, we performed an ensemble of 20 simulations for the Atmospheric Model Intercomparison Project (see the PCMDI box). Different calculations varied only in their initial conditions, allowing an assessment of the natural variability of climate due to the inherently chaotic nature of the atmosphere. Understanding such natural variability will allow better climate system predictions (Figure 5). We are analyzing this ensemble data and preparing it for dissemination to the wider climate modeling community.

Figure 5. Temperature variability in the AMIP ensemble of 20 simulations. These maps show (a) the December–January–February mean surface temperature and (b) the variability as characterized by the standard deviation of the mean temperature. The standard deviation, not uniform over the globe, is largest in the extreme high latitudes, which are characterized by snow-covered land and sea ice. [Panel statistics: (a) ten-year mean surface temperature (1979–1988), °C: mean 10.6636, max 30.0534, min –56.9907; (b) standard deviation of the mean: mean 0.404064, max 7.75839, min 0.00246782.]
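The ensemble analysis itself reduces to simple statistics across the runs. The sketch below, with random fields standing in for model output, shows how a variability map like Figure 5b can be formed as the per-grid-point standard deviation across a 20-member ensemble; the array sizes are assumed for illustration.

```python
# Ensemble mean and spread across runs that differ only in their
# initial conditions; random fields stand in for GCM output here.
import numpy as np

rng = np.random.default_rng(0)
n_runs, nlat, nlon = 20, 64, 128                        # AMIP-like ensemble
ensemble = rng.normal(10.0, 1.0, (n_runs, nlat, nlon))  # fake temps, deg C

mean_map = ensemble.mean(axis=0)     # ensemble-mean temperature map
spread_map = ensemble.std(axis=0)    # natural-variability proxy (cf. Fig. 5b)
print(mean_map.shape, round(spread_map.mean(), 2))
```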

Research Challenges

Progress toward a predictive understanding of global climate change depends on our ability to improve the computer simulations we use. This process is sometimes slow and occasionally controversial. The computer simulations are very complex because the processes that determine climate are nonlinearly coupled across a wide spectrum of space and time scales. For validation, we must rely on laboratory-scale experiments—which can shed light on isolated, individual processes—and on extensive field measurement programs to gather essential observational data. It is only with controlled simulations that we can explore the myriad "what if" scenarios.

One particularly important question that we can now address involves the predictability of the climate system. Short-term weather predictions are fundamentally limited by the chaotic behavior of the atmosphere: no matter how perfect the forecast model, the weather cannot be predicted beyond a few weeks. This is because even small errors in initial conditions—which are always present because of the limited precision and spatial resolution of observational data—are amplified by the turbulent nature of the atmospheric flow, so that the statistical significance of the forecast is diminished after a few days.
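A classic toy demonstration of this error growth, using the Lorenz (1963) system rather than an atmospheric model: two runs that differ by one part in a million in their initial conditions diverge to completely different states within a few dozen model time units.

```python
# Two Lorenz-63 trajectories with a 1e-6 initial difference, stepped
# with forward Euler; the separation grows to order one, after which
# the "forecast" carries no information about the true state.
import numpy as np

def step(state, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = state
    return state + dt * np.array([s * (y - x), x * (r - z) - y, x * y - b * z])

truth = np.array([1.0, 1.0, 1.0])
forecast = truth + np.array([1e-6, 0.0, 0.0])   # tiny analysis error
for _ in range(2500):                           # 25 model time units
    truth, forecast = step(truth), step(forecast)
print(np.abs(truth - forecast).max())           # order-one divergence
```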

We assume that the more general characteristics of climate can be predicted for considerably longer periods of time. However, the climate system may have very long time scales of natural variability, originating in part from the nature of large-scale ocean circulation patterns. In this context, it becomes difficult to discriminate between systematic effects (such as possible global warming) and low-frequency natural climate variations. Finding the limits of natural climate predictability in this sense is obviously a prerequisite to making useful predictions of possible anthropogenic effects. Experiments with fully coupled models, analogous to our ensemble work with the AMIP, are a first step in this direction.

We are also very interested in determining the possible impact of global climate change on scales of direct practical importance, on the order of tens to hundreds of kilometers (regional scales). It is on these scales that possible impacts on managed and natural ecosystems and water resources, for example, would be most apparent. This research is in a very early stage, but it will play an increasing role in the future. One approach that we will pursue is to use the global-scale climate model output to drive regional-scale models of hydrologic and ecological processes and thus capture local effects due to variations in topography, land use, and soil properties.

Such studies require world-class, high-performance computing capabilities, a multidisciplinary teamwork approach, and long-term institutional commitment. With new computing resources based on the knowledge we are gaining from collaborations such as ASCI, the Laboratory is positioned to continue making important and unique contributions to the science base of global climate research and to assist in the assessment of the consequences of potential climate change.

About the Scientist

WILLIAM P. DANNEVIK is the Atmospheric Sciences Division leader, a position he has held since 1995. He came to Lawrence Livermore in 1988 as a member of the A-Division code group. Dannevik received his B.S. in engineering science from the University of Texas in 1969 and his Ph.D. in atmospheric science from St. Louis University in 1984. In previous positions, he led an engineering consulting firm from 1974 to 1980 and was on the research staff of the Princeton University program in applied and computational mathematics from 1984 to 1988. He has published articles on computational fluid dynamics, boundary layer meteorology, high-performance climate simulation, and turbulence theory and modeling in the Journal of Scientific Computing, Physics of Fluids, Atmospheric Environment, Parallel Computing, Journal of Supercomputing, Computer Physics Communications, and Plasma Physics and Controlled Fusion Research.


Key Words: Atmospheric Model Intercomparison Project (AMIP), carbon cycle, carbon dioxide, climate modeling, global climate, global climate model, greenhouse effect, massively parallel computers, sulfate aerosols.

Notes and References
1. See the July 1995 issue of Science & Technology Review, pp. 28–30, for additional LLNL work in regional climate modeling.
2. J. Hansen et al., "Global Surface Air Temperature in 1995: Return to Pre-Pinatubo Level," Geophysical Research Letters 23 (1996), pp. 1665–1668.
3. J. T. Houghton et al. (Eds.), Climate Change 1995: The Science of Climate Change (Cambridge University Press, Cambridge, 1996).
4. K. E. Taylor and J. E. Penner, "Response of the Climate System to Atmospheric Aerosols and Greenhouse Gases," Nature 369 (June 30, 1994).
5. B. D. Santer et al., "A Search for Human Influences on the Thermal Structure of the Atmosphere," Nature 382 (1996), pp. 39–46.
6. B. D. Santer et al., "Towards the Detection and Attribution of an Anthropogenic Effect on Climate," Climate Dynamics 12 (1995), pp. 77–100.
7. W. L. Gates, "AMIP: The Atmospheric Model Intercomparison Project," Bulletin of the American Meteorological Society 73 (1992), pp. 1962–1970.
8. W. L. Gates (Ed.), Proceedings of the First International AMIP Scientific Conference (Monterey, California, May 15–19, 1995), World Meteorological Organization Report WMO/TD-No. 732 (WCRP-92) (1996).
9. D. N. Williams and R. L. Mobley, "The PCMDI Visualization and Computation System (VCS): A Workbench for Climate Data Display and Analysis," Program for Climate Model Diagnosis and Intercomparison Report No. 17, Lawrence Livermore National Laboratory, Livermore, California, UCRL-ID-116890 (1994).


For further information contact William Dannevik (510) 422-3132 ([email protected]).

Research Highlights

Livermore Wins Six R&D 100 "Oscars"

On October 14 in Philadelphia, R&D Magazine will honor Livermore researchers with six of its annual R&D 100 Awards, considered the "Oscars" of applied research. Since 1978, when the Laboratory began to participate in the competition, technologies developed at Livermore have won 61 R&D 100 Awards.

All entries in the competition are judged by a team composed of R&D Magazine editors and other experts who look for the year's most technologically significant products and processes. Past winners have included the fax machine, Polacolor film, and the automated teller machine—products without which we can hardly imagine life today.

Two of this year's Laboratory winners stem from partnerships with American industry. Karena McKinley, acting director of the Laboratory's Industrial Partnerships and Commercialization office, says, "It is always a pleasure to see the industrial community recognize Laboratory work that has sprung from our basic mission activities. This recognition indicates that many of our newly developed technologies will make an impact on the American economy.

"We are proud of our Laser Programs Directorate, which this year produced five winners. We hope that the individual winners and the winning interdisciplinary teams will serve as models for other inventors. Exciting research relevant to industry is taking place all over the Laboratory, even though all six winners this year—including the one from the Physics Directorate involving an optical amplifier—have some connection to laser technology. Many other Livermore-developed technologies have similar potential."

The technologies that were honored range from everyday to very specialized uses:
• The latest micropower impulse radar (MIR) application is an electronic dipstick to sense the level of fluid or other material stored in tanks, vats, and silos. It can also be used in automobiles to read levels of a variety of fluids. The dipstick is impervious to condensation, corrosion, or grime on the sensor element, which is a simple metal strip or wire several inches to dozens of feet long, depending on the application. MIR works like conventional radar by sending out a pulse and measuring its return, but each microwave pulse is a few billionths of a second in duration.
• A tiny semiconductor optical amplifier uses a miniature laser to boost data communications signals at ultrahigh (terabit-per-second) rates. It solves many of the problems that have plagued similar amplifiers: it is much smaller and cheaper than fiber amplifiers, which are used today to allow hundreds of thousands of telephone conversations on a single fiber-optic cable, and it is virtually free of crosstalk and noise at high transmission rates, unlike conventional semiconductor optical amplifiers. This amplifier will be useful in cable television distribution systems and other computer interconnections in fiber-to-the-home applications.
• A small, noncontact optical sensor will improve manufacturing processes that employ robots by eliminating the time-consuming and expensive process of "teaching" robotic machinery new motions when manufacturing changes are required. This six-degrees-of-freedom (SixDOF) sensor can sense its position relative to a piece being machined, allowing the robot to autonomously follow a pre-described machining or manufacturing path. As its name implies, the SixDOF sensor senses its position in all six degrees of freedom (the x, y, and z axes as well as the turning motion around those axes). Its nearest competitor can sense just three degrees of freedom.
• A new optical crystal (Ce:LiSAF) makes an all-solid-state, directly tunable, ultraviolet (UV) laser commercially viable for the first time. Developed jointly with VLOC Inc. (a division of II-VI Inc.) of Tarpon Springs, Florida, the crystal consists of lithium–strontium–aluminum–fluoride doped with cerium, a rare-earth metal. The crystal is a component of a compact solid-state laser that is practical, robust, and well suited to such remote sensing applications as detecting ozone and sulfur dioxide in the environment or detecting certain components of biological weapons. It could also be used for laser radar systems or for secure wireless communication links.
• The advanced magnetic sensor, a critical component in magnetic storage devices such as computer hard-disk drives, has been developed in conjunction with Read-Rite Corporation of Fremont, California. This new sensor offers greater sensitivity and 100 times greater storage densities than current commercial products. In fact, its storage density limit approaches the projected limit of magnetic disk drive technology of 100 gigabits per square inch (6.4 cm2). Using thin-film technologies previously developed at Livermore, the sensor is built of alternating layers of thin magnetic and nonmagnetic materials.
• The development of cost-effective, large-area laser interference lithography is a way to precisely and uniformly produce regular arrays of extremely small (less than 100 atoms wide) electron-generating field-emission tips. It will significantly advance the effort to fabricate field-emission display (FED) flat panels. FED flat panels are a major improvement over active matrix liquid crystal display technology because they consume less power and can be made thinner, brighter, lighter, and larger, and with a wider field of view. Potential applications range from more efficient portable computers to virtual-reality headsets and wall-hugging TV sets.

These six Lawrence Livermore National Laboratory R&D 100 Awards and the inventors who made the new technologies possible are featured in the articles that follow.

For further information contact Karena McKinley (510) 422-6416 ([email protected]).

Electronic Dipstick Signals New Measuring Era

Fancy recalling for your grandchildren the flourishes you once made with the special (oily) rag and the foil-like automobile dipstick, lunging (and cajoling) the dipstick into the narrow sleeve to measure the oil level before embarking on the family vacation. Already, a young face looks back at you with disbelief or rolled-back eyes, because that old dipstick and other fluid-measuring devices were replaced way back in the mid-1990s with Tom McEwan's invention—the electronic dipstick.

One result of a string of spin-off technology developments in the Lawrence Livermore Laser Programs, the electronic dipstick is a device that measures the time it takes for an electrical impulse to reflect from the surface of liquid in a container, so the fluid level can be calculated. At better than 0.1% accuracy, extremely low power, and a cost of less than ten dollars, applications include

The LLNL transient digitizer, which is the world's fastest, functions as a high-speed oscilloscope combined with a digital-readout device. The instrument records many samples from single electrical events (a brief signal called a "transient"), each lasting only 5 nanoseconds (5 billionths of a second). Compared to competitive products, such as the best oscilloscopes, the transient digitizer is much smaller and more robust, consumes less power, and costs far less.

While developing the transient digitizer, project engineer McEwan had an important insight. The sampling circuits developed for it could form the basis of a sensitive receiver for an extremely small, low-power radar system. What ensued was the development of micropower impulse radar (MIR). (For more MIR information, see the January–February 1996 Science & Technology Review.)

The principal MIR components are a transmitter with a pulse generator, a receiver with a pulse detector, timing circuitry, a signal processor, and antennas. The MIR transmitter emits rapid, wideband radar pulses at a nominal rate of 2 million per second. This rate is randomized intentionally to create a distinctive pattern at a single location, which enables the system to recognize its own echo, even with other radars nearby. The components making up the transmitter can send out shortened and sharpened electrical pulses with rise times as short as 50 trillionths of a second (50 picoseconds). The receiver, which uses a pulse-detector circuit, only accepts echoes from objects within a preset distance (round-trip delay time)—from a few centimeters to many tens of meters.
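The range gate follows from simple round-trip timing: an echo from distance d returns after t = 2d/c. The sketch below, with assumed example distances, shows the delay windows involved.

```python
# Round-trip timing behind the MIR range gate: t = 2 * d / c.
C = 299_792_458.0                       # speed of light, m/s (air ~ vacuum)

def round_trip_delay(distance_m):
    """Echo delay for a target at distance_m."""
    return 2.0 * distance_m / C

for d in (0.05, 1.0, 50.0):             # assumed gate distances, meters
    print(f"{d:6.2f} m -> {round_trip_delay(d) * 1e9:7.1f} ns")
```

Accepting only echoes whose delay falls inside a chosen window is what confines the sensor's attention to a shell of adjustable radius around the unit.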

The MIR antenna determines many of the device's operating characteristics. A single-wire monopole antenna only 4 centimeters long is used for standard MIR motion sensors, but larger antenna systems can provide a longer range, greater directionality, and better penetration of some materials such as water, ice, and mud. Currently, the maximum range in air for these low-power devices is about 50 meters. With an omnidirectional antenna, MIR can look for echoes in an invisible radar bubble of adjustable radius surrounding the unit. Directional antennas can aim pulses in a specific direction and add gain to the signals. The transmitter and receiver antennas, for example, may also be separated by an electronic "trip-line" so that targets or intruders crossing the line will trigger a warning. Other geometries, with multiple sensors and overlapping regions of coverage, are also being explored.

The first application McEwan dreamed possible was a burglar alarm, but other popular spin-offs of the MIR technology have been the electronic dipstick, auto safety devices such as an anticrash trigger, a heart monitor that measures muscle contractions instead of electrical impulses, mine-detecting sensors for the military, and corrosion detectors for rebar buried within concrete bridges. Within the next few years, the MIR technology may well become one of the top royalty revenue-generating licenses connected with any U.S. university or national laboratory. So far, over a dozen companies have entered into license agreements for the MIR technology, generating nearly $2 million in licensing fees for the Laboratory, and soon royalties will add to that amount. To date, most of these licenses (9 of 15) are for the electronic dipstick.

How the Electronic Dipstick Works

The electronic dipstick uses the MIR fast-pulse technology to launch a signal—from a launch plate rather than an antenna—along a single metal wire rather than through air and measures the transit time of reflected electromagnetic pulses from the top of the dipstick down to a liquid surface. The air–liquid boundary is the discontinuity that reflects the pulse; the time difference between a pulse reflection at the top of the dipstick and a reflection at the air–liquid boundary indicates the distance along the line. The liquid level is thus measured from the top of the tank (the dielectric is air, which for all practical purposes does not vary with temperature or vapor content). The transmission line for the dipstick may be configured as microstrip, coaxial cable, or twin lead, whichever suits the application.
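Because the dielectric is air, pulses travel down the wire at essentially the speed of light, so the level measurement reduces to time-of-flight arithmetic. A minimal sketch, with hypothetical tank dimensions:

```python
# Time-of-flight sketch for the dipstick measurement described above.
# The 2.5-m tank depth and the delay value are hypothetical examples.

C = 3.0e8  # propagation speed along the air-dielectric line, m/s

def depth_to_liquid(delay_s):
    """Distance from the top of the dipstick to the liquid surface."""
    return C * delay_s / 2.0  # halved: the pulse travels down and back

def liquid_level(tank_depth_m, delay_s):
    """Liquid level above the tank bottom."""
    return tank_depth_m - depth_to_liquid(delay_s)

# A delay of ~6.67 nanoseconds between the top reflection and the
# air-liquid reflection corresponds to ~1 m of empty space:
print(liquid_level(2.5, 6.67e-9))  # ~1.5 m of liquid in a 2.5-m tank
```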

The strength of the pulse reflected from the air–liquid boundary and from the subsurface liquid–liquid boundary can be measured. When the liquid has a low relative dielectric constant, such as JP-3 jet fuel, only a portion of the pulse is reflected at the air–liquid boundary, and the remaining portion continues into the liquid until another discontinuity is reached, such as an oil–water boundary or the tank bottom itself. Thus, the dipstick can also provide additional information about conditions within the tank. The accompanying photo shows the entire dipstick assembly with its simple digital output display, although the output could be connected directly to an analog meter. The dipstick’s 14-bit, high-resolution output provides continuous readout that is accurate to within 0.1% of the wire’s maximum length, and it functions at temperatures from –55 to 85°C (–67 to 185°F). Already, companies have shipped products that use this technology of the future.
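The partial reflection from a low-dielectric liquid can be estimated with the standard transmission-line reflection formula; this is textbook physics rather than the article's own analysis, and the dielectric constants below are ballpark values:

```python
# On a TEM line, impedance scales as 1/sqrt(relative dielectric constant),
# so the amplitude reflection at a boundary between media 1 and 2 is
# (sqrt(e1) - sqrt(e2)) / (sqrt(e1) + sqrt(e2)). Ballpark constants below.
import math

def reflection_coefficient(eps1, eps2):
    """Amplitude reflection coefficient going from medium 1 into medium 2."""
    n1, n2 = math.sqrt(eps1), math.sqrt(eps2)
    return (n1 - n2) / (n1 + n2)

print(reflection_coefficient(1.0, 2.0))   # air -> jet fuel: ~ -0.17, weak echo
print(reflection_coefficient(1.0, 80.0))  # air -> water:    ~ -0.80, strong echo
# The weak fuel echo leaves most of the pulse to continue down to an
# oil-water boundary or the tank bottom, as the text describes.
```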

Key Words: electronic dipstick, micropower impulse radar (MIR), R&D 100 Award, transient digitizer.

For further information contact: Thomas McEwan (510) 422-6935 ([email protected]).


Electronic Dipstick Signals New Measuring Era

FANCY recalling for your grandchildren the flourishes you once made with the special (oily) rag and foil-like automobile dipstick, lunging (and cajoling) the dipstick into the narrow sleeve to measure the oil level before embarking on the family vacation. Already, a young face looks back at you with disbelief or rolled-back eyes because that old dipstick and other fluid-measuring devices were replaced way back in the mid-1990s with Tom McEwan’s invention—the electronic dipstick.

One result of a string of spin-off technology developments in the Lawrence Livermore Laser Programs, the electronic dipstick is a device that measures the time it takes for an electrical impulse to reflect from the surface of liquid in a container, so that fluid level can be calculated. At better than 0.1% accuracy, extremely low power, and a cost of less than ten dollars, its applications include measuring fluid levels in cars, oil levels in supertankers, and even corn in a grain elevator. Unlike ultrasound and infrared measurement devices, the electronic dipstick is not tripped up by foam or vapor, extreme temperature or pressure, or corrosive materials. Over time, the technology will make other fluid-level sensing devices obsolete.

Spin-Offs from Digitizing to Measuring

Lawrence Livermore is home to the 100-trillion-watt Nova laser. Developed for nuclear fusion research, the ten-beam pulsed Nova laser generates subnanosecond events that must be accurately recorded. In the late 1980s, Laboratory engineers began to develop a new high-speed data acquisition system to capture the data generated by Nova and the next-generation laser system, the National Ignition Facility. The result was a single-shot transient digitizer—itself a 1993 R&D 100 Award winner described in the April 1994 issue of Energy & Technology Review.


The electronic dipstick is a metal wire connected by cable to an MIR electronic circuit. As a highly accurate fluid-level sensor with no moving parts, this device has myriad applications in manufacturing and is significantly lower in cost than laboratory equipment performing the same task.


This interchannel depletion of gain allows the signal at one wavelength to modulate the signal at another, causing crosstalk among channels.

In Livermore’s new amplifier, the waveguide supplying the signal gain incorporates a very small laser that operates perpendicularly to the path of the signal through the waveguide. This “vertical cavity surface emitting laser,” composed of a stack of cavity mirrors that are fabricated during semiconductor crystal wafer growth, replaces the standard gain medium of a conventional SOA.

This new laser amplifier takes advantage of some basic properties of lasers to reduce crosstalk by a factor of 10,000. In a typical laser, electrical current is introduced into the gain medium, which is situated between two sets of mirrors. Much too rapidly to be seen, the photons in the gain medium bounce back and forth between the sets of mirrors, constantly gaining in intensity. Because no mirror is perfectly reflective, some of the photons are lost through the mirrors during this back-and-forth process. But once the gain is equal to the losses or, put another way, equal to the reflectivity of the mirrors, the photons will begin to “lase.”

A laser’s gain thus has a cap. By introducing a laser into an SOA waveguide, the signal gain can be “clamped” at a specific level. Then, when signal channels at multiple optical wavelengths pass through the waveguide, there is virtually no crosstalk across the independent optical channels.
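A toy numerical model makes the benefit of clamping clear. This is a generic illustration of saturable-gain behavior, not the device physics of the Livermore amplifier, and all values are arbitrary:

```python
# Conventional SOA: gain saturates with total input power, so adding a
# second channel changes the gain seen by the first (crosstalk).
# Gain-clamped amplifier: the internal laser pins the gain at the
# cavity-loss level, so channels stay independent.

G0 = 100.0      # unsaturated small-signal gain (arbitrary)
P_SAT = 1.0     # saturation power (arbitrary units)
G_CLAMP = 30.0  # gain pinned by the lasing condition (gain = cavity loss)

def conventional_gain(total_power):
    """Gain falls as total input power rises, coupling the channels."""
    return G0 / (1.0 + total_power / P_SAT)

def clamped_gain(total_power):
    """Lasing holds the gain fixed regardless of signal power."""
    return G_CLAMP

# Gain seen by channel B, alone (0.1) vs. with channel A added (0.6):
print(conventional_gain(0.1), conventional_gain(0.6))  # ~90.9 vs ~62.5: crosstalk
print(clamped_gain(0.1), clamped_gain(0.6))            # 30.0 vs 30.0: none
```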

The lasing field also affects the recovery time of signals through the waveguide. After every “bit” of the optical signal passes through the gain medium, the medium requires a short recovery time before it can accept the next bit. This gain recovery time in a conventional SOA is typically a billionth of a second. Attempts to push the amplifier to faster bit rates than the gain medium can accommodate often result in one bit depleting the gain of the subsequent bit, which is another form of crosstalk. The introduction of a lasing field prompts the medium to recover much more quickly, on the order of 20 trillionths of a second. This means that the amplifier can successfully track the amplification of a serial bit stream at very high bit rates.

A New Ubiquitous Amplifier?

This new amplifier is truly the optical analog of the electronic amplifier, the electronics industry’s ubiquitous workhorse. Because the new amplifier relies on standard integrated circuit and optoelectronic fabrication technology, it can be incorporated into many different types of photonic integrated circuits.

In the near term, because this amplifier puts SOA performance on a par with fiber amplifiers, it could be used as a replacement for or complement to fiber amplifiers in long-haul communication networks.

Looking farther into the future, if tiny, inexpensive optical amplifiers provide the broad signal bandwidth needed to transmit visual images as well as computer data, many people may someday work in “virtual offices” in their homes. Via two-way video, they will be able to confer with colleagues, participate in meetings, and hear the latest company news without commuting to work. Two-way, high-resolution, panoramic video will also facilitate remote learning with a teacher in one place and one student or hundreds of students in another.

These kinds of applications, involving many individual users, will require an enormous number of amplifiers for signal propagation and distribution. Livermore’s new laser optical amplifier could well become ubiquitous.

Key Words: fiber-optic communications, semiconductor optical amplifier, photonic integrated circuit, R&D 100 Award, vertical cavity surface emitting laser.

For further information contact Mark Lowry (510) 423-2924 ([email protected]), Sol DiJaili (510) 424-4584 ([email protected]), or Frank Patterson (510) 423-9688 ([email protected]).


Signal Speed Gets Boost from Tiny Optical Amplifier

DEMANDS on data communications systems are growing by leaps and bounds. Information travels faster and farther than anyone might have dreamed possible even 20 years ago, but still the Information Superhighway wants more.

Lawrence Livermore National Laboratory’s Sol DiJaili, Frank Patterson, and coworkers have developed a small, inexpensive optical amplifier. It incorporates a miniature laser to send information over fiber-optic lines at a rate of more than 1 terabit (1 trillion bits of information) per second. The amplifier is about the size of a dime, which is 1,000 times smaller than comparable amplifiers, and in production quantities it will cost 100 times less than the competition.

Fiber amplifiers can operate at comparable bit rates, but they are large and expensive, which limits their usefulness. For example, erbium-doped fiber amplifiers currently enable hundreds of thousands of simultaneous telephone conversations across continents and under oceans on a single fiber-optic cable. But their high cost makes them economical only for long-haul systems, and their large size means that they cannot be integrated easily with other devices.

On the other hand, conventional semiconductor optical amplifiers are inexpensive and relatively small, but crosstalk and noise at high transmission rates limit their performance to about 1 gigabit (1 billion bits of information) per second or less.

The Laboratory’s new amplifier combines the best of both worlds. Its small size, low cost, and high performance make it an excellent candidate for use in wide-area networks, local-area networks, cable TV distribution, computer interconnections, and anticipated new fiber-to-the-home applications that will require multiple amplification steps and therefore many amplifiers.

A Vertical Laser at Work

In a conventional semiconductor optical amplifier (SOA), the signal passes through a waveguide that has been processed directly onto a direct-bandgap semiconductor. Inside the waveguide is a gain medium through which the optical signal passes and where the signal gains in intensity. The problem with these conventional SOAs is that the gain cannot be controlled, so signals tend to fluctuate. A signal at one wavelength can deplete the gain of a signal at another wavelength, causing crosstalk among channels.

Tiny optical amplifiers about the size of a dime are inexpensive and have excellent performance for communications applications of the future.

Team members sharing the award for the miniature optical amplifier include (from left, front) inventors Sol DiJaili and Frank Patterson; (rear) fabrication techs William Goward and Holly Petersen, program leader Mark Lowry, and inventors Jeff Walker and Robert Deri.


Half of the beam is reflected 90 degrees into photo diode P3. The other half of the beam passes through the beam splitter, into a focusing lens L3, and onto photo diode P2.

The light from the second reflective surface, the bar, also passes through the filter. However, because this reflective bar is tilted relative to the dot, the laser light reflecting from it is at a greater angle of divergence. The greater angle causes the light to pass through a different location of the filter, missing the collimating lens and illuminating another photo diode (P1).

Through creative use of mirrors and lenses, each of the three photo diodes has a different sensitivity to the relative position of the sensor and the reflectors. P1 is most sensitive to straight-line motion between the bar and the sensor along the z axis and to rotation of the sensor about that axis (Rz). P2 is most sensitive to tilt about the x and y axes (Rx and Ry), and P3 is most sensitive to straight-line motion of the sensor relative to the reference dot (x and y). Information from all three sensors is needed to determine all three positions and three orientations of the sensor relative to the part.

The signals from the three photo diodes are processed by electronics located remotely from the sensor head. The analog data from the diodes are digitized and fed into a computer, where they are decoupled to define the six axes of information. The processed data are then available to the operator for recording or for sending commands to change the position of a computer-controlled machine.
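The article does not spell out the decoupling algorithm, but each lateral-effect photo diode reports a two-dimensional spot position, so three diodes yield six numbers. One plausible sketch, assuming an approximately linear response near the working point and a calibrated sensitivity matrix (the matrix here is a made-up placeholder):

```python
# Hypothetical decoupling step: solve a calibrated 6x6 linear system that
# maps six diode readings to the six pose parameters (x, y, z, Rx, Ry, Rz).
import numpy as np

# In practice S would be measured during calibration; this one is invented.
S = np.eye(6) + 0.05 * np.random.default_rng(0).standard_normal((6, 6))

def decouple(readings):
    """Recover the six degrees of freedom from six diode readings."""
    return np.linalg.solve(S, readings)

true_pose = np.array([0.1, -0.2, 0.05, 0.01, 0.0, -0.02])  # x, y, z, Rx, Ry, Rz
readings = S @ true_pose     # what the diodes would report for this pose
print(decouple(readings))    # recovers true_pose to machine precision
```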

A Better Mouse

Among other future uses for Vann’s new sensor is a SixDOF cursor for personal computers, which would allow a user to perform much more complicated tasks than are possible today with a typical two-degrees-of-freedom mouse. The sensor could also be used to help doctors diagnose muscle recovery by evaluating the effects of physical therapy. With reflective reference points mounted on a patient’s injured limb, a robot with a SixDOF sensor could generate a SixDOF map of muscle motions. The sensor could also remotely perform dangerous tasks such as manipulating radioactive, toxic, or explosive materials. For example, a robot with a SixDOF sensor could track reflective references mounted on the hands of an operator who disassembles a dummy bomb while another robot, electronically following the motions of the first robot, disassembles the real one.

Its potential applications are diverse, but the SixDOF sensor will likely find its greatest use in manufacturing, where highly agile and accurate machines have been limited by their inability to adjust to changes in their tasks. Enabled to sense all six degrees of freedom, these machines will be able to adapt to new and complicated tasks without human intervention or delay.

Key Words: manufacturing, robotics, R&D 100 Award, six-degrees-of-freedom (SixDOF) sensor.

For further information contact Charles Vann (510) 423-8201 ([email protected]).


SixDOF Sensor Improves Manufacturing Flexibility

MODERN manufacturing makes heavy use of robots, which are better than humans at repeating the same task over and over. But when even minor changes need to be made in the manufacturing process—in the shape of a car door, for example—a human operator must “teach” the robot the new shape by guiding it by hand through each motion and every orientation in the operation. Besides being time consuming and therefore expensive, this process is often inaccurate.

Charles Vann, a Lawrence Livermore National Laboratory mechanical engineer and manager, has developed a sensor that can make manufacturing robots smarter, saving both time and money. His small, noncontact, optical sensor increases the capability and flexibility of computer-controlled machines by detecting the sensor’s position relative to any mechanical part in all six degrees of freedom. (In mechanics, degrees of freedom refer to any of the independent ways that a body or system can undergo motion, i.e., straight-line motion in any one of the three orthogonal directions of space or a rotation around any of those lines.) The six-degrees-of-freedom (SixDOF) sensor can be mounted on the tool head of a multi-axis robot manipulator to track reflective reference points attached to the part. Once the robot knows where it is relative to the part, a computer can instruct the robot to follow a path predescribed in multidimensional computer drawings of the part, or the robot can be programmed to follow a path of references mounted on the part. The sensor eliminates the need for “training” the robot and enables process changes without halting production because software can be downloaded quickly into the robot’s controller.

The nearest competitor to the SixDOF sensor is one that detects only three degrees of freedom. But many manufacturing operations require information on all six degrees of freedom. Welding, for example, requires information on three degrees of freedom to locate the weld (the x, y, and z axes) and the other three rotational degrees of freedom to properly orient the tool relative to the part. Compared to the competitor, the new SixDOF sensor is four times smaller and five times lighter because it uses lateral-effect photo diodes (light- and position-sensitive diodes), which are smaller and lighter than the cameras used by the competition. And the SixDOF sensor costs one-sixth as much. Yet for an equivalent field of view, it is more than 250 times faster and up to 25 times more accurate.

How the Sensor Works

The SixDOF sensor is composed of four assemblies: a laser illuminator, beam-splitting and directing optics, lateral-effect photo diodes, and signal-processing electronics. The laser source is a 5-milliwatt diode laser. Two small mirrors (M1 and M2 on the schematic) guide the 1-millimeter laser beam to the primary optical axis of the sensor. The beam then passes through two negative lenses (L1 and L2) that diverge the beam at about 0.3 radians. This high divergence creates a 2-centimeter laser spot at about 3.5 cm from the face of the sensor. The beam divergence, depth of field, and spot size can be changed by choosing different negative lenses.

Two reflective reference points, a 4-millimeter dot and a 1-by-1-mm bar, are mounted on nonreflective tape and applied to the part being worked on. The laser light reflects off the references and back into the sensor. Because the beam is diverging, the reflections are magnified in area when the light returns to the sensor, allowing most of the light to go around the negative lenses and through a large, collimating lens (L3) instead. After collimation, the beam continues through a notch filter, which passes the laser light but blocks light at other wavelengths.

Inside the sensor, light from the dot is divided into two beams by a beam splitter: half of the beam is reflected toward photo diode P3, and the other half passes through toward photo diode P2, as described above.


Interior schematic of the sensor, with labels: laser, mirrors M1 and M2, lenses L1–L4, beam splitter, filter, photo diodes P1–P3, connector, and the dot and bar references on the workpiece about 3 cm away. A photo shows the sensor with a quarter lying alongside for scale.

For robotic manufacturing applications, Charles Vann measures curved and sloping surfaces with his six-degrees-of-freedom (SixDOF) sensor.


The atomic structure of cerium provides the property of transitioning to ultraviolet wavelengths when it is bombarded with light. This is an advantageous property, and scientists have been investigating cerium’s potential as an ultraviolet laser since the early 1980s. However, early demonstrations were so discouraging that cerium was all but dropped from active experimentation and commercial consideration. The problem was that cerium-doped crystals suffered from two major energy-loss mechanisms: solarization and excited-state absorption. Solarization is the loss of transparency in a crystal—it becomes colored—from ultraviolet radiation, like sunlight. Excited-state absorption causes input energy to turn into heat instead of laser light, which also can be debilitating to the laser.

To make use of cerium’s ultraviolet properties, a complementary host medium was needed to provide it with stability against ultraviolet radiation and the effects of excited-state absorption. Over the years, Laboratory scientists investigated different host media and built up a body of data about them. In the mid-1980s, Stephen Payne and co-workers invented the LiSAF laser host medium for use with chromium-doped infrared lasers. This very successful material, used in commercial laser designs, was selected for trials with cerium.

CRADA Partnering

For the experiments, making the crystals was an important but difficult and time-consuming first step. Much can go wrong during the process, resulting in damaged, contaminated, or flawed pieces with the wrong optical characteristics. Fortunately, the Laboratory scientists had an opportunity to enter into a CRADA with the scientists at VLOC, who had expertise in growing crystals and manufacturing them with high yield. In addition, VLOC provided a range of facilities for precision crystal cutting and polishing and a means for purifying chemicals for the crystal growth process. The closeness of the Laboratory–VLOC collaboration, in which laser-crystal analysis and fabrication cycles were carefully coordinated, eventually led to the successful production of the cerium laser crystal on a routine basis.

The new cerium laser offers efficiencies approaching 50%, which means that the extracted laser light carries about half of the energy that was put into the laser. This value is in the range of efficiencies for commercial laser products at nonultraviolet wavelengths. Furthermore, this material is from 10 to 100 times more efficient than other types of solid-state laser material that have been studied in the past. Finally, because the ultraviolet light is provided over a wavelength range corresponding to approximately 10% bandwidth, the ability to tune, or shift, to another wavelength for particular applications is an additional critical feature of this laser.

Applications

Because it is straightforward technology, Ce:LiSAF is expected to usher in a new era of laser applications. It is particularly well suited to remote-sensing environmental applications because many targeted molecules, including ozone and aromatic compounds, have characteristic absorption bands in the ultraviolet. Already, a cerium laser has been deployed to remotely detect ozone and sulfur dioxide in the environment.

The U.S. Army is considering its use to monitor the presence of tryptophan, a common component of biological weapons. Another potential military use could be to secure wireless communications links between infantry units over short distances of approximately 1 kilometer on a battlefield. Because ultraviolet light from a cerium laser can be tuned to attenuate, or taper off, around 1 kilometer from the source, it can be detected only by receivers within less than about 10 kilometers of the transmitter. This feature makes remote detection of the communication signals (for example, with a satellite or behind enemy lines) impossible.
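A back-of-envelope model shows why such a link is hard to eavesdrop on. Assuming simple exponential (Beer–Lambert) attenuation with the roughly 1-kilometer attenuation length mentioned above, and an arbitrary detection threshold chosen for illustration:

```python
# Exponential attenuation sketch for the short-range UV link. The 1-km
# attenuation length comes from the text; the threshold is an assumption.
import math

ATTEN_LENGTH_KM = 1.0   # signal falls by a factor of 1/e per kilometer
THRESHOLD = 1e-4        # assumed minimum detectable fraction of the signal

def relative_intensity(distance_km):
    """Fraction of the transmitted intensity remaining at distance_km."""
    return math.exp(-distance_km / ATTEN_LENGTH_KM)

for d_km in (1, 5, 10, 20):
    frac = relative_intensity(d_km)
    print(d_km, frac, frac > THRESHOLD)
# At 10 km the signal is down to ~5e-5 of its transmitted level, below
# the assumed threshold, so a distant receiver sees essentially nothing.
```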

The power, simplicity, and reproducibility of Ce:LiSAF will change traditionally difficult, expensive, and sensitive applications into commercially feasible ones. Because of this crystal, tunable ultraviolet lasers may move rapidly from the domain of scientific research laboratories into industry.

Key Words: Ce:LiSAF, cerium crystal, R&D 100 Award, tunableultraviolet laser, solid-state laser.

(a) A new method of generating tunable ultraviolet light using a Ce:LiSAF laser crystal (nonlinear conversions denoted by square boxes) is compared to (b) the typical existing commercial approach. At right: sample crystal sizes.

For further information contact: Chris Marshall (510) 422-9781 ([email protected]).


A Simple, Reliable, Ultraviolet Laser: the Ce:LiSAF

GO through a supermarket checkstand and, chances are, your purchases will be scanned by a laser. Watch TV and see advertisements for excimer laser surgery to correct nearsightedness. The list goes on, for laser technology has infiltrated modern life. But this hardly means that laser research and development is complete. Now, Lawrence Livermore National Laboratory laser scientists are engaged in two directions of research: advancing to more and more difficult applications, and refining current technology so that lasers can be made ever more efficient, reliable, and cost effective. For consumers, the progress of this work will be marked by seeing previously exotic applications become commercially feasible.

The move toward smaller but more powerful, more reliable, and less expensive lasers has taken a jump with the discovery of Ce:LiSAF, a laser crystal developed under the terms of a Cooperative Research and Development Agreement (CRADA) between the Laboratory and VLOC, a division of II-VI Inc. (formerly Lightning Optical Corporation) in Tarpon Springs, Florida. Ce:LiSAF is the nomenclature for a crystal made of cerium embedded, or doped, in a host medium consisting of lithium strontium aluminum fluoride (LiSrAlF6). It is an optical crystal, emitting ultraviolet light in a range of wavelengths that makes the laser tunable.

The new cerium laser crystal is a significant product for two reasons. First, it provides the ability to generate ultraviolet light directly, compared to previous methods that were far more complicated, less predictable, and worked only under restrictive conditions. Ultraviolet light is desirable for applications that require finely focused, high-intensity beams or for sensing materials with ultraviolet absorption bands. Of the various kinds of laser light—infrared, visible, and ultraviolet—ultraviolet is the most difficult to obtain because it consists of the highest energy wavelengths. The capability of generating ultraviolet light simply and directly will extend laser applications.

The older, usual method for generating tunable (variable color) ultraviolet laser light is to take available, longer wavelengths and use various means to step them up through intermediate wavelengths until the ultraviolet portion of the energy spectrum is reached. This delicate process is called frequency conversion. The accompanying figure compares the frequency conversion required for the new laser crystal with an existing commercial approach for generating ultraviolet light. In the laser using the Ce:LiSAF crystal, input energy from a Nd:YAG (neodymium-doped yttrium aluminum garnet—a commonly used crystal) laser undergoes nonlinear frequency conversion twice. That light is beamed through the Ce:LiSAF crystal, and ultraviolet light is the result. In the existing commercial approach, one beam pumped through a Nd:YAG crystal undergoes frequency conversion; the resulting light is used as input energy for a nonlinear optical parametric oscillator. A second beam from the Nd:YAG crystal goes through two frequency conversions, is combined with the output from the oscillator, and is mixed in an optical parametric amplifier. The resulting light must then go through frequency conversion before ultraviolet light is attained. With two fewer critical frequency-conversion steps, the Ce:LiSAF-based method results in more reliable and efficient generation of ultraviolet light, with less energy lost along the way.
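The arithmetic behind the simpler chain is easy to check: each harmonic-generation step divides the wavelength by the harmonic order. A sketch using the standard 1,064-nanometer Nd:YAG fundamental (the 0.3-micrometer output band is from the text; the exact tuning span is illustrative):

```python
# Wavelength bookkeeping for the Ce:LiSAF pump chain: two successive
# doublings of the Nd:YAG fundamental give the fourth harmonic.

ND_YAG_NM = 1064.0  # standard Nd:YAG fundamental wavelength, nm

def harmonic(fundamental_nm, order):
    """Wavelength of the n-th harmonic of a fundamental beam."""
    return fundamental_nm / order

print(harmonic(ND_YAG_NM, 2))  # 532 nm after the first doubling (2w)
print(harmonic(ND_YAG_NM, 4))  # 266 nm ultraviolet pump light (4w)
# The pumped Ce:LiSAF crystal then emits tunable light near 0.3
# micrometers; a ~10% bandwidth implies roughly 30 nm of tuning range.
print(0.10 * 300.0)            # ~30 nm
```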

The second reason why the new crystal is significant is that it makes an ultraviolet solid-state laser system commercially feasible. Generating laser light is not simple because much of the energy put into the laser system ends up as heat. Yet light energy gain must be larger than the losses if lasing is to occur. Therefore, in every part of the laser system, the objective is to maximize energy gains while minimizing the losses. The cerium laser crystal, which was specifically designed to be an amplifying agent in a solid-state system, generates useful ultraviolet wavelengths so simply that it makes possible a compact, robust, and cost-effective laser system.

Crystal Properties

In the Ce:LiSAF crystal, cerium is the light emitter while lithium strontium aluminum fluoride serves as the crystalline host and preserves the favorable optical properties of cerium.


Diagram labels: (a) Nd:YAG laser → 2ω → 4ω → Ce:LiSAF laser → tunable light at 0.3 micrometers; (b) Nd:YAG laser → 2ω → optical parametric oscillator, and 2ω → 3ω → optical parametric amplifier → tunable light at 0.3 micrometers. (ω = harmonic generation.)

The Livermore team that developed the ultraviolet Ce:LiSAF laser includes (front, from left) Stephen Payne and Christopher Marshall; (back, from left) Andy Bayramian, John Tassano, Joel Speth, and William Krupke.


The width and length of the sensor are small enough to allow the individual ferromagnetic layers to form as single magnetic domains with a preferred magnetic orientation. These orientations are deliberately designed to be antiparallel to each other. Typically, the width of the sensor is 100 to 500 nanometers, and the length is 250 to 1,000 nanometers—a tiny shoe-box shape (see photo and figure).

In the absence of an external magnetic field, the sensor relaxes to its lowest energy state, with the alternating ferromagnetic layers aligning in an antiparallel configuration. Sensor resistivity is higher in this state because current that flows perpendicularly through the multilayer planes encounters greater electron scattering, which increases the resistivity. When a sufficiently large magnetic field is applied to the sensor, the ferromagnetic layers rotate into a parallel magnetization state. In this configuration, current flowing through the multilayer planes encounters reduced electron scattering and, as a result, resistivity is lower. The two resistivity states can be used as the two states in a digital magnetic sensor.

The performance of the CPP–GMR sensor is a significant improvement over conventional GMR sensor design. Stearns’ design has the GMR multilayers rotated 90 degrees so that the current flows perpendicular to the plane of the sensor (see figure). Because the signal from the CPP–GMR sensor is inversely proportional to its cross-sectional area, the signal actually increases as the sensor is scaled to higher and higher densities (and smaller sensor size). This scaling provides great manufacturing cost advantages for the future.
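The scaling claim follows from the geometry of current flowing perpendicular to the planes: resistance is resistivity times thickness divided by area, so at a fixed sense current the signal voltage grows as the sensor shrinks. A sketch with placeholder numbers (not measured device data):

```python
# CPP geometry: R = rho * thickness / area, so the voltage difference
# between the parallel and antiparallel states scales as 1/area.
# All resistivities, dimensions, and currents here are placeholders.

T = 100e-9       # total stack thickness, m (order of the ~100 nm cited)
RHO_P = 1.0e-7   # effective resistivity, parallel (low-resistance) state
RHO_AP = 1.3e-7  # effective resistivity, antiparallel state
I_SENSE = 1e-3   # sense current, A

def signal_volts(width_m, length_m):
    """Voltage swing between the two magnetic states at fixed current."""
    area = width_m * length_m
    return I_SENSE * (RHO_AP - RHO_P) * T / area

print(signal_volts(500e-9, 1000e-9))  # larger sensor
print(signal_volts(100e-9, 250e-9))   # 20x smaller area -> 20x the signal
```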

There are additional advantages to the CPP–GMR architecture. Conventional GMR sensors require shielding to be placed around the sensor for protection from stray flux, and the shields must be electrically isolated from the sensor. The extra parts and related design effort affect sensor size and cost.

On the other hand, the Laboratory’s CPP–GMR sensor is well integrated into the design of the magnetic head: it uses the “write” poles of the magnetic head to serve as shields for the sensor and as conductors for the magnetoresistive current. As a result, the sensor conductors, the shields, and the inductive write poles are all integrated into a single structure to simplify the design, manufacture, and cost of the magnetic head.

Industrial Collaboration

After hearing about the potential for the CPP–GMR sensor, Read-Rite Corporation of Fremont, California, one of the world’s leading suppliers of magnetic heads, became an industrial collaborator with the Laboratory and has been working closely with the team to bring the product to market. The first results of this collaborative work included modifying the design to produce a linear sensor response, a conventional practice in commercial manufacturing, and devising a self-aligning process for multilayer manufacturing, which will greatly drive down manufacturing complexity and costs. Together the two groups are currently developing fabrication sequences and designing tools to use in manufacturing the sensor.

The CPP–GMR sensor will be able to function over a range of information storage densities, beginning at the current state of the art of approximately 1 gigabit/in.2 (1 gigabit/6.4 cm2), which is at the size limits of present magnetic disk drive technology. Scaling up to the sensor’s upper density limits requires no change in the magnetic head architecture, and the CPP–GMR sensor will be more robust and more sensitive as it is scaled down in size.

One interesting potential application for the sensor is the “patient ident-a-card,” which would be credit-card size and contain the entire medical history of an individual. Another application could be storing on a single PC the equivalent of the information in the Library of Congress. The primary importance of the sensor, however, is that it contributes to continued growth in the computer industry. With higher-density sensors and heads, the industry will be able to continue developing products with greater performance at reduced cost.

Key Words: CPP–GMR (current perpendicular to the plane–giantmagnetoresistive) device, disk drive, R&D 100 Award, ultrahigh-density magnetic sensor.


Configuration of the CPP–GMR sensor.

For further information contact Andrew Hawryluk (510) 422-5885 ([email protected]).


Giant Results from Smaller, Ultrahigh-Density Sensor

“MORE, faster, smaller, and cheaper” is an underlying theme of the computer industry. Ironically, one recent technical development that is contributing to this pace came from a group of Lawrence Livermore scientists who were headed toward quite different scientific objectives. But their scientific expertise coupled with some creative thinking led Daniel Stearns and his colleagues in the Advanced Microtechnology Program of the Laser Programs directorate to develop a laser-based spin-off for computer industry use. They designed an advanced, ultrahigh-density magnetic sensor that will solve a problem facing the disk drive industry: how to get beyond the density and storage limitations of present magnetic sensor technology.

Magnetic sensors are a key component of the disk drive. They determine how much and how fast the disk drive can “read.” Because computer users are constantly demanding higher density disk drives (with more memory in the same physical space), manufacturers must constantly push to make smaller and smaller magnetic sensors. Unfortunately, practical size limits have been reached with present sensor technology. As sensors are made smaller, their performance degrades. The performance limits have to do with signal-to-noise ratio: smaller sensors give off smaller signals. At some point, the signals become too small to distinguish from the “noise” coming from the rest of the sensor environment.

The team (pictured) was working in some of the same areas of expertise as those in magnetic sensor research and development, and they became intrigued by the solutions that industry researchers were proposing for the magnetic sensor problem. They wondered what they might be able to do to help solve it.

Industry researchers were investigating magnetoresistive and giant magnetoresistive (GMR) devices. Magnetoresistance—the change in a conducting material’s resistivity when a magnetic field is applied to it—was also recognized by the Livermore team as a promising tool for sensing very small volumes of magnetic media. As they began thinking about the problem, they looked at the GMR devices already developed. They applied their expertise in thin, multilayer films, magnetics, and microfabrication technologies and emerged with a variation on the most promising GMR concept at the time, the “spin-valve” GMR sensor. They called their sensor the CPP–GMR sensor (CPP stands for current perpendicular to the plane).

Simply Different Structures

The CPP–GMR sensor is a microstructure made up of alternating ferromagnetic layers and nonmagnetic metal layers, or spacers. The layers are thin, generally less than 5 nanometers each, and each sensor may contain a total of 10 to 100 individual layers (depending upon material choices and applications). The layer thicknesses are selected to maximize the GMR effect. The total thickness of the sensor is often only approximately 100 nanometers, about one-thousandth the width of a human hair.


The CPP–GMR sensor uses alternating thin-film layers of magnetic and nonmagnetic materials and shape anisotropy that is inherently stable along the long axis.

Livermore team members include (clockwise from top left) Stephen Vernon, James Spallas, Charles Cerjan, Nat Ceglio, Andrew Hawryluk, and Don Kania. (Not pictured are Benjamin Law and Daniel Stearns.)

Diagram labels: sensing area, magnetic layer, nonmagnetic conducting layer.



Interference lithography has been used in a variety of other applications for more than 15 years, especially for fabricating diffraction gratings.* The technology offers the promise of low-cost, high-resolution, bright, and energy-efficient displays that are ideal for applications ranging from portable computers and instruments to virtual-reality headsets and large workstations. What’s more, the technology may have direct application in lowering the manufacturing costs of other products, such as computer memory chips.

The LLNL process can easily produce a high-density array of posts or holes 0.2 to 0.5 micrometers wide in a photosensitive material, perfect for creating the densely packed and precisely arrayed patterns required for FED production. The technology allows the use of inexpensive substrates such as silicon and glass and works with proven photoresist materials and processes that are used in traditional lithography techniques.

Using Lasers to Produce Precise Patterns

The laser interference technique is based on the pattern produced by two interfering laser beams of a given wavelength. The standing-wave interference pattern produces alternating light and dark fringes with a spacing determined by the angle at which the beams intersect. For a typical near-ultraviolet or violet laser operating in the range 0.3 to 0.4 micrometers, lines down to 0.2 micrometers can be fabricated, a resolution easily exceeding that required for FED manufacturing. With multiple exposures, essentially any pattern that can be formed by intersecting lines can be fabricated.
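The fringe spacing follows the standard two-beam interference relation, period = wavelength / (2 sin θ), where θ is each beam's angle from the normal; the relation is textbook optics consistent with the text, and the example angles here are illustrative:

```python
# Fringe period of two interfering beams: lam / (2 * sin(theta)).
import math

def fringe_period_um(wavelength_um, half_angle_deg):
    """Spacing of the alternating light and dark interference fringes."""
    return wavelength_um / (2.0 * math.sin(math.radians(half_angle_deg)))

print(fringe_period_um(0.35, 30.0))  # ~0.35-um period at a modest angle
print(fringe_period_um(0.35, 61.0))  # ~0.20 um, the limit cited in the text
```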

In order to apply interference lithography to array areas larger than 6,000 cm2 (1,000 in.2), the LLNL researchers further developed specialized techniques. For example, meniscus coating allows the substrate (such as silicon or glass) to be coated with the liquid photoresist solution to exactly the desired thickness.

Another technique that the team developed indicates when the pattern geometry is optimized: the growth of the features is monitored in real time during the development step. This process is critical because, although the Livermore technology is relatively straightforward, many variables such as laser intensity, coating thickness, and temperature come into play simultaneously.

With the integration of these new fabrication procedures, the team has succeeded in fabricating 2,500-cm2 (400-in.2) arrays of submicrometer photoresist material suitable for the production of field emitters. The large arrays contain about 1 trillion submicrometer structures, with better than ±5% spacing uniformity.

Impressive Results Attracting Industry

The technical results have been so impressive that several major U.S. display producers and lithography vendors are collaborating with the LLNL development team. Some of these firms have successfully converted the pattern left by the photoresist material into functioning emitters by a series of etching and evaporation steps.

Team members say the new technique will find direct use in other applications requiring deep submicrometer patterns. The most significant may be a new method for the critical lithography steps in DRAM (dynamic-random-access-memory) chip manufacture, a $150-billion-per-year market. LLNL researchers are currently discussing the approach with major U.S. manufacturers to evaluate the DRAM application.

The first commercial products with FEDs manufactured with the Lawrence Livermore process may be high-resolution units for military needs such as in aircraft and ground vehicles. Somewhat later, FEDs should start appearing in such consumer products as portable and desktop computers and even flat-screen televisions with picture quality comparable to that from the best conventional cathode ray tube TV displays.

The laser interference lithography process is part of a much larger effort involving a dozen industrial collaborations working to advance flat-panel display technology, with funding provided by the Department of Energy, Department of Defense, and industry. All of the flat-panel efforts take advantage of LLNL expertise in lasers, optics, and materials science and state-of-the-art facilities.

Key Words: display, field-emission display (FED), laser, laserlithography, R&D 100 Award.

For further information contact Michael Perry (510) 423-4915 ([email protected]).


Thinner Is Better with Laser Interference Lithography

FROM digital watches to portable computers, flat-panel displays form an integral part of a myriad of consumer and military products. The $8-billion annual worldwide market for flat-panel display technology, now overwhelmingly dominated by active matrix liquid crystal displays (AMLCDs), is projected to grow to more than $20 billion by the end of the decade.

However, as anyone using a portable computer can attest, liquid crystal displays have significant limitations in brightness, angle of viewability, and power consumption. For U.S. security and economic experts, an even more significant concern is that liquid crystal display technology is dominated by Japanese companies; American firms control less than 3% of the market. A breakthrough in display manufacturing by Lawrence Livermore researchers, however, may well put U.S. flat-panel producers in a position to lead the market with a simple, cost-effective way to produce field-emission displays (FEDs).

Consuming less power than AMLCDs, FEDs are a new kind of flat-panel display technology that can be thinner, brighter, larger, and lighter. They have numerous potential applications in portable and large-area displays and can, in principle, cost much less to manufacture.

Moving to FEDs

Active matrix technology uses liquid crystals sealed between two thin plates of glass, with the display divided into thousands of individual pixels that can be charged to transmit or block light from an external source to form characters or images on a screen. In contrast, each pixel in an FED acts as a microscopic cathode ray tube (CRT) and produces its own light. Instead of a single electron beam sweeping across an array of phosphor pixels as in a conventional CRT, the FED has millions of individual CRTs, with accelerated electrons crossing a vacuum gap and impinging upon a phosphor-coated screen to emit light.

Switching on blocks of emitters that comprise a pixel in a given sequence achieves the same effect as changing a selected pixel in a liquid crystal display. What’s more, FEDs produce high brightness over the full range of color but could require only one-tenth to one-half the power of a conventional liquid crystal display.

The main problem with FEDs has been that their fabrication requires a micromachining technology with the ability to pattern very small structures over large areas. The display’s electron-generating field-emitter tips are less than 100 atoms wide and must be made precisely and uniformly over the entire screen area. Now, only small-scale (1 sq. in.) FEDs can be produced by the extremely slow and expensive process of electron-beam lithography. Conventional photolithographic techniques, while capable of producing larger arrays (approximately 10 sq. in.), cannot produce sufficiently small emitters.

Citing U.S. firms’ mediocre penetration into the critical flat-panel display market, the federal government formed the U.S. Display Consortium and assembled a White House Flat Panel Display Task Force. Both the consortium and the task force concluded that to develop a viable domestic flat-panel display industry, U.S. firms could either partner with an established Japanese manufacturer or “leapfrog” the technology with a new approach.

Leapfrogging the Competition

A leapfrog approach was demonstrated by a Lawrence Livermore team headed by Laser Programs physicist Michael Perry. The team perfected the process, called laser interference lithography, and demonstrated its applicability to large (>2,500-cm2) patterning. The process is expected to aid substantially in the successful commercialization of high-performance FEDs and enable the technology to capture a significant share of the flat-panel market.


Developing this new flat-panel display technology are Andres Fernandez, James Spallas, Nat Ceglio, Jerald Britten, Andrew Hawryluk, Hoang Nguyen, Robert Boyd, and Michael Perry.

Large-format field-emitter mask pattern produced by laser interference lithography and the field-emission display (FED) concept.

Diagram labels: faceplate, phosphor, light, vacuum, gate, insulator, emitter tip, electrons, cathode.

*The group has been involved in two previous Livermore R&D 100 awards: the highly dispersive x-ray mirror in 1987 (Ceglio, Hawryluk, and Stearns) and the multilayer dielectric gratings for high-power lasers in 1994 (Boyd, Britten, and Perry).


Abstract

Assessing Humanity’s Impact on Global Climate

Lawrence Livermore’s Atmospheric Sciences Division is applying computation expertise—originally developed to simulate nuclear explosions—to the task of climate modeling. We also make use of Livermore expertise in atmospheric science that grew out of efforts to model fallout from nuclear testing. These model-building and simulation efforts in climate studies are synergistic with other Laboratory programs that depend on combining computing with information and communication management.

Major efforts are aimed toward understanding how the biosphere and oceans take up and remove carbon dioxide, what role pollutants from fossil fuels play in determining sulfate aerosol concentrations and their impact on climate, and to what degree climate naturally varies within the biosphere. In addition, we work to reduce systematic errors in the models in collaboration with researchers in LLNL’s Program for Climate Model Diagnosis and Intercomparison. ■

Contact: William Dannevik (510) 422-3132 ([email protected]).


© 1996. The Regents of the University of California. All rights reserved. This document has been authored by The Regents of the University of California under Contract No. W-7405-ENG-48 with the U.S. Government. To request permission to use any material contained in this document, please submit your request in writing to the Technical Information Department, Publication Services Group, Lawrence Livermore National Laboratory, P.O. Box 808, Livermore, California 94551, or to our electronic mail address [email protected].

This document was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor the University of California nor any of their employees makes any warranty, expressed or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or the University of California. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or the University of California and shall not be used for advertising or product endorsement purposes.

U.S. Government Printing Office: 1996/784-071-60002