Capstone Journal Volume II 2010 WORCESTER ACADEMY Graduation Projects: Explorations in the Real World


Table of Contents

Higher Dimensions: Visualization and Practical Application
by Nimish Ajmani, '10
Pages 4–16

The Effect of Subunit Mutation on Cooperativity in Tetrameric Scapharca Hemoglobin
by Julien Angel, '10
Pages 17–34

Above the Oil Influence: The Renewable Energy Intervention We Need But Aren't Ready For
by Elena Stamatakos, '10
Pages 35–70

Higher Dimensions: Visualization and Practical Application

by Nimish Ajmani
June 2010

Introduction

What is a dimension? In the realm of mathematics, the best definition of a dimension is a unit direction in space. It is a basic axis on which we model figures such as lines, squares, and cubes. Most people are familiar with and comfortable with the idea of living in a three dimensional world. These three dimensions are length, width, and height. In mathematics, each of these dimensions is measured along an axis, and the three axes used for a three dimensional coordinate system are represented by the letters x, y, and z.1

The idea of a fourth dimension is complex for many people, even mathematicians. As our existence is a three dimensional one, there exist many ways to visualize the fourth dimension. The most common of these is that the fourth dimension is that which we call time, though Professor Oliver Knill of Harvard University has told me that this is not at all the best way to represent the fourth dimension. The popularity of this representation probably stems from The Time Machine by H. G. Wells, in which the unnamed Time Traveller states that any object in order to exist “must have extension in four directions [dimensions]: it must have Length, Breadth [width], Thickness [height] and–Duration.” At its core, the concept of the fourth, and higher, dimensions is the focus of this paper. How can one start to visualize these dimensions in a manner that allows for clear mathematical representation? How do higher dimensions have a practical application in society? These two questions only begin to scratch the surface of this elusive concept which, as represented by H. G. Wells' novel, has had humans scratching their heads for quite some time.


1 For the purposes of this paper, these three axes are oriented in a z-up manner; that is, the x and y axes represent an object's length and width, and the z axis represents the object's height.


Visualizing the Fourth Dimension

Although time has been discussed as one of many methods that can be used to visualize a fourth dimension, it is admittedly crude. Observers in our three dimensional space are required to “see” the fourth dimension through the animation of an object over the dimension of time, and as the Time Traveller from The Time Machine has stated, we are limited in our movement in this dimension to one direction, forward. Due to our limitations in manipulating time, we are unable to perform any real experiments using it, and therefore for this paper, I am abandoning this particular method. Rather, a much clearer way in which to visualize the fourth dimension involves the use of density. To best explain this concept, I will start out by drawing an analogy to the second and third dimensions. Density in an object can be defined as the concentration of mass at any specific point in that object. Most objects people tend to work with in daily calculations usually involve constant density, and this is a good place to start. Let us for a moment take the idea of a planar figure with constant density. Now of course a planar figure does not actually exist, so the best way to describe one with constant density stems from an equation omnipresent in multivariable calculus, the equation for finding the area of a plane figure lying on the xy-plane:
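One way to write such an integral, with the inner limits running from 0 to a in y for the length and the outer limits from 0 to b in x for the width, is

A = \int_0^b \int_0^a 1 \, dy \, dx = a \cdot b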

Let's take a look at exactly what this equation is saying in the first place. The inner integral, in variable y, is telling us the length of the figure we are evaluating, and the outer integral defines the width. The limits of these integrals, a and b, represent the numerical values of the length and width, and are constants in this equation. These parts of the equation boil down to a simpler equation familiar from basic geometry, A = l • w. However, if we continue to look at this equation and consider the integrand of one, we see that this figure has a density of one throughout. This can be represented in many ways, the first of which is a shaded solid.

This image shows a rectangle, with dimensions a • b, just as defined in the integral. The constant density is represented by the lack of variation in color inside the rectangle.


If we were to find the area of this figure by multiplying the color darkness at points in the image by the area in that section, we would be tempting the third dimension, and for this I require a different method of representation, a three dimensional one:

This figure gives us a slightly different way of seeing this image. In this representation, the density at any point is represented by the thickness of the figure, and at a constant density of one, the yielded three dimensional figure is a flat box of height one. No geometer will dispute that the volume of this three dimensional figure is the same as the area of the two dimensional figure above. This equality is the key to understanding how we may use density to represent the fourth dimension, but first let's take a look at a two dimensional figure with unequal density at all points, represented by the following double integral:
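Assuming the same rectangular limits as before, the integral described in the next sentence takes the form

M = \int_0^b \int_0^a (x^2 + y^2) \, dy \, dx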

In this equation, the density is represented by x² + y², so that it increases the further away a point is from the origin. The two dimensional representation for this kind of figure is this:

Notice that in this figure, the density, represented by the greyscale gradation, is gradually increasing from the center, with white being zero density. The other way to represent this kind of figure also involves three dimensions, and it is with this that we can start to crack the fourth dimension. But before I jump ahead of myself, let us take a look at the three dimensional figure generated from this density equation.

This is starting to look more like a three dimensional volume problem than a two dimensional density problem, and rightly so. In this case, the total mass of the two dimensional rectangle is now the volume between the solid shown above and the xy-plane. In fact, a trained mathematician will note that the integral shown above is actually equivalent to the following integral, which is representative of the volume just described:
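With the z limits inferred from the description that follows, this volume integral takes a form such as

V = \int_0^b \int_0^a \int_0^{x^2 + y^2} 1 \, dz \, dy \, dx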

What we have here is known as a triple integral. Unlike a triple integral over a simple box, this integral has its z limits defined by a plane below (in this case the xy-plane) and a surface z = f(x, y) above. However, the density of this new solid figure is one, and we can visualize this by imagining that the density of a figure can be represented by the concentration of particles in a particular unit of space. Another way to think of this is temperature: imagine that each point in an object is at a specific temperature, and a density of one represents thermal equilibrium throughout the object.


Moving forward with three-dimensional space, we will take a look at a simpler three dimensional object now, a box:
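For such a box with constant density one, the corresponding triple integral is simply

V = \int_0^c \int_0^b \int_0^a 1 \, dx \, dy \, dz = a \cdot b \cdot c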

This is a box in the first octant with sides of lengths a, b, and c, boiling down to the equation for volume, which is V = a • b • c. In this case again, the box has a density of one, or constant density. As we saw earlier, this can be represented by a simple solid, equally dense figure, in this case a box.

This box, like the planar figure before it, has constant density throughout. However, it is at this point that figures will begin to fail me, for there is no easy way to display an even density throughout this box. As I said before, this density method is what we will use to try to visualize the fourth dimension. Earlier I showed how we can extend the density of a two dimensional figure into the height value of a three dimensional solid. For this next step, we will be doing something very similar, except that there is no way to generate an image representative of the fourth dimension. Now an object of variable density is relatively simple to imagine compared to a fourth dimensional object. For the purpose of this example, we will use the object defined by the following equation.
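Taking the density at each point to be its z coordinate, as described in the next paragraph, one natural reading of that equation is the mass integral

M = \int_0^c \int_0^b \int_0^a z \, dx \, dy \, dz = \frac{a b c^2}{2}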


This equation defines the same box we saw above, but the density, or temperature, depending on which method is easier for you to conceptualize, is directly proportional to the z value of the figure: that is, when z is zero, the density at that point is zero, and when z is one, the density is one, etc. This kind of object is actually very common in our daily lives, as many objects used day to day are of variable density based on their purpose. This even includes the MacBook Pro I am writing this on, because within the laptop, the cells of the battery are of a different density than the CD drive's laser. Other examples of variable density also include objects like boats and airplanes. Boats especially serve as a prime example of this, as the thick metal walls designed to keep water out have a far greater density than the air within the boat, and it is this principle that allows the boat to float and carry passengers. But moving on from the three dimensional variable density problems, we shall now attempt to force our minds to visualize a fourth dimension. As we saw earlier, density in the second dimension can be forced into a constant density third dimensional figure by extending points upward in the +z direction. By extension, what we will be doing for the fourth dimension is extending the variable density of the third dimensional box shown above into the fourth dimension. Admittedly, this will be hard for even trained mathematicians to do. Because there is no frame of reference for us to see a fourth axis, we cannot determine in which way these points should extend. Now for a moment here, I will beg the use of time to aid us, and let us say that density in this case will determine how far into time a point of the figure will travel. For this example, we see that the figure has a density based on the z coordinate of a point, so if a point exists at the base, where z = 0, then the point will become non-existent as soon as we start this clock. Points one unit up will survive for one unit of time, two for two, etc. What this results in is a box that is slowly shrinking until it reaches the top, where it becomes non-existent. The reason time cannot be used as a practical method for visualizing the fourth dimension is that “we can not move freely forward and backward through time.” (Knill) I had the pleasure to attend a lecture by Professor Knill on higher dimensions, where he demonstrated that the best way to see the fourth dimension was through a method of projection into the third dimension. For me to best explain this, I will need to now make a detour to the book Flatland by Edwin A. Abbott.


Flatland and Higher Dimensions

Flatland is a book published by Edwin A. Abbott in the 19th century about the nature of higher dimensional mathematics. The protagonist of the story is a square by the name of A. Square, who discovers at the turn of a new millennium that a higher dimension exists above him, the third dimension. The trouble A. Square has in visualizing this dimension, however, is analogous to the trouble we had in visualizing the fourth. To help me better understand the nature of the fourth dimension, I put myself to the task of creating my own animated version of Flatland. Working with an excellent cast, I found that in the animation process, I learned quite a bit about the nature of space as we see it by putting myself in A. Square's shoes, so to speak. When presented with a being of a higher dimension, A. Square was forced to, at first, limit his viewing of this being to a projection of that being into his space, and it is this method that we shall attempt to use in order to visualize our fourth dimensional object. But before we deal with two and three dimensions, let us step back to one dimension, and A. Square's encounter with the King of Lineland. When A. Square was conversing with the King of Lineland, he was attempting to explain the existence of his second dimension, but the King of Lineland was unable to comprehend this idea. He was unable to fathom moving in another direction because, as far as he was concerned, he had always been able to move along one line and could never move himself away from that line. It took A. Square's practical example of moving in and out of the line to convey that he did exist, even though the King of Lineland himself still did not believe. But what we will focus on here is how the King of Lineland saw A. Square. When A. Square entered Lineland, the King was only able to see a line. This line was expanding and contracting, yes, but it was still only a line as far as the King was concerned. This line that he saw is what we call a cross section in Lineland of the two-dimensional square. If he were to extend the thought, he would be able to “see” the square by putting together the many linear shapes that the square was composed of, therefore beginning to grasp the nature of a square. By extension, A. Square was faced with a similar challenge only moments later, when he was confronted in his living room by A. Sphere. Mr. Sphere was attempting to explain to A. Square his third dimensional nature using the same method A. Square used, by providing A. Square with multiple cross-sections of his figure, circles. The square had the same trouble the line had in visualizing his higher dimensional friend; that is, he could not see a three dimensional figure because he does not live in three dimensions. It took the sphere several third dimensional tricks to convince him even partially, and only when A. Sphere pulled A. Square out of Flatland did he believe in the existence of a third dimension. Unfortunately for us, we cannot pull ourselves into the fourth dimension, not yet at least. What we can do, however, is use this theory of cross-sections to visualize objects of a fourth dimensional nature. If a sphere entering a two dimensional plane has a cross section that is a circle, then a hypersphere, the fourth dimensional counterpart to a sphere, should make a series of spheres as it passes or falls through our three dimensional plane of reality.
However, this does not do us much good: though we can conceptualize that the hypersphere makes a series of spheres, we cannot fathom exactly what a hypersphere should look like. The next method we can use is something called a projection.


A projection is exactly what it sounds like: an object is literally projected onto the dimensional plane below it. For example, if we were to take a circle and project it onto a line, we would get a line whose length is equal to the diameter of the circle. To shake this up a bit, we could project a square onto our line, except this time the line would vary depending on the orientation of the square. At its shortest, it would only be the length of one side of the square, but it could reach a length as great as the diagonal of the square. With this we could actually determine the nature of basic shapes of a two dimensional object based on its one dimensional projection. For this to work, we would need to take a look at the shapes produced by our two dimensional figure. When the line does not fluctuate, as in a circle, it can be determined that it is a circle. When it does fluctuate, as in a triangle or a square, we can use the measurements of the line at its longest and shortest to determine the nature. Combining this with the frequency of the shifts, and given the time it takes for the figure to undergo one rotation, the shape of regular and basic 2-D figures can be determined. To extend this to three dimensions, we have to again start by looking at the projections we are dealing with. A sphere, for example, will have a circular projection. A good example of how this works is grounded in an actual projector. If you were to project a sphere with a projector, your eye would notice and recognize that it is a sphere based on the levels of lighting and the curvature of the texture of the shape. This is what is known to us as depth perception, and is an acquired skill. But to explain how it would be a circle, I shall again make reference to A. Square of Flatland. When A. Sphere was showing Mr. Square around Spaceland, one of the objects A. Square saw was a box. However, because he had no idea of the concept of shading, he could only see the box as an irregular hexagon that morphed shape. The same applies to our projected sphere. Without the shading seen in a 3-D image of one, the sphere breaks down and becomes a simple circle. Boxes become hexagons or squares, and other shapes take on various forms as they are projected. With enough training, though, it would be possible to determine what these shapes are based on their projections into the second dimension. Again using the exact shape and the way it changes over time, one would be able to determine a general rough idea of its 3-dimensional configuration, and the same extends to even the fourth dimension. One of the most popular shapes used to represent the fourth dimension is the omnipresent hypercube. A hypercube projected into 3-space looks like a large cube lattice with a smaller cube on the inside, like in the figure to the right. The animation of a hypercube through time is commonly mistaken to be the cube turning itself inside out; that is, the inner cube rotates outward as vertices on the outside rotate inward. However, this is a definition that lends itself to the limited nature of our three dimensional minds. What the cube is actually doing is rotating about its fourth dimensional axis, and in doing so is turning over on itself. This would be the equivalent of a square rotating about one of its own axes, and its projection changing because it is rotating out of the plane of projection. This is what results in the warping projection provided by the hypercube. The nice thing about this projection method is that we do not need to rely on time to see the shape of the projection. Though time can be used to extend the shape into a manner allowing us to see its fourth dimensional rotation, it is not necessary to create an image of the shape, as is proven by the existence of the hypercube above. If we were to use time to illustrate this rotation, what we would see is the cube rotating around one of its 4 axes. Because animation is very hard to do on paper, I have taken the animated sequence of a hypercube rotating about one of its fourth dimensional axes and made an image sequence below.

Although this looks like a cube turning inside out, it is acting in this manner because of the way the projection is working in the third dimension. If the cube were rotating about one of the three axes we're familiar with, we would see a solid figure rotating; it is only because of the way the cube projects onto our space that we see this in-turning of the cube. Although this is all fun math to play around with, the application of the fourth dimension is the truly interesting part. Even though most of the applications are only theoretical, they are based in sound science and math. However, there is one real application used every day: multi-vector data analysis of any number of dimensions.
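As an illustration of this projection idea, the sixteen vertices of a hypercube can be pushed down into three dimensions with a short script. This sketch is not part of the original paper; the simple perspective formula and the viewing distance are assumptions chosen only to reproduce the cube-within-a-cube picture described above.

from itertools import product

def project_hypercube(distance=3.0):
    """Project the 16 vertices of a hypercube from 4-space into 3-space.

    Each vertex has coordinates in {-1, +1}^4. Dividing by (distance - w)
    sends the w = +1 cube to a larger outer cube and the w = -1 cube to a
    smaller inner cube, giving the familiar cube-within-a-cube projection.
    """
    projected = []
    for x, y, z, w in product((-1.0, 1.0), repeat=4):
        factor = 1.0 / (distance - w)
        projected.append((x * factor, y * factor, z * factor))
    return projected

if __name__ == "__main__":
    for point in project_hypercube():
        print("(%+.3f, %+.3f, %+.3f)" % point)

Rotating the vertices in a plane that involves the w axis before projecting would reproduce the apparent turning-inside-out seen in the image sequence.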


Practical Application and Beyond

The fourth and higher dimensions have few known applications in the real world, in mathematics and beyond. In the realm of mathematics, the fourth dimension is most commonly used in multi-vector calculations. Data points can easily be represented by vectors in as many dimensions as necessary, and doing so is not uncommon. However, the really fun applications of these higher dimensions are, because of humanity's limited technological development, still in the theoretical stage. The first of these, and the most popular, is the concept of a wormhole. A wormhole, for those who do not know, is a tunnel through space-time that allows one to traverse vast distances almost instantaneously. To describe how this would work, I must again make an analogy to the lower dimensions. To start off this puzzle, I first will ask a riddle-like sort of question. On a two dimensional sheet of paper, what is the quickest way to get from any one point to another on that page? For example, what is the shortest route between the “p” in the word Practical at the head of this page, and the period at the end of this sentence? Most people's first answer will be a direct linear route, i.e. a straight line between the two points given. However, there is a shorter route that involves the third dimension.

The diagram above is a common diagram of a wormhole. What has happened is that in order to shorten the distance between two points on a two dimensional plane, we have folded that two dimensional plane in the third dimension, and put the two target points next to each other. In this way, we can simply jump from one point to the other, through the third dimension. The same idea can hold true for the third dimension. In order to traverse the space between two points, it would be required for us to fold this third dimensional space through the fourth dimension, and jump the space in between to reach the target point. Although this idea seems far-fetched, space is not as unbendable as one would think. In fact, we're living in a divot in the space-time fabric right now: the gravitational field of the Earth itself. Gravity has been proven to bend the space around the objects it originates from, enough to even bend light around objects like the moon and the sun. This bending is what causes the flare behind the moon right before the sun finally disappears in a solar eclipse. This effect is also observed around the moon when stars are just at the horizon from our point of view. This is what allows us to see stars that sit just behind the edge. The most prominent example of warped space involves a black hole. Black holes, according to mathematics, have a noticeable lensing effect around them, resulting in a visual distortion of space and time. But another question that is commonly asked is: when matter enters a black hole, where exactly does it go? It is possible that it enters some sort of supermassive and superdense mass at the center, but the idea of the fourth dimension comes in when one links black holes. It is possible in theory to accelerate two black holes enough that they distort to form either a wormhole or a time machine. This however requires a massive amount of energy, beyond the scale of anything humans are capable of.

Beyond the fourth dimension are the fifth and sixth. These dimensions are hard for humans to visualize, much harder than the dimensions below them. There are no known applications for these, outside of theoretical mathematics and multi-term data analysis. However, just because we don't know what to do with these dimensions in real life does not mean we need to stop exploring. Every scientific or mathematical discovery starts in the theoretical, and so does this journey into higher dimensions. And if we are to take any lessons from our friend A. Square, we should not close our minds to these ideas, because for all we know, tomorrow we will be visited by fourth dimensional beings ready to show us their world.


Works Cited

Wells, H. G. (1895). The Time Machine. In H. G. Wells: Complete and Unabridged, Seven Novels. New York: Barnes and Noble, 2006. pp. 3–64. Print.


The Effect of Subunit Mutation on Cooperativity in Tetrameric Scapharca Hemoglobin

Julien Angel
5/28/2010
With the assistance of Dr. William E. Royer

Introduction

Hemoglobins are proteins that bind and transport oxygen in red blood cells. Specifically, oxygen molecules bind to the heme group, an iron complex present in all hemoglobin. Many varieties of hemoglobin exist, with many variations between species. Many of the variants are comprised of multiple subunits, leading them to have many useful and interesting properties. Among these properties is cooperative binding: when the chemical affinity increases with the amount of bound substrate. Understanding the mechanisms behind cooperativity has been a goal of researchers for some time.

The hypothesis of this project is that structural changes to one subunit of a cooperative complex affect the structural changes of other subunits and impact the cooperativity of the entire protein. This hypothesis will be explored through oxygen-binding experiments on mutants of Scapharca inaequivalvis tetrameric hemoglobin. Changes to the structure of the protein implemented by site-directed mutagenesis will be analyzed for their effect on the oxygen affinity and cooperativity of the protein.

Background

DNA is made up of four basic components, called bases: adenine, thymine, cytosine, and guanine. The ordered combination of these four molecules forms the genetic blueprint for an organism. DNA's physical structure is a double-stranded helix. Its main function is to hold the genetic information used to create proteins. The process of turning this sequence of bases into a large biomolecule has two main stages: transcription, which is the copying of genetic information, and translation, which is turning this information into functioning proteins.

Figure 1: Connection between DNA and Amino Acid Sequences. Adapted from U.S. National Library of Medicine.


Proteins are large biomolecules comprised of many smaller molecules. These smaller molecules are called amino acids. There are 20 such amino acids occurring in nature. When these amino acids are connected in sequence according to the genetic code established in DNA, the molecules interact, which causes them to fold into complex 3D shapes. Proteins serve a variety of functions in organisms. Some proteins catalyze, or accelerate, the speed of chemical reactions. These are called enzymes. Other proteins perform transport functions, like hemoglobin, or serve in the structure of cells.

Proteins have three, potentially four, levels of structure. The first level is the primary structure: the sequence of amino acids that comprise the protein. From the primary structure arises the secondary structure. This is formed by hydrogen bonding between the backbones of the amino acids, and typically results in either an alpha helix or a beta pleated sheet. Next comes the tertiary structure: the physical 3D arrangement of the protein. The tertiary structure of a protein is primarily determined by interactions between the amino acids that comprise it, and specifically interactions between the hydrophobic elements. A protein's quaternary structure is the arrangement of multiple subunits of a protein, if they exist. Only proteins that are comprised of multiple smaller proteins have quaternary structures.

Hemoglobins are proteins with a specific function: to bind and transport oxygen to where it is needed, and then release it. Many hemoglobins are proteins with multiple parts, called subunits. Proteins with two subunits are called dimers; those with four are called tetramers, etc. Human hemoglobin, HbA, is an example of this. The key component of hemoglobin is the heme group. The heme group is an iron ion held in place by other molecules, and is what actually binds to oxygen. Hemoglobin is found in red blood cells in humans.

Figure 2: Demonstrates tetrameric HbA and heme groups. Reprinted from Encyclopædia Britannica, Inc.

A commonly occurring statistic when discussing hemoglobin is its oxygen affinity. This is the tendency of oxygen to bind to the protein. A hemoglobin protein with high oxygen affinity is one that binds oxygen very easily and quickly; low affinity indicates the opposite. Oxygen affinity is primarily based on the structure of the protein, as most chemical and biological reactions are. The accessibility of the heme group, the shape of the interface where the bonding occurs, and many other factors influence how easily oxygen can bind.

Often, a protein such as hemoglobin can have multiple conformations that have differing attributes. For example, hemoglobin has an R conformation and a T conformation. The R, or relaxed, conformation is the one with significantly higher oxygen affinity, but is energetically unfavorable. Therefore, the protein commonly exists in the T, or tensed, conformation, which, while having less affinity for its ligand, is a much more energetically favorable state.

When hemoglobin proteins are comprised of multiple subunits, oftentimes their function is altered by cooperative binding. When a hemoglobin protein is bonded to oxygen, it undergoes structural changes that increase its oxygen affinity. In cooperative binding, when one binding site increases its oxygen affinity, other binding sites also increase their affinity, causing them to bind oxygen more readily. Plainly stated, it is the connection between increase in bound ligand and increase in affinity. The first binding in a cooperative hemoglobin sets off a chain reaction that allows the rest of the oxygen to bind more quickly. This benefits the hemoglobin by making it more efficient, as a hemoglobin molecule not carrying its full capacity of oxygen is wasting space.


In experimenting with protein and other biological molecules, various elements of biotechnology are employed. One of the most important involves modification of a naturally occurring structure. Plasmids are circular rings of DNA, single or double-stranded, that serve as part of the genetic code of bacteria. They serve the same purpose as DNA in any other cell, but are mobile. They can replicate, produce mRNA, and serve as important parts of bacterial cells' functionality. Also, they are capable of infecting other bacterial cells, a process called transformation.

Because plasmids are small and mobile, they are easily altered and adapted by biologists. The alteration of genetic information is called mutagenesis. When this is directed at a specific section or sequence of genetic code, it is referred to as site-directed mutagenesis. A common technique among biologists and biochemists is to add genetic material to plasmids, usually for mass production of a protein. When plasmids are used in this manner, they are known as vectors.

When adding genetic information to a plasmid, restriction enzymes, also known as restriction endonucleases, are invariably used. Restriction enzymes are types of proteins that recognize specific sequences of DNA. When such an enzyme comes across that sequence of DNA, it breaks the DNA chain at that point. Some restriction enzymes leave sticky ends when they cut DNA. This is when the enzyme leaves one strand of the DNA longer than the other. The significance of a sticky end is that it allows the DNA to reconnect with another end that was cut by the same restriction enzyme. Since the same enzyme cut both ends, it left the same offset in the DNA, allowing the two sides to fit together and reconnect.

Figure 3: Demonstration of the use of plasmids as vectors. Reprinted from the University of Alabama in Huntsville.

Structure and Function of HbII

This project uses homodimeric and heterotetrameric hemoglobins from the blood clam Scapharca inaequivalvis. These two proteins (ScHbI and ScHbII) are well-documented examples of cooperative hemoglobin function, and therefore are excellent systems in which to study the mechanisms and functionality of oxygen affinity and cooperativity in hemoglobin.

Scapharca hemoglobin is similar in many ways to human hemoglobin (HbA). Like HbA, it exists as a heterotetramer: a tetramer with two variants of subunits. In HbA, these are known as α and β. The tetramer consists of two of each. The tetramer produced by these acts cooperatively, for maximum efficiency. The most significant differentiating factor between human hemoglobin and Scapharca hemoglobin lies in the interface in which it binds oxygen. Human hemoglobin relies on interaction between α and β subunits. In ScHbI and ScHbII, however, the binding interface is created by interactions between the E and F helices of each subunit. In both, ligand-binding involves significant structural transformation, making Scapharca an excellent resource for exploring a wide range of globins. Where hemoglobin differs from many other oxygen-binding proteins is in its cooperativity. Many of these other proteins either have non-cooperative quaternary structure, or are monomers.

Scapharca inaequivalvis hemoglobin exists in both dimer and tetramer forms, as previously mentioned. The tetramer form of the protein has two homodimers, commonly labeled as subunits A and B. While the dimeric form of this hemoglobin is often the target of experimentation, here the focus is on the tetrameric form. This is because subunits of a homodimer cannot be modified independently of each other. For experimentation to be done concerning differing subunits, each must be able to be altered individually; hence the usage of a dual-homodimeric tetramer.

Figure 4: Tetrameric and dimeric forms of Scapharca hemoglobin.

Before the specific mutations can be addressed, it is important to understand the structural conformations that occur during oxygen binding in hemoglobin. When liganded, the protein's heme group moves, Phe 97 undergoes conformational changes, and there is change in the arrangement of water in the interface. These structural changes are integral to the function and cooperativity of HbI and HbII. The most prominent of these is the change to Phe 97. In unliganded HbI, Phe 97 is tucked into the heme pocket, and restricts the heme iron from being accessible. In this T (tensed) conformation, the oxygen affinity of the hemoglobin is lowered. When liganded, however, this Phe swings outward, allowing the heme iron to descend. This shifts the complex to its R (relaxed) conformation, increasing its oxygen affinity. When Phe 97 is replaced by a tyrosine, however, the protein is locked in its high-affinity R state. The hydroxyl group of Tyr creates a larger side chain, preventing it from tucking into the heme pocket. This keeps the heme iron accessible, greatly increasing its affinity. As seen in Figure 5, crystallography shows that this change does not have an extreme effect on the shape of the hemoglobin; the change in affinity comes from locking it into a high-affinity state. Tetrameric HbII is tested with a single subunit locked in a high-affinity state, and the results are compared to the same protein with both subunits mutated, and neither mutated.

Figure 5: Demonstrates the structural effect of the F97Y mutation


Mutagenesis

The desired mutations to the wild-type Scapharca inaequivalvis HbII hemoglobin require amino acid replacements. The structural change on the A subunit is achieved by replacing the phenylalanine at the 97th position with a tyrosine. On the B subunit, this same substitution occurs at the 99th position. These changes are abbreviated as F97Y and F99Y, respectively.

To produce this mutation, XL1-Blue E. coli cells will be transformed with recombinant HbII genes. Two mutants are designed: a single mutant, with a mutant A subunit and a wild-type B subunit, and a double mutant, with both subunits mutated. The mutants are labeled A(F97Y)B(WT) and A(F97Y)B(F99Y). This is made possible by the heterotetrameric nature of HbII. Because there are differing subunits, coded for by different genes, mutations may be made to only some of the subunits, allowing observation of the effect on the final protein. Oligonucleotide primers, short DNA sequences containing the desired mutation, are designed from the Scapharca hemoglobin genes, according to the specifications outlined in the QuikChange protocol (see Appendix). These short pieces of DNA serve as starting points for later replication.

The plasmids that will be used as vectors are double-stranded, so two oligonucleotides must be made for each mutation. These are sense and antisense, one for each direction of the double-stranded DNA plasmid. To switch the Phe at the 97th position for a Tyr, the codon for Phe must be replaced with one for Tyr. Multiple codons for Tyr exist, however, so one must be selected. This is done by obtaining data on which codons occur most often in wild-type E. coli. For tyrosine, this is TAT. The switch from Phe to Tyr requires only a single point mutation, as only one base differs between the codons. The sequence for the oligonucleotide is selected for a specific melting point, and other conditions favorable to annealing and replication.


Once obtained, the oligonucleotide primers are then annealed to plasmids, and extended in a polymerase chain reaction as outlined in the QuikChange protocol (see Appendix). Once the plasmid containing the desired mutation is sufficiently replicated, it is then prudent to check the product for error using gel electrophoresis. Once the samples are checked, the parental DNA is digested using DpnI endonuclease, leaving only the recombinant DNA. This is possible because the parental DNA is methylated, while the newly replicated plasmids are not. Digesting the parental material leaves only the replicated, recombinant genes in plasmid form. The remaining plasmids are then transformed into XL1-Blue supercompetent E. coli cells and grown on LB-ampicillin agar plates. Once colonies develop on the plates, the surviving mutated colonies are selected, and 6-liter quantities of the cells containing the mutated hemoglobin are grown.

Purification

Once quantities of transformed E. coli cells are grown, they are purified to obtain mutated Scapharca hemoglobin. The cells are broken open in a pressure cell to release the intracellular proteins, along with the desired mutant hemoglobin. The mutants must be separated from the rest of the cell debris and protein. The first step of purification is to run the cells through a nickel column. This takes advantage of a “His tag”, or series of histidine residues, that was added to the vector beforehand. Histidine tags bond easily to nickel, so by running the contents of the broken cells through the column, the mutated proteins will stick to the nickel column. Other materials, such as cell components or proteins, likely lack this bonding capacity and will run through. Such a technique would have been unavailable in the past, before mutagenesis became available, as scientists were forced to rely on a protein's natural characteristics when purifying.

Once the mutated proteins are bonded to the nickel column, the column is washed with an excess of imidazole. Imidazole is a compound that competes with the His tags for binding to the nickel of the column, and therefore disrupts the bonds between the protein and the column. Once released, anything that was bound to the column can run free.

This is not sufficient to purify the protein, however, because of all the other compounds and proteins that are likely present. Therefore, the His tags are cut off of the hemoglobin proteins using thrombin, a protease that cleaves at a specific recognition sequence. The added gene was designed with a site recognized by thrombin, so that it could be easily removed at this stage. This results in the mutant hemoglobins no longer readily binding to the nickel column, unlike the rest of the product collected from the column. The results of the previous purification step are then run again through a nickel column, with the result being purified HbII protein.

Further purification is required, however, because the result of the previous purification consists of both dimeric and tetrameric forms of the hemoglobin. Because the intended target of experimentation is the tetrameric form, any mutant hemoglobin that exists in dimer form must be removed. The A dimer, the B dimer, and the desired AB tetramer are all present in the previous product. Also, any hemoglobin that may have oxidized in a previous step must be removed. For this, size-exclusion chromatography is used to extract the larger tetrameric form from oxidized or dimeric forms. Size-exclusion chromatography involves a column of a porous polymer. As a sample passes through the column, the smaller particles are able to enter more of the pores in the material, slowing their travel time. Inversely, larger particles travel a more direct route, and pass through the column more quickly. This allows the separation of a solution based on molecular size. Used in conjunction with spectrographic techniques, this allows only the tetrameric form to be selected. By observing the absorption at a specific wavelength, which should differ between the tetramer, dimer, and any oxidized product, each can be identified. This size-exclusion chromatography is used to obtain pure tetrameric HbII hemoglobin for further use.


[Plot for Figure 6: absorbance values (0–0.7) versus wavelength (500–600 nm) for deoxy and oxy wild-type HbII.]

Experimentation

The purified hemoglobin is then analyzed in a series of oxygen binding experiments. These are done to assess the oxygen affinity of a particular mutation of hemoglobin. First, a sample of HbII is selected and fully deoxygenated. This is done by flushing the tonometer it is contained in with excess nitrogen, which releases any bound O2 or CO. The tonometer is then sealed to prevent any contamination. Using a spectrophotometer, absorption is measured at specific wavelengths between 500 nm and 600 nm. Oxygenated and deoxygenated hemoglobin have different absorption spectra. Deoxygenated Scapharca hemoglobin has a single peak, while oxygenated has two (see Figure 6). The change in absorption values is predictable, based on the physical structure of the protein. The selected wavelengths that are tested consist of 5 data points, and 4 wavelengths that are known not to vary as the hemoglobin shifts from oxygenated to deoxygenated, which are used to normalize the results. Specific amounts of air are then added to the tonometer. Smaller amounts are used in higher-affinity samples. After each addition, the sample is mixed for a period of ten minutes to allow the oxygen to bind. After each addition of air to the sample, absorption is measured at the same wavelengths. The rate of change in absorption values as the hemoglobin oxygenates and other factors allow for the affinity of the protein to be inferred. These experiments are conducted on the wild-type A(WT)B(WT), the single mutant A(F97Y)B(WT), and the double mutant A(F97Y)B(F99Y).

Figure 6: Difference in absorption spectrum between deoxygenated and oxygenated wild-type HbII
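A standard way to turn such absorbance readings into a fractional saturation Y, assuming the usual normalization rather than any project-specific one, is Y = (A − A_deoxy) / (A_oxy − A_deoxy), evaluated at a wavelength that changes upon oxygen binding, with the non-varying wavelengths used to correct for baseline drift.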


The data gathered from the absorption measurements is used to calculate various metrics of affinity and cooperativity. A p50 value, or amount of oxygen required for half-saturation, is generated to assess the hemoglobin's oxygen affinity. A Hill coefficient is also generated. The Hill coefficient, first created to measure hemoglobin cooperativity, relates the concentration of ligand with the ratio of bound to unbound sites on the protein.
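In the form usually applied to hemoglobin, the Hill relation connects the fractional saturation Y to the oxygen partial pressure pO2 as log(Y / (1 − Y)) = n·log(pO2) − n·log(p50), where n is the Hill coefficient; n = 1 indicates no cooperativity, while n > 1, such as the 1.8 reported for the wild type in the Appendix, indicates positive cooperativity.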

Results

Oxygen binding experiments on the HbII mutants demonstrate that the structural changes related to the T-to-R conformational change have significant impact on the cooperativity of the protein. Figure 7 demonstrates that while the oxygen affinity of the single mutant and double mutant are both higher than that of the wild-type, the cooperativity decreases dramatically. Also, the cooperativity of the double mutant is slightly higher than that of the single mutant. This points to two conclusions. First, that the structural mutation which prevents the protein from undergoing the full T to R transition decreases the cooperativity of the protein, indicating that the structural conformation is a key part of cooperative binding. Second, the single mutant having the lowest cooperativity shows that subunits which undergo similar structural modifications in the T to R transition are required to have a highly cooperative protein. The increase in cooperativity in the double mutant indicates that even with the highly diminished transition, having identical subunits results in more cooperativity than unmatched subunits.

Figure 7: Hill plots, p50 values, and Hill coefficients for WT, single mutant, and double mutant HbII

The single mutant also demonstrates a strong susceptibility to oxidation. Oxygen affinity tests had to be repeated for the single mutant, due to rapid oxidation of test samples. The double mutant did not display the same instability, however. This suggests that mutation to a single subunit in the cooperative complex introduces instability into the entire model. This is likely caused by a mismatch between the subunits.

Discussion

Understanding and controlling the mechanisms of cooperativity and oxygen affinity in hemoglobin can lead to advances in many areas of science. One such area is the rapidly developing field of artificial oxygen carriers. With the demand for donated blood fast outpacing the supply in the United States, the need for a replacement is becoming increasingly apparent. A replacement would also address the issue of safety: while the United States blood supply is kept extremely safe, some diseases (prion-based ones, specifically) cannot be tested for. Many other countries' blood supplies are not kept as safe as the United States', especially in those areas ravaged by HIV. Developing an economical and practical artificial oxygen carrier would present a compelling emergency alternative to blood transfusion, without the need for type matching, maintaining a refrigerated supply, donations, and other problems.

To address this need, two main possibilities are being explored. The first of these is hemoglobin-based replacements. These present problems of their own, however, as hemoglobin requires specific conditions in which to function effectively. Both human and other mammalian hemoglobin have been explored for this purpose. The second possibility is diverging beyond hemoglobin to other molecules. The leading contenders at the moment are perfluorocarbons. These are molecules composed of carbon and fluorine that are able to bind oxygen. These also present difficulties, however, because they are not soluble in blood, and must be emulsified in water.

The primary barrier to effective hemoglobin-based artificial blood substitutes is the physiological effects of hemoglobin when outside of red blood cells. Free-circulating hemoglobin is known to cause renal failure, due to reactions between protein and a byproduct of urea. This prevents hemoglobin from being administered directly. Several alternatives have been devised. One product, PolyHeme, is having success in clinical trials. It attempts to avoid the dangerous physiological consequences of pure hemoglobin by purifying and polymerizing it. A study found that while patients treated with PolyHeme had slightly higher occurrence of myocardial infarction and other adverse events, “the benefit-to-risk ratio of PolyHeme is favorable when blood is needed but not available.”

Other hemoglobin-based alternatives are also being explored. HemoPure, which has been approved for use in South Africa, is developed from bovine hemoglobin. It employs two methods: first, polymerizing the protein; and second, cross-linking subunits of the hemoglobin to prevent the tetramer from dissociating. It boasts a three-year shelf life and universal compatibility, as well as increased affinity and efficiency.

In the realm of alternate molecules, perfluorocarbons are promoted for their oxygen capacity, lack of side effects, and wider availability compared to hemoglobin-based substitutes. The current leading PFC-based oxygen carrier is Oxygent. It uses perfluorooctyl bromide (PFOB), which is a linear PFC. One characteristic of PFOB is that it is removed from circulation after time, stored, and exhaled by the body. This prevents any residual damage or negative effects.

A problem faced by all of these potential replacements is controlling their oxygen affinity. Several early versions of hemoglobin-based substitutes faced the issue of hemoglobin's oxygen affinity being too high outside of the environment of the red blood cell. What some alternatives are trying to accomplish is to genetically modify hemoglobin to better maintain its tetrameric structure, while simultaneously lowering its oxygen affinity and maintaining its cooperativity. This is where research such as that conducted here is potentially valuable. For a mutated variant of hemoglobin to be successful, the mechanisms behind cooperativity and oxygen affinity in hemoglobin must be fully understood.


Appendix

A(WT)B(WT)
Hill Coefficient: 1.8    p50: 10.0 mm Hg

mL O2   542 nm   552 nm   556 nm   562 nm   576 nm   520 nm   550 nm   570 nm   588 nm
0       .447     .549     .555     .515     .346     .258     .537     .426     .295
10      .508     .526     .507     .464     .431     .259     .522     .428     .274
15      .534     .517     .486     .442     .469     .258     .514     .428     .264
20      .552     .509     .469     .425     .496     .260     .509     .429     .258
25      .562     .201     .457     .412     .512     .264     .510     .433     .257
Air     .598     .488     .427     .380     .569     .262     .504     .439     .248

A(F97Y)B(WT)
Hill Coefficient: 1.2    p50: 1.0 mm Hg

mL O2   542 nm   552 nm   556 nm   562 nm   576 nm   520 nm   550 nm   570 nm   588 nm
0       .265     .315     .318     .305     .232     .166     .306     .267     .176
.5      .286     .310     .303     .285     .258     .162     .305     .265     .173
1.0     .302     .304     .290     .270     .277     .164     .305     .267     .173
1.5     .312     .301     .282     .261     .291     .163     .302     .266     .169
2.0     .317     .298     .276     .254     .300     .166     .302     .269     .171
2.5     .323     .296     .272     .251     .306     .164     .299     .267     .167
Air     .347     .287     .250     .226     .344     .162     .296     .272     .164

A(F97Y)B(F99Y)
Hill Coefficient: 1.3    p50: 0.13 mm Hg

mL O2   542 nm   552 nm   556 nm   562 nm   576 nm   520 nm   550 nm   570 nm   588 nm
0       .377     .412     .414     .412     .336     .420     .406     .385     .185
.10     .411     .405     .390     .379     .381     .223     .405     .385     .185
.20     .441     .400     .371     .355     .418     .212     .399     .378     .179
.30     .472     .415     .379     .358     .453     .210     .398     .377     .129
.40     .458     .395     .357     .338     .441     .208     .396     .374     .177
Air     .480     .385     .340     .312     .475     .203     .401     .374     .187


Guidelines for Oligonucleotide Creation

Primers should be between 25 and 45 bases in length, with a melting temperature (Tm) of ≥ 78°C. Primers longer than 45 bases may be used, but using longer primers increases the likelihood of secondary structure formation, which may affect the efficiency of the mutagenesis reaction. The following formula is commonly used for estimating the Tm of primers:

Tm = 81.5 + 0.41(%GC) − 675/N − %mismatch

where N is the primer length in bases, and the values for %GC and %mismatch are whole numbers.

The desired mutation should be in the middle of the primer with ≈10-15 bases of correct sequence on both sides.

The primers optimally should have a minimum GC content of 40% and should terminate in one or more C or G bases.
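As a purely illustrative example with numbers not taken from the protocol: a 40-base primer with 45% GC content and a single mismatched base (2.5% mismatch) gives Tm = 81.5 + 0.41(45) − 675/40 − 2.5 ≈ 80.6°C, which satisfies the ≥ 78°C requirement.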

Protocol for PCR Reaction

1. Synthesize two complementary oligonucleotides containing the desired mutation, flanked by unmodified nucleotide sequence. Purify these oligonucleotide “primers” prior to use in the following steps.

2. Prepare the sample reaction as indicated below:
   5 µl of 10x Pfu reaction buffer
   1.5 µl sense oligonucleotide primer
   1.5 µl antisense oligonucleotide primer
   1 µl dNTP mix
   3 µl DMSO
   1.2 µl DNA template
   3.6 µl H2O
   1.2 µl PfuTurbo DNA polymerase (2.5 U/µl)

   Segment   Cycles   Temperature   Time
   1         1        95°C          30 seconds
   2         12-18    95°C          30 seconds
                      55°C          1 minute
                      68°C          1 minute/kb of plasmid length

3. Cycle each reaction using the cycling parameters outlined in the above table.
4. Following temperature cycling, place the reaction on ice for 2 minutes to cool the reaction to ≤ 37°C.


References

Goorha, B. YK., Deb, Maj P., Chatterjee, Lt Col T., Dhot, Col P.S., & Prasad, Brig R. S. (2003). Artificial Blood. Medical Journal Armed Forces India, 59, 45–50.

Knapp, J. E., Bonham, M. A., Gibson, Q. H., Nichols, J. C., & Royer, W. E. (2005). Residue F4 Plays a Key Role in Modulating Oxygen Affinity and Cooperativity in Scapharca Dimeric Hemoglobin. Biochemistry, 44, 14418–14430.

Moore, E. E., Moore, F. A., Fabian, T. C., Bernard, A. C., Fulda, G. J., Hoyt, D. B., … Gould, S. A. (2008). Human Polymerized Hemoglobin for the Treatment of Hemorrhagic Shock when Blood is Unavailable: The USA Multicenter Trial. Journal of the American College of Surgeons, 208, 1–13.

OPK Biotech (2010). Hemopure Attributes. Retrieved from http://opkbiotech.com/hemopure/hemopure-attributes.php

Werlin, E., McQuinn, G., & Ophardt, R. (2005). Hemoglobin-based Oxygen Carriers. Retrieved from http://biomed.brown.edu/Courses/BI108/BI108_2005_Groups/10/webpages/HBOClink.htm


Above the Oil Influence:

The Renewable Energy Intervention We Need But Aren’t Ready For

Elena Stamatakos

Worcester Academy

Capstone Presentation

June 1, 2010


Abstract

The United States' energy industry is currently dependent on fossil fuels. As petroleum, natural gas, and coal are finite resources, this dependency presents a considerable weakness that will affect future generations. The global oil market and the potential shortage of this product are especially threatening to the economic and political sectors of society. In addition, these fossil fuels have severe detrimental effects on both public and environmental health. The scientific community has created many forms of energy technology that may prove to be feasible alternatives. These so-called renewable energy sources vary in their effectiveness, efficiency, and overall effect on society and the environment. Solar and wind technology are especially promising, and have already been utilized in the movement towards sustainable energy. However, these energy systems remain flawed; neither can be considered an ideal replacement for fossil fuels in its current form. If wind and solar power industries are to be capitalized on in the future, they must first be improved.


Introduction

Over the past few years, an environmental revolution has invaded American society.

However, this invasion has primarily had its focus on a corporate level, targeting our consumer

driven consciences. Sales are high for a new laptop because the commercials highlight the lower

levels of toxic chemicals and heavy metals used in its production. Organic or all natural beauty

products have begun to invade the shelves of our local drug stores. Parents buy snack foods

previously banned from the house, because the cheese flavoring has taken on a natural parmesan

shade, instead of an orange that matches the color of nuclear waste. The consumer market has

embraced the movement towards sustainability, aided by increasing public demand for

environmentally sound products. However, this movement has not reached what needs it most: our national energy grid.

The United States is considered a leader in the field of renewable energy, but this is not

considerable praise in context. According to the most recent report from the United States Energy

Information Administration, the U.S. used 101,468 trillion Btu in 2007. Fossil fuels provided the

United States with 85.89% of that energy, and only 3.31% was derived from more responsible

sources: biomass, wind, solar, and geothermal. The remaining energy came from hydroelectric

and nuclear power plants. Massachusetts falls below the national average for renewable generation, with 92.62% of the energy produced in 2007 derived from fossil fuels and a meager 2.72% coming from the combination of biomass, wind, solar, and geothermal.1

1 From “State Energy Consumption Estimates,” by DOE/EIA, 2007.
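To put these percentages back into absolute terms, the shares can be converted with a minimal back-of-the-envelope sketch in Python, using only the 2007 EIA figures quoted above:

total_btu_trillion = 101_468                        # total 2007 U.S. consumption, trillion Btu (EIA)
shares = {
    "fossil fuels": 0.8589,                         # 85.89% of consumption
    "biomass, wind, solar, and geothermal": 0.0331  # 3.31% of consumption
}
for source, share in shares.items():
    print(f"{source}: roughly {total_btu_trillion * share:,.0f} trillion Btu")
# the remaining ~10.8% corresponds to the hydroelectric and nuclear plants mentioned above

Run as written, this yields roughly 87,000 trillion Btu from fossil fuels against about 3,400 trillion Btu from the named renewable sources, which is the gap the rest of this paper is concerned with.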


Figure 1: Sources of energy produced in the United States in 2007.

Figure 2: Sources of energy produced in Massachusetts in 2007 (excluding the 192.5 trillion Btu imported from other states).


Our energy industry can be “characterized by persistently increasing electricity demand

and almost complete reliance on fossil fuel, nuclear and large hydropower generating plants”2.

There is no singular culprit responsible for the massive demand on our energy supply. The

industrial, commercial, and residential spheres of our society are all to blame, as are the

transportation systems used to navigate between the three. According to Dr. Jiusto, a specialist in

environmental policy from Worcester Polytechnic Institute, it is necessary to take a dual

approach to renovating the energy industry, and develop renewable energy while simultaneously

increasing efficiency to decrease overall demand.3

The statistics seen in figures 1 and 2 are particularly alarming because fossil fuels are a

finite resource, and they will run out. While estimates of the lifetime of fossil fuel availability

vary greatly depending on the source, some doomsday prophets warn that we will reach critical

levels of fossil fuels in less than 20 years. This model can be found in the Olduvai theory,

introduced by Dr. Richard C. Duncan in 1989, and later supported by data published in the 1993

paper “The Life-expectancy of Industrial Civilization: The Decline to Global Equilibrium”.

Duncan’s theory places a 100-year life span on the Industrial era, and forewarns that in 2030, the

world will suffer an energy shortage so severe that blackouts will descend on countries

around the world. However, critics of the Olduvai Theory argue that it ignores social or

technological advancements, and is therefore immaterial.

2 From “Assessing Sustainability Transition in the US Electrical Power System,” by Scott Jiusto and Stephen McCauley, 2010. 3 From interview with Dr. Jiusto, May 01 2010.


The well-known Hubbert’s Peak, created by Shell Oil employee Marion King Hubbert,

places peak oil production at 2000. His theoretical estimation of oil reserves and discoveries

suggests that the world will eventually encounter peak oil, after which petroleum reserves will

begin to decrease steadily over the next few decades. However, there are critics of this theory,

such as Steven Gorelick, a professor of Environmental Earth systems at Stanford University,

who believes that the Hubbert’s peak model is flawed, “The Hubbert curve seems logical, but it's

not a statistical distribution, and it's full of fallacies. It's a bad model, and it just doesn't fit the

data”4.

4 From “Research by the Barrel,” 2009.

Figure 3: Graph representing the decline of fossil fuel availability and the resulting effects on society. From “The Olduvai Theory,” by the Wolf at the Door.com.


Greenhouse gas emissions and general levels of air pollution are also cause for alarm. In

2008, the Environmental Protection Agency measured emissions of 6,956.8 teragrams, or roughly 7 billion metric tons, of CO2 equivalents of greenhouse gases.5 These emissions are steadily

deteriorating the quality of our atmosphere, and creating hazardous conditions for people around

the globe. A World Health Organization assessment of environmental risk factors attributed 2,362,000 deaths around the globe in 2002 to air pollution.6

5 From “Inventory of U.S. Greenhouse Gas Emissions and Sinks: 1990-2008,” by EPA, 2010. 6 From “Estimated deaths & DALYs attributable to selected environmental risk factors,” by WHO Department of Public Health & Environment, 2007.

Figure 4: Graph representing peak oil predictions. From “The Peaking of World Oil,” a presentation by Congressman Roscoe Bartlett.


In addition, a 2008 publication of Environmental Health Perspectives included research

focusing on air pollution. The study was conducted by Michelle Wilhelm, Ying-Ying Meng,

Rudolph P. Rull, Paul English, John Balmes and Beate Ritz, specialists in fields ranging from

public and environmental health to epidemiology. Their report reviewed previous data from

California Health Interview Surveys and Environmental Public Health Tracking services, and

concluded that the two sources contained conclusive evidence that linked air pollution in San

Diego and Los Angeles to increased severity of asthma in children.7

Air pollution has negative impacts beyond public health; acid deposition poisons water

systems, destroys nutrient-rich elements of soil, and eats away at minerals. Pollutants such as mercury can be especially harmful, as they tend to biomagnify as they move up a food chain. When humans consume animals that are higher up in an ecosystem, such as larger fish, the accumulated mercury can reach toxic levels. 7 From “Environmental Public Health Tracking of Childhood Asthma Using California Health Interview Survey, Traffic, and Outdoor Air Pollution Data,” by Michelle Wilhelm et al., 2008.

Figure 5: Photo taken of the smog affecting Mexico City. From “Local Government as an Implementor of Environmental Health: Pollution in Mexico City,” by Who Implements Global Public Health, 2009.


Far more negative results of our fossil fuel-reliant lifestyle exist beyond air pollution. The extraction, processing, and transporting of coal, natural gas, and oil are systematically harming the

natural world. A discussion of every negative impact on the environment caused by these

processes would span pages, but a singular example can be found in recent events. Ironically, a

company named Beyond Petroleum is responsible for what may be the most severe oil spill to

ever occur. U.S. Representative Edward Markey called the leaking oil in the Gulf of Mexico “the

greatest environmental catastrophe in the history of the United States”8. The massive oil spill has

already reached the coast of Louisiana, where it is threatening the marsh ecosystems, and

according to the President of Plaquemines Parish, Billy Nungesser, the oil will “destroy

every living thing there.”9

8 From “BP Recasts Spill Size As Oil Fouls Louisiana Marshes,” by NPR, 2010. 9 From “BP Recasts Spill Size As Oil Fouls Louisiana Marshes,” by NPR, 2010.

Figure 6: An aerial photo of the oil spill invading the Louisiana marshes. From “BP Recasts Spill Size as Oil Fouls Louisiana Marshes,” by NPR, 2010.


Climate change is a highly disputed topic that has found battlegrounds from the

boardroom to the political stage. The scientific community has published evidence that the

greenhouse gases we are pumping into the atmosphere are a direct cause of Climate Change. It is

believed that these gases act as insulation that prevents infrared heat from radiating away from

Earth’s surface. NASA’s Goddard Institute for Space Studies has collected surface temperatures

around the world for over a century. This data shows a significant increase in the number of

temperature anomalies in recent years.10

Climate Change may be part of a natural cycle that has been warming and cooling the Earth’s

surface for millions of years. Even if this is true, and fossil fuels are not responsible, there remain

other detrimental effects of their use that are irrefutable and must be dealt with.

It is easy to see why fossil fuels, especially petroleum, are addictive for large industrial nations such as the United States, as they are transportable, cost-effective forms of highly

10 From “5-Year Mean Anomaly,” by NASA-GISS, 2009.

Figure 7: NASA data showing the temperature anomalies in 2006. From “5-Year Mean Anomaly,” by NASA-GISS, 2009.


concentrated energy. As seen in the examples discussed above, they are also highly detrimental

to the natural world and general public health. According to Dr. Jiusto, the academic and

corporate worlds were previously guilty of “systematically starving”11 research in the field of

renewable energy. However, with support for renewables growing over the past few decades,

viable replacements have been found in solar and wind technology.

11 From interview with Dr. Jiusto, May 01 2010.

Figure 8: Model representing the profit ratios of various forms of energy. From “The Peaking of World Oil,” a presentation by Congressman Roscoe Bartlett.


Solar Power

Solar technology works exactly as its name suggests: it derives energy directly from

the sun, which is responsible for “the largest energy flow entering the terrestrial ecosystem”12.

William G. Adams and R. E. Day built the first photovoltaic cell in 1877, which was capable of

converting a grand total of 1% of the available sun’s energy into electricity. Improvements

continued to be made through the 1950s, when Bell Laboratories created the first modern

solar panel.

Solar panels, or photovoltaic cells, convert sunlight directly into electricity. Each cell is made of a semiconducting material such as selenium, silicon, or cadmium and is intentionally full of chemical impurities. These impurities, often germanium or boron, are called doping agents and are integral to the energy conversion process. Photons hit the semiconductor material

of the panels and excite electrons within the material. These excited electrons are only allowed to

move in one direction due to the structure of the crystallized semiconductor, allowing the free

electrons to be harnessed as electrical power.

The production of these panels involves multiple steps beginning with purifying and

crystallizing the semiconductor. Next the doping chemicals are added and small cells are formed,

which are then connected to form larger panels and attached to electrical wires. Lastly, a non-

reflective coating is added to increase sunlight retention, and the panel is secured onto an

aluminum frame.

The energy consumed for this construction is 240 kWhr per m2 of panel produced. An

estimated 1,700 kWhr per m2 of available yearly sunlight can be converted at 10% to 20%

12 From “An Assessment of Solar Energy Conversion Technologies and Research Opportunities,” by the Global Climate & Energy Project, 2006.


efficiency, allowing only 170 to 340 kWhr of energy to be collected per square meter of photovoltaic cells13. It must be noted that several square meters of cells are often used together, and maximum amounts of sunlight are rarely available due to weather conditions.

Taking into account the net energy associated with each square meter of photovoltaic cells, an

average of one to three years is needed before the clean energy produced by the solar panels can

offset the energy used during production. However, a solar panel can have a lifetime of up to 30

years, leaving substantial time for clean net energy to be accumulated. In addition, solar panels produce no emissions during operation and can dramatically reduce pollution. For example, 27 years of clean energy can reduce the emissions of the average American family by almost ½ ton of sulfur dioxide, ⅓ ton of nitrogen oxides, and 100 tons of carbon dioxide.14
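To make the payback arithmetic concrete, here is a minimal sketch in Python that recomputes the energy-payback window from the per-square-meter figures cited above; the idealized result below stretches toward the one-to-three-year average once cloud cover and less-than-ideal siting are factored in:

embodied_energy_kwh = 240.0     # energy consumed to manufacture 1 square meter of panel (cited above)
yearly_sunlight_kwh = 1700.0    # sunlight available per square meter per year under ideal conditions
for efficiency in (0.10, 0.20):
    collected = yearly_sunlight_kwh * efficiency        # 170 to 340 kWhr collected per square meter per year
    payback_years = embodied_energy_kwh / collected
    print(f"at {efficiency:.0%} efficiency: about {payback_years:.1f} years to repay the manufacturing energy")
# prints roughly 1.4 years at 10% efficiency and 0.7 years at 20%; weather and siting losses
# push the realistic figure toward the one-to-three-year average quoted above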

However, solar panels must be disposed of after their 30 or so years of use, presenting an

environmental challenge. The number of solar panels waiting for disposal will not be substantial

for several years. However, the chemicals in the panels are cause for some concern. Procedures

for disposing of photovoltaic cells safely are currently being researched and developed.15 The

alternative, leaving the panels in landfills, is unattractive from an environmental standpoint. The chemicals could potentially leak out, polluting the surrounding area, and adding bulk to landfills is something the environmental movement generally seeks to avoid.

A residential photovoltaic system can cost upwards of $30,000 depending on the rebates offered by

each state. This is offset by the energy received free of charge from the sun, instead of from the

electric grid. A solar power system for the average American family may take as long as 16 years

to pay for itself. This amount of time may be shortened when state incentives are taken into

account. Also, at times when the solar panels are creating more energy than the household is

13 From “Environmental Impacts of PV Electricity Generation,” by E.A. Alsema et al., 2006. 14 From “PV FAQs,” by DOE, 2004. 15 From “PV Panel Disposal and Recycling,” by DOE, 2008.


currently using, the solar energy runs back into the grid, earning credits from the energy

provider. However, because of the substantial initial cost, solar panels are unrealistic for many

families.

Eileen Mueller, a Middlesex County resident, installed solar panels almost twenty years

ago. She and her husband looked past the large financial investment because of their belief that

using renewable energy was “the right thing to do”. Their solar panels are directly connected to

their water tanks and are responsible for heating the water. When the system was first installed, Eileen found that the panels worked surprisingly well, despite the less-than-ideal, wooded

location of their house. Unfortunately, a storm damaged the panels and they no longer work as

efficiently. Eileen and her husband are considering installing new panels instead of repairing

their current system. This decision is driven not only by the availability of more advanced

technology, but also by the lack of laborers who are capable of repairing their solar energy

system. This reveals another barrier to the implementation of photovoltaic power in residential

areas. If solar panels are to become a widely accessible technology, a skilled workforce must be

available for installations and repairs. 16

An impressive example of residential solar power systems can be found in Milwaukee.

The Kreppel’s installed several panels on the roof of their suburban home. These panels provide

the Kreppel’s with more energy than they use most months out of the year, and feed clean energy

back into the electrical grid. The success of this solar powered home has inspired a new incentive

program in Milwaukee that will hopefully lead to similar projects.17

Because the financial barrier impedes the installation of solar panels in residential areas, it may be more reasonable to implement solar technology in the form of large plants. The Solar

16 From interview with Eileen Mueller, Apr 24 2010. 17 From “Green energy incentives make solar panels more affordable,” by Fox 6 news, 2010.


Energy Generating System in the Mojave Desert is the largest solar power plant in the world.

One branch of this plant covers 140 acres with photovoltaic cells near the Nellis Air Force base.

This system generates almost 30 million kWhr of electricity each year, providing the base with

25% of its energy.

Other branches of the SEGS use a more efficient form of collecting solar energy, Concentrated Solar Power, or Solar Thermal Energy. CSP focuses the sun’s radiation through a

series of mirrors onto a pipe of synthetic oil. The heated oil is then used to drive a traditional steam generator, which creates electricity.

Figure 9: A photo of the SEGS solar farm. From “Solar Energy,” by Alternative Energy.


Together, the solar installations have a capacity of 354 MW, and provide power for

roughly 500,000 people. A similar project is currently underway in California, where a 300 MW

solar farm is being constructed.
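As a rough sanity check on what a 354 MW rating means in practice, the sketch below assumes a solar capacity factor of 20% to 25% (an assumed range, not a figure taken from the sources above) and converts the rating into annual output and per-person supply:

nameplate_mw = 354.0            # combined capacity of the solar installations described above
people_served = 500_000
hours_per_year = 8760
for capacity_factor in (0.20, 0.25):                # assumed; actual plant data may differ
    annual_mwh = nameplate_mw * hours_per_year * capacity_factor
    per_person_kwh = annual_mwh * 1000 / people_served
    print(f"capacity factor {capacity_factor:.0%}: ~{annual_mwh:,.0f} MWh/yr, ~{per_person_kwh:,.0f} kWh per person")
# i.e. roughly 620,000 to 775,000 MWh per year, or about 1,200 to 1,600 kWh per person per year
# under these assumptions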

Similar projects are found throughout Germany, the world’s leader in photovoltaic

technology. The renewable energy projects in Germany have shown great success, and solar

currently provides Germany with 1% of its electricity. This value is expected to rise to 25% by

2050.18

18 From “Development of renewable energy sources in Germany,” by the Federal Ministry for the Environment, Nature Conservation and Nuclear Safety, 2009.

Figure 10: An example of a Concentrated Solar Power system. From “Solar Energy,” by Alternative Energy.


Concentrated Solar Energy systems are impractical for residential areas, but show

promise when used in large solar plants or to power commercial or industrial buildings. For

example, Frito-Lay has a plant in Modesto, California that specifically produces Sun Chips. This

plant generates much of its own power through CSP. The plant offsets any energy not generated

by the solar energy system with clean energy credits, effectively making Sun Chips a zero-emission product.

A possible drawback of solar energy is its reliance on intense sunlight. Areas of the

country that lack steady exposure to the sun or experience significant cloud cover are not

effective locations for solar energy to be implemented. However, much of the country has the

Figure 11: Graph representing the use of solar thermal power in Germany. From “Development of renewable energy sources in Germany,” by the Federal Ministry for the Environment, Nature Conservation, and Nuclear Safety.


ideal features necessary to generate solar power efficiently. With the exception of some coastal

regions, the upper Northeast, the Pacific Northwest, and Alaska, the United States has substantial

solar energy potential. The Southwest in particular has significant potential for generating solar

power, because much of the ideal land is unused. Plants could be built in the desert regions of

that area that are currently vacant, and have considerable exposure to the sun.

Solar energy may be a viable option for the future of generating electricity, and according

to Dr. Jiusto, solar power has impressive “breakthrough potential”19. However, solar panels

remain a bulky power system, and a secondary technology is needed to allow electricity

generated by solar power to be transportable. Generating power from the sun also lacks the

effectiveness of fossil fuels, and the current net energy productions from solar panels and CSP

systems cannot rival those of oil, coal, or natural gas. The environmental impact of solar

technology has two opposing sides because “while the use of solar technology does not pollute the environment, the manufacture of certain types of solar technology can”20. 19 From interview with Dr. Jiusto, May 01 2010.

Figure 12: DOE map of solar potential.

The biggest

challenge for residential systems is the price barrier that few Americans can overcome,

especially in today’s economic climate. A positive factor about solar energy is that, unlike fossil

fuels, the sun’s radiation will not be depleted in the foreseeable future. We can rely on the sun

until its transformation into a Red Giant, which will not occur for billions of years. In short, solar

energy may prove to be a promising alternative to fossil fuels but still requires some scientific

development and “involvement of both political and economic players”21.

20 From “Solar Energy,” by Alternative Energy, 2009. 21 From “An Assessment of Solar Energy Conversion Technologies and Research Opportunities,” by the Global Climate & Energy Project, 2006.


Wind Power

Power generated from the wind is actually another form of solar energy, as wind is

created when the sun heats up the Earth’s atmosphere unevenly. Centuries ago, this power was

harnessed in order to complete industrial or agricultural tasks. For example, most people connect

the Netherlands to a quintessential image of a farm and a windmill, which would have been used

to grind flour or cut timber. The windmill was able to evolve with the harnessing of electricity,

and now generates power that can be used indirectly.

Two forms of wind turbines can be found in the modern world. The horizontal axis

turbine has two or three blades that are spun to generate energy. Vertical axis turbines resemble

an eggbeater, but function the same way. The mechanics behind a wind turbine are fairly simple.

The wind moves the propeller blades, which in turn spin a shaft located in the center of the

blades. The movement of the shaft is what drives the localized generator and creates electricity.

Figure 13: An example of a horizontal axis turbine (left) and a vertical axis turbine (right). From susty.com and trendir.com.


The blades of a wind turbine can be anywhere from 30 to 125 ft long, and are capable of

spinning up to 22 rotations per minute. While some small-scale turbines are rated at under 100 kilowatts, most utility-sized structures range from 100 kilowatts to several megawatts. When

placed in groups, these larger turbines are capable of providing substantial bulk power to the

electrical grid. For the most part, wind turbines are functional in any outdoor setting, as they only

require the movement of the air. However, building turbines or a complete wind farm in certain

areas that receive maximum wind, such as coastal regions, can increase energy yields

significantly.
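The importance of siting and blade length follows from the standard relation for power in moving air, P = ½·ρ·A·v³ multiplied by a power coefficient below the Betz limit. This formula does not appear in the sources above, but the hypothetical sketch below shows why doubling the wind speed yields roughly eight times the power:

import math

def wind_power_kw(blade_length_m, wind_speed_ms, power_coeff=0.40, air_density=1.225):
    """Estimate turbine output in kW; power_coeff is an assumed fraction of the
    theoretical maximum (the Betz limit is about 0.59)."""
    swept_area = math.pi * blade_length_m ** 2           # circular area swept by the blades, in square meters
    watts = 0.5 * air_density * swept_area * wind_speed_ms ** 3 * power_coeff
    return watts / 1000.0

print(wind_power_kw(40, 6))     # ~270 kW with hypothetical 40 m blades in a 6 m/s wind
print(wind_power_kw(40, 12))    # ~2,100 kW: doubling the wind speed gives about eight times the power

The roughly 2 MW figure at the higher wind speed is consistent with the utility-scale turbines described below.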

The production of wind turbines is less intensive than that of solar panels, but challenges

still exist. Basic construction requirements are readily available as “enough concrete and steel

exist for the millions of wind turbines, and both those commodities are fully recyclable”22. Issues

arise with the production of the generators. The gearboxes require rare-earth metals, and while

the metals are not in short supply, they are located outside the United States. The country’s

energy industry would then be “trading dependence on Middle Eastern oil for dependence on Far

Eastern metals”23. The production involves relatively safe materials, leaving the energy

consumed for the production and transportation of the turbines as the primary negative

environmental impact for the initial manufacturing. However, the impact of transporting the

turbines can be significant. General Electric, a leading manufacturer of wind turbines, operates a

few plants in the United States, but is mainly located overseas, in Germany, Spain, and China.24

Installing a wind turbine is a significant financial undertaking. The cost varies greatly

depending on size and generating capacity. An industrial sized turbine capable of producing 2

MW can cost up to $3.5 million. However, a turbine of this size would only be needed by large 22 From “A Path to Sustainable Energy by 2030,” by Mark Jacobson and Mark Delucchi, 2009. 23 From “A Path to Sustainable Energy by 2030,” by Mark Jacobson and Mark Delucchi, 2009. 24 From “1.5 MW Wind Turbine,” by GE Energy, 2009.


corporations or factories. For the average American family, a 5-15 kW turbine is suitable to

power their home. A turbine of this size costs significantly less, at $6,000 to $22,000. Similar to

solar panels, this can be too great an investment to make for many Americans. Wind turbines are

expected to last for 20 to 30 years, yet they do require some upkeep. Cleaning, tuning, and basic

repairs are expected in order to maintain a functioning turbine. A skilled workforce will also

need to be developed in order to fulfill this requirement.

In a similar manner to solar panels, wind turbines are an investment that will pay off. A

turbine is expected to lower electricity bills anywhere from 50% to 90% depending on generating

capacity and wind speed. The Energy Information Administration lists the average residential

monthly electricity use at 920 kWh, costing roughly $95.66.25 This means that a less expensive

installation working at maximum capacity could recoup its investment in only 6 years. On the

other hand, a more expensive wind turbine that only reduces electricity bills by 50% will require

over 30 years before the investment is paid back.
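The payback range quoted here can be reproduced with a short calculation from the EIA averages above; the installed costs and bill reductions used below are simply the endpoints of the ranges given in the text:

average_monthly_bill = 95.66    # average U.S. residential electricity bill (EIA figure cited above)
scenarios = [
    ("small turbine, 90% bill reduction", 6_000, 0.90),
    ("large turbine, 50% bill reduction", 22_000, 0.50),
]
for label, installed_cost, reduction in scenarios:
    yearly_savings = average_monthly_bill * 12 * reduction
    print(f"{label}: about {installed_cost / yearly_savings:.0f} years to recoup the cost")
# about 6 years in the best case and roughly 38 years in the worst, matching the
# "only 6 years" and "over 30 years" estimates above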

Because of the potentially lengthy time period before such an investment produces

financial returns, wind turbines are only reasonable for some. Financial constraints, as well as

conditions under which the turbine would be operating are important factors in deciding if such

an installation is feasible. Another barrier for much of the residential areas of the country is the

noise pollution produced by a working turbine. Sound ordinances will impede the use of turbines

in urban areas, or in suburban neighborhoods with small plots.

With the developments of the Cape Wind project, there will be much debate on the topic

of wind farms in the coming year. One concern held by both the scientific community and the

general population is the potential for negative effects on surrounding ecosystems. Coastal wind

farms, such as the Cape Wind project, are built in water, creating the potential for interference 25 From “Average Retail Price of Electricity to Ultimate Customers by End-Use Sector, by State,” by the EIA, 2010.


not only with aerial species, but also with aquatic ecosystems. The installation of a turbine may

have a detrimental effect on the surrounding wildlife, whether terrestrial or aquatic. After the

turbine is installed, it poses a threat to birds or other aerial species, especially migratory animals.

“Although generally considered environmentally friendly, wind-power development has been

associated with the death of birds colliding with turbines and other wind farm structures,

especially in California”.26 Some ecologists and environmental scientists are working to prevent the installation of wind turbines in order to protect the ecosystems they threaten. Methods of reducing interference with the surrounding wildlife must be developed in order to limit the negative impact of such technology.

Social concerns are also impeding the construction of wind farms. Many believe the wind turbines to be an annoyance, and some who live near the coast of Cape Cod or on the islands of

Nantucket and Martha’s Vineyard are resistant to the Cape Wind installation. While some

consider the Cape Wind project problematic, others are welcoming. Eileen Mueller also owns a

house on the Cape and considers the wind farm to be a “great idea”. Rather than an eyesore, she

finds the turbines to be unobtrusive.27 Society’s reception of the Cape Wind project will play an

important role in its future development.

The United States has substantial potential for the implementation of wind farms. Federal

land is available for the installation of turbines, and 18% of this land has a high potential for

wind power. More land across the United States also has high wind potential, and is accessible

by private energy companies. Much of this land is located in North and South Dakota. The high

percentage of agricultural land makes this region even more attractive for wind power, as

crop fields and wind farms can function together harmoniously.

26 From “Collision mortality of local and migrant birds at a large-scale wind-power development on Buffalo Ridge, Minnesota,” by Gregory Johnson et al., 2002. 27 From interview with Eileen Mueller, Apr 24 2010.


The United States also has a significant amount of coastal territory that is a prime location for wind farms. The coast of New England, the Mid-Atlantic states, and the Northwestern states are especially suited to generating wind power. As noted above, the large areas dedicated to agriculture are also prime locations for wind farms, as croplands and turbines can work together harmoniously. These types of remote areas are necessary for large turbine installations because a commercial-sized turbine is capable of producing noise pollution equivalent to a jet engine.

Figure 14: EIA and EPA wind potential map.


Many European countries have invested in wind technology. Germany is currently constructing a

7+ MW turbine. This would be the largest turbine ever constructed. In addition to this turbine,

Germany currently generates 1.6% of its energy from wind power.

Spain is another excellent example of capitalizing on wind potential. As a peninsula, Spain has a

large amount of offshore territory suited to wind power. Both offshore and terrestrial wind plants have been built and, under ideal conditions, they have provided Spain with a record 53% of its energy needs.

Figure 15: Graph of Germany’s wind power investments. From “Development of renewable energy sources in Germany,” by the Federal Ministry for the Environment, Nature Conservation, and Nuclear Safety.


Wind turbines are a more effective method of producing energy than solar panels. However, they are equally expensive, and finances will act as a barrier to their use in the future.

They are also capable of harming the environment, in ways that solar panels are not. Methods to

prevent the detrimental effects of wind farms must be developed. In addition, wind turbines are

limited to certain regions, especially larger turbines placed in groups to function as wind farms.

These energy-producing units create noise pollution, and are restricted to remote areas.

Ultimately, if the scientific community continues to make advancements in wind technology,

wind power may provide a feasible alternative to fossil fuels, especially if used on an industrial level.

Figure 16: Wind farm located in agricultural regions of Spain. From reuk.co.uk.


Conclusion

Dr. Donald Kennedy, the former editor-in-chief of Science, the journal of the American Association for the Advancement of Science, has called our present circumstances a “critical crossroads”28. If the

world continues on its current path of fossil fuel dependence, it risks the integrity of the

environment and the well-being of the public, as well as the political and economic security of

our future world. If we fail to alleviate the demand for fossil fuels with another energy source,

then we will eventually run out. Exactly how long that will take is, in the end, beside the point.

Far more potential replacements exist, and “entrepreneurs are exploring a wide range of

alternative technologies beyond solar and wind power”29. Beyond wind turbines, solar panels,

and concentrated solar power, there exists wave technology, biomass technology, nuclear electric

power, hydroelectric plants, geothermal heat extraction, and fuel cell technology. Some of these

have yet to be fully developed, while others have been in use for decades and remain highly

controversial and the topic of heated debate.

Wave technology is highly undeveloped, but the process of generating energy by

harvesting the power of tide movements in our oceans shows some promise. Biomass technology

involves creating power by the burning of waste products. This method is efficient in that it

draws on a resource that fills landfills around the country, but it does release some emissions into

the atmosphere. The Seabrook Station Nuclear Power Plant in New Hampshire has caused public

unrest in New England. There are concerns that this method of generating power creates national

security issues and produces dangerous waste.

28 From “Energy Crossroads: Building a Coalition for a Clean, Prosperous, and Secure Energy Future,” by Stanford University, 2010. 29 From “Assessing Sustainability Transition in the US Electrical Power System,” by Scott Jiusto and Stephen McCauley, 2010.


Hydroelectric plants, such as the Hoover Dam, can be found throughout the country.

However, these structures and the reservoirs they create destroy ecosystems found in the river or

dependent on the river.

Figure 17: Photo of the Seabrook Station Nuclear Power Plant. From nukeworker.com.

Figure 18: Photo of the Hoover Dam. From centralbasin.org.


Iceland has become one of the leaders in renewable energy resources due to the

extraordinary geothermal potential found within its borders. This method can be hazardous, as

extracting heat from the earth brings heavy metals and toxic chemicals along with it. Fuel cells

do not generate energy, but allow clean energy to be stored as electrochemical potential that can

generate emission-free power. This method has been integrated into some prototype cars.

However, a former head of the Central Intelligence Agency, who is well known for his support

for a diversified energy industry, has stated that fuel cells for passenger cars are “one of the

worst ideas about transportation in many decades”30.

These technologies may be beneficial for the environment in some respects. Yet they all also raise serious concerns, as they threaten either our environment or our security, or lack efficiency or affordability. Fossil fuels, specifically petroleum, also place strain on our environment, global health, and national security, but are relatively affordable and efficient. This

leads us to a series of complicated questions about which is more important: providing affordable energy to the country at any cost? Bringing an end to global warming and the systematic pollution of our atmosphere? Or protecting the world’s ecosystems?

Ideally, a choice will not be necessary, and energy technology capable of being

affordable, efficient, and clean will be implemented. With the aid of rigorous work in developing

solar and wind technology in the coming years, this may be possible. Solar panels are difficult to

build, are expensive, and provide limited electricity yields. Wind turbines are also difficult to

produce and install, and are expensive, but provide substantially more energy. However, these

turbines also present a threat to the natural world. If these complications can be remedied, then a

30 From “Energy Crossroads: Building a Coalition for a Clean, Prosperous, and Secure Energy Future,” by Stanford University, 2010.


combination of both technologies may be effective in reducing our dependence on fossil fuels. Some experts may favor one of the methods over the other, or they may prefer a different

alternative entirely. No matter which energy source is in question, significant obstacles stand in

the way of its implementation.

When considering wind or solar, only certain locations around the country can meet the

requirements for efficient energy harvesting. Financial constraints will also act as a barrier to any

future installations, whether commercial or residential. The infrastructure needed to construct

wind or solar farms is immense. Ideally, these plants would replace the need for petroleum,

natural gas, and coal. However, these fossil fuel industries are immense and provide employment

and commerce around the country, and the world. Shifting towards other energy sources would

result in a massive rearrangement of job opportunities, and towns and cities may lose the

industries that support them.

Figure 19: Photograph of West Virginia coal miners protesting renewable energy movements. From “A Path to Sustainable Energy by 2030,” by Mark Jacobson and Mark Delucchi, 2009.


Despite the obstacles that lie ahead, a shift is necessary. Solar and wind provide feasible

alternatives to fossil fuels, but only after significant improvements. It is my opinion that the

current technology at hand is not adequate for providing the world with an energy-efficient and

environmentally sound solution. With the improvements outlined above, solar and wind may

provide an answer. The scientific community must remain dedicated to the research and

development of sustainable energy systems. With improved renewable energy systems, a future

with a clean atmosphere, without the fear of an impending energy crisis, and without an ongoing

war between the environment and mankind can be achieved.


Glossary

Acid Deposition: The pollution of rain, snow, hail, fog, etc. with sulfuric and nitric acids, caused by sulfur and nitrogen oxide emissions and general air pollution.
Btu: Stands for British Thermal Unit; the amount of energy needed to heat one pound of water one degree Fahrenheit.
CO2 Equivalents: A unit of measure that expresses greenhouse gas emissions as the amount of carbon dioxide that would have the same effect on the atmosphere. Example: 2 kg of CO2 eq emissions have the same effect as 2 kg of CO2.
Department of Energy (DOE): Cabinet-level department of the United States government concerned with energy policy, specifically with nuclear energy safety.
Energy Grid: The term used for the collective energy industry and the system of wires and plants that runs across the United States.
Energy Information Administration (EIA): A statistical agency that collects, analyzes, and reports on energy information for the United States government.
Environmental Protection Agency (EPA): An agency within the United States government that is responsible for protecting human health and the environment.
Fossil Fuels: Coal, petroleum, and natural gas.
Fuel Cell: A technology that produces energy when hydrogen and oxygen come together to form water, which is the only emission.
Heavy Metals: A group of relatively dense metallic elements, many of which are toxic.
Kilowatts (kW): One thousand watts of power.
Kilowatt-hour (kWhr): A unit of energy equal to one kilowatt of power sustained for one hour.
Megawatts (MW): One million watts of power.
Oil Reserves: The estimated amount of recoverable oil.
Peak Oil: The point at which global petroleum production reaches its maximum.
Photon: A packet of light energy.
Rare-earth metal: A misnomer, as scandium, yttrium, and the fifteen lanthanides that make up the collection are quite abundant within the Earth’s crust.
Semiconductor: A material whose ability to conduct electricity falls between that of a conductor and an insulator.
Smog: Named for a mix of smoke and fog, it is the result of industrial and vehicular fumes and is commonly found in cities.


References

Alsema, E. A., de Wild-Scholten, M. J., & Fthenakis, V. M. (2006). Environmental Impacts of PV Electricity Generation: A Critical Comparison of Energy Supply Options. European Photovoltaic Solar Energy Conference. Retrieved Mar. 3, 2010, from http://www.bnl.gov/pv/files/pdf/abs_193.pdf

Bartlett, R. (2006). The Peaking of World Oil. Address to the House of Representatives. Washington, D.C.

Brandt, A. R. (2006). Testing Hubbert (Doctoral dissertation, University of California, Berkeley, June 6, 2006). Dissertation Abstracts International, pp. 1-35.

Department of Energy and Energy Information Administration. (2009, Aug. 1). State Energy Consumption Estimates, 1960 through 2007. Retrieved May 2, 2010, from http://www.eia.doe.gov/states/_seds.html

Department of Energy. (2008, July 14). PV Panel Disposal and Recycling. Message posted to http://www1.eere.energy.gov/solar/panel_disposal.html

Department of Energy. (2004, Dec.). PV FAQs. Washington, D.C.: Author.

Department of Public Health & Environment, World Health Organization. (2007, Jan.). Estimated deaths & DALYs attributable to selected environmental risk factors.

Duncan, R. C. (1993). The life-expectancy of industrial civilization: The decline to global equilibrium. Population & Environment, 14(4), 325-357. Retrieved from http://www.springerlink.com/content/g03835431333tr43/

Energy Information Administration. (2010, May 14). Average Retail Price of Electricity to Ultimate Customers by End-Use Sector, by State. Retrieved May 21, 2010, from http://www.eia.doe.gov/cneaf/electricity/epm/epm_sum.html

Environmental Protection Agency. (2010, Apr. 15). Inventory of U.S. Greenhouse Gas Emissions and Sinks. Washington, D.C.: Author.

Friedman, T. L. (2009). Hot, Flat, and Crowded (Vol. 2). New York, NY: Picador.

GE Energy. (2009, Jan. 1). 1.5 MW Wind Turbine. Retrieved May 4, 2010, from http://www.gepower.com/prod_serv/products/wind_turbines/en/15mw/index.htm

Global Climate & Energy Project, Stanford University. (2006, July). An Assessment of Solar Energy Conversion Technologies and Research Opportunities. Palo Alto, CA. Retrieved Feb. 26, 2010, from http://gcep.stanford.edu

Hansen, J. E., & Lacis, A. A. (1990). Sun and dust versus greenhouse gases: An assessment of their relative roles in global climate change. Nature, 346, 713-719.

Hults, D. (2009). Refined Thinking. Interaction Quarterly, pp. 56-59. Retrieved Feb. 26, 2010, from http://pesd.stanford.edu/news/research_by_barrel/

Jacobson, M. Z., & Delucchi, M. A. (2009). A Path to Sustainable Energy by 2030. Scientific American, 301(5), 58-65.

Jiusto, S., & McCauley, S. (2010). Assessing Sustainability Transition in the US Electrical Power System. Sustainability (open access), pp. 551-575.

Johnson, G. D., Erickson, W. P., Strickland, M. D., Shepherd, M. F., Shepherd, D. A., et al. (2002). Collision Mortality of Local and Migrant Birds at a Large-Scale Wind-Power Development on Buffalo Ridge. Wildlife Society Bulletin, 30(2), 879-887. Retrieved Mar. 2, 2010, from http://www.jstor.org/pss/3784243

Fingersh, L., Hand, M., & Laxon, A., National Renewable Energy Laboratory. (2006, Dec. 1). Wind Turbine Design Cost and Scaling Model. U.S. Department of Energy, Office of Energy Efficiency & Renewable Energy.

Ottmüller, M., & Nieder, T., Federal Ministry for the Environment, Nature Conservation and Nuclear Safety. (2010, Mar.). Development of renewable energy sources in Germany 2009. Berlin, Germany. Retrieved Apr. 14, 2010, from http://www.bmu.de/english/renewable_energy/doc/39831.php

Metcalfe, S., & Whyatt, D. (1995). Who to blame for acid rain? A regional study of acid deposition in Yorkshire and Humberside. Transactions of the Institute of British Geographers, 20(1), 58-67.

NPR Staff. (2010, May 20). BP Recasts Spill Size As Oil Fouls Louisiana Marshes. NPR. Retrieved May 21, 2010, from http://www.npr.org/templates/story/story.php?storyId=127012041

Personal interview with Dr. Jiusto. May 12 2010.

Personal interview with Eileen Mueller. May 1 2010.

Sorensen, B. (2004). Renewable Energy (Vol. 3). Burlington, MA: Elsevier Academic Press.

Wilhelm, M., Meng, Y., Rull, R. P., English, P., Balmes, J., et al. (2008). Environmental Public Health Tracking of Childhood Asthma Using California Health Interview Survey, Traffic, and Outdoor Air Pollution Data. Environmental Health Perspectives, 116(9), 1254-1260. Retrieved May 13, 2010, from http://www.jstor.org/pss/25148418