Introduction to Photography (1)



1. Mount. It is the body of the camera, where the incoming light is registered as an image.

The film or sensor (image 4-a): It captures the light coming from the lens into an image. Traditional cameras use chemical pigments on the photographic film, whose density varies according to the amount of light received. Digital cameras, on the other hand, transform light photons into electrical signals, gathered in discrete cells forming the image pixels. Fryrender proceeds in a similar fashion by accumulating so-called "samples" into one or several "framebuffers" (which can be seen as the photographic film) to build each one of the pixels forming the image. The more samples the framebuffer receives, the better defined the pixels in the image will be, resulting in less noise.
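The accumulation described above can be pictured with a toy sketch: each per-pixel value is the running average of many noisy samples, so adding samples makes the average settle and the visible noise drop. This is only an illustration of the idea, not fryrender's actual framebuffer code, and the function name and noise figure are made up for the example.

```python
import random

def accumulate_pixel(true_radiance, n_samples, noise=0.5):
    """Toy framebuffer for one pixel: average n noisy light samples.

    Each sample stands in for one light contribution reaching the pixel;
    the running mean converges to the true radiance as samples add up,
    which is why more samples mean a cleaner, less noisy pixel.
    """
    total = 0.0
    for _ in range(n_samples):
        total += true_radiance + random.gauss(0.0, noise)  # one noisy sample
    return total / n_samples                               # framebuffer average

for n in (4, 64, 1024):
    print(n, round(accumulate_pixel(1.0, n), 3))  # drifts toward 1.0 as n grows
```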

Films are differentiated by how they react to light, mainly by their sensitivity. The sensitivity, also called film speed, is a measure of how much light needs to hit a certain region of the film/sensor to produce a chemical/electrical response on it, and therefore be registered. The more sensitive a film is, the less light is needed to produce an image. This sensitivity is given by the ISO value of the film, which can be seen as a scale where lower values are assigned to low-sensitivity films and high values to those films used in poor light conditions. This way, an ISO 100 film needs twice as much light as an ISO 200 film to produce the same image, and four times as much light as an ISO 400 film. Digital cameras also simulate the ISO scale by configuring how many photons are needed to produce an electrical signal in the sensor. In fryrender, the ISO sensitivity is a multiplier of the overall image brightness, and can be used, among other factors which will be explained later, to adjust the exposure of the resulting image. Moreover, the ISO value can be tweaked in real time from the tonemapping options as the render progresses.
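The doubling relation above is easy to restate numerically: for a fixed amount of light, brightness scales linearly with the ISO value. The snippet below is a minimal sketch of that proportionality (the function name and the reference ISO of 100 are conventions chosen for the example, not anything fryrender exposes).

```python
def relative_brightness(iso, reference_iso=100):
    """Brightness multiplier for a fixed amount of incoming light:
    an ISO 200 film needs half the light an ISO 100 film needs,
    which is the same as saying it renders twice as bright."""
    return iso / reference_iso

for iso in (100, 200, 400, 800):
    print(f"ISO {iso}: {relative_brightness(iso):.1f}x brightness")
```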

Image 4. SLR camera depiction

Unbiased rendering technology has arisen as a new, easy way to produce photorealistic computer graphics. As each element involved is built to behave exactly the same way as its counterpart in the real world, this technology, and therefore fryrender, allows the user to practically forget about the technical aspects of a rendering engine and focus on the creative part of the visualization process.

However, you have to keep in mind that a renderer is a powerful tool, but nothing more than a tool. Despite the fact that it will take care of the technical aspects for you, it won't perform some kind of magic and turn poor scene setups into outstanding images, just as a real photographic camera wouldn't. In this sense, you have to consider fryrender the equivalent of your real-world camera: it behaves just like it, and will produce true-to-life images when enough effort is put in on the user's part.

Fryrender blurs the boundaries between photography and computer graphics; hence you will need basic photography knowledge to get the best out of your images. This tutorial stands at a midpoint between computer graphics and traditional photography, and it aims to provide an introductory background on both photography theory and basic photographic skills to help you understand the few aspects to keep in mind when you work in this exciting field of virtual photography.

We will start the tutorial by understanding how a real camera works, reviewing its components, and seeing how these parts apply to fryrender. Next, we will take a purely practical point of view, examining how these components play a role in the process of taking a picture, and finish with some basic best-practice tips.

SLR stands for Single Lens Reflex, and it is the most common type of reflex camera. The most important difference between reflex cameras and the so-called compact or "point-and-shoot" cameras is that reflex cameras let the photographer see right through the lens, whereas compact cameras use an outer viewfinder located on the camera body. This has immediate benefits, since the photographer can see exactly what will be framed in the image (which is hard to achieve when you're looking through a viewfinder mounted above the lens, although this has been overcome in the new digital compact cameras), and also gets visual feedback on the scene focusing and the amount of light coming into the sensor.

On the other hand, reflex cameras are built in two main pieces: the mount, or camera body, and the set of lenses. This means you can switch the lens you want to use for each take, while the lens system in a compact camera is fixed.

Fryrender mimics an SLR camera. So what is an SLR camera?

1. INTRODUCTION

2. CAMERA COMPONENTS

In both chemical films and digital sensors, higher sensitivity (ISO values) comes at the price of higher noise, seen as grain in the resulting picture. The more speed or sensitivity the film has, the more grain it will produce, which justifies the need for low ISO films when we want fine-grained pictures. It is important to notice that this is a consequence of the nature of their composition and does NOT apply to fryrender, which simulates an ideal grain-free film. Do NOT confuse unbiased rendering's characteristic image noise, which comes from high variance on poorly sampled scenes, with film grain, although sometimes you may want to add some grain to the final render for artistic purposes.

Image 2. Compact camera

Image 3. SLR camera


2. Lens. This is the interchangeable part of a reflex camera. When we talk about a "lens", we actually refer to a set of lenses through which the light beams travel until they reach the film. The lens roughly defines the zoom performed on the image (also called the optical zoom), which is achieved through different arrangements of the inner set of lenses and mirrors. Just like the mount components, the lens system is accurately simulated in fryrender, allowing us to configure every single aspect of the camera optics. Let's review each one of the elements involved:

The diaphragm (image 4-c): Located at the end of the lens, the diaphragm is an aperture with a varying diameter which restricts the amount of light reaching the sensor. Unlike the shutter, the diaphragm does not close, but keeps a constant, configurable aperture. This aperture is defined through the so-called f-Stop value. Skipping the maths involved, you can see the f-Stop as a proportion of how much light the lens is letting through. It is defined as an inverse ratio, just like the shutter speed, so an f-Stop value of 1 means that the diaphragm is fully open. Along with the film sensitivity (ISO value), the aperture of the diaphragm determines the exposure of the picture, modulating how much light the sensor receives; but, as we'll see in the next section, it has further implications for the focusing of the image.

The lens system (image 4-b): the arrangement of the set of lenses bends and directs the light beams which are finally registered on the sensor. The portion of the scene being captured, and therefore the resulting image, depends on the distances among these components, the sensor, and the subject being photographed. Since we will refer to them later, let's give names to the distances involved:

- The focal distance is the distance from the outer lens to the sensor. It is depicted with the letter "f" in Image 4 and is given in millimeters. This value is related to the optical zoom, or magnification, of the lens (the longer the focal length, the higher the zoom on the picture taken), and it is used to characterize lenses. This way we'll talk about a 50 mm lens, meaning its focal length is 50 millimeters.

b. The shutter: It is a curtain-like plate made of tiny sheets, often arranged in a radial way. In its idle state, the shutter remains closed, letting no light pass through it. When the photographer takes a picture, the shutter sheets briefly open and let the light beams reach the film. The time they remain open is called the shutter speed or the time of exposure. As the shutter speed is a very brief time span, it is usually specified as its inverse, so setting the shutter speed to 60 in both a camera and in fryrender means that you're letting the light come through for 1/60 of a second (roughly 0.017 s).

The film / sensor / framebuffer is cumulative, so the longer the shutter remains open, the more light it receives, and the brighter the resulting image will be. This is part of what we know as "choosing the right exposure", which we'll discuss further later in this tutorial. For now, think of the light arriving at the sensor as an imaginary bright ink on a black canvas; the more you add, the brighter the resulting color is. With high shutter speeds, the sensor captures just a snapshot of the scene; this usually happens so fast that the scene is frozen in the picture. However, when the shutter remains open long enough, several "shots" of a moving scene (whether the objects or the camera are moving) are superimposed on the same image. This is called motion blur, as the toy sketch below illustrates.
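A minimal sketch of the two ideas in this paragraph: the shutter setting is the inverse of the exposure time, and a long exposure superimposes several states of a moving scene, so the recorded value is their average. The functions and sample values are invented for the illustration and have nothing to do with fryrender's internals.

```python
def exposure_time(shutter_setting):
    """A shutter speed setting of 60 means the shutter stays open 1/60 s."""
    return 1.0 / shutter_setting

def motion_blurred_value(scene_states):
    """Toy motion blur: the cumulative sensor effectively averages the
    scene states it sees while the shutter is open, superimposing them."""
    return sum(scene_states) / len(scene_states)

print(round(exposure_time(60), 3))               # ~0.017 s, as in the text
print(motion_blurred_value([0.2, 0.5, 0.8]))     # a moving highlight smears to 0.5
```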

Although not drawn in Image 4 for the sake of simplicity, the mount also includes a small mirror that makes the light beams bounce towards a piece called the prism, which directs the image to the photographer's eye instead of the sensor. When the picture is taken, the mirror is lifted up as the shutter opens, producing the characteristic 'click' sound.

- The target distance, on the other hand, is the distance from the center of interest in our picture to the sensor. It is labeled with the letter "d" in Image 4, and it is given in any spatial unit (fryrender uses centimeters).

We use the target distance to determine the distance at which objects appear completely focused in the picture. This distance is chosen either manually or by using the camera's autofocus system. The autofocus automatically picks a point in the scene as the center of interest and adjusts the target distance accordingly. In fryrender, that point is given by the depth of the central pixel of the image.

Now that we have enumerated the main elements of a camera and how they are defined in fryrender, we will step ahead and see how they interact together, and their influence on the way the picture is taken.

These are the common steps we would follow when we are about to take a real photograph, and also when we are configuring the camera in our fryrender scene:

Step 1. Choose the right lens:

Properly speaking, the first step would be choosing the proper focal length for our camera. Remember that the focal length defines the magnification (or zoom) of our lens. Another way to speak about magnification is through the concept of Field of View (FOV): the FOV is an angle which defines how wide or narrow the portion of the scene captured by the sensor is; see Image 5.

Image 5. Focal Length and Field of View

The FOV angle can be measured horizontally across the image, vertically, or even along the diagonal. Image 5 shows the vertical field of view angle. So the higher the focal length, the narrower the field of view angle, and thus the larger the magnification or zoom factor of the lens, as the sketch below illustrates.
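The standard pinhole relation between focal length and field of view makes this inverse relationship concrete: FOV = 2 * atan(sensor size / (2 * f)). The sketch below assumes a full-frame sensor height of 24 mm, which is an assumption made for the example and not something stated in this tutorial.

```python
import math

def vertical_fov_deg(focal_length_mm, sensor_height_mm=24.0):
    """Vertical field of view of a simple pinhole camera model:
    FOV = 2 * atan(sensor_height / (2 * focal_length)).
    The 24 mm sensor height is a full-frame assumption."""
    return math.degrees(2.0 * math.atan(sensor_height_mm / (2.0 * focal_length_mm)))

for f in (20, 50, 100, 200):   # longer focal length -> narrower FOV -> more zoom
    print(f"{f} mm lens: vertical FOV = {vertical_fov_deg(f):.1f} degrees")
```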

There are two ways of making the object being photographed appear bigger in the picture: either getting closer to it, or staying away and choosing a high focal length. Nevertheless, these methods are not equivalent, as there is another element involved: perspective foreshortening.

3. THE PROCESS OF TAKING A PICTURE

Although it is beyond the basic scope of this tutorial, just notice that the diaphragm aperture is made from tiny blades, pretty much like the shutter is. These blades confer a shape to the aperture, which can be configured in fryrender under the Iris dropdown list, and which determines how the bloom and glare halos look when they are used.


Perspective foreshortening:

Imagine we are taking a picture of somebody, and we want him to fill most of the frame. If we decide to stay away from our subject and use a high focal length, our picture would look like this:

However we could alternatively keep using a short focal length, and walk closer to our subject. Our picture would then look like this:

Image 6a. Picture taken from a large distance using a high focal length

Image 6b. Picture taken getting close to the subject and using a short focal length

Notice how long focal lengths tend to "flatten" the image, making the foreground and background elements appear to be in the same plane. Short focal lengths, on the other hand, produce more exaggerated perspectives. The extreme case is the so-called wide-angle or fish-eye lens, usually with focal lengths in the range of 10 to 20 mm, where the image is deformed so much that it looks almost spherical (hence the fish-eye).

Aside from aesthetic considerations, most of the time the choice of lens is determined by the physical space available to take the picture. For example, if we were to take photographs of the interior of a house, we would probably use short focal length lenses with a wide viewing angle; although we want to cover as much space as possible, we are constrained by the walls and won't be able to move much further away. This may become a problem, as the camera location must be carefully chosen in order to minimize the perspective distortion. On the other hand, when taking pictures of big objects such as entire buildings, it is most likely that we would want to avoid perspective deformations; thus we would rather use longer focal lengths, also moving the camera far away from the subject being photographed.

Step 2. Focusing:

Once we have our image properly framed, the next step is to define the center of interest of our scene. The center of interest is the point being completely focused, which can be either manually specified (remember fryrender uses the camera target object to do so) or guessed automatically by using the autofocus system.

Just as happens with the human eye, SLR cameras (and also fryrender cameras) define an in-focus area given by the target position. When we are looking at something, either with our naked eyes or through a camera lens, we are implicitly configuring our view to focus on that object; although we may not be completely conscious of it, the remaining parts of the scene will usually remain out of focus in our image.

When we talk about "focus", what we are actually referring to is the distance from the camera at which objects appear completely focused. Away from this distance, objects appear progressively blurrier in our picture. The focusing area is defined by a distance range called Depth of Field (DOF). The DOF tells us how quickly objects become blurry (i.e. get out of focus) as they move away from the target distance.

Image 7. Depth Of Field

As you can see in Image 7 above, depth of field reminds us that focus in a photograph refers not to a point but to a range in space. The larger the area defined by the DOF, the easier it will be for the photographer to keep all objects sharp in the picture. On the other hand, using a shallow depth of field will make everything except what is being explicitly focused appear blurry in the image, which can be interesting for compositing purposes, as we will see in the next section.

Image 8. Shallow Depth Of Field

When autofocus is enabled, fryrender will set the target point at the closest object seen through the central pixel of the frame.


So what does the Depth Of Field range depend on?

Technically speaking, the DOF can be estimated from a given focal length, f-Stop and distance to the subject being photographed. There are many DOF calculators on the Internet (such as http://www.dofmaster.com/dofjs.html) which will help you estimate the focused region for a given lens configuration.

Since this is an introductory tutorial, once again we will skip the maths involved. However, it is important that you keep in mind how these factors affect the final depth of field in our picture.

- The most important factor is the lens aperture, that is, the f-Stop. In practical terms, the f-Stop is the main thing to tweak when you want to alter the depth of field of an image. The lower the f-Stop value, the wider the lens aperture (recall Image 4, C) and the shallower the depth of field. On the other hand, high f-Stop values (f/22, for instance) will give us large focused extents, which are useful when we want to capture the whole scene in detail. This is commonly used in landscape and architectural photography.

Image 9 shows a pool scene rendered with different lens apertures. The scene consists of 7 spheres with a diameter of 10 centimeters, and a camera whose target lies on ball number 2.

The image on top was taken using an f-Stop of 2, which produces a very shallow DOF region. Notice how the only portion of the image that remains in focus is the yellow ball, whereas the remaining balls quickly become blurry. The lower image in Image 9 was taken using an f-Stop of 22 instead, producing a much deeper DOF range. This range now contains all the spheres, which appear completely in focus in the final picture.

Image 9. How the f-Stop modifies the DOF range

- The Depth of Field range usually extends between a near and a far plane (see Image 7); however, these planes also depend on the focal length and the distance to the subject being focused. For any given aperture there is a target distance called the hyperfocal distance beyond which everything will appear sharp in the image (this is the reason why, when we take a picture of a landscape, the mountains do not look blurry).

Quoting the New York Institute of Photography: "... the hyperfocal distance setting ... is simply a fancy term that means the distance setting at any aperture that produces the greatest depth of field." Focusing exactly at or beyond the hyperfocal distance extends the DOF from the near plane to infinity, meaning every single object located beyond the near plane will always look sharp in the image. Again, the estimation of the hyperfocal distance involves a mathematical equation, so I'll refer you to any of the DOF calculators available on the Internet, such as http://www.dofmaster.com/charts.html, or to the small sketch below.
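For reference, this is a minimal sketch of the same thin-lens approximations those online calculators use: the hyperfocal distance H = f^2 / (N * c) + f, and the near/far limits of acceptable sharpness around the subject distance. The circle of confusion of 0.03 mm is an assumed full-frame value; the tutorial itself does not give these formulas or that number.

```python
def hyperfocal_mm(focal_mm, f_stop, coc_mm=0.03):
    """Hyperfocal distance H = f^2 / (N * c) + f, with c the circle of
    confusion (0.03 mm is a common full-frame assumption)."""
    return focal_mm ** 2 / (f_stop * coc_mm) + focal_mm

def dof_limits_mm(focal_mm, f_stop, subject_mm, coc_mm=0.03):
    """Near and far limits of acceptable sharpness (standard thin-lens
    approximations). Focusing at or beyond H pushes the far limit to infinity."""
    h = hyperfocal_mm(focal_mm, f_stop, coc_mm)
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    if subject_mm >= h:
        return near, float("inf")
    far = subject_mm * (h - focal_mm) / (h - subject_mm)
    return near, far

# A 50 mm lens focused at 2 m: f/2 gives a shallow range, f/22 a deep one.
for n in (2, 22):
    near, far = dof_limits_mm(50, n, 2000)
    print(f"f/{n}: in focus from {near / 1000:.2f} m to {far / 1000:.2f} m")
```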

When you activate the fryrender realtime viewport and then use the Focus button, you visualize a plane perpendicular to the camera's line of sight, located at the target distance from it. See Image 11. This plane is exactly the same as the red plane shown in Image 7, and it determines the part of the scene which will be completely in focus. Do not confuse this plane with either of the DOF planes shown in gray in Image 7, as those are only drawn for the sake of clarity. In the actual lens behavior, the focused area does not start at an exact point; instead it is a fuzzy, progressive zone. Moreover, the depth of field planes may not even be defined at all, for example when you're focusing at the hyperfocal distance.

Image 12 is an example of how architectural or landscape photography often sets the focus at "infinity", beyond the hyperfocal distance. This makes the DOF contribution negligible, so everything situated beyond the "near" plane will appear completely sharp in the picture, no matter how far away it is.

Image 10. Hyperfocal distance

Image 11. fryrender realtime viewport showing the target distance plane

Image 12. Hyperfocal distance


Moreover, when taking pictures of exteriors, daylight is much brighter and more powerful than any artificial light, which will make us prefer narrow lens apertures (usually in the range of f/16 to f/22, although it depends on many factors). These increase the depth of field range even more, bringing the near plane closer to the observer while the far plane is already at infinity.

Remember that the lens aperture also modifies the image exposure (more on this to come), so in order to obtain an equivalent luminance in both images of Image 9, the camera sensitivity has been modified between takes.


Step 3. Choosing the right exposure:

The final step in setting up the camera is choosing the right exposure for the image. We already saw that the camera film, or the digital sensor, is in charge of registering the light arriving at it from the lens into an image, but no further details were given. Let's have a closer look at how this happens:

If we forget about color for a moment and focus on power, we can classify light as having a high dynamic range (HDR) when the difference between the brightest and the darkest tone is huge and continuous. On the other hand, if we were to register light with a low dynamic range (LDR), we would only be able to register a finite and discrete number of shades.

Light in the real world has a high dynamic range; think of sunlight as a good example: it is so powerful that you can't (or shouldn't!) look directly at it, but when it bounces among surfaces it starts dimming and produces an unlimited number of shades.

Camera films and digital sensors, however, are low dynamic range devices, since they're only able to register a limited range of light shades. Digital camera sensors have to transform the electrical signals produced by the light photons into discrete intensities, which are then translated to image pixels. Moreover, computer monitors, printers, and common computer image formats (jpg, tga, bmp, ...) also represent colors in LDR, as they work with 256 different levels of brightness, where colors are the combination of red, green and blue components, each one represented separately in 256 possible intensities. Even the human eye has a limited ability to register a wide range of lighting conditions simultaneously; this is why the pupil is needed to control the amount of light arriving at the retina, and why we need a few seconds to adapt our eyes to the darkness.

We are, then, forced to work with a limited range of all the light present in the scene. You may have experienced that it is almost impossible to take a photograph of a window and capture both what is inside the room and the exterior seen through it: there is simply too much difference in the light power. So you generally have to choose between adjusting the camera to take the picture of the inside of the room, where the window will look almost white (this is called burnt), or doing it the other way round and getting the outer scene properly registered while the room is shown as black.

You have to choose the range of light you want to represent in the picture: this is called the image exposure. The means the camera has of filtering the amount of light arriving at the sensor are:

Image 13. Dramatized comparison between HDR and LDR

1. The lens diaphragm, controlled by the f-Stop value (Image 4-c). This first filter sets the portion of light coming from the lens which passes through it. The lower the f-Stop value, the wider the diaphragm is opened.

2. The shutter speed (Image 4-b). As we saw previously, the sensor behaves in a cumulative way, so the longer the shutter remains open (slower shutter speed), the brighter the resulting image will be. Fast shutter speeds, on the other hand, lead to darker images.

3. The sensitivity of the sensor to light, determined by the ISO value. In traditional cameras this value is fixed for each film roll, but digital cameras (and of course fryrender) allow you to change this sensitivity across a range of values (usually from ISO 100 to ISO 1600 in 5 steps in digital cameras, and up to 1600 in a continuous range in fryrender).

You have probably realised that, technically speaking, the "exposure" is mostly related to the shutter and the diaphragm, as they expose the sensor to the light. However, we won't be so strict, and will also include the film sensitivity under this term.

So we have three ways to influence the overall image brightness: f-Stop, shutter speed and the film ISO. However, the first two have side effects that we already know: modifying the lens aperture (f-Stop) will affect the depth of field, while low shutter speeds may produce motion blur. The film ISO, nonetheless, can be seen as a "safe" factor to tweak when we want to alter the image brightness; the small sketch below puts the three controls together.
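As a rough numerical restatement of the three controls: light through the diaphragm scales as 1/N^2, the cumulative sensor scales with the exposure time (1 / shutter setting), and the ISO acts as a linear multiplier. The function below is a sketch of that standard relation for comparing settings; it is not fryrender's internal exposure code, and the example values are arbitrary.

```python
def relative_exposure(f_stop, shutter_setting, iso):
    """Relative image brightness from the three controls listed above:
    ~ (exposure time / f_stop^2) * ISO multiplier."""
    exposure_time = 1.0 / shutter_setting
    return (exposure_time / f_stop ** 2) * (iso / 100.0)

base = relative_exposure(8, 125, 100)
# Opening the diaphragm one stop (f/8 -> f/5.6) roughly doubles the light;
# compensating with a twice-as-fast shutter brings the exposure back.
print(round(relative_exposure(5.6, 125, 100) / base, 2))   # ~2.0
print(round(relative_exposure(5.6, 250, 100) / base, 2))   # ~1.0
```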

On real films and sensors, modifying the film ISO has consequences for the resulting image grain, as we saw when we reviewed the camera components. However, remember this was due to the physical behavior of the chemical or electronic components used to build the film or sensor, so it does not apply to fryrender. In an unbiased renderer, the film ISO behaves as an image brightness multiplier, but it has a limited range of values; if you need an ISO 5700 or an ISO 0.1, you should consider that something is probably wrong with your scene or camera setup.

Image 14. Light travel through the camera

Now have a look at Image 14: once the shutter opens, the light comes through it and hits the camera film or sensor. In real cameras (thinking of digital ones), the sensor converts the incoming light photons into discrete electrical signals, which are then arranged as the pixels of the resulting image.

In fryrender, the entire lighting calculation is performed in HDR, just like real light, and the generated samples are roughly equivalent to the real light photons hitting the sensor. These samples are gathered in the framebuffer to generate the resulting image pixels, but since they're HDR they need to be processed by the tonemapping algorithm to convert them into LDR pixels which can be represented on a computer screen, printed, or stored in an image file in any of the available formats. See Image 15.
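To make the HDR-to-LDR step tangible, here is a generic exposure-plus-gamma tonemapping sketch. It is only an illustrative stand-in, since this tutorial does not describe fryrender's actual tonemapping curve, and the gamma value is a common display assumption.

```python
def tonemap(hdr_radiance, iso=100, gamma=2.2):
    """Map one HDR radiance value to an 8-bit LDR intensity:
    apply the ISO brightness multiplier, clip what the LDR range
    cannot hold (the 'burnt' whites), then gamma-encode to 0..255."""
    exposed = hdr_radiance * (iso / 100.0)
    clipped = min(max(exposed, 0.0), 1.0)
    return round(255 * clipped ** (1.0 / gamma))

for radiance in (0.05, 0.5, 1.0, 4.0):          # 4.0 is brighter than LDR can represent
    print(radiance, tonemap(radiance))
```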

Fryrender also lets you store the HDR source image in a proprietary .DSI format. DSI files store raw radiance values which are not directly representable on the screen, but can be used to feed the tonemapping algorithm to produce as many LDR images as you wish. Moreover, DSI files do not store image pixels but the samples used to generate them. This makes it possible to use a DSI file to initialize a blank framebuffer to resume a render, or to merge several DSI files from the same scene to obtain a higher density of samples per pixel, thus decreasing the noise in the resulting tonemapped image.
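Conceptually, merging sample-based framebuffers is just adding up per-pixel sums and sample counts before averaging, which is why the merged result has more samples per pixel and less noise. The sketch below is purely illustrative; the actual layout of a DSI file is not documented in this tutorial.

```python
def merge_framebuffers(pixel_accumulators):
    """Merge several (sample_sum, sample_count) accumulators for the same
    pixel and return the combined average. More total samples per pixel
    means lower variance, i.e. less noise after tonemapping."""
    total_sum = sum(s for s, _ in pixel_accumulators)
    total_count = sum(c for _, c in pixel_accumulators)
    return total_sum / total_count

# Two renders of the same scene contributing 128 and 256 samples to one pixel:
print(merge_framebuffers([(96.0, 128), (198.0, 256)]))
```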

Image 15. Image pixels generation



4. IMPROVING YOUR IMAGES

Over the last two sections we have reviewed the important parts of a camera, how they work individually, and how they interact to act on the perspective, the depth of field and the image exposure. All this knowledge applies entirely to fryrender, which is why we make practically no distinction between real and virtual photography.

Now that you've got a deeper knowledge of the tool, this section will summarize everything we've learnt from a practical point of view in the form of short pieces of advice, which will help you avoid common mistakes and improve your skills.

TIP: Size matters

Notice that the main difference between Images 9 and 12 is the distance to the subject, much closer in the pool image. It is good to keep in mind that, in photography, both real and through an unbiased renderer, we will use short target distances with small objects, and large target distances, even beyond the hyperfocal distance, when we're working with large objects such as buildings or even landscapes.

Thus, with all the concepts that we have already seen, we can infer a simple common-sense rule of thumb which will help us determine when something "does not look good" in a render:

Image 16. Macrophotography

• Small-object photography tends to require combinations of focal length and target distance which produce a very shallow depth of field. This way, the main issue with small objects is trying to keep all of them in focus. The most obvious example of this is so-called macrophotography, where the subjects being photographed are really small (e.g. insects or flowers) or even microscopic. The main challenge in macrophotography is trying to enlarge the focused area, requiring the use of very narrow f-Stops and, hence, powerful light setups to compensate for them.

• Large objects, on the other hand, will make us move the camera away from them to fit them in the picture frame. This implies setting a large target distance and focal length (recall Image 6, to avoid perspective distortions). The resulting depth of field will thus be large or even infinite. This doesn't mean that you can't find objects out of focus in these scenes, but those will be the ones located too close to the camera (closer than the near DOF plane) instead of too far away from it.

Being aware of this will help you quickly identify scale problems in your model. When you are setting up your scenes to be rendered with fryrender, the model scale is crucial. Just as with real-world objects, the camera optics won't behave the same way with big and small objects. See an example of an atypical object scale in the image below; the building intentionally looks like a scale model.

TIP: Blending rendered images with real photographs

When you are working on a photo composition, where you want to integrate a 3D object into a real photograph, one of the main problems you will find is mimicking the real camera settings in the virtual camera used to render the image. Most 3D packages nowadays offer the ability to put the real photograph in the background of the viewports, helping you determine the point of view of the real camera and the perspective distortion.

If the real photograph was taken with a decent digital camera, it is likely that the camera already stored most of that technical information right in the image file. Some image formats (JPG being the most widely used) allow the possibility of storing some extra information besides the image pixels, called the EXIF data. The EXIF stores, among other things, the camera settings used to take the picture, and the date and time when it was taken. If available, this information is easily accessed through image editing programs such as Photoshop (File > File Info) and can be really useful to copy the film ISO, shutter speed, focal length, etc. right into your fryrender camera instead of matching the rendered image with the real photograph by hand.
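If you prefer to read those fields programmatically, the sketch below uses the Pillow library as one possible EXIF reader; the tutorial itself only mentions Photoshop's File > File Info, so both the library choice and the file name are assumptions made for the example.

```python
from PIL import Image, ExifTags

def print_camera_settings(path):
    """Print the EXIF fields most useful for matching a virtual camera:
    ISO, exposure time, f-number, focal length and capture date."""
    exif = Image.open(path).getexif()
    tags = dict(exif.items())
    tags.update(exif.get_ifd(0x8769))   # photographic settings live in the Exif sub-IFD
    wanted = {"ISOSpeedRatings", "ExposureTime", "FNumber", "FocalLength", "DateTime"}
    for tag_id, value in tags.items():
        name = ExifTags.TAGS.get(tag_id, tag_id)
        if name in wanted:
            print(f"{name}: {value}")

# print_camera_settings("background_photo.jpg")   # hypothetical file name
```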

You can find further information about real image blending in a nice tutorial called Photo integration via the matte/shadows channel, available on fryrender’s website (www.fryrender.com).

TIP: Isolating an object from the background

A good principle of compositing is to establish a clear center of interest in your image. When someone looks at your image, it is desirable that the eye is immediately caught by the subject you're trying to show in it.

There are many ways to do so, such as using a different color gamut for the main subject than the one used for the background. However, one of the most effective is playing with the depth of field: photographers often use a shallow depth of field to isolate the main subject of the photograph as the only clearly defined part of it. This leaves everything in the background out of focus, preventing it from being distracting.

This useful resource is widely seen in portraits and in advertising, but how can we achieve this effect? Well, as we saw, it involves a shallow DOF, and therefore it will work better with small or medium elements. We'll need to move away from the subject (the distance will depend on the object's size; small objects won't need this as much as medium or big objects) and use a long focal length to flatten the perspective. Then use a low value for the f-Stop (the lower, the more exaggerated the effect will be) and decrease the film ISO, or increase the shutter speed, to compensate for the luminosity gain due to the wider lens aperture.

Recalling the previous section, technically what you're doing is increasing the focal length (moving the hyperfocal distance as far as possible from the subject) while also using a wide lens aperture. These two factors combined greatly reduce the depth of field region, which will be centered on the subject you're photographing, thus leaving everything behind and in front of it out of focus.

Be careful not to use too shallow a depth of field, as your subject may not appear entirely in focus! (A good practice when you're photographing people is to set the target on the eyes, not on the nose.)

Image 18

Image 19


Image 17



TIP: Rule of thirds

Continuing with these compositing snippets, another common piece of advice is to apply the rule of thirds to your images. This rule, traditionally applied to paintings and now to photography, states that if you divide your frame into three horizontal parts and three vertical ones by using two lines in each direction, the resulting four intersection points are the best places to locate your image's centers of interest. This is considered to produce more aesthetically pleasing results; the small sketch after this tip computes those points.

This way, if you're taking a picture of a single object, avoid centering it in the middle of the picture. Portraits are usually framed to leave more free space on the side the model is looking towards. When the subject of your photograph is a landscape, it is advisable to align the horizon with one of the horizontal divisions, depending on whether you want to give more importance to the sky or to the ground.

As with any other rule, it isn't suited to every single situation, but it is a good start which is generally worth trying. You'll find lots of useful samples by "googling" a bit.

Image 20
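For reference, the four rule-of-thirds intersection points are simple to compute for any frame size; the function and the 1920x1080 frame below are just an example, not anything the tutorial prescribes.

```python
def thirds_points(width, height):
    """The four intersections of the rule-of-thirds grid, in pixel
    coordinates: the dividing lines sit at 1/3 and 2/3 of each dimension."""
    xs = (width / 3, 2 * width / 3)
    ys = (height / 3, 2 * height / 3)
    return [(round(x), round(y)) for x in xs for y in ys]

print(thirds_points(1920, 1080))   # candidate spots for the center of interest
```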

TIP: Color and details

Although this tip isn't directly related to photography, it is intended to help you avoid a common mistake when producing photorealistic 3D scenes: take care with your materials and try to avoid unreal colors.

When you are working on a 3D scene which is supposed to mimic real life, or even more, to be later integrated into a real photograph, you have to take care not only with the quality of the model, but also with the quality of the materials and textures used on it. Creating materials for 3D models is usually almost 50% of the work, but alas it is an often forgotten aspect of the process. Photorealistic rendering greatly suffers from this. If you take a random photograph and use a color picker tool to analyze the pixels, you'll quickly notice that it is almost impossible to find pure tones such as [255, 0, 0] (pure red) or [0, 255, 0] (pure green), and even a white wall isn't actually white. Real life is full of subtleties which contribute to making it so hard to imitate. So favor photographic textures whenever possible, and fight against the excessive perfection in both colors and shapes so often found in computer generated images. The improvement in your image quality will more than pay off this extra effort.

Image 20. You won't take this as a photo

IMAGE CREDITS

Image 1. http://www.completedigitalphotography.com
Image 2. http://www.image-acquire.com/
Image 3. http://www.techshout.com/
Image 5. http://www.trustedreviews.com/
Image 6a. http://www.trustedreviews.com/
Image 6b. http://www.trustedreviews.com/
Image 8. http://www.pic-a-day.co.uk
Image 12. http://pinker.wjh.harvard.edu/
Image 16. Harold Davis. http://www.flickr.com/photos/harold_davis/
Image 17. Courtesy of Stéphane Moya on fryrender's gallery
Image 18. http://www.luminous-landscape.com/
Image 18. http://andrewhefter.com/
Image 20. Wikipedia. http://en.wikipedia.org