

DIGITAL IMAGE PROCESSING

INTRODUCTION:

The field of digital image processing is continually evolving. During the past five years, there has been a significant increase in the level of interest in image data compression, image recognition, and knowledge-based analysis systems.

Interest in digital image processing methods stems from two application areas:

a) Improvement of pictorial information for human interpretation.

b) Processing of scene data for autonomous machine perception.

One of the first applications of image processing techniques in the first category was in improving digitized newspaper pictures sent by submarine cable between London and New York. Introduction of the Bartlane cable picture transmission system in the early 1920s reduced the time required to transport a picture across the Atlantic from more than a week to less than three hours. Some of the initial problems in improving the visual quality of these early digital pictures were related to the selection of printing procedures and the distribution of brightness levels. The early printing method, in which specialized printing equipment coded the pictures for cable transmission, was abandoned toward the end of 1921 in favor of a technique based on photographic reproduction made from tapes perforated at the telegraph receiving terminal; the improvements are evident in both tonal quality and resolution. During this period, the introduction of a system for developing a film plate via light beams modulated by the coded picture tape improved the reproduction process considerably.

Improvements in processing methods for transmitted digital pictures continued to be made during the next 35 years. However, it took the combined advent of large-scale digital computers and the space program to bring into focus the potential of image processing concepts. Work on using computer techniques to improve images from a space probe began at the Jet Propulsion Laboratory in Pasadena, California, in 1964, when transmitted pictures of the moon were processed by computer to correct various types of image distortion inherent in the on-board television camera. From 1964 until the present, the field of image processing has grown vigorously.

The second major area of application of digital image processing techniques is in solving problems dealing with machine perception. In this case, interest focuses on procedures for extracting from an image information in a form suitable for computer processing. Often, this information bears little resemblance to the visual features that human beings use in interpreting the contents of an image. Examples of the type of information used in machine perception are statistical moments, Fourier transform coefficients, and multidimensional distance measures.


DIGITAL IMAGE REPRESENTATION

The term monochrome image, or simply image, refers to a 2-dimensional light intensity function f(x, y), where x and y denote spatial coordinates and the value of f at any point (x, y) is proportional to the brightness (or gray level) of the image at that point. The figure below illustrates the axis convention.

FIGURE 1

Sometimes viewing an image function in perspective, with the third axis being brightness, is useful. Viewed in this way, the figure would appear as a series of peaks in regions with numerous changes in brightness level, and as smoother regions or plateaus where the brightness varies little or is constant. Using the convention of assigning proportionately higher values to brighter areas makes the height of the components in the plot proportional to the corresponding brightness in the image.

A digital image is an image f(x, y) that has been discretized both in spatial coordinates and in brightness. A digital image can be considered a matrix whose row and column indices identify a point in the image and whose corresponding matrix element value identifies the gray level at that point. The elements of such a digital array are called image elements, picture elements, pixels or pels, with the last two being commonly used abbreviations of “picture elements”. The computer breaks down the image into thousands of pixels. Pixels are the smallest component of an image; they are the small dots in the horizontal lines across a television screen. Each pixel is converted into a number that represents the brightness of the dot. For a black-and-white image, the pixel represents different shades between total black and full white. The computer can then adjust the pixels to enhance image quality.
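To make this matrix view concrete, the short sketch below builds a tiny grayscale image as a two-dimensional array and reads individual gray levels. It uses Python with NumPy, which is an assumption of this illustration rather than anything specified in the paper.

import numpy as np

# A tiny 4x4 "digital image": each element is a gray level in [0, 255],
# where 0 is total black and 255 is full white.
image = np.array([
    [  0,  64, 128, 255],
    [ 32,  96, 160, 224],
    [ 64, 128, 192, 255],
    [  0,   0, 128, 255],
], dtype=np.uint8)

# Row and column indices identify a point (x, y); the element value
# is the brightness f(x, y) at that point.
x, y = 1, 2
print("f(1, 2) =", image[x, y])        # gray level of one pixel
print("image size:", image.shape)      # (rows, columns)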

BASIC PRINCIPLE OF IMAGE PROCESSING

The first step in digital image processing is to transfer an image to a computer, digitizing the image and turning it into a computer image file that can be stored in a computer’s memory or on a storage medium such as a hard disk or CD-ROM. Digitization involves translating the image into a numerical code that can be understood by a computer. It can be accomplished using a scanner or a video camera linked to a frame grabber board in the computer.
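A minimal sketch of this digitization step is given below, assuming the Pillow and NumPy libraries and a hypothetical scanned photograph named photo.jpg; the file name and the 8-bit grayscale choice are illustrative only, not details taken from the paper.

import numpy as np
from PIL import Image

# Read a (hypothetical) scanned photograph and convert it to a grayscale
# digital image: a matrix of 8-bit numbers the computer can store and process.
picture = Image.open("photo.jpg").convert("L")   # "L" = single-channel gray
digital_image = np.asarray(picture, dtype=np.uint8)

print("stored as a", digital_image.shape, "array of", digital_image.dtype)
print("memory needed:", digital_image.nbytes, "bytes")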


FUNDAMENTAL STEPS IN IMAGE PROCESSING

Digital image processing encompasses a broad range of hardware, software, and theoretical underpinnings.

FIGURE 2

The fundamental steps in image processing are:

Image acquisition

Preprocessing

Segmentation

Representation and description

Recognition and interpretation
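The skeleton below is only a schematic of how these steps might be chained together in software; every function name in it is hypothetical and stands for the stage of the same name, not for any particular algorithm from the paper.

def acquire_image(source):
    """Image acquisition: obtain a digital image from a sensor or file."""
    ...

def preprocess(image):
    """Preprocessing: enhance contrast, remove noise, isolate regions."""
    ...

def segment(image):
    """Segmentation: subdivide the image into its constituent objects."""
    ...

def describe(regions):
    """Representation and description: extract features for each region."""
    ...

def recognize(descriptors):
    """Recognition and interpretation: assign labels and meaning."""
    ...

def process(source):
    # The stages are applied in the order listed above.
    image = acquire_image(source)
    image = preprocess(image)
    regions = segment(image)
    descriptors = describe(regions)
    return recognize(descriptors)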

IMAGE ACQUISITION:

The first step in this process is to acquire a digital image. Doing so requires an imaging sensor and the capability to digitize the signal produced by the sensor. The imaging sensor could also be a line-scan camera that produces a single image line at a time; in this case, the motion of the object past the line scanner produces a 2-dimensional image. If the output of the camera or other imaging sensor is not already in digital form, an analog-to-digital converter digitizes it.

Two elements are required to acquire digital images. They are

A physical device that is sensitive to a band in the electromagnetic energy spectrum (such as the X-ray, ultraviolet, visible, or infrared bands) and that produces an electrical signal output proportional to the level of energy sensed.

A digitizer, which is a device for converting the electrical output of the physical sensing device into digital form.

As a physical device, consider the basics of X-ray imaging systems. The output of an X-ray source is directed at an object, and a medium sensitive to X-rays is placed on the other side of the object. The medium thus acquires an image of materials (such as bones and tissue) having various degrees of X-ray absorption. The medium itself can be film, or a television camera combined with a converter of X-rays to photons, whose outputs are combined to reconstruct a digital image.

Another major sensor category deals with visible and infrared light. Among the devices most frequently used for this purpose are microdensitometers and image dissectors. In microdensitometers, the image to be digitized is in the form of a transparency (such as a film negative) or a photograph. Although these are slow devices, they are capable of a high degree of positional accuracy due to the essentially continuous nature of the mechanical translation used in the digitization process.

Image digitization is achieved by feeding the video output of the cameras into a digitizer, as stated earlier, which converts the given input to its equivalent digital form.

IMAGE PREPROCESSING:

After a digital image has been obtained, the next step deals with preprocessing that image. The key function of preprocessing is to improve the image in ways that increase the chances for success of the other processes. It typically deals with techniques for enhancing contrast, removing noise, and isolating regions whose texture indicates a likelihood of alphanumeric information. The three main categories of digital image processing are image compression, image enhancement and restoration, and measurement extraction.

Image Compression

Image compression is a mathematical technique used to reduce the amount of computer memory needed to store a digital image. The computer discards some information, while retaining sufficient information to make the image pleasing to the human eye.
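As a toy illustration of discarding information while keeping the picture acceptable, the sketch below re-quantizes an 8-bit image to 16 gray levels, which would need only 4 bits per pixel instead of 8; the use of NumPy and of simple gray-level quantization is an assumption of this note, not a compression method named in the paper.

import numpy as np

def quantize(image, levels=16):
    """Lossy toy compression: keep only `levels` distinct gray levels."""
    step = 256 // levels
    return (image // step) * step + step // 2   # map each pixel to its bin centre

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)

coarse = quantize(image, levels=16)
print("distinct gray levels before:", len(np.unique(image)))
print("distinct gray levels after: ", len(np.unique(coarse)))
# 16 levels fit in 4 bits per pixel, so the stored data could be half the size.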

Image Enhancement & Restoration

Enhancement of a compressed image may reveal the artifacts of the compression process. Image enhancement techniques can be used to modify the brightness and contrast of an image, to remove blurriness, and to filter out some of the noise. Using mathematical equations called algorithms, the computer applies each change to either the whole image or a targeted portion of the image. For example, global contrast enhancement would affect the entire image, whereas local contrast enhancement would improve the contrast of small details, such as a face or a license plate on a vehicle. Some algorithms can remove background noise without disrupting key components of the image.
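One common global contrast-enhancement operation is linear contrast stretching. The sketch below, written with NumPy as an illustration (the paper itself prescribes no particular algorithm), rescales the gray levels of a low-contrast image to the full 0-255 range.

import numpy as np

def stretch_contrast(image):
    """Global contrast enhancement: map [min, max] of the image to [0, 255]."""
    img = image.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                       # flat image: nothing to stretch
        return image.copy()
    return ((img - lo) / (hi - lo) * 255).astype(np.uint8)

# A dull image whose gray levels only span 100..150.
rng = np.random.default_rng(1)
dull = rng.integers(100, 151, size=(64, 64), dtype=np.uint8)

bright = stretch_contrast(dull)
print("gray-level range before:", dull.min(), "-", dull.max())
print("gray-level range after: ", bright.min(), "-", bright.max())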

For the purpose of differentiation, we consider restoration to be a process that attempts to reconstruct or recover an image that has been degraded, by using some knowledge of the degradation phenomenon. Thus restoration techniques are oriented towards modeling the degradation and applying the inverse process in order to recover the original image. The principal objective of an enhancement technique is to process the image so that the result is more suitable than the original image for a specific application. Approaches to image enhancement fall into two broad categories:


Spatial domain methods and Frequency domain methods.

The spatial domain refers to the image plane itself, and approaches in this category are based on direct manipulation of the pixels in an image, whereas frequency domain processing techniques are based on modifying the Fourier transform of an image.

SPATIAL DOMAIN METHODS

The term spatial domain refers to the aggregate of pixels composing an image, and spatial domain methods are procedures that operate directly on these pixels. Image processing functions in this domain can be expressed as

g(x, y) = T[f(x, y)]

where f(x, y) is the input image, g(x, y) is the processed image, and T is an operator on f. In addition, T can also operate on a set of input images, such as performing the pixel-by-pixel sum of M images for noise reduction.
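To make the relation g(x, y) = T[f(x, y)] concrete, the sketch below (NumPy, illustrative only) applies a simple point operator, the image negative, and also lets T act on a set of M noisy images through a pixel-by-pixel average for noise reduction, as mentioned above.

import numpy as np

def negative(f):
    """A simple point operator T: g(x, y) = 255 - f(x, y)."""
    return 255 - f

def average_images(stack):
    """T acting on M input images: pixel-by-pixel mean for noise reduction."""
    return np.mean(stack, axis=0).astype(np.uint8)

rng = np.random.default_rng(2)
f = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)

g = negative(f)                                   # g(x, y) = T[f(x, y)]

# M noisy copies of the same scene; averaging them suppresses the noise.
M = 8
noisy = [np.clip(f + rng.normal(0, 20, f.shape), 0, 255) for _ in range(M)]
denoised = average_images(np.stack(noisy))
print("noise std before:", np.std(noisy[0] - f).round(1),
      "after averaging:", np.std(denoised.astype(float) - f).round(1))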

FREQUENCY DOMAIN METHODS

The foundation of frequency domain techniques is the convolution theorem. Let g(x, y) be an image formed by the convolution of an image f(x, y) and a linear, position-invariant operator h(x, y), that is,

g(x, y) = h(x, y) * f(x, y)

Then, from the convolution theorem, the following frequency domain relation holds:

G(u, v) = H(u, v) F(u, v)

where G, H, and F are the Fourier transforms of g, h, and f, respectively.
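The sketch below applies the relation G(u, v) = H(u, v) F(u, v) directly: a small averaging mask h is convolved with an image f entirely in the frequency domain. NumPy and the choice of an averaging mask are assumptions of this illustration; note that the FFT product corresponds to circular convolution.

import numpy as np

rng = np.random.default_rng(3)
f = rng.random((64, 64))                 # input image f(x, y)

# A 3x3 averaging mask h(x, y), zero-padded to the image size.
h = np.zeros_like(f)
h[:3, :3] = 1.0 / 9.0

F = np.fft.fft2(f)                       # F(u, v)
H = np.fft.fft2(h)                       # H(u, v)
G = H * F                                # convolution theorem: G = H F
g = np.real(np.fft.ifft2(G))             # g(x, y) = h(x, y) * f(x, y)

# g is the (circular) convolution of f with the averaging mask, i.e. a
# smoothed version of f computed without any spatial-domain loops.
print(g.shape, float(g.min()), float(g.max()))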

Histogram Equalization

Histogram equalization is a process that increases the contrast of an image, as shown below. An image with poor contrast, such as the one at the left of the figure, can be improved by adjusting the image histogram to produce the image shown at the right.

FIGURE 3

To illustrate the usefulness of histogram equalization, consider the following figure.


FIGURE 4

The gray levels of an image that has been subjected to histogram equalization are spread out and always reach white. This process increases the dynamic range of the gray levels and consequently produces an increase in image contrast. Overall, histogram equalization significantly improves the visual appearance of the image.
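A compact version of histogram equalization, written here with NumPy as an illustrative sketch (the paper gives no code), uses the normalized cumulative histogram of the input image as the gray-level mapping.

import numpy as np

def equalize(image):
    """Histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to [0, 1]
    mapping = np.round(cdf * 255).astype(np.uint8)      # new gray levels
    return mapping[image]

# A low-contrast image concentrated in a narrow band of gray levels.
rng = np.random.default_rng(4)
dark = rng.integers(40, 90, size=(64, 64), dtype=np.uint8)

flat = equalize(dark)
print("gray-level range before:", dark.min(), "-", dark.max())
print("gray-level range after: ", flat.min(), "-", flat.max())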

Filters

Filters are used to enhance the appearance of raw images. However, some information is lost in the process. Filters include:

Low pass (softening)

High pass (sharpening)

Median (noise removal)

The image at the left of the figure has been corrupted by noise during the digitization process. The ‘clean’ image at the right was obtained by applying a median filter to it.


FIGURE 5
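A simple median filter can be sketched as below; this is a plain loop implementation with NumPy, written for clarity rather than speed, and the salt-and-pepper noise is synthetic. Each output pixel is the median of the 3x3 neighbourhood around it.

import numpy as np

def median_filter(image, size=3):
    """Replace each pixel by the median of its size x size neighbourhood."""
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

# An image corrupted by salt-and-pepper noise during digitization.
rng = np.random.default_rng(5)
clean = np.full((32, 32), 128, dtype=np.uint8)
noisy = clean.copy()
spots = rng.random(clean.shape) < 0.05          # about 5% of pixels corrupted
noisy[spots] = rng.choice([0, 255], size=int(spots.sum()))

restored = median_filter(noisy)
print("corrupted pixels before:", int((noisy != 128).sum()),
      "after filtering:", int((restored != 128).sum()))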

Image Measurement Extraction: The example below demonstrates extracting measurements from an image. The image at the top left of Figure 6 shows some objects. The aim is to extract information about the distribution of the sizes of the objects. The first step involves segmenting the image to separate the objects of interest from the background. This usually involves thresholding the image, which is done by setting the values of pixels above a certain threshold value to white, and all the others to black (top right of Figure 6). Because the objects touch, thresholding at a level that includes the full surface of all the objects does not show separate objects. This is solved by performing a watershed separation on the image (lower left of Figure 6). The image at the lower right of Figure 6 shows the result of performing a logical AND of the two images at the left of Figure 6. This shows the effect that the watershed separation has on touching objects in the original image. Finally, some measurements can be extracted from the image. Figure 7 is a histogram showing the distribution of the area measurements. (Assumption: the width of the image is 28 cm.)

FIGURE 6


FIGURE 7
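A simplified version of this measurement pipeline, with thresholding and connected-component labelling but without the watershed step, might look like the sketch below. It assumes SciPy's ndimage module and synthetic, non-touching objects, neither of which comes from the paper.

import numpy as np
from scipy import ndimage

# Synthetic scene: a dark background with a few bright rectangular "objects".
image = np.zeros((100, 100), dtype=np.uint8)
image[10:25, 10:25] = 200      # object 1
image[40:70, 30:55] = 180      # object 2
image[75:90, 70:95] = 220      # object 3

# 1. Segment by thresholding: pixels above the threshold become white.
threshold = 128
binary = image > threshold

# 2. Label connected components (touching objects would also need watershed).
labels, count = ndimage.label(binary)

# 3. Extract measurements: the area (pixel count) of each object.
areas = np.bincount(labels.ravel())[1:]     # skip label 0 (background)
print("objects found:", count)
print("areas in pixels:", areas.tolist())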

SEGMENTATION:

The first step in image analysis generally is to segment the image. Segmentation subdivides an image into its constituent parts or objects. The level to which this subdivision is carried depends on the problem being solved. That is, segmentation should stop when the objects of interest in an application have been isolated.

For example, in autonomous air-to-ground target acquisition applications, interest lies in identifying vehicles on the road. The first step is to segment the road from the image and then to segment the contents of the road down to objects of a range of sizes that correspond to potential vehicles. There is no point in carrying segmentation below this scale, nor is there any need to attempt to segment elements that lie outside the boundaries of the road. In general, autonomous segmentation is one of the most difficult tasks in image processing.

Segmentation algorithms for monochrome images generally are based on one of two basic properties of gray-level values: discontinuity and similarity. In the first category, the approach is to partition an image based on abrupt changes in gray level; the principal areas of interest within this category are the detection of isolated points and the detection of lines and edges in an image. In the second category, the principal approaches are based on thresholding, region growing, and region splitting and merging.

The concept of segmenting an image based on discontinuity and similarity of the gray-level values of its pixels is applicable to both static and dynamic (time-varying) images.


FIGURE 8
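To illustrate the discontinuity-based approach, the sketch below (NumPy only, an illustration added here) computes a simple gradient magnitude and marks pixels where the gray level changes abruptly as edge points.

import numpy as np

def edge_map(image, threshold=30.0):
    """Mark abrupt gray-level changes using simple first differences."""
    img = image.astype(np.float64)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]    # horizontal change
    gy[:-1, :] = img[1:, :] - img[:-1, :]    # vertical change
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold

# A bright square on a dark background: edges lie along its border.
image = np.zeros((40, 40), dtype=np.uint8)
image[10:30, 10:30] = 200

edges = edge_map(image)
print("edge pixels found:", int(edges.sum()))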

REPRESENTATION AND DESCRIPTION:

Basically, representing a region involves two choices:

Representation in terms of its external characteristics (its boundary). Examples: chain codes, polygonal approximations, and the skeleton of a region.

Representation in terms of its internal characteristics (the pixels comprising the region). Examples: regional descriptors such as topological descriptors, and texture described using statistical or structural approaches.

Choosing a representation scheme, however, is only part of the task of making the data useful to the computer. The next task is to describe the region based on the chosen representation. For example, a region may be represented by its boundary, with the boundary described by features such as its length and the orientation of the straight line joining its extreme points. Generally, an external representation is chosen when the primary focus is on shape characteristics. An internal representation is selected when the focus is on reflectivity properties, such as color and texture.
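As a small example of describing a region once a representation has been chosen, the sketch below (NumPy, illustrative) takes a region represented internally as a binary mask and reports a few elementary descriptors: area, centroid, and bounding box.

import numpy as np

def describe_region(mask):
    """Simple descriptors for a region given as a boolean mask."""
    rows, cols = np.nonzero(mask)
    return {
        "area": int(mask.sum()),                          # number of pixels
        "centroid": (float(rows.mean()), float(cols.mean())),
        "bounding_box": (int(rows.min()), int(cols.min()),
                         int(rows.max()), int(cols.max())),
    }

# A rectangular region inside a 50x50 image.
mask = np.zeros((50, 50), dtype=bool)
mask[5:20, 10:40] = True

print(describe_region(mask))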

RECOGNITION AND INTERPRETATION:

Recognition is the process that assigns a label to an object based on the information provided by its descriptors. Interpretation involves assigning meaning to an ensemble of recognized objects. For example, identifying a character as, say, a ‘c’ requires associating the descriptors for that character with the label c. We conclude the coverage of digital image processing by developing several techniques for recognition and interpretation.
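A toy version of the recognition step is sketched below: a minimum-distance classifier that assigns the label whose stored descriptor vector lies closest to the descriptor of the unknown object. The descriptor values and the choice of classifier are illustrative assumptions, not methods prescribed by the paper.

import numpy as np

# Stored descriptor vectors (e.g. a couple of measurements per known class).
prototypes = {
    "c": np.array([0.80, 0.30]),
    "o": np.array([0.95, 0.10]),
    "l": np.array([0.20, 0.90]),
}

def recognize(descriptor):
    """Assign the label of the nearest prototype (minimum-distance rule)."""
    distances = {label: float(np.linalg.norm(descriptor - proto))
                 for label, proto in prototypes.items()}
    return min(distances, key=distances.get)

unknown = np.array([0.78, 0.33])              # descriptors of an unknown character
print("recognized as:", recognize(unknown))   # -> 'c'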


APPLICATION OF DIGITAL IMAGE PROCESSING

Digital image processing finds its applications in many areas such as:

Criminology

Morphology

Microscopy

Biomedical

Meteorology

Remote sensing, etc.

CRIMINOLOGY: Few types of evidence are more incriminating than a photograph or videotape that places a suspect at a crime scene, whether or not it actually depicts the suspect committing a criminal act. Ideally, the image will be clear, with all persons, settings, and objects reliably identifiable. Unfortunately, that is not always the case, and the photograph or video image may be grainy, blurry, of poor contrast, or even damaged in some way. In such cases, investigators may rely on computerized technology that enables digital processing and enhancement of an image. The U.S. government, in particular the military, the FBI, and the National Aeronautics and Space Administration (NASA), and more recently private technology firms, have developed advanced computer software that can dramatically improve the clarity of, and the amount of detail visible in, still and video images. NASA, for example, used digital processing to analyze the video of the Challenger incident.

MORPHOLOGY: The word morphology commonly denotes a branch of biology that deals with the form and structure of animals and plants. In image processing, we use mathematical morphology as a tool for extracting image components that are useful in the representation and description of region shape, such as boundaries, skeletons, and the convex hull. The language of mathematical morphology is set theory; in this context, sets represent the shapes of objects in an image.
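The sketch below, which assumes SciPy's ndimage module, shows the set-theoretic flavour of morphology on a binary image: dilation grows the object set and erosion shrinks it, using a 3x3 structuring element.

import numpy as np
from scipy import ndimage

# Binary image: the set of object pixels is a small filled square.
shape = np.zeros((15, 15), dtype=bool)
shape[5:10, 5:10] = True

structure = np.ones((3, 3), dtype=bool)          # structuring element

dilated = ndimage.binary_dilation(shape, structure=structure)
eroded = ndimage.binary_erosion(shape, structure=structure)

print("object pixels:", int(shape.sum()),
      "after dilation:", int(dilated.sum()),
      "after erosion:", int(eroded.sum()))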

MICROSCOPY: Digitization of a video or electronic image captured through an optical microscope results in a dramatic increase in the ability to enhance features, extract information, or modify the image. Digital imaging is increasingly applied to image capture for microscopy, an area that demands high resolution, color fidelity, and careful management of often limited light conditions. The latest digital cameras combined with powerful computer software now offer image quality that is comparable with traditional silver halide film photography. Moreover, digital cameras are also easier to use and offer greater flexibility for image manipulation and storage.


ABSTRACT

Digital image processing deals with the process in which a given image is processed using techniques such as image acquisition, preprocessing, segmentation, and so on. These elements of image processing are dealt with in order to keep pace with new developments in image-processing hardware and software.

Firstly, in the processing flow, image acquisition plays a prominent role; in this step the image is acquired so that it can be processed. After acquiring the image, it is preprocessed, i.e., the image is enhanced, restored, and then compressed. During image enhancement, several techniques are adopted, such as gray-scale mappings for image negatives, contrast stretching, gray-level slicing, and so on. After enhancing the required image, it is restored; the ultimate goal of restoration is to improve an image or to reconstruct an image that has been degraded, by using some a priori knowledge of the degradation phenomenon. Then comes compression, in which the amount of data required to represent a digital image is reduced. Image compression plays a crucial role in many important and diverse applications, including video conferencing, remote sensing, FAX, etc. In the next process, segmentation, the image is divided into several parts or segments. After segmentation, representation and description come into the picture. Choosing a representation scheme, however, is only part of the task of making the data useful to the computer; the next task is to describe the region based on the chosen representation. Next comes recognition and interpretation, a process in which the acquired image is recognized and also interpreted to obtain a final image of high resolution and clear picture clarity.

The image processing techniques described here are found in almost all present communication systems, such as remote sensing, and have also found applications in areas like FORENSIC DEPARTMENTS, SPACE SERVICES, MEDICINE, and many more.
