
Computing Fundamentals and Programming Batch 2012 (Electronics and Telecomm)

Chapter 2: Processing Hardware

Prepared by: Mr. Shakil Ahmed (Assistant Professor, CED) (Electronics Sec F)

Other Course Instructors:

Mr. Tauseef Mubeen (Assistant Professor, CED) (Electronics Sec B)
Mr. Imran Saleem (Assistant Professor, CED) (Electronics Sec A)
Mr. Adnan Afroze (Assistant Professor, TED) (Telecomm Sec A & B)
Mr. Sarfaraz Natha (Lecturer, CED) (Electronics Sec E)
Ms. Batool Raza (Lecturer, CED) (Telecomm Sec C)
Mr. Ali Yousuf (Lecturer, CED) (Electronics Sec C)
Mr. Sajjad Imam (Lecturer, CED) (Electronics Sec D)

Significance

• The development of the microprocessor enabled the development of the microcomputer.
• This chapter covers this amazing device – the processor – and the associated processing hardware.

How are Data and Programs Represented in the Computer?

• Electricity can be either "off" or "on". Examples: an electric circuit may be open or closed; magnetic pulses on a disk may be present or absent.
• This two-state situation allows computers to use the binary system to represent data and programs.

• Binary Numbers
• The Binary Number System
• Bits and Bytes
• Text Codes

Binary Numbers

Computer processing is performed by transistors, which are switches with only two possible states: ON and OFF.

• All computer data is converted to a series of binary numbers – 1 and 0.
• For example, you see a sentence as a collection of letters, but the computer sees each letter as a collection of 1s and 0s.
• If a transistor is assigned a value of 1, it is on. If it has a value of 0, it is off. A computer's transistors can be switched on and off millions of times each second.


Binary Number System

• To convert data into strings of numbers, computers use the binary number system.
• Humans use the decimal system ("deci" stands for "ten").
• The binary number system works the same way as the decimal system, but has only two available symbols (0 and 1) rather than ten (0, 1, 2, 3, 4, 5, 6, 7, 8, and 9).
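
For a concrete feel, here is a minimal C sketch (illustrative only; the helper print_binary is invented for this example) that prints the pattern of 0s and 1s stored for a small number:

    #include <stdio.h>

    /* Print the 8-bit binary pattern of a value, most significant bit first,
       to show that the machine already stores the number as 0s and 1s. */
    static void print_binary(unsigned char value)
    {
        for (int bit = 7; bit >= 0; bit--)
            putchar(((value >> bit) & 1) ? '1' : '0');
        putchar('\n');
    }

    int main(void)
    {
        print_binary(9);    /* prints 00001001: 8 + 1 = 9 */
        print_binary(255);  /* prints 11111111: the largest value in one byte */
        return 0;
    }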

Bits and Bytes

A single unit of data is called a BIT, having a value of 1 or 0.

• Computers work with collections of bits, grouping them to represent larger pieces of data, such as letters of the alphabet.

• Eight bits make up one BYTE. A byte is the amount of memory needed to store one alphanumeric character.

• With one byte, the computer can represent one of 256 different symbols or characters.

Text Code

Text Code is a system that uses binary numbers (1s and 0s) to represent characters understood by humans (letters and numerals).

• An early text code system, called EBCDIC, uses eight-bit codes, but is used primarily in older mainframe systems.

• In the most common text-code set, ASCII, each character consists of eight bits (one byte) of data. ASCII is used in nearly all personal computers.

• In the Unicode text-code set, each character consists of 16 bits (two bytes) of data.


Examples from the ASCII Text Code

Binary Coding Schemes

All the amazing things that computers do are based on binary numbers made up of 0s and 1s.

• Fortunately, we do not enter data into the computer using groups of 0s and 1s.
• Instead, we use natural-language characters such as those on the keyboard.
• The computer system then encodes the data by means of binary or digital coding schemes to represent letters, numbers, and special characters.

There are many coding schemes; the most common ones are:

• EBCDIC – Extended Binary Coded Decimal Interchange Code
• ASCII – American Standard Code for Information Interchange
• Unicode

EBCDIC

• It is commonly used in IBM mainframes.
• It is an 8-bit coding scheme.
• So it can represent 2^8 = 256 characters.

ASCII

• ASCII was created for use in early telecommunications systems but has proven useful for computer systems and has been the basis for most other character sets.

• Most widely used binary code with non-IBM mainframes and virtually all microcomputers.

• Standard ASCII uses 7 bits, so it can represent 2^7 = 128 characters.
• Extended ASCII uses 8 bits, so it can represent 2^8 = 256 characters.
• In the ASCII character set, each binary value between 0 and 127 is given a specific character.
• Most computers extend the ASCII character set to use the full range of 256 characters available in a byte.
• The upper 128 characters handle special things like accented characters from common foreign languages.
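
A minimal C sketch (illustrative only) that prints the ASCII code the computer actually stores for each character of a short string:

    #include <stdio.h>

    int main(void)
    {
        const char *text = "Hi!";

        /* Each character occupies one byte; the cast to unsigned char
           exposes the numeric ASCII code held in that byte. */
        for (const char *p = text; *p != '\0'; p++)
            printf("'%c' -> %d\n", *p, (unsigned char)*p);

        return 0;   /* prints 'H' -> 72, 'i' -> 105, '!' -> 33 */
    }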


ASCII Example (Try It)

Computers store text documents, both on disk and in memory, using these codes. For example, if you use Notepad in Windows 95/98 to create a text file containing the words "Four score and seven years ago," Notepad would use 1 byte of memory per character (including 1 byte for each space character between the words – ASCII character 32). When Notepad stores the sentence in a file on disk, the file will also contain 1 byte per character and per space.

• Try this experiment: Open a new file in Notepad and type the sentence "Four score and seven years ago" into it. Save the file to disk under the name getty.txt. Then use Explorer to look at the size of the file. You will find that the file has a size of 30 bytes on disk: 1 byte for each character. If you add another word to the end of the sentence and re-save it, the file size will jump to the appropriate number of bytes. Each character consumes a byte.
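
The same experiment can be reproduced from a C program instead of Notepad. The sketch below (illustrative only; it reuses the getty.txt file name from the text) writes the sentence to disk and then measures the file size, which should come out to 30 bytes, one per character and space:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *sentence = "Four score and seven years ago";

        /* Write the sentence to a plain text file, one byte per character. */
        FILE *fp = fopen("getty.txt", "wb");
        if (fp == NULL)
            return 1;
        fwrite(sentence, 1, strlen(sentence), fp);
        fclose(fp);

        /* Reopen the file and measure its size in bytes. */
        fp = fopen("getty.txt", "rb");
        if (fp == NULL)
            return 1;
        fseek(fp, 0, SEEK_END);
        long size = ftell(fp);   /* expected: 30 bytes */
        fclose(fp);

        printf("characters = %zu, file size = %ld bytes\n",
               strlen(sentence), size);
        return 0;
    }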

Unicode

Unicode is an entirely new idea in setting up binary codes for text or script characters. It uses 2 bytes (16 bits) per character.

• Therefore, 2^16 = 65,536 characters can be handled.
• This is a bit of overkill for English and Western-European languages, but it is necessary for some other languages, such as Greek, Chinese, and Japanese.
• Many analysts believe that as the software industry becomes increasingly global, Unicode will eventually supplant ASCII as the standard character coding format.
• Officially called the Unicode Worldwide Character Standard, it is a system for "the interchange, processing, and display of the written texts of the diverse languages of the modern world." It also supports many classical and historical texts in a number of languages.

• Currently, the Unicode standard contains 34,168 distinct coded characters derived from 24 supported language scripts.
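
A minimal C11 sketch (illustrative only, assuming a compiler that provides the <uchar.h> header) showing that a 16-bit Unicode code unit occupies two bytes and that the letter A has the code value U+0041:

    #include <stdio.h>
    #include <uchar.h>   /* C11: char16_t, a 16-bit code unit */

    int main(void)
    {
        char16_t letter = u'A';   /* the same letter A, now stored in 16 bits */

        printf("bytes per code unit: %zu\n", sizeof(char16_t));   /* 2      */
        printf("code value of 'A': U+%04X\n", (unsigned)letter);  /* U+0041 */
        return 0;
    }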

The Parity bit: Checking for errors

• Dust, electrical disturbance, weather conditions and other factors can cause interference in a circuit or communications line that is transmitting a byte.

• A parity bit, also called a check bit, is an extra bit attached to the end of a byte for the purpose of checking accuracy.

Types: Even Parity and Odd Parity (see the sketch below).
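
A minimal C sketch (illustrative only; the helper even_parity_bit is invented for this example) of how an even-parity bit can be computed for one byte:

    #include <stdio.h>

    /* Even parity: the extra bit is chosen so that the total number of 1s
       in the byte plus the parity bit is even. */
    static int even_parity_bit(unsigned char byte)
    {
        int ones = 0;
        for (int bit = 0; bit < 8; bit++)
            ones += (byte >> bit) & 1;
        return ones % 2;   /* 1 if the count of 1s is odd, 0 if it is even */
    }

    int main(void)
    {
        unsigned char b = 'A';   /* 01000001 contains two 1s */
        printf("even-parity bit for 'A': %d\n", even_parity_bit(b));  /* 0 */
        return 0;
    }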

Why are two different microcomputer platforms incompatible?

• Self study


How is Computer Capacity (Main Memory or Storage Device) Expressed?

How many 0s and 1s will a computer’s main memory or a storage device hold? The following terms are used to denote capacity:

Bit: The binary digit (bit)—0 or 1—is the smallest unit of measurement.

Byte: A group of 8 bits makes a byte, and a byte represents one character, digit, or other value. A computer's memory capacity is expressed in number of bytes or multiples of bytes.

Kilobyte: A kilobyte (K, KB) is about 1000 bytes (1024 bytes to be precise). An average printed page of text would take up about 4.1-4.2 kilobytes, or 4100-4200 bytes.

Megabyte: A megabyte (M, MB) is about 1 million bytes (1,048,576 bytes). This is a very common unit used in discussing microcomputer capacity.

Gigabyte: A gigabyte (G, GB) is about 1 billion bytes (1,073,741,824 bytes). Gigabytes are used to describe microcomputer hard disks and the main memory of mainframes and some supercomputers.

Terabyte: A terabyte (T, TB) represents about 1 trillion bytes (1,099,511,627,776 bytes). Terabytes are used to describe some supercomputers' main memory capacity.

Petabyte: A petabyte is about 1 million gigabytes. Petabytes are used to describe the huge storage capacities of modern databases.
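
The powers-of-1024 relationships above can be checked with a short C sketch (illustrative only):

    #include <stdio.h>

    int main(void)
    {
        /* Each unit is 1024 (2 to the power 10) times the previous one. */
        unsigned long long kilobyte = 1024ULL;
        unsigned long long megabyte = kilobyte * 1024;  /* 1,048,576 bytes         */
        unsigned long long gigabyte = megabyte * 1024;  /* 1,073,741,824 bytes     */
        unsigned long long terabyte = gigabyte * 1024;  /* 1,099,511,627,776 bytes */

        printf("KB = %llu bytes\n", kilobyte);
        printf("MB = %llu bytes\n", megabyte);
        printf("GB = %llu bytes\n", gigabyte);
        printf("TB = %llu bytes\n", terabyte);
        return 0;
    }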

Where Processing Occurs

• The Control Unit
• The Arithmetic Logic Unit
• Machine Cycles
• The Role of Memory in Processing
• Types of RAM

The Processor: In Charge

• Processing takes place in the PC's central processing unit (CPU).
• The system's memory also plays a crucial role in processing data.
• Both the CPU and memory are attached to the system's motherboard, which connects all the computer's devices together, enabling them to communicate.


Central Processing Unit

The two main parts of a CPU are:

• Control Unit (CU)
• Arithmetic Logic Unit (ALU)

• The control unit directs the flow of data through the CPU, and to and from other devices.
• It directs the movement of electrical signals between main memory and the arithmetic/logic unit. It also directs these electrical signals between main memory and the input and output devices.

• The control unit stores the CPU's microcode, which contains the instructions for all the tasks the CPU can perform.


Arithmetic Logic Unit

• The actual manipulation of data takes place in the ALU.
• The ALU can perform arithmetic and logic operations and also controls the speed of those operations.
• The ALU is connected to a set of registers – small memory areas in the CPU, which hold data and program instructions while they are being processed.

ALU Operation List
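
The slide's operation list is not reproduced here; as an illustration only, the toy C sketch below mimics an ALU applying an arithmetic, logic, or comparison operation to two operands (the opcode names ADD, SUB, AND, OR, GREATER_THAN are invented for this example):

    #include <stdio.h>

    enum op { ADD, SUB, AND, OR, GREATER_THAN };

    /* A toy "ALU": real ALUs do this in hardware. */
    static int alu(enum op operation, int a, int b)
    {
        switch (operation) {
        case ADD:          return a + b;   /* arithmetic */
        case SUB:          return a - b;
        case AND:          return a & b;   /* logic      */
        case OR:           return a | b;
        case GREATER_THAN: return a > b;   /* comparison */
        }
        return 0;
    }

    int main(void)
    {
        printf("3 + 5 = %d\n", alu(ADD, 3, 5));
        printf("3 > 5 ? %d\n", alu(GREATER_THAN, 3, 5));
        return 0;
    }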

Specialized Processor Chips: Assistants to the CPU

Modern computers may have a number of processors in addition to the main processor. Two examples are math and graphics coprocessor chips.

Coprocessors

• A coprocessor is a special-purpose processing unit that assists the CPU in performing certain types of operations. For example, a math coprocessor performs mathematical computations, particularly floating-point operations.
• Math coprocessors are also called numeric or floating-point coprocessors.
• In addition to math coprocessors, there are also graphics coprocessors for manipulating graphic images. These are often called accelerator boards.
• In the coming years we may see a PC on a chip.

CISC, RISC, & MPP: Not All Processors Are Created Equal

CISC and RISC Processors

• CISC, pronounced "sisk," stands for Complex Instruction Set Computer.
• Most personal computers use a CISC architecture, in which the CPU supports as many as two hundred instructions (they have large instruction sets).
• An alternative architecture, used by many workstations and also some personal computers, is RISC (Reduced Instruction Set Computer), which supports fewer instructions.


• Reduced instruction set computing (RISC) processors use smaller instruction sets. This enables them to process more instructions per second than CISC chips.
• RISC processors are found in Apple's PowerPC systems, as well as many H/PCs, workstations, minicomputers, and mainframes.

Which of the two is better? (Discussion in class)

Massively Parallel Processing (MPP)

Main Memory (Primary Storage): Working Storage Area for the CPU

RAM – Random Access Memory

• What does RANDOM mean?
  o Random access refers to the fact that data can be stored and retrieved at random—from anywhere in the electronic RAM chips—in approximately equal amounts of time.

• The amount of RAM in a PC has a direct effect on the system's speed.
• The more RAM a PC has, the more program instructions and data can be held in memory, which is faster than storage on disk.
• If a PC does not have enough memory to run a program, it must move data between RAM and the hard disk frequently. This process, called swapping, can greatly slow a PC's performance.
• Main memory is contained on chips called RAM chips that use CMOS (Complementary Metal-Oxide Semiconductor) technology.

Two important factors about main memory are:

• Its contents are temporary.
• Its capacity varies in different computers.

RAM Types

• SRAM
• DRAM

Registers

• The CPU contains a number of small memory areas, called registers, which store data and instructions while the CPU processes them.

• The size of the registers (also called word size) determines the amount of data with which the computer can work at one time.

• Today, most PCs have 32-bit registers, meaning the CPU can process four bytes of data at one time. Register sizes are rapidly growing to 64 bits (a quick check is sketched at the end of this section).

Types of registers: Self study
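
A quick, rough way to see a system's word size from a C program (illustrative only; the sizes in the comments are typical but vary by platform and compiler):

    #include <stdio.h>

    int main(void)
    {
        /* A pointer must be able to address all of memory, so its size is a
           reasonable indicator of word size: usually 4 bytes (32 bits) on a
           32-bit system and 8 bytes (64 bits) on a 64-bit system. */
        printf("pointer size: %zu bytes (%zu bits)\n",
               sizeof(void *), sizeof(void *) * 8);
        printf("size of long: %zu bytes\n", sizeof(long));  /* platform-dependent */
        return 0;
    }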


Machine Cycle

Address:

• An address is the location in main memory, designated by a unique number, in which a character of data or part of an instruction is stored.

• To process a character, the CU retrieves the character from its address in main memory and places it into a register.

• A machine cycle comprises a series of operations performed to execute a single program instruction.

• Machine cycle = I-cycle (instruction cycle) + E-cycle (execution cycle)

The CPU follows a set of steps, called a machine cycle, for each instruction it carries out.

• By using a technique called pipelining, many CPUs can process more than one instruction at a time.

The machine cycle includes two smaller cycles:

• During the instruction cycle, the CPU "fetches" a command or data from memory and "decodes" it for the CPU.
• During the execution cycle, the CPU carries out the instruction and may store the instruction's result in memory.
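
As an illustration only, the toy C sketch below walks through fetch, decode, execute, and store for a hypothetical one-byte instruction format (high four bits = opcode, low four bits = operand; the opcodes OP_ADD, OP_SUB, OP_HALT are invented for this example). Real machine cycles are carried out in hardware:

    #include <stdio.h>

    enum { OP_HALT = 0x0, OP_ADD = 0x1, OP_SUB = 0x2 };

    int main(void)
    {
        unsigned char memory[] = { 0x15, 0x23, 0x00 }; /* ADD 5, SUB 3, HALT */
        int pc = 0;          /* program counter: address of the next instruction */
        int accumulator = 0; /* a register holding the running result            */

        for (;;) {
            unsigned char instruction = memory[pc++];  /* I-cycle: fetch  */
            int opcode  = instruction >> 4;            /* I-cycle: decode */
            int operand = instruction & 0x0F;

            if (opcode == OP_HALT)                     /* E-cycle: execute */
                break;
            if (opcode == OP_ADD)
                accumulator += operand;
            else if (opcode == OP_SUB)
                accumulator -= operand;
            /* E-cycle: store (here the result simply stays in the accumulator) */
        }

        printf("result = %d\n", accumulator);  /* 5 - 3 = 2 */
        return 0;
    }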

RAM Capacity, Word Size, and Processor Speed

• As main memory capacity is measured in MB, processor capacity is measured in word size.
• Word size is the number of bits the processor can (1) hold in its registers, (2) process at one time, and (3) send through its local bus.

Expansion Buses

Bus

• A BUS is a path between the components of a computer. Data and instructions travel along these paths.

• The Data Bus width determines how many bits can be transmitted between the CPU and other devices.

• The Address Bus runs only between the CPU and RAM, and carries nothing but memory addresses for the CPU to use.

• Peripheral devices are connected to the CPU by an expansion bus


Processing Speed

Factors Affecting Processing Speed

• Registers
• RAM
• The System Clock
• The Bus
• Cache Memory

System Clock

The computer's system clock sets the pace for the CPU by using a vibrating quartz crystal.

• A single "tick" of the clock is the time required to turn a transistor off and back on. This is called a clock cycle.

• Clock cycles are measured in hertz (Hz), a measure of cycles per second.
• If a computer has a clock speed of 300 MHz, then its system clock "ticks" 300 million times every second.
• The faster a PC's clock runs, the more instructions the PC can execute each second.

Cache Memory

• Pronounced "cash," cache is a special high-speed storage mechanism.
• It can be either a reserved section of main memory or an independent high-speed storage device.
• Two types of caching are commonly used in personal computers: memory caching and disk caching.
• A memory cache, sometimes called a cache store or RAM cache, is a portion of memory made of high-speed static RAM (SRAM) instead of the slower and cheaper dynamic RAM (DRAM) used for main memory.


• Memory caching is effective because most programs access the same data or instructions over and over. By keeping as much of this information as possible in SRAM, the computer avoids accessing the slower DRAM.

• Some memory caches are built into the architecture of microprocessors.
• The Intel 80486 microprocessor, for example, contains an 8K memory cache, and the Pentium has a 16K cache. Such internal caches are often called Level 1 (L1) caches.

• Most modern PCs also come with external cache memory, called Level 2 (L2) caches. These caches sit between the CPU and the DRAM. Like L1 caches, L2 caches are composed of SRAM but they are much larger.

• Disk caching works under the same principle as memory caching, but instead of using high-speed SRAM, a disk cache uses conventional main memory.

• The most recently accessed data from the disk (as well as adjacent sectors) is stored in a memory buffer. When a program needs to access data from the disk, it first checks the disk cache to see if the data is there.

• Disk caching can dramatically improve the performance of applications, because accessing a byte of data in RAM can be thousands of times faster than accessing a byte on a hard disk.

Cache Memory (Easier Version)

• Cache memory is high-speed memory that holds the most recent data and instructions that have been loaded by the CPU.

• Cache is located directly on the CPU or between the CPU and RAM, making it faster than normal RAM.

• CPU-resident cache is called Level-1 (L1) cache.
• External cache is called Level-2 (L2) cache.
• The amount of cache memory has a tremendous impact on the computer's speed.


A Simple Example of Cache

• A librarian example

What did you gain from this example? From this example you can see several important facts about caching:

• Cache technology is the use of a faster but smaller memory type to accelerate a slower but larger memory type.

• When using a cache, you must check the cache to see if an item is in there. If it is there, it's called a Cache Hit. If not, it is called a Cache Miss and the computer must wait for a round trip from the larger, slower memory area.
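
A toy C sketch (illustrative only) of this hit/miss logic, using a one-entry cache in front of a deliberately "slow" lookup (slow_lookup is invented for this example and stands in for the larger, slower memory):

    #include <stdio.h>

    static int slow_lookup(int key)
    {
        return key * key;   /* pretend this takes a long round trip */
    }

    int main(void)
    {
        int cached_key = -1, cached_value = 0;  /* the cache starts out empty */
        int requests[] = { 4, 4, 7, 4 };
        int n = sizeof requests / sizeof requests[0];

        for (int i = 0; i < n; i++) {
            int key = requests[i];
            if (key == cached_key) {
                printf("key %d: cache hit  -> %d\n", key, cached_value);
            } else {
                cached_value = slow_lookup(key);   /* cache miss: go to slow memory */
                cached_key = key;
                printf("key %d: cache miss -> %d\n", key, cached_value);
            }
        }
        return 0;
    }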

Focus on the Microcomputer: What's Inside

What factors should you consider before purchasing a microcomputer?

System Unit

• The box that contains the microcomputer's processing hardware and other components is called the system unit.

• Power Supply
• Motherboard
• Microprocessor
• RAM
• ROM
• Cache, VRAM, Flash memory
• Ports
• Expansion Slots
• Bus Lines and PC slots and cards

Power Supply

• The component that supplies power to a computer. Most personal computers can be plugged into standard electrical outlets.
• The power supply then pulls the required amount of electricity and converts the AC current to DC current.
• It also regulates the voltage to eliminate spikes and surges common in most electrical systems.
• Not all power supplies, however, do an adequate voltage-regulation job, so a computer is always susceptible to large voltage fluctuations.

Motherboard

• Also called the System Board.
• It is the main circuit board in the system unit.
• All or most of the components are plugged onto the motherboard.


Microprocessor (Intel)

• Since 1978, Intel's processors have evolved from the 8086 and the 8088 to the 80286, 80386, and 80486, to the Pentium family of processors. All are part of the 80x86 line.

• Intel's Pentium family of processors includes the Pentium, Pentium Pro, Pentium with MMX, Pentium II, Pentium III, Celeron, and Xeon processors.

• The earliest Intel processors included only a few thousand transistors. Today's Pentium processors include 9.5 million transistors or more.

Microprocessors (AMD)

• Advanced Micro Devices (AMD) was long known as a provider of lower-performance processors for use in low cost computers.

• With its K6 line of processors, AMD challenged Intel's processors in terms of both price and performance.

• With the K6-III processor, AMD broke the 600 MHz barrier, claiming the "fastest processor" title for the first time in IBM-compatible computers.

Microprocessors (Motorola)

• Motorola makes the CPUs used in Macintosh and PowerPC computers.
• Macintosh processors use a different basic structural design (architecture) than IBM-compatible PC processors.
• With the release of the G3 and G4 PowerPC processors, Macintosh computers set new standards for price and performance.


RAM

• Pronounced "ramm," an acronym for Random Access Memory, a type of computer memory that can be accessed randomly; that is, any byte of memory can be accessed without touching the preceding bytes.

• RAM is the most common type of memory found in computers and other devices, such as printers.

• There are two basic types of RAM:
  o Dynamic RAM (DRAM)
  o Static RAM (SRAM)

• The two types differ in the technology they use to hold data, dynamic RAM being the more common type.

• Dynamic RAM needs to be refreshed thousands of times per second. Static RAM does not need to be refreshed, which makes it faster; but it is also more expensive than dynamic RAM.

• Both types of RAM are volatile, meaning that they lose their contents when the power is turned off.

• Similar to a micro-processor, a memory chip is an Integrated Circuit (IC) made of millions of transistors and capacitors.

• In the most common form of computer memory, Dynamic Random Access Memory (DRAM), a transistor and a capacitor are paired to create a memory cell, which represents a single bit of data. The capacitor holds the bit of information -- a 0 or a 1 (see How Bits and Bytes Work for information on bits). The transistor acts as a switch that lets the control circuitry on the memory chip read the capacitor or change its state.

• A capacitor is like a small bucket that is able to store electrons. To store a 1 in the memory cell, the bucket is filled with electrons. To store a 0, it is emptied. The problem with the capacitor's bucket is that it has a leak. In a matter of a few milliseconds a full bucket becomes empty.

• Therefore, for dynamic memory to work, either the CPU or the memory controller has to come along and recharge all of the capacitors holding a 1 before they discharge. To do this, the memory controller reads the memory and then writes it right back. This refresh operation happens automatically thousands of times per second.

• The capacitor in a dynamic RAM memory cell is like a leaky bucket. It needs to be refreshed periodically or it will discharge to 0.

• This refresh operation is where dynamic RAM gets its name. Dynamic RAM has to be dynamically refreshed all of the time or it forgets what it is holding. The downside of all of this refreshing is that it takes time and slows down the memory.

• Static RAM uses a completely different technology. In static RAM, a form of flip-flop holds each bit of memory. A flip-flop for a memory cell takes four or six transistors along with some wiring, but never has to be refreshed. This makes static RAM significantly faster than dynamic RAM. However, because it has more parts, a static memory cell takes a lot more space on a chip than a dynamic memory cell.


Therefore you get less memory per chip, and that makes static RAM a lot more expensive.

• So static RAM is fast and expensive, and dynamic RAM is less expensive and slower. Therefore static RAM is used to create the CPU's speed-sensitive cache, while dynamic RAM forms the larger system RAM space.

Memory Modules (RAM)

• SIMMs: Acronym for Single In-line Memory Module, a small circuit board that can hold a group of memory chips. Typically, SIMMs hold up to 8 (on Macintoshes) or 9 (on PCs) RAM chips. On PCs, the ninth chip is often used for parity error checking. Unlike memory chips, SIMMs are measured in bytes rather than bits. SIMMs are easier to install than individual memory chips.

• DIMMs: Dual In-line Memory Module. Can hold up to 18 chips.

ROM

• Pronounced "rohm," an acronym for Read-Only Memory, computer memory on which data has been prerecorded. Once data has been written onto a ROM chip, it cannot be removed and can only be read.

• Also known as firmware.
• Unlike main memory (RAM), ROM retains its contents even when the computer is turned off. ROM is referred to as being nonvolatile, whereas RAM is volatile.

ROM Types

• ROM
• PROM
• EPROM
• EEPROM


ROM Types

• Creating ROM chips totally from scratch is time-consuming and very expensive in small quantities. Mainly for this reason, developers created a type of ROM known as Programmable Read-Only Memory (PROM). Blank PROM chips can be bought inexpensively and coded by anyone with a special tool called a programmer.

• PROMs can only be programmed once.
• Erasable Programmable Read-Only Memory (EPROM) chips can be rewritten many times.
• Erasing an EPROM requires a special tool that emits a certain frequency of ultraviolet (UV) light. EPROMs are configured using an EPROM programmer that provides voltage at specified levels depending on the type of EPROM used.

• Though EPROMs are a big step up from PROMs in terms of reusability, they still require dedicated equipment and a labor-intensive process to remove and reinstall them each time a change is necessary. Also, changes cannot be made incrementally to an EPROM; the whole chip must be erased. Electrically Erasable Programmable Read-Only Memory (EEPROM) chips remove the biggest drawbacks of EPROMs.
• In EEPROMs:
  o The chip does not have to be removed to be rewritten.
  o The entire chip does not have to be completely erased to change a specific portion of it.
  o Changing the contents does not require additional dedicated equipment.

Ports

• An interface on a computer to which you can connect a device.
• Personal computers have various types of ports. Internally, there are several ports for connecting disk drives, display screens, and keyboards. Externally, personal computers have ports for connecting modems, printers, mice, and other peripheral devices.

Types of Ports

Serial Ports

• Considered to be one of the most basic external connections to a computer, the serial port has been an integral part of most computers for more than 20 years. Although many of the newer systems have done away with the serial port completely in favour of USB connections, most modems still use the serial port, as do some printers, PDAs and digital cameras


• The name "serial" comes from the fact that a serial port "serializes" data. That is,

it takes a byte of data and transmits the 8 bits in the byte one at a time. • The advantage is that a serial port needs only one wire to transmit the 8 bits

(while a parallel port needs 8). • The disadvantage is that it takes 8 times longer to transmit the data than it

would if there were 8 wires. Serial ports lower cable costs and make cables smaller.

• Serial ports, also called Communication (COM) ports, are bi-directional.
• Bi-directional communication allows each device to receive data as well as transmit it. Serial devices use different pins to receive and transmit data -- using the same pins would limit communication to half-duplex, meaning that information could only travel in one direction at a time. Using different pins allows for full-duplex communication, in which information can travel in both directions at once.
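
A small C sketch (illustrative only; send_serial is an invented function, not a real driver call) of what "serializing" a byte means: the 8 bits leave one at a time on a single line, whereas a parallel port would place all 8 bits on 8 separate wires in one step:

    #include <stdio.h>

    /* Shift the 8 bits of a byte out one at a time, least significant bit
       first (the usual order on RS-232-style serial links). */
    static void send_serial(unsigned char byte)
    {
        for (int bit = 0; bit < 8; bit++)
            printf("bit %d on the wire: %d\n", bit, (byte >> bit) & 1);
    }

    int main(void)
    {
        send_serial('A');   /* 'A' = 01000001 */
        return 0;
    }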

Parallel Ports

• When using a parallel port, the computer sends the data 1 byte at a time (8 bits in parallel, as opposed to 8 bits serially as in a serial port).
• Parallel ports are used to connect a host of popular computer peripherals:
  o Printers
  o Scanners
  o CD-writers
  o External hard drives
  o Network adapters
  o Tape backup drives

SCSI Ports

• Provide an interface for transferring data at high speeds to SCSI-compatible devices, such as hard disks, scanners, and CD-ROMs.

Video Adapter Ports

• Used to connect a video display monitor.

Game Ports

• Used to attach joysticks.

Infrared Ports

• This enables you to transfer data from one device to another without any cables. For example, if both your laptop computer and printer have Infrared ports, you can simply put your computer in front of the printer and output a document, without needing to connect the two with a cable.

• Infrared ports support roughly the same transmission rates as traditional parallel ports.

• The only restriction on their use is that the two devices must be within a few feet of each other and there must be a clear line of sight between them.


Current and Future Technologies

A. RISC chips and parallel processing are some of the most promising research directions in the field of computers. Others include:

B. Gallium arsenide: Gallium arsenide is a material that allows electrical impulses to be transmitted several times faster than with silicon.

• Gallium arsenide also requires less power and can operate at higher temperatures than silicon can.

• However, chip designers are unable to fit as many circuits onto a chip as they can with silicon.

C. Superconductors: Silicon is a semiconductor—electricity flows through the material with some resistance. This leads to heat buildup and the risk of circuits melting down.

• A superconductor is a material that allows electricity to flow through it without resistance.

• However, the superconducting materials found so far are only superconductors at subzero temperatures. The search is still on for a practical superconductor—which would lead to circuitry 100 times faster than today’s silicon chips.

D. Opto-electronic processing: Today’s computers are electronic, tomorrow’s might be opto-electronic—using light, not electricity.

• Light is much faster than electricity—fiber optic networks move information 3000 times faster than conventional electronic networks. However, the signals get bogged down when they have to be processed by silicon chips.

• Opto-electronic chips and computers (using lasers, lights and mirrors) would remove this bottleneck.

E. Nanotechnology: Nanotechnology, nanoelectronics, etc. are all terms associated with a measurement known as a nanometer—one billionth of a meter, at the level of atoms and molecules. (A human hair is approximately 100,000 nanometers in diameter.)

• Nanostructures are built one atom or molecule at a time.
• Researchers have already forged layers of individual molecules into tiny computer switches. They plan to assemble these switches and other components into devices called "Chemically Assembled Electronic Devices," or CAENs.

a. CAENs would be billions of times more powerful than today’s personal computers.

b. The new switches are the work of scientists at Hewlett-Packard Laboratories (Palo Alto, CA) and the University of California, Los Angeles (UCLA).

c. The first prototype should be finished by 2002; the first CAEN components should be up and running within 10 years.

F. Biotechnology: Potentially, biotechnology could be used to grow cultures of bacteria that, when exposed to light, emit a small electrical charge, for example. The properties of this "biochip" could be used to represent the on/off digital signals used in computing.