
0.1

CS 356 Unit 0

Class Introduction

Basic Hardware Organization

0.2

What is This Course About?

• Introduction to Computer Systems

– a.k.a. Computer Organization or Architecture

• Filling in the "systems" details

– How is software generated (compilers, libraries) and executed (OS, etc.)

– How does computer hardware work and how does it execute the software I write?

• Lays a foundation for future CS courses

– CS 350 (Operating Systems), ITP/CS 439 (Compilers), CS 353/EE 450 (Networks), EE 457 (Computer Architecture)

0.3

Today's Digital Environment

[Figure: the layered digital abstraction stack, shown for two networked machines: Applications and Algorithms on top, then C++ / Java / Python, OS / Libraries, Assembly / Machine Code, Processor / Memory / GPU / FPGAs, Digital Logic, Transistors / Circuits, and Voltage / Currents at the bottom, with Networks connecting the two machines. Our focus in CS 356 spans the middle layers.]

0.4

Why is System Knowledge Important?

• Increase productivity

– Debugging

– Build/compilation

• High-level language abstractions break down at certain points

• Improve performance

– Take advantage of hardware features

– Avoid pitfalls presented by the hardware

• Basis of understanding security and exploits

0.5

What Will You Learn

• Binary representation systems

• Assembly

• Processor organization

• Memory subsystems (caching, virtual memory)

• Compiler optimization and linking

0.6

Syllabus

0.7

20-Second Timeout

• Who Am I?

– Teaching faculty in EE and CS

– Undergrad at USC in CECS

– Grad at USC in EE

– Work(ed) at Raytheon

– Learning Spanish (and Chinese?)

– Sports enthusiast!

• Basketball

• Baseball

• Ultimate Frisbee?

0.8

ABSTRACTIONS & REALITY

0.9

Abstraction vs. Reality

• Abstraction is good until reality intervenes

– Bugs can result

– It is important to understand underlying HW implementations

– Sometimes abstractions don't provide the control or performance you need

0.10

Reality 1

• ints are not integers and floats aren't reals

• Is x² >= 0?

– Floats: Yes

– Ints: Not always

• 40,000*40,000 = 1,600,000,000

• 50,000*50,000 = -1,794,967,296

• Is (x+y)+z = x+(y+z)?

– Ints: Yes

– Floats: Not always

• (1e20 + -1e20) + 3.14 = 3.14

• 1e20 + (-1e20 + 3.14) = around 0

0.11

Reality 2

• Knowing some assembly is critical

• You'll probably never write much (any?) code in assembly as compilers are often better than even humans at optimizing code

• But knowing assembly is critical when

– Tracking down some bugs

– Taking advantage of certain HW features that a compiler may not be able to use

– Implementing system software (OS/compilers/libraries)

– Understanding security and vulnerabilities

0.12

Reality 3

• Memory matters!

– Memory is not infinite

– Memory can impact performance more than computation for many applications

– Source of many bugs both for single-threaded and especially parallel programs

– Source of many security vulnerabilities

0.13

Reality 4

• There's more to performance than asymptotic complexity

– Constant factors matter!

– Even operation counts do not predict performance

• How long an instruction takes to execute is not deterministic… it depends on what other instructions have been executed before it

– Understanding how to optimize for the processor organization and memory can lead to up to an order of magnitude performance increase

0.14

COMPUTER ORGANIZATION AND ARCHITECTURE

Drivers and Trends

0.15

Computer Components

• Processor

– Executes the program and performs all the operations

• Main Memory

– Stores data and program (instructions)

– Different forms:

• RAM = read and write but volatile (loses values when power is off)

• ROM = read-only but non-volatile (maintains values when power is off)

– Significantly slower than processor speeds

• Input / Output Devices

– Generate and consume data from the system

– MUCH, MUCH slower than the processor

[Figure: block diagram. The processor (arithmetic + logic + control circuitry) reads instructions and operates on data. Memory (RAM) holds the software program (instructions, e.g. "Combine 2c. flour", "Mix in 3 eggs") and data (operands). Input devices and a disk drive supply data; output devices consume it.]

0.16

Architecture Issues

• Fundamentally, computer architecture is all about the different ways of answering the question: “What do we do with the ever-increasing number of transistors available to us?”

• The goal of a computer architect is to take the increasing transistor budget of a chip (i.e. Moore’s Law) and produce an equivalent increase in computational ability

0.17

Moore’s Law, Computer Architecture & Real-Estate Planning

• Moore’s Law = the number of transistors able to be fabricated on a chip grows exponentially with time

• Computer architects decide, “What should we do with all of this capability?”

• Similarly, real-estate developers ask, “How do we make best use of the land area given to us?”

[Figure: USC University Park Development Master Plan, http://re.usc.edu/docs/University%20Park%20Development%20Project.pdf]

0.18

Transistor Physics

• Cross-section of transistors on an IC

• Moore’s Law is founded on our ability to keep shrinking transistor sizes

– Gate/channel width shrinks

– Gate oxide shrinks

• Transistor feature size is referred to as the implementation “technology node”

0.19

Technology Nodes

0.20

Growth of Transistors on Chip

[Chart: transistor count (thousands, log scale) vs. year, 1975-2010. Data points: Intel 8086 (29K), Intel '286 (134K), Intel '386 (275K), Intel '486 (1.2M), Pentium (3.1M), Pentium Pro (5.5M), Pentium 2 (7M), Pentium 3 (28M), Pentium 4 Northwood (42M), Pentium 4 Prescott (125M), Pentium D (230M), Core 2 Duo (291M)]

0.21

Implications of Moore’s Law

• What should we do with all these transistors?

– Put additional simple cores on a chip

– Use transistors to make cores execute instructions faster

– Use transistors for more on-chip cache memory

• Cache is an on-chip memory used to store data the processor is likely to need

• Faster than main memory (RAM), which is on a separate chip and much larger (thus slower)

0.22

Memory Wall Problem

• Processor performance is increasing much faster than memory performance

[Chart: processor performance improving ~55%/year vs. memory ~7%/year, producing a growing processor-memory performance gap. Source: Hennessy and Patterson, Computer Architecture – A Quantitative Approach (2003)]

0.23

Cache Example

• Small, fast, on-chip memory to store copies of recently-used data

• When the processor attempts to access data it will check the cache first

– If the cache has the desired data, it can supply it quickly

– If the cache does not have the data, it must go to the main memory (RAM) to access it

[Figure: processor with on-chip cache connected to RAM over the system bus; one case where the cache has the desired data, one where it does not and RAM must be accessed]

0.24

Pentium 4

[Die photo: L2 cache, L1 data cache, and L1 instruction cache labeled]

0.25

Increase in Clock Frequency

[Chart: clock frequency (MHz, log scale) vs. year, 1975-2010. Data points: Intel 8086 (8), Intel '286 (12.5), Intel '386 (20), Intel '486 (25), Pentium (60), Pentium Pro (200), Pentium 2 (266), Pentium 3 (700), Pentium 4 Willamette (1500), Core 2 Duo (2400), Pentium D (2800), Pentium 4 Prescott (3600)]

0.26

Intel Nehalem Quad Core

0.27

Progression to Parallel Systems

• If power begins to limit clock frequency, how can we continue to achieve more and more operations per second?

– By running several processor cores in parallel at lower frequencies

– Two cores @ 2 GHz vs. 1 core @ 4 GHz yield the same theoretical maximum ops./sec.

• We’ll end our semester by examining (briefly) a few parallel architectures

– Chip multiprocessors (multicore)

– Graphics Processing Units (SIMT = Single Instruction, Multiple Thread)

0.28

GPU Chip Layout

• 2560 small cores

• Upwards of 7.2 billion transistors

• 8.2 TFLOPS

• 320 GBytes/sec

Photo: http://www.theregister.co.uk/2010/01/19/nvidia_gf100/

Source: NVIDIA

0.29

Intel Haswell Quad Core

0.30

Flynn’s Taxonomy

• Categorize architectures based on the relationship between program (instructions) and data

SISD: Single Instruction, Single Data

• Typical, single-threaded processor

SIMD / SIMT: Single Instruction, Multiple Data (Single Instruction, Multiple Thread)

• Vector units (e.g. Intel MMX, SSE, SSE2)

• GPUs

MISD: Multiple Instruction, Single Data

• Less commonly used (some streaming architectures may be considered in this category)

MIMD: Multiple Instruction, Multiple Data

• Multi-threaded processors

• Typical CMP/multicore system (task parallelism with different threads executing)