
Page 1:

© 2006 Hitachi Data Systems

RAID Concepts

A. Ian Vogelesang

Tools Competency Center (TCC)

Hitachi Data Systems

Hitachi Data Systems WebTech Series

Page 2:

• RAID Concepts

• Who should attend:

– Systems and Storage Administrators

– Storage Specialists & Consultants

– IT Team Lead

– System and Network Architects

– IT Staff

– Operations and IT Managers

– Others who are looking for storage management techniques

Hitachi Data Systems WebTech Educational Seminar Series

Page 3:

How RAID type impacts cost

• The factors we will examine
– Disk drive capacity vs. disk drive IOPS capability

– The impact of RAID level on disk drive activity

• Topics to cover along the way
– RAID concepts (RAID-1 vs. RAID-5 vs. RAID-6)

– The 30-second “elevator pitch” on data flow through the subsystem.

• Conclusion
– The I/O access pattern, rather than storage capacity in GB, is very often the determining factor.

Page 4:

Growth in recording density drives $/GB

[Chart: areal density (megabits/in²) by production year, 1960–2010, on a log scale from 10⁻³ to 10⁶, starting with the IBM RAMAC (the first hard disk drive). Annotations mark the 1st MR head, the 1st GMR head, the introduction of perpendicular recording, and growth rates of 25% CGR, 60% CGR, 100% CGR, and ~40%/yr areal density progress.]

Page 5:

Areal density growth will continue

[Chart: areal density (Gb/in²) vs. time for successive recording technologies — longitudinal (100–130 Gb/in²), perpendicular (500–800 Gb/in²), bit-patterned media (1,500–4,000 Gb/in²), and thermally-assisted writing (2,000–15,000 Gb/in²) — with projection dates around 2006, 2011, and 2014. Note: 10,000 Gb/in² = 10 Tb/in². Projected drives: 50 TB 3.5-inch, 12 TB 2.5-inch, 1 TB 1-inch.]

• 50 years: more than a 50-million-fold (~60 M fold) increase in areal density

Page 6:

Here’s the problem

• Drive capacities keep doubling every 1.5 years or so.

• If you take the data that used to be on two disk drives and put it onto one drive that's twice as big, you also combine the I/O activity that was on the original two drives onto the one double-size drive.

• The problem is that as drive capacity keeps increasing, the number of I/Os per second (IOPS) that a drive can handle has not been increasing.

– An I/O operation consists of a seek, ½ turn of latency, and the data transfer.

– Data transfer for a 4 KB block is now down to around 1% of a rotation.

– Positioning the head takes more than one rotation's worth of time (seek + ½ turn latency).

– IOPS capability is ALL about mechanical positioning (see the sketch below).
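The arithmetic behind that statement can be sketched quickly. The following is an illustrative back-of-the-envelope calculation, not from the slides; the seek time and media transfer rate are assumed values for a 15K RPM drive:

```python
def drive_iops(seek_ms, rpm, transfer_mb_s, block_kb=4, busy_target=0.5):
    """Random IOPS a drive can sustain at a given utilization ("busy") target."""
    latency_ms = 0.5 * 60_000.0 / rpm                 # half a rotation, in ms
    transfer_ms = block_kb / 1024.0 / transfer_mb_s * 1000.0
    service_ms = seek_ms + latency_ms + transfer_ms   # one random I/O
    return busy_target * 1000.0 / service_ms

# Assumed figures for a 15K RPM drive: ~3.8 ms average seek, ~75 MB/s media rate
print(round(drive_iops(3.8, 15000, 75.0)))   # roughly the mid-80s IOPS at 50% busy
```

Note how the 4 KB transfer contributes only a few hundredths of a millisecond; almost all of the service time is seek plus rotational latency.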

Page 7:

IOPS capability at 50% busy by drive type

4 KB random IOPS at 50% busy, by drive type:

Drive type | 10K73 | 10K146 | 10K300 | 15K73 | 15K146 | SATA 7K400
Read IOPS  |  63   |  65    |  65    |  86   |  86    |  38
Write IOPS |  59   |  61    |  61    |  81   |  81    |  23*

* Includes read-verify after write

• Note that IOPS capability is the same for different drive capacities with the same RPM.

• These are "green zone" upper limits per drive for back-end I/O, including RAID-penalty I/Os.

Page 8:

Access density capability

• Combining the data that used to be on two drives onto one double-size drive also combines (doubles) the I/O activity directed at the bigger drive. This illustrates that for a given workload there is a certain amount of I/O activity per GB of data.

• This activity per GB is called the “access density” of the workload, and is measured in IOPS per GB.

• Over the last few decades, as disk drive storage capacity has become much cheaper, it has become economical to store first graphics, then audio, and now video.

– The introduction of these new data types has reduced typical access densities by about a factor of 10 over the last 20 years.

• However, access density is going down slower than disk drive capacity is going up.

– Typical access densities are reported in the 0.6 to 1.0 IOPS per GB range (see the sketch below)
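A minimal sketch of the access-density arithmetic, using assumed workload figures and the per-drive IOPS limits from the earlier chart:

```python
def access_density(iops, gigabytes):
    """IOPS per GB -- works for a workload or for a drive filled with data."""
    return iops / gigabytes

# An assumed workload doing 2,000 IOPS against 2.5 TB of data:
print(access_density(2000, 2500))   # 0.8 IOPS/GB, in the typical range

# What a drive can support when completely filled (IOPS figures from the chart above):
print(access_density(86, 146))      # 15K146 drive: about 0.6 IOPS/GB
print(access_density(38, 400))      # SATA 7K400 drive: about 0.1 IOPS/GB
```

With these numbers the 15K drive could hold most of the example workload's data, but the SATA drive would hit its IOPS limit long before it was full.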

Page 9:

Random read IOPS capability by drive type

• This chart shows what access density each drive type can handle if you fill it up with data.

• The marker for each drive type marks its green-zone upper limit at 50% busy.

• The marker's position from left to right shows the maximum access density that the drive can comfortably handle.

[Chart: "Random read response by access density" — random read response time (ms, 0–50) vs. access density (IOPS/GB, 0–1.3), with curves for 7200, 10K, and 15K RPM drives and markers for the 7K400, 10K300, 10K146, 10K73, 15K300, 15K146, and 15K73 drive types.]

Page 10:

RAID makes the access density problem worse

• The basic idea behind RAID is to make sure that you don’t lose any data when a single drive fails.

• This means that whenever a host writes data to the subsystem, at least two disks need to be updated.

• The amount of extra disk drive I/O activity needed to handle write activity is the key factor in determining the lowest cost solution as a combination of disk drive RPM, disk drive capacity, and RAID type.

– So that's why we will look at how the different RAID levels work.

• It is very rare that the access density is so low that you can completely fill up the cheapest drive.

– Only for things like a home PVR will a 750 GB SATA drive make the smallest dent in your wallet while getting the job done.

Page 11:

30-second "elevator pitch" on subsystem data flow

• Random read hits are stripped off by cache and do not reach the back end.

• Random read misses go through cache unaltered and go straight to the appropriate back-end disk drive.
– This is the only type of I/O operation where the host always "sees" the performance of the back-end disk drive.

• Random writes
– The host sees random writes complete at electronic speed.
• The host only sees delay if too many pending writes build up.
– Each host random write is transformed, going through cache, into a multiple-I/O pattern that depends on RAID type.

• Sequential I/O
– Host sequential I/O is at electronic speed.
– Cache acts like a "holding tank".
– The back end puts [removes] "back-end buckets" of data into [out of] the tank to keep the tank at an appropriate level.

Page 12:

What is RAID?

• 1993 paper by a group of researchers at UC Berkeley
– http://www.eecs.berkeley.edu/Pubs/TechRpts/1993/CSD-93-778.pdf

• "Redundant Array of Inexpensive Disks"
– The original idea was to use cheap (i.e., PC) disk drives arranged in a RAID to give you "mainframe" reliability.

– Now most call it Redundant Array of Independent Disks

• A RAID is an arrangement of data on disk drives in such a way that if a disk drive fails, you can still get the data back somehow from the remaining disks

– RAID-1 is mirroring – just keep two copies

– RAID-5 uses parity – recovers from single drive failures

– RAID-6 uses dual parity – recovers from double drive failures

Page 13:

RAID-1 random reads / writes

• Also called “mirroring”

• Two copies of the data

• Requires 2x number of disk drives

[Diagram: two mirrored copies of the data — Copy #1 and Copy #2 each hold the same contents (ABC on both, or XYZ on both).]

• For reads, the data can be read from either disk drive

• Read activity distributed over both copies reduces disk drive busy (due to reads) to ½ of what it would be to read from a single (non-RAID) disk drive

[Diagram: a host write of XYZ is applied to both Copy #1 and Copy #2.]

• For writes, a copy must be written to both disk drives

• Two parity group disk drive writes for every host write

• We don't care what the previous data was; we just overwrite it with the new data (see the sketch below).
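A toy sketch of the RAID-1 behaviour just described (purely illustrative, not subsystem code): every write lands on both copies, and reads are spread over the two copies.

```python
import itertools

class Raid1Volume:
    """Toy RAID-1: two mirror copies; writes hit both, reads hit either."""

    def __init__(self):
        self.copies = [{}, {}]                       # two mirror "drives"
        self._pick = itertools.cycle([0, 1])         # alternate reads over both copies

    def write(self, block, data):
        for copy in self.copies:                     # 2 drive writes per host write
            copy[block] = data

    def read(self, block):
        return self.copies[next(self._pick)][block]  # 1 drive read, on either copy

vol = Raid1Volume()
vol.write(7, "XYZ")
print(vol.read(7), vol.read(7))   # XYZ XYZ -- served once from each copy
```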

Page 14:

RAID-1 sequential read

• 2 sets of parallel I/O operations, each set reading 4 data chunks (2 MB)
• Parity group data MB/s = 4 × drive MB/s

[Diagram: 2+2 layout — Chunks 1–8 striped across two drive pairs, with each chunk mirrored on its partner drive (Chunk 1 / Chunk 1', Chunk 2 / Chunk 2', and so on). On a sequential read, a different chunk can be read from each of the four drives at once.]

2+2 shown

Page 15:

RAID-1 sequential write

• 4 sets of parallel I/O operations, each writing 2 data chunks (1 MB) and their 2 mirror-copy chunks
• Parity group data MB/s = 2 × drive MB/s

[Diagram: the same 2+2 layout — each set of parallel writes puts a pair of data chunks and their mirror copies onto the four drives.]

2+2 shown

Page 16:

RAID-1 comments

• Since RAID-1 requires doubling the number of disk drives to store the data, people tend to think of RAID-1 as the most expensive type of RAID.

• However, due to the intensity of host access, in RAID subsystems one often cannot completely "fill up" a disk drive with data, because the drive would become too busy.

• RAID-1 offers the lowest "RAID penalty": only two disk drive I/Os per random write, compared to four for RAID-5 and six for RAID-6.

• For this reason, when the workload is sufficiently active and has a lot of random writes, RAID-1 will be the cheapest RAID type, because it generates the fewest disk drive I/O operations per random write.

Page 17:

RAID-1’s “RAID penalty”

• Penalty in space
– Double the number of disk drives required

• Penalty in disk drive utilization (disk drive % busy)
– Twice the number of I/O operations required for all writes
– No penalty for read operations; reads are distributed over twice the number of drives.

Page 18:

RAID-5 parity concept

• Each parity bit indicates whether or not there is an odd number of “1” bits in that bit position across the whole parity group (“odd parity”).

• If you add more data drives, you don’t add any more parity.

Data   Data   Data   (odd) parity
10011  11111  00000  01100

0 XOR 1 XOR 0 = 1 — there is an odd number of 1s in this bit position, so the parity bit is 1.
1 XOR 1 XOR 0 = 0 — with an even number of 1s in this bit position, the parity bit is set to 0.
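In code, the parity chunk is simply the bitwise XOR of the data chunks. A minimal sketch using the slide's values:

```python
from functools import reduce
from operator import xor

def parity_of(chunks):
    """Bitwise XOR across the data chunks gives the (odd) parity chunk."""
    return reduce(xor, chunks, 0)

data = [0b10011, 0b11111, 0b00000]   # the slide's three data chunks
print(f"{parity_of(data):05b}")      # -> 01100, the parity chunk shown on the slide
```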

Page 19:

RAID-5 – if drive containing parity fails

• You still have the data.
• Better reconstruct the parity on a spare disk drive right away, just in case a second drive fails.

Data   Data   Data   Parity (failed)
10011  11111  00000  01100

Page 20:

RAID-5 – if drive containing data fails

• If a drive that had data on it fails, you can reconstruct the missing data.

• Read the corresponding "chunk" from all the remaining data drives, and see how many "1" bits there are in each position.

• By comparing how many "1" bits there are in each bit position on the remaining disk drives with what the parity tells you there originally was, you can reconstruct the data.

• Better reconstruct the missing data on a spare disk drive right away, just in case a second drive fails.

Data     Data       Data   Parity
10011    (failed)   00000  01100

A "1" bit in the parity says there originally was an odd number of "1" data bits in that position across the data drives. Since the remaining data disks now hold an even number of "1" bits in that position, the missing data bit must be a "1". The failed drive's contents (11111) can be rebuilt position by position.
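A minimal sketch of the rebuild, again using the slide's values: XORing the surviving data chunks with the parity chunk recovers whatever the failed drive held.

```python
from functools import reduce
from operator import xor

def rebuild(surviving_chunks, parity):
    """XOR of the survivors and the parity recovers the failed drive's chunk."""
    return reduce(xor, surviving_chunks, parity)

parity = 0b01100
survivors = [0b10011, 0b00000]               # the drive holding 11111 has failed
print(f"{rebuild(survivors, parity):05b}")   # -> 11111, the missing data, recovered
```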

Page 21:

RAID-5 random read hit

• Read hits operate at electronic speed

• Just transfer data from cache

Data #1  Data #2  Data #3  Parity
10011    11111    00000    01100

[Diagram: the host reads data #3; a copy of data #3 (00000) is already in cache, so the data is transferred straight from cache.]

Page 22:

RAID-5 random read miss

• Read misses are the ONLY operation that “sees” the speed of the disk drive during normal (not overloaded) operation

• I.e. read misses are the only type of host I/O operation that does not complete at electronic speed with just an access to cache

Data #1  Data #2  Data #3  Parity
10011    11111    00000    01100

[Diagram: the host reads data #1, which is not in cache; the subsystem reads 10011 from the data #1 drive into cache (alongside the existing copy of data #3) and then transfers it to the host.]

Page 23:

RAID-5 random write

1) Read old data, read old parity

2) Remove old data from old parity giving “partial parity” (parity for the rest of the row)

3) Add new data into partial parity to generate “new parity”

4) Write new data and new parity to disk (see the sketch after the diagram)

Data #1  Data #2  Data #3  Parity
10011    11111    00000    01100

[Diagram: the host writes new data (11001) over data #1. In cache, the old data (10011) is removed from the old parity (01100), leaving the "partial parity" — the parity of the rest of the stripe (11111). The new data is then added into the partial parity, giving the new parity (00110). Finally the new data and the new parity are written to disk.]
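A minimal sketch of this read-modify-write sequence, using the slide's old data and old parity and the new-data value 11001 shown in the diagram:

```python
def raid5_small_write(old_data, old_parity, new_data):
    """The 4-I/O read-modify-write: 2 reads (old data, old parity) + 2 writes."""
    partial_parity = old_parity ^ old_data    # step 2: remove old data from parity
    new_parity = partial_parity ^ new_data    # step 3: add the new data back in
    return new_data, new_parity               # step 4: write both to disk

# Slide values: old data #1 = 10011, old parity = 01100, new data = 11001
new_data, new_parity = raid5_small_write(0b10011, 0b01100, 0b11001)
print(f"{new_parity:05b}")   # 00110 -- parity updated without reading data #2 or #3
```

The point of the partial parity is that only the data drive and the parity drive are touched; the other data drives in the stripe are never read.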

Page 24:

RAID-5 sequential read

• The subsystem “detects” that the host is reading sequentially after a few sequential I/Os

– (The first few are treated as random reads.)

• The subsystem performs “sequential pre-fetch” to load stripes of data from the parity group into cache in advance of when the host will request the data

• The subsystem can usually easily keep up with the host as transfers from the parity group are performed in parallel

[Diagram: stripes of data are pre-fetched from the parity group drives into cache ahead of the host's sequential read stream.]

Page 25:

RAID-5 sequential read example

• In parallel, read a chunk from each drive in the parity group.
• 3 sets of parallel I/O operations to read 12 chunks (6 MB)
• Parity group MB/s = 4 × drive MB/s

[Diagram: 3+1 layout with rotating parity — Chunks 1–3 with Parity 1,2,3; Chunks 4–6 with Parity 4,5,6; Chunks 7–9 with Parity 7,8,9; Chunks 10–12 with Parity 10,11,12. Each set of parallel I/Os reads one chunk from each of the four drives.]

Page 26:

RAID-5 sequential write

• First compute the parity chunk for a row.
• Then write the row to disk (see the sketch after the diagram).
• 4 sets of parallel I/O operations to write 12 data chunks (6 MB) plus 4 parity chunks
• Parity group data MB/s = 3 × drive MB/s

[Diagram: four full-stripe writes — Chunks 1–3 with Parity 1,2,3; Chunks 4–6 with Parity 4,5,6; Chunks 7–9 with Parity 7,8,9; Chunks 10–12 with Parity 10,11,12 — each row written to the four drives in one set of parallel I/Os, with the parity position rotating from row to row.]
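A minimal sketch of the full-stripe write path: because the whole row of new data is already in cache, parity is computed directly from it and nothing has to be read back from disk first.

```python
from functools import reduce
from operator import xor

def raid5_full_stripe_write(row_chunks):
    """Full-stripe write: parity is computed in cache, so no old data/parity reads.
    For 3+1 that is 4 parallel drive writes per 3 data chunks (33% extra I/O)."""
    parity = reduce(xor, row_chunks, 0)
    return list(row_chunks) + [parity]   # one write per drive, issued in parallel

row = [0b10011, 0b11111, 0b00000]
print([f"{c:05b}" for c in raid5_full_stripe_write(row)])   # last entry is 01100
```

Contrast this with the random-write case on the previous slide, where each host write costs four drive I/Os.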

Page 27:

RAID-5 comments

• For sequential reads and writes, RAID-5 is very good.
– It's very space efficient (smallest space for parity), and sequential reads and writes are efficient, since they operate on whole stripes.

• For low access density (light activity), RAID-5 is very good.
– The 4x RAID-5 write penalty is (nearly) invisible to the host, because it's non-synchronous.

• For workloads with higher access density and more random writes, RAID-5 can be throughput-limited due to all the extra parity group I/O operations to handle the RAID-5 "write penalty".

Page 28:

RAID-5 “RAID penalty”

• Penalty in space
– For 3+1, 33% extra space for parity
– For 7+1, 14% extra space for parity

• Penalty in disk drive utilization (disk drive % busy)
– Random writes: four times the number of I/O operations (300% extra I/Os)
– Sequential writes: 33% extra I/Os for 3+1; 14% extra I/Os for 7+1

Page 29:

RAID-6

• RAID-6 is an extension of the RAID-5 concept which uses two separate parity-type fields usually called “P” and “Q”.

• The mathematics are beyond a basic course*, but RAID-6 allows data to be reconstructed from the remaining drives in a parity group when any one or two drives have failed.
* The math is the same as for the ECC used to correct errors in DRAM memory or on the surface of disk drives.

• Each RAID-6 host random write turns into 6 parity group I/O operations
– Read old data, read old P, read old Q
– (Compute new P, new Q)
– Write new data, write new P, write new Q

• RAID-6 parity group sizes usually start at 6+2.
– This has the same space efficiency as RAID-5 3+1.

D1 D2 D3 D4 D5 D6 P Q

“6D + 2P” parity group

Page 30:

RAID-6 “RAID penalty”

• 6+2 penalty in space
– 33% extra space for parity

• 6+2 penalty in disk drive utilization (disk drive % busy)
– Random writes: six times the number of I/O operations (500% extra I/Os)
– Sequential writes: 33% extra I/Os

Page 31:

RAID-1 vs RAID-5 vs RAID-6 summary

• The concept of RAID with parity groups permits data to be recovered even upon a single drive failure for RAID-1 and RAID-5, or a double drive failure for RAID-6

• RAID-1 trades off more space utilization for lower RAID penalty for writes, and lower degradation after drive failure.

– RAID-1 can be cheaper (require fewer disk drives) than RAID-5 where there is concentrated random write activity

• RAID-5 achieves redundancy with less parity space overhead, but at the expense of a higher "RAID penalty" for random writes and a larger performance degradation upon a drive failure (see the back-end I/O sketch below)
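A rough back-end I/O sketch (an assumed model that uses only the per-write penalties quoted in these slides and, for simplicity, treats every host read as a cache miss):

```python
# Drive I/Os generated per host random write, per the penalties in these slides
WRITE_PENALTY = {"RAID-1": 2, "RAID-5": 4, "RAID-6": 6}

def backend_iops(host_iops, read_fraction, raid_type):
    """Back-end drive IOPS, assuming every read misses cache (worst case for reads)."""
    reads = host_iops * read_fraction
    writes = host_iops * (1.0 - read_fraction)
    return reads + writes * WRITE_PENALTY[raid_type]

# Example: 3,000 host IOPS at a 70:30 read:write ratio
for raid in ("RAID-1", "RAID-5", "RAID-6"):
    print(raid, backend_iops(3000, 0.70, raid))   # 3900.0, 5700.0, 7500.0
```

The same host workload generates almost twice as much back-end drive activity under RAID-6 as under RAID-1, which is why the write penalty, not raw capacity, so often decides the drive count.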

Page 32:

30-second "elevator pitch" on subsystem data flow

• Random read hits are stripped off by cache and do not reach the back end.

• Random read misses go through cache unaltered and go straight to the appropriate back-end disk drive.
– This is the only type of I/O operation where the host always "sees" the performance of the back-end disk drive.

• Random writes
– The host sees random writes complete at electronic speed.
• The host only sees delay if too many pending writes build up.
– Each host random write is transformed, going through cache, into a multiple-I/O pattern that depends on RAID type.

• Sequential I/O
– Host sequential I/O is at electronic speed.
– Cache acts like a "holding tank".
– The back end puts [removes] "back-end buckets" of data into [out of] the tank to keep the tank at an appropriate level.

Page 33:

RAID-5 can often be more expensive

• See how much busier the "back-end" disk drives are for the RAID-5 configuration, all due to random writes (the solid blue portion of the original chart).

• In this case, the RAID-1 configuration was cheaper, because fewer disk drives were needed to handle the back-end I/O activity.

• The RAID-1 drives could be completely filled, whereas the RAID-5 drives could only be filled to 55% of their capacity (see the sizing sketch below).
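An illustrative sizing sketch along these lines (the workload and drive figures are assumptions, not the configuration behind this slide): a configuration needs enough drives to hold the data and enough drives to carry the back-end IOPS, whichever is larger.

```python
import math

WRITE_PENALTY  = {"RAID-1": 2,   "RAID-5": 4}          # drive I/Os per host random write
SPACE_OVERHEAD = {"RAID-1": 2.0, "RAID-5": 4.0 / 3.0}  # 2 copies vs. 3+1

def drives_needed(data_gb, host_iops, read_fraction, drive_gb, drive_iops_limit, raid):
    backend = host_iops * (read_fraction + (1 - read_fraction) * WRITE_PENALTY[raid])
    by_capacity = math.ceil(data_gb * SPACE_OVERHEAD[raid] / drive_gb)
    by_iops = math.ceil(backend / drive_iops_limit)
    return max(by_capacity, by_iops)   # whichever constraint needs more drives

# Assumed workload: 4 TB of data, 3,000 host IOPS, 60% reads, on 146 GB 15K drives
for raid in ("RAID-1", "RAID-5"):
    print(raid, drives_needed(4000, 3000, 0.60, 146, 86, raid))
# With these assumptions RAID-1 is capacity-limited (its drives can be filled),
# while RAID-5 is IOPS-limited (its drives can only be partly filled).
```

With these assumed numbers RAID-1 needs 55 drives and RAID-5 needs 77, even though RAID-5 uses less space for redundancy, illustrating the point of this slide.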

Page 34:

Conclusions – factors driving lowest cost

• The lowest cost configuration in terms of disk drive RPM, disk drive capacity, and RAID type depends strongly on the access density and the read:write ratio.

• If there is even moderate access density with significant random write activity, RAID-1 will often turn out to be the lowest cost total solution, due to being able to fill up more of the drives' capacity with data.

• Where access densities are higher, 15K RPM drives will often turn out to offer the lowest cost overall solution.

• SATA drives, due to their low IOPS capability, can only be filled if the data has very low access density, and therefore are rarely the cheapest.

Page 35:

Upcoming WebTech Sessions:

• 19 September - Enterprise Data Replication Architectures that Work: Overview and Perspectives

• 17 October – 10 Steps To Determine if SANs Are Right For You

www.hds.com/webtech

Page 36:

Questions/Discussion