
ACCELERATING ENTERPRISE APPLICATIONS WITH FLASH STORAGE

Randal Sagrillo
Principal Product Performance Manager, Oracle
[email protected]

Program Agenda

• Enterprise Application Performance
• Do you have an I/O bottleneck?
• Flash Storage
• Database Smart Flash Cache
• Flash Cache in clusters
• Flash disk groups
• Q&A

Enterprise Application Issues

• Batch job duration too long
• Reporting/ad hoc query times too long
• OLTP times too long (business value)
  – Or OLTP rate not high enough (operational value)


“Do you have an I/O bottleneck?”

Top 5 Timed Events
Event                            Waits  Time (s)  Avg wait (ms)  %Total Call Time  Wait Class
-----------------------  ------------  --------  -------------  ----------------  ----------
db file sequential read    19,858,182    72,997              4              41.0  User I/O
CPU time                                 55,805                             31.4
log file sync               3,840,570    33,452              9              18.8  Commit
log file parallel write     3,356,001    12,749              4               7.2  System I/O
db file scattered read      3,672,892    10,018              3               5.6  User I/O

Database I/O Bottlenecks: Wait Events

• Typical I/O wait types, foreground (a quick check is sketched after this list)
  – db file sequential read: single-block read from disk into the database buffer cache
  – db file scattered read: wait for a multi-block read into the buffer cache
  – read by other session: another session waiting on the same block read as above
  – direct path read: read bypassing the buffer cache, directly into the PGA
• Typical I/O wait types, background
  – log file parallel write: redo written by LGWR (typically to NVRAM)
  – db file parallel write: asynchronous datafile writes from DBWR
  – log file sequential read: reads to build archive logs, Data Guard
  – log archive I/O, RMAN, etc.
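
Outside of a full AWR report, a quick way to see whether User I/O waits dominate is the cumulative wait interface. Below is a minimal sketch against V$SYSTEM_EVENT; the ranking and the derived columns are illustrative, not part of the original slides.

-- Minimal sketch: top non-idle waits since instance startup.
-- User I/O events such as 'db file sequential read' at the top of this
-- list point to an I/O bottleneck worth confirming with AWR.
SELECT *
FROM  (SELECT event,
              wait_class,
              total_waits,
              ROUND(time_waited_micro / 1e6)                               AS time_waited_s,
              ROUND(time_waited_micro / 1000 / NULLIF(total_waits, 0), 1)  AS avg_wait_ms
       FROM   v$system_event
       WHERE  wait_class <> 'Idle'
       ORDER  BY time_waited_micro DESC)
WHERE ROWNUM <= 5;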

Typical Storage Bottlenecks

[Figure: I/O path from initiator to target, comparing I/O demand against supply in terms of IOPS, MB/sec, and milliseconds of service time]

• Maximum IOPS delivered
  – Talked about the most, but least important for enterprise applications
  – Measures concurrency
• Maximum data rate
  – Really measures channel and disk bandwidth
• Shortest service time
  – Usually the most important for databases (a per-file check is sketched below)
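
Service time per datafile can be pulled from the cumulative file statistics. A minimal sketch follows; V$FILESTAT times are in hundredths of a second, hence the factor of 10 to get milliseconds, and the column choice is illustrative.

-- Minimal sketch: average read service time per datafile.
SELECT d.name                                          AS datafile,
       f.phyrds                                        AS physical_reads,
       ROUND(f.readtim * 10 / NULLIF(f.phyrds, 0), 1)  AS avg_read_ms
FROM   v$filestat f
       JOIN v$datafile d ON d.file# = f.file#
ORDER  BY f.readtim DESC;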


Flash for Database Acceleration

• Flash arrays: TBs and up
  – 100s to over 1,000 random-read KIOPS
  – 100s of µs service times
  – Many 10s of GB/sec reads
  – SHARABLE!
• PCI flash cards: 100s of GB
  – 100K, approaching 1M, random-read IOPS
  – 100s of µs service times
  – ~1 GB/sec reads
• SSDs: many GB
  – Many 10K IOPS
  – 100s of µs service times
  – Reads at interface speed (GB/sec)


Database Smart Flash Cache

• Acts as a Level 2 SGA
• Changes physical read I/O into logical I/O
• Rule of sizing: 2x to 10x the buffer cache size
• Best accelerates read-intensive workloads (a hit-rate check is sketched below)

[Figure: with only the buffer cache, many I/Os go to storage; with Database Smart Flash Cache behind the buffer cache, few I/Os reach storage]
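
Whether the cache is actually absorbing reads can be checked from the cumulative statistics. This is a minimal sketch against V$SYSSTAT; the hit-percentage calculation is illustrative.

-- Minimal sketch: share of physical reads satisfied by Database Smart Flash Cache.
SELECT MAX(DECODE(name, 'physical read flash cache hits', value)) AS flash_cache_hits,
       MAX(DECODE(name, 'physical reads', value))                 AS physical_reads,
       ROUND(100 * MAX(DECODE(name, 'physical read flash cache hits', value))
                 / NULLIF(MAX(DECODE(name, 'physical reads', value)), 0), 1) AS flash_hit_pct
FROM   v$sysstat
WHERE  name IN ('physical reads', 'physical read flash cache hits');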

Example: Flash Cache Setup

• Aggregate flash modules into a pool
  – Concatenation only, no mirroring: it is a cache!
  – Best results seen with DB Automatic Storage Management (ASM)
  – This example used an OS volume manager (SVM)
• Set two init.ora parameters (example below)
  – db_flash_cache_file = <+flashdg/FlashCacheFile>
    • Path to the flash file / raw aggregation / metadevice
  – db_flash_cache_size = <flash pool size>
    • L2 SGA size: the amount of flash to use
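
As a concrete sketch, with a placeholder ASM path and a size chosen only for illustration (both are assumptions, not values from the slides):

-- Minimal sketch: configure Database Smart Flash Cache via the spfile.
-- '+FLASHDG/flashcachefile' and 100G are hypothetical placeholders; the
-- instance must be restarted for the new cache file to come into use.
ALTER SYSTEM SET db_flash_cache_file = '+FLASHDG/flashcachefile' SCOPE = SPFILE;
ALTER SYSTEM SET db_flash_cache_size = 100G SCOPE = SPFILE;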

Deployment Model/Layout

• Test conditions:
  – Read-only OLTP workload
  – Buffer cache size of 25 GB
  – DB working set 3x the buffer cache size
  – Varied database flash cache size from 0 to 100 GB

Business and Operational Results

• ‘db file sequential read’ gone as a ‘Top 5 Timed Event’
• Reduced transaction times
• Increased transaction rate
• Nearly 5x improvement
• 1:1 flash to cache size


RAC Considerations for Smart Flash Cache

• RAC scaling generally held
• Eliminates physical I/O if the block is in any node’s buffer cache
• But only checks blocks in the local node’s flash cache file (per-instance settings are sketched below)

[Figure: two-node RAC example showing each node’s buffer cache and Database Smart Flash Cache over shared storage, with memory coherence between the nodes’ buffer caches]
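
Since each instance only reads its own flash cache file, the cache is configured per instance in RAC. A minimal sketch with hypothetical instance names and device paths:

-- Minimal sketch: a local flash cache file for each RAC instance.
-- Instance names (orcl1, orcl2) and device paths are hypothetical.
ALTER SYSTEM SET db_flash_cache_file = '/dev/flash/orcl1_fc' SID = 'orcl1' SCOPE = SPFILE;
ALTER SYSTEM SET db_flash_cache_file = '/dev/flash/orcl2_fc' SID = 'orcl2' SCOPE = SPFILE;
ALTER SYSTEM SET db_flash_cache_size = 100G SID = '*' SCOPE = SPFILE;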

Real World - Before Flash Cache

• Notes
  – 15-minute snapshots ‘under load’, but 9.5 hours of buffer waits!
  – 77 minutes of commit time!

Top 5 Timed Foreground Events
Event                             Waits  Time (s)  Avg wait (ms)  %Total Call Time  Wait Class
--------------------------  -----------  --------  -------------  ----------------  -----------
db file sequential read       3,189,229    34,272             11              67.8  User I/O
CPU time                                    11,332                             22.4
log file sync                 2,247,374     4,612              2               9.1  Commit
gc cr grant 2-way             1,365,247       793              1               1.6  Cluster
enq: TX - index contention      140,257       720              5               3.1  Concurrency

Real World - After Flash Cache

• Notes
  – Average flash cache read time of 540 µs (ASM)
  – Other (previous) POCs saw 320-360 µs (ASM)

Top 5 Timed Foreground Events
Event                               Waits  Time (s)  Avg wait (ms)  %Total Call Time  Wait Class
-----------------------------  ---------  --------  -------------  ----------------  -----------
CPU time                                     11,353                             57.6
log file sync                   1,434,247     6,587              3              33.4  Commit
flash cache single block read   4,221,599     2,284              1              21.3  User I/O
buffer busy waits                 723,807     1,502            329               3.3  Concurrency
db file sequential read            22,727       182              8              67.8  User I/O
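
For reference, before/after comparisons like these come from standard AWR reports, generated from SQL*Plus with the stock script shipped under the Oracle home:

-- Generate an AWR report interactively; the script prompts for report type,
-- how many days of snapshots to list, and the begin/end snapshot IDs.
@?/rdbms/admin/awrrpt.sql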


Flash Disk Group Design

• “db file sequential read” is still the key indicator
• Accelerates write-intensive as well as read-intensive workloads
  – Indexes, hot tables, Flash Reco, etc. (an example follows this list)
• Requires you to manage data placement yourself, versus the database managing it (i.e. Smart Flash Cache)
• Note for RAC: RAC needs shared storage
• Requires at least 2:1 flash to usable storage, because of mirroring
• Not recommended for logs: NVRAM is faster than flash
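
As a sketch of the placement work this implies, a tablespace can be created on the flash disk group and a hot index rebuilt into it. The disk group, tablespace, and index names here are hypothetical.

-- Minimal sketch: move a hot index onto a flash-backed ASM disk group.
-- '+FLASH', flash_idx, and orders_pk are hypothetical names.
CREATE TABLESPACE flash_idx DATAFILE '+FLASH' SIZE 50G;

ALTER INDEX orders_pk REBUILD TABLESPACE flash_idx ONLINE;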

ASM Flash Disk Group Configuration

• ASM normal redundancy (flash module extents mirrored); a sketch follows
• Failure groups across SAS domains
  – Across chassis is even better
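
A minimal sketch of such a disk group, with hypothetical device paths standing in for flash modules on two separate SAS domains:

-- Minimal sketch: normal-redundancy flash disk group with failure groups
-- split across SAS domains, so mirrored extents land on different domains.
-- Device paths and the compatible.asm setting are hypothetical.
CREATE DISKGROUP flash NORMAL REDUNDANCY
  FAILGROUP sas_domain_1 DISK '/dev/rdsk/flash01', '/dev/rdsk/flash02'
  FAILGROUP sas_domain_2 DISK '/dev/rdsk/flash03', '/dev/rdsk/flash04'
  ATTRIBUTE 'compatible.asm' = '11.2';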

ASM Business and Operational Results

• Transaction times improved 50%
• Doubled new order throughput
• Did so for less than 20% of the original investment


Q&A