
DB203 - Windows Server 2012 R2 & SQL Server 2014 Infrastructure

Michael Frandsen

Principal Consultant, MentalNote
[email protected]

Agenda
• SQL Server storage challenges
• The SAN legacy
• Traditional interconnects
• SMB past
• New "old" interconnects
• File Shares – the new Black
• "DIY" Shared Storage
• Microsoft vNext

Bio - Michael Frandsen
I have worked in the IT industry for just over 21 years, 17 of them as a consultant.

My typical clients are Fortune 500 companies, most of them global corporations.

I have a close relationship with Microsoft R&D in Redmond: with the Windows team for 19 years, since the first beta of Windows NT 3.1, and with SQL Server for 18 years, since the first version Microsoft built themselves (v4.21a). I hold various advisory positions in Redmond and am involved in vNext versions of Windows, Hyper-V, SQL Server and Office/SharePoint.

Specialty areas:

• Architecture & design

• High performance

• Storage

• Low Latency

• Kerberos

• Scalability (scale-up & scale-out)

• Consolidation (especially SQL Server)

• High-Availability

• VLDB

• Data Warehouse platforms

• BI platforms

• High Performance Computing (HPC) clusters

• Big Data platforms & architecture


SQL Server storage challenges

• Capacity

• Fast

• Shared

• Reliable

The SAN legacy
• Because it's expensive … it must be fast

[Chart: Performance vs. Price – lines for "SAN Vendor sales pitch", "SAN typical" and "SAN non-match"]

The SAN legacy
• Shared storage or Direct Attached SAN

[Diagram: a SAN connected to a File Server, a Database Server and a Mail Server, each over 2 x 8Gb/s links]

The SAN legacy
• Widespread misconception

The SAN legacy
• Complex stack

[Diagram: the full Fibre Channel I/O stack from SQL Server to the spindles – SQL Server / Windows / CPU cores, dual FC HBAs (A/B), MPIO algorithm and MPIO DSM, FC switch with WWN zoning, storage controller (A/B paths, XOR engine, cache), SCSI controller port logic, and LUNs built from disks. Each hop has its own rate: CPU feed rate, HBA port rate, switch port rate, SP port rate, SQL Server read-ahead rate, LUN read rate and disk feed rate.]

SAN Bottleneck
Typical SAN load:
• Low to medium I/O processor load (top - slim rectangles)
• Low cache load (middle - big rectangles)
• Low disk spindle load (lower half - squares)

SAN Bottleneck
Typical Data Warehouse / BI / VLDB SAN load:
• High I/O processor load – maxed out (top - slim rectangles)
• High cache load (middle - big rectangles)
• Low disk spindle load (lower half - squares)

SAN Bottleneck
Ideal Data Warehouse / BI / VLDB SAN load:
• Low to medium I/O processor load (top - slim rectangles)
• Low to medium cache load (middle - big rectangles)
• High disk spindle load (lower half - squares)

Traditional interconnects
• Fibre Channel
  • Stalled at 8Gb/s for many years
  • 16Gb/s FC still very exotic
  • Strong movement towards FCoE (Fibre Channel over Ethernet)
• iSCSI
  • Started in low-end storage arrays
  • Many still 1Gb/s
  • 10Gb/E storage arrays typically have few ports compared to FC
• NAS
  • NFS, SMB, etc.

File Share reliability
Is this mission-critical technology?

SMB 1.0 - 100+ Commands
• Protocol negotiation, user authentication and share access (NEGOTIATE, SESSION_SETUP_ANDX, TRANS2_SESSION_SETUP, LOGOFF_ANDX, PROCESS_EXIT, TREE_CONNECT, TREE_CONNECT_ANDX, TREE_DISCONNECT)

• File, directory and volume access (CHECK_DIRECTORY, CLOSE, CLOSE_PRINT_FILE, COPY, CREATE, CREATE_DIRECTORY, CREATE_NEW, CREATE_TEMPORARY, DELETE, DELETE_DIRECTORY, FIND_CLOSE, FIND_CLOSE2, FIND_UNIQUE, FLUSH, GET_PRINT_QUEUE, IOCTL, IOCTL_SECONDARY, LOCK_AND_READ, LOCK_BYTE_RANGE, LOCKING_ANDX, MOVE, NT_CANCEL, NT_CREATE_ANDX, NT_RENAME, NT_TRANSACT, NT_TRANSACT_CREATE, NT_TRANSACT_IOCTL, NT_TRANSACT_NOTIFY_CHANGE, NT_TRANSACT_QUERY_QUOTA, NT_TRANSACT_QUERY_SECURITY_DESC, NT_TRANSACT_RENAME, NT_TRANSACT_SECONDARY, NT_TRANSACT_SET_QUOTA, NT_TRANSACT_SET_SECURITY_DESC, OPEN, OPEN_ANDX, OPEN_PRINT_FILE, QUERY_INFORMATION, QUERY_INFORMATION_DISK, QUERY_INFORMATION2, READ, READ_ANDX, READ_BULK, READ_MPX, READ_RAW, RENAME, SEARCH, SEEK, SET_INFORMATION, SET_INFORMATION2, TRANS2_CREATE_DIRECTORY, TRANS2_FIND_FIRST2, TRANS2_FIND_NEXT2, TRANS2_FIND_NOTIFY_FIRST, TRANS2_FIND_NOTIFY_NEXT, TRANS2_FSCTL , TRANS2_GET_DFS_REFERRAL, TRANS2_IOCTL2, TRANS2_OPEN2, TRANS2_QUERY_FILE_INFORMATION, TRANS2_QUERY_FS_INFORMATION, TRANS2_QUERY_PATH_INFORMATION, TRANS2_QUERY_PATH_INFORMATION, TRANS2_REPORT_DFS_INCONSISTENCY, TRANS2_SET_FILE_INFORMATION, TRANS2_SET_FS_INFORMATION, TRANS2_SET_PATH_INFORMATION, TRANSACTION, TRANSACTION_SECONDARY, TRANSACTION2, TRANSACTION2_SECONDARY, UNLOCK_BYTE_RANGE, WRITE, WRITE_AND_CLOSE, WRITE_AND_UNLOCK, WRITE_ANDX, WRITE_BULK, WRITE_BULK_DATA, WRITE_COMPLETE, WRITE_MPX, WRITE_MPX_SECONDARY, WRITE_PRINT_FILE, WRITE_RAW)

• Other (ECHO, TRANS_CALL_NMPIPE, TRANS_MAILSLOT_WRITE, TRANS_PEEK_NMPIPE, TRANS_QUERY_NMPIPE_INFO, TRANS_QUERY_NMPIPE_STATE, TRANS_RAW_READ_NMPIPE, TRANS_RAW_WRITE_NMPIPE, TRANS_READ_NMPIPE, TRANS_SET_NMPIPE_STATE, TRANS_TRANSACT_NMPIPE, TRANS_WAIT_NMPIPE, TRANS_WRITE_NMPIPE)

14 distinct WRITE operations ?!??

SMB 2.0 - 19 Commands
• Protocol negotiation, user authentication and share access (NEGOTIATE, SESSION_SETUP, LOGOFF, TREE_CONNECT, TREE_DISCONNECT)
• File, directory and volume access (CANCEL, CHANGE_NOTIFY, CLOSE, CREATE, FLUSH, IOCTL, LOCK, QUERY_DIRECTORY, QUERY_INFO, READ, SET_INFO, WRITE)
• Other (ECHO, OPLOCK_BREAK)
• TCP is a required transport
• SMB2 no longer supports NetBIOS over IPX, NetBIOS over UDP or NetBEUI

SMB 2.1
• Performance improvements
  • Up to 1MB MTU to better utilize 10Gb/E
  • ! Disabled by default !
• Real benefit requires app support
  • Ex. Robocopy in W7 / 2K8R2 is multi-threaded
    • Defaults to 8 threads, range 1-128

SQL Server SMB support
• < 2008
  • Using a UNC path could be enabled with a trace flag (see the sketch below)
  • Not an officially supported scenario
  • No support for system databases
  • No support for failover clustering
• 2008 R2
  • UNC path fully supported by default
  • No support for system databases
  • No support for failover clustering
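A minimal sketch of the pre-2008, unsupported scenario: trace flag 1807 removed the check that blocks database files on network (UNC) paths. The share, path and database names below are hypothetical, for illustration only.

-- Unsupported before SQL Server 2008: allow database files on UNC paths
DBCC TRACEON (1807, -1);

-- Hypothetical file share, for illustration only
CREATE DATABASE NetworkDb
ON PRIMARY ( NAME = NetworkDb_data,
             FILENAME = '\\fileserver\sqldata\NetworkDb.mdf' )
LOG ON     ( NAME = NetworkDb_log,
             FILENAME = '\\fileserver\sqldata\NetworkDb.ldf' );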

Two things happened

SQL Server 2012

Windows Server 2012

SQL Server 2012
• UNC support expanded (see the sketch below)
• System databases supported on SMB
• Failover Clustering supports SMB as shared storage
• … and TempDB can now reside on NON-shared storage
  • Mark Souza commented: Great suggestion!
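A minimal sketch of both points, assuming a hypothetical SMB 3.0 share \\fs1\sqldata and a hypothetical local SSD volume T: for TempDB on a clustered instance:

-- User database with files on an SMB share (natively supported in SQL Server 2012)
CREATE DATABASE SalesDb
ON PRIMARY ( NAME = SalesDb_data,
             FILENAME = '\\fs1\sqldata\SalesDb.mdf' )
LOG ON     ( NAME = SalesDb_log,
             FILENAME = '\\fs1\sqldata\SalesDb.ldf' );

-- TempDB moved to local, non-shared storage on a failover cluster instance;
-- the new paths take effect at the next service restart
ALTER DATABASE tempdb MODIFY FILE ( NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf' );
ALTER DATABASE tempdb MODIFY FILE ( NAME = templog, FILENAME = 'T:\TempDB\templog.ldf' );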

Windows Server 2012
• InfiniBand
• NIC Teaming
• SMB 3.0
  • RDMA
  • Multichannel
  • SMB Direct

New "old" interconnects
InfiniBand characteristics
• Been around since 2001
• Used mainly for HPC clusters and Super Computing
• High throughput
• RDMA capable
• Low latency
• Quality of service
• Failover
• Scalable

InfiniBand throughput
Network Bottleneck Alleviation: InfiniBand ("Infinite Bandwidth") and High-speed Ethernet (10/40/100 GE)
• Bit-serial differential signaling
• Independent pairs of wires to transmit independent data (called a lane)
• Scalable to any number of lanes
• Easy to increase clock speed of lanes (since each lane consists only of a pair of wires)
• Theoretically, no perceived limit on the bandwidth

InfiniBand throughput
[Chart: Network Speed Acceleration with IB and HSE]

InfiniBand throughput

Most commercial implementations use 4x lanes

FDR 4x: 56Gb/s with 64/66-bit encoding ⇒ ~54Gb/s of payload ≈ 6.8GB/s per port

SDR - Single Data Rate
DDR - Double Data Rate
QDR - Quad Data Rate
FDR - Fourteen Data Rate
EDR - Enhanced Data Rate
HDR - High Data Rate
NDR - Next Data Rate

InfiniBand throughput
Trends in I/O Interfaces with Servers
• PCIe Gen2 4x: 2GB/s data rate ⇒ 1.5GB/s effective rate
• PCIe Gen2 8x: 4GB/s data rate ⇒ 3GB/s effective rate
(I/O links have their own headers and other overheads!)

InfiniBand throughput
Low-level Uni-directional Bandwidth Measurements
• InfiniBand uses RDMA (Remote Direct Memory Access)
• HSE can support RoCE (RDMA over Converged Ethernet)
• RoCE makes a huge impact on small I/O

InfiniBand latency
Ethernet Hardware Acceleration
• Interrupt Coalescing
  • Improves throughput, but degrades latency
• Jumbo Frames
  • No latency impact; incompatible with existing switches
• Hardware Checksum Engines
  • Checksum performed in hardware -> significantly faster
  • Shown to have minimal benefit independently
• Segmentation Offload Engines (a.k.a. Virtual MTU)
  • Host processor "thinks" the adapter supports large Jumbo frames, but the adapter splits them into regular-sized (1500-byte) frames
  • Supported by most HSE products because of its backward compatibility -> considered "regular" Ethernet

InfiniBand latency
IB Hardware Acceleration
• Some IB models have multiple hardware accelerators
  • E.g., Mellanox IB adapters
• Protocol Offload Engines
  • Completely implement ISO/OSI layers 2-4 (link layer, network layer and transport layer) in hardware
• Additional hardware-supported features also present
  • RDMA, Multicast, QoS, Fault Tolerance, and many more

InfiniBand latency
HSE vs IB
• Fastest 10Gb/E NICs: 1-5 µs
• Fastest 10Gb/E switch: 2.3 µs
• QDR IB: 100 nanoseconds => 0.1 µs
• FDR IB: 160 nanoseconds => 0.16 µs (slight increase due to 64/66 encoding)
• Fastest HSE RoCE end to end: 3+ µs
• Fastest IB RDMA end to end: <1 µs

InfiniBand latency
Links & Repeaters
• Traditional adapters built for copper cabling
  • Restricted by cable length (signal integrity)
  • For example, QDR copper cables are restricted to 7m
• Optical cables with copper-to-optical conversion hubs
  • Up to 100m length
  • 550 picoseconds copper-to-optical conversion latency
    • That's 0.00055 µs or 0.00000055 ms

File Shares – the new Black
Why file shares?
• Massively increased stability
  • Cleaned-up protocol
  • Transparent Failover between cluster nodes
    • with no service outage!
• Massively increased functionality
  • Multichannel
  • RDMA and SMB Direct
• Massively decreased complexity
  • No more MPIO, DSM, zoning, HBA tuning, fabric zoning etc.

New protocol - SMB 3.0
• Which SMB protocol version is used (negotiated between client and server OS):

Client \ Server OS                  | Windows 8 / WS 2012 | Windows 7 / WS 2008 R2 | Windows Vista / WS 2008 | Previous versions of Windows
Windows 8 / Windows Server 2012     | SMB 3.0             | SMB 2.1                | SMB 2.0                 | SMB 1.0
Windows 7 / Windows Server 2008 R2  | SMB 2.1             | SMB 2.1                | SMB 2.0                 | SMB 1.0
Windows Vista / Windows Server 2008 | SMB 2.0             | SMB 2.0                | SMB 2.0                 | SMB 1.0
Previous versions of Windows        | SMB 1.0             | SMB 1.0                | SMB 1.0                 | SMB 1.0

Transparent Failover
• Failover transparent to server apps
  • Zero downtime
  • Small IO delay during failover
• Supports
  • Planned moves
  • Load balancing
  • OS restart
  • Unplanned failures
  • Client redirection (Scale-Out only)
• Supports both file and directory operations
• Requires:
  • Windows Server 2012 Failover Clusters
  • Both the server running the application and the file server cluster must be Windows Server 2012

[Diagram: a SQL Server or Hyper-V server connected to a two-node File Server Cluster exposing \\fs1\share on both nodes – (1) normal operation, (2) failover to Node B, (3) connections and handles auto-recovered; application IO continues with no errors]

SMB Multichannel
Automatic configuration across different setups: a single RSS-capable 10GbE NIC, multiple 1GbE NICs, multiple 10GbE NICs in a NIC team, or multiple RDMA NICs

Full Throughput
• Bandwidth aggregation with multiple NICs
• Multiple CPU cores engaged when using Receive Side Scaling (RSS)

Automatic Failover
• SMB Multichannel implements end-to-end failure detection
• Leverages NIC teaming if present, but does not require it

Automatic Configuration
• SMB detects and uses multiple network paths

[Diagram: SMB Client to SMB Server shown with a single RSS-capable 10GbE NIC, with multiple 1GbE NICs, with multiple 10GbE NICs in a NIC team, and with multiple 10GbE/IB RDMA NICs; vertical lines are logical channels, not cables]

SMB Multichannel
1 session, without Multichannel
• No failover
• Can't use full 10Gbps
• Only one TCP/IP connection
• Only one CPU core engaged

[Diagram: SMB Client and SMB Server, each with one RSS-capable 10GbE NIC through a 10GbE switch; CPU utilization per core shows only one of four cores in use]

SMB Multichannel
1 session, with Multichannel
• No failover
• Full 10Gbps available
• Multiple TCP/IP connections
• Receive Side Scaling (RSS) helps distribute load across CPU cores

[Diagram: the same single RSS-capable 10GbE NIC on client and server; CPU utilization per core shows the load spread across multiple cores]

SMB Multichannel
1 session, without Multichannel
• No automatic failover
• Can't use full bandwidth
• Only one NIC engaged
• Only one CPU core engaged

[Diagram: SMB Client and SMB Server, each with two RSS-capable 10GbE NICs and two 10GbE switches; only one NIC and path carries the session]

SMB Multichannel
1 session, with Multichannel
• Automatic NIC failover
• Combined NIC bandwidth available
• Multiple NICs engaged
• Multiple CPU cores engaged

[Diagram: the same dual 10GbE NIC setup; the session is spread across both NICs and both switches]

SMB Multichannel Performance
• Pre-RTM results using four 10GbE NICs simultaneously
• Linear bandwidth scaling
  • 1 NIC – 1150 MB/sec
  • 2 NICs – 2330 MB/sec
  • 3 NICs – 3320 MB/sec
  • 4 NICs – 4300 MB/sec
• Leverages NIC support for RSS (Receive Side Scaling)
• Bandwidth for small IOs is bottlenecked on CPU

[Chart: SMB Client Interface Scaling – throughput (MB/sec) vs. I/O size (512 bytes to 1MB) for 1 x 10GbE, 2 x 10GbE, 3 x 10GbE and 4 x 10GbE]

RDMA in SMB 3.0
SMB over TCP and RDMA (SMB Direct)
1. The application (Hyper-V, SQL Server) does not need to change – the API is unchanged.
2. The SMB client decides at run time whether to use SMB Direct.
3. NDKPI provides a much thinner layer than TCP/IP; nothing flows via the regular TCP/IP stack any more.
4. Remote Direct Memory Access is performed by the network interfaces.

[Diagram: user-mode application on top of the SMB client and SMB server; in the kernel, SMB Direct and NDKPI sit beside the regular TCP/IP stack; RDMA NICs move data directly between client and server memory over Ethernet and/or InfiniBand]

SMB Direct and SMB Multichannel
1 session, without Multichannel
• No automatic failover
• Can't use full bandwidth
• Only one NIC engaged
• RDMA capability not used

[Diagram: SMB Client and SMB Server, each with two RDMA-capable NICs (10GbE R-NICs or 54Gb InfiniBand) and two switches; only one path is in use]

SMB Direct and SMB Multichannel
1 session, with Multichannel
• Automatic NIC failover
• Combined NIC bandwidth available
• Multiple NICs engaged
• Multiple RDMA connections

[Diagram: the same dual R-NIC setup; the session uses both RDMA NICs and both switches]

"DIY" Shared Storage
New paradigm for SQL Server storage design
• Direct Attached Storage (DAS)
  • Now with flexibility
• Converting DAS to shared storage
  • Fast RAID controllers will be shared storage
  • NAND Flash PCIe cards (e.g. Fusion-io) will be shared storage

New Paradigm designs
[Diagram: three SQL Server hosts connected to a File Server whose storage is local disks plus Fusion-io PCIe Flash cards]

New Paradigm designs
[Diagram: three SQL Server hosts in front of a two-node File Server cluster backed by both traditional SAN shared storage and NAND Flash shared storage]

New Paradigm designs
[Diagram: two SQL Server cluster nodes (Windows Server 2012 R2) connected through 2 x 36-port Mellanox InfiniBand switches, 2 x 56Gb/s per node, to a two-node Windows Server 2012 R2 file server cluster with IODuo PCIe cards; the file servers attach to a Violin Memory 6612 NAND Flash shared storage array over 4 x 8Gb/s links]

Demo
Storage Spaces

SQL Server storage challenges

• Capacity

• Fast

• Shared

• Reliable

SQL Server virtualization challenges

• Servers with lots of I/O

• Servers using all RAM and CPU resources

• Servers using more than 4 cores

• Servers using large amounts of RAM

Hyper-V v3.0

• Only two goals:

• Adopt new technologies in the Win8 kernel

• Be the best hypervisor for SQL Server

Hyper-V v3.0

• How do you become the best hypervisor for SQL Server?

Hyper-V v3.0
• Microsoft's initial idea up to November 2010

[Chart: servers by number of CPU sockets – 1 socket: 36%, 2 sockets: 33%, 3 sockets: 1%, 4 sockets: 21%, 8 sockets: 6%, 16 sockets: 2%, all other socket counts ~0%. Based on 13,095 servers and 1.7PB of storage]

Hyper-V v3.0
• New insight for the Hyper-V Team

Based on:
• 1,550 servers running SQL Server
• 350TB of SQL Server storage
• 5,678 physical CPUs running SQL Server
• 15,200GB of memory for SQL Server
• 2,267 SQL Server instances
• 9,599 databases

[Chart: SQL instances by number of CPU sockets – 1 socket: 24%, 2 sockets: 36%, 3 sockets: 1%, 4 sockets: 16%, 8 sockets: 12%, 16 sockets: 11%, all other socket counts ~0%]

Hyper-V Team idea of Physical to Virtual
• Before:
  • 750 Servers with SQL Server
  • 920 SQL Server Instances
  • 200TB Storage
• After:
  • 780-790 Servers with Hypervisor and SQL Server
  • 920 SQL Server Instances
  • 200TB Storage

Real life consolidation on Physical servers
• Before:
  • 750 Servers with SQL Server
  • 920 SQL Server Instances
  • 200TB Storage
• After:
  • 6 Servers with SQL Server
  • 12 SQL Server Instances
  • 140TB Storage

Real life consolidation on Physical servers
• How did we achieve the storage savings?

[Pie chart: databases by type – System vs. User, 63% / 37%]

Because of the large allocated storage space for system databases we saved 60TB of SAN space.

Digging deeper
• Further storage reclaims could easily be done inside the databases

[Chart: disk capacity waste in SQL environments (350TB) – share of disk space for allocated disk space, allocated database space (57%), used database space (17%) and total free space (83%)]

Final specs of Hyper-V v3.0

Capability                                                          | Hyper-V Server 2008 R2 | Hyper-V Server 2012
Number of logical processors on host                                | 64                     | 320
Maximum supported RAM on host                                       | 1 TB                   | 4 TB
Virtual CPUs supported per host                                     | 512                    | 2048
Maximum virtual CPUs supported per virtual machine                  | 4                      | 64
Maximum RAM supported per virtual machine                           | 64 GB                  | 1 TB
Maximum running virtual machines supported per host                 | 384                    | 1024
Guest NUMA                                                          | No                     | Yes
Maximum failover cluster nodes supported                            | 16                     | 64
Maximum number of virtual machines supported in failover clustering | 1000                   | 8000

Final specs of Hyper-V v3.0

• So what about Storage?

• VMware tops out at 300,000 IOPS per VM
  • A really good number

• A single Windows Server 2012 Hyper-V VM does:

• 985,000 IOPS

New Paradigm designs
[Diagram: three Hyper-V hosts running SQL Server VMs in front of a two-node File Server cluster backed by both traditional SAN shared storage and NAND Flash shared storage]

New things are happening

SQL Server 2012 R2 (SQL14)

Windows Server 2012 R2 (Windows Blue)

Windows Server 2012 R2
• RTM September 5th 2013 – both Server and Client (Win8.1)
• Hyper-V v4.0
  • 985,000 IOPS -> 1,300,000 IOPS
• Improved network performance
  • 300,000 IOPS/NIC -> 450,000 IOPS/NIC
• Improved Storage Spaces
  • Caching
  • Tiered Storage

SQL Server 2014
• Still in development
• Project Hekaton
  • In-Memory OLTP
• Columnstore Index
  • Clustered & updateable
• Updated AlwaysOn
  • Improved reliability and scalability
  • 8 replicas
• Completely new query engine
• For the first time, control of IOPS with resource policies (see the sketch below)
• Buffer Pool Extension
  • Use NAND Flash as L2 memory
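A minimal sketch of the new IOPS control via Resource Governor in SQL Server 2014; the pool and workload group names are hypothetical.

-- Cap physical IOPS per volume for a reporting workload (new resource pool options in SQL Server 2014)
CREATE RESOURCE POOL ReportingPool
WITH ( MIN_IOPS_PER_VOLUME = 0, MAX_IOPS_PER_VOLUME = 5000 );

CREATE WORKLOAD GROUP ReportingGroup
USING ReportingPool;

ALTER RESOURCE GOVERNOR RECONFIGURE;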

Re-think hardware usage
[Diagram, shown in three steps: mechanical hard drives, NAND Flash and memory placed on a storage-to-memory spectrum; NAND Flash is first treated as storage, then positioned as additional cache levels (L4/L5 cache), then as a second tier of RAM (L1/L2 RAM)]

SSD Buffer Pool Extension and Scale up
• What's being delivered:
  • Usage of non-volatile drives (SSD) to extend the buffer pool
  • NUMA-aware large page and BUF array allocation
• Main benefits:
  • BP Extension for SSDs
    • Improve OLTP query performance with no application changes
    • No risk of data loss (using clean pages only)
    • Easy configuration optimized for OLTP workloads on commodity servers (32GB RAM)
  • Scalability improvements for systems with >8 sockets

Example:
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
(FILENAME = 'F:\SSDCACHE\EXAMPLE.BPE', SIZE = 50 GB);
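For completeness, the corresponding statement turns the extension off again (also needed before changing its size):

ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION OFF;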

Buffer Pool Manager

[Diagram: the SQL Server engine stack – protocol layer (SNI, TDS), relational engine (command parser, optimizer, query executor), storage engine (access methods, transaction manager, buffer manager) and the buffer pool (plan cache, data cache) over the data files and transaction log; commands flow down as query trees and query plans, results flow back as result sets, and the buffer manager issues GetPage, read I/O and write I/O against cached pages. With Buffer Pool Extension, IOPS are offloaded to Storage Class Memory (SCM).]

Easy enablement

Troubleshooting options
DMVs
• sys.dm_os_buffer_pool_extension_configuration
• sys.dm_os_buffer_descriptors

XEvents
• sqlserver.buffer_pool_extension_pages_written
• sqlserver.buffer_pool_extension_pages_read
• sqlserver.buffer_pool_extension_pages_evicted
• sqlserver.buffer_pool_page_threshold_recalculated

Performance Monitor counters
• Extension page writes/sec
• Extension page reads/sec
• Extension outstanding IO counter
• Extension page evictions/sec
• Extension allocated pages
• Extension free pages
• Extension page unreferenced time
• Extension in use as percentage on buffer pool level
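A minimal sketch of using the two DMVs above to check the extension state and how many cached pages currently live in it:

-- Is Buffer Pool Extension enabled, where, and how big?
SELECT path, state_description, current_size_in_kb
FROM sys.dm_os_buffer_pool_extension_configuration;

-- Cached pages held in the extension file vs. in RAM
SELECT is_in_bpool_extension, COUNT(*) AS cached_pages
FROM sys.dm_os_buffer_descriptors
GROUP BY is_in_bpool_extension;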

Thank you
[email protected]

Evaluation
Create a text message on your phone and send it to 1919 with the content:

DB203 5 5 5 I liked it a lot
(Session code, Speaker performance (1 to 5), Match of technical level (1 to 5), Relevance (1 to 5), Comments (optional))

Evaluation scale: 1 = Very bad, 2 = Bad, 3 = Relevant, 4 = Good, 5 = Very good!

Questions:
• Speaker performance
• Relevance according to your work
• Match of technical level according to published level
• Comments