SYBASE ASE 15.5 "MAKE QUERIES RUN FASTER" WEBCAST
TRANSCRIPT
MAY 25, 2010
DIAL NUMBERS:
1-866-803-2143
1-210-795-1098
PASSCODE: SYBASE
MAKE YOUR ASE QUERIES RUN FASTER
2 – May 25, 2010
MAKE YOUR ASE QUERIES RUN FASTER
Your host: Terry Orsborn, Product Marketing Manager, Sybase
Today's speaker: Rob Verschoor, Senior Technology Evangelist, Sybase
HOUSEKEEPING
Questions? Submit via the 'Questions' tool on your Live Meeting console, or call:
1-866-803-2143 (United States & Canada)
0800-777-0476 (Argentina)
0800-8911992 (Brazil)
01800-9-156448 (Colombia)
001-866-888-0267 (Mexico)
Password: SYBASE
Press 1 during the Q&A segment
Presentation copies? Select the printer icon on the Live Meeting console
ROB VERSCHOOR, SENIOR TECHNOLOGY EVANGELIST, MAY 25, 2010
MAKE YOUR ASE QUERIES RUN FASTER
QUERY PERFORMANCE FEATURES IN ASE 15.x
• Many new query performance features have become available since ASE 15 was first released
• This webcast looks at some key performance features available in ASE 15.x
• Viewed from the angle of applicability:
  – Which features can be used for particular types of workload
  – How easily a feature can be deployed
QUERY PERFORMANCE FEATURES IN ASE 15.x
Features discussed, grouped by ease of deployment
• Impacting existing applications: SQL code changes required
  – Semantic partitioning (15.0)
  – SQL user-defined functions (UDFs) (15.0.2)
• Application-transparent: ASE configuration only
  – Statement cache + literal autoparam (15.0.1)
  – Relocated joins (15.0.2)
  – Function indexes (15.0)
• Limited application impact: some SQL changes may be needed
  – ASE 15.0 query processing enhancements (15.0)
  – IMDB (15.5)
ASE 15 ENHANCED QUERY PROCESSING (ASE 15.0)
ASE 15 — FULLY REDESIGNED QUERY PROCESSING
• ASE pre-15 was 'hard-wired' for OLTP; such queries are typically:
  – Short (10s or 100s of milliseconds)
  – Hitting only one or a handful of rows
  – Joining a small number of tables, which are indexed perfectly
• Queries for DSS/reporting/aggregation have different data access patterns:
  – Hitting many rows
  – Performing large aggregations
  – Joining many tables, not necessarily ideally indexed
• The OLTP-oriented pre-15.x query processing engine could not process non-OLTP queries as efficiently as needed, so a new QP engine was designed and implemented for ASE 15
ASE 15 — FULLY REDESIGNED QUERY PROCESSING
• The new query optimizer and execution engine aim to improve this with many QP enhancements, such as:
  – New join types (hash join, merge join, enhanced nested-loop join)
  – Improved performance by avoiding usage of worktables
  – Joins on expressions
  – Faster processing and better query plans for large multi-table joins (especially star/snowflake schema joins)
  – Joins with OR clauses or mismatched datatypes
  – ...and much more...
• Key concept: the new QP enhancements deliver performance gains without changing SQL code
USING ASE 15 QP MOST EFFECTIVELY
• Consider using the ASE 15 query processing enhancements for DSS-style workload:
  – Queries hitting many rows
  – Queries against star/snowflake schemas
  – Queries with many aggregates
  – Queries joining many tables
• NB: these ASE 15 QP enhancements will likely not deliver major gains for OLTP workload
  – ASE was always optimized for OLTP anyway
  – ...but other OLTP-specific features are available in ASE 15
USING ASE 15 QP MOST EFFECTIVELY
• Deploying ASE 15 QP for DSS-style workload:
  – Use the optimization goal allrows_dss (or allrows_mix)
  – ...either server-wide: sp_configure 'optimization goal', 0, allrows_dss
    The downside is that you may need to spend more time on testing to avoid potential adverse effects
  – ...or for selected parts of functionality where you expect gains may be possible
• About these "selected parts of functionality" (as above):
  – Identify specific stored procedures with DSS-style queries, and put set plan optgoal allrows_dss in the proc body
  – Identify particular logins and/or application names that perform DSS-style workload, and put set plan optgoal allrows_dss in a login trigger
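The login-trigger approach can be sketched as follows. This is a sketch only: the procedure name, login name, and application name are illustrative assumptions, and set export_options is included so the optgoal setting propagates to the session.

```sql
-- Sketch only: a login trigger that switches DSS-style sessions to allrows_dss.
-- 'loginproc_dss', 'report_user' and 'ReportApp' are made-up names.
create procedure loginproc_dss
as
begin
    set export_options on    -- export set options to the calling session
    -- apply the DSS optgoal only to sessions from the reporting application
    if (select program_name from master..sysprocesses
        where spid = @@spid) = 'ReportApp'
    begin
        set plan optgoal allrows_dss
    end
end
go

-- attach the trigger to a specific login
exec sp_modifylogin report_user, 'login script', loginproc_dss
go
```

This confines allrows_dss to the sessions that need it, while all other sessions keep the server-wide optgoal.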
USING ASE 15 QP MOST EFFECTIVELY
• Challenges when deploying ASE 15 QP for DSS-style workload:
  – Identifying the relevant pieces of functionality, and perhaps keeping OLTP-type queries from being subjected to allrows_dss
  – Performing sufficient testing to ensure all queries run fine
  – Having to run update statistics (or update index statistics) more often, or with different step counts
  – Having to use fine-tuning optimizer settings. Examples:
    set store_index off (when unwanted reformatting happens)
    set advanced_aggregation off (when seeing excessive compilation times under allrows_dss)
• Attractions:
  – ASE 15 QP enhancements can provide huge gains for qualifying workload
  – There are various examples of customers seeing major performance improvements just by upgrading to ASE 15 and using allrows_dss, without any SQL code changes
  – Customers have reported performance gains ranging from 50% better to 600 times better
USING ASE 15 QP MOST EFFECTIVELY
Summary
• For DSS-style workload, use an optgoal of allrows_dss (or allrows_mix)
• In principle, application-transparent: no SQL code changes needed
  – In practice, some fine-tuning may be needed, causing small SQL changes
• The effort required for deployment can range from little to medium
  – Find a balance between setting the optgoal server-wide and at session level
  – Realistic testing is required to assess performance effects
• Available in any ASE 15.x version
STATEMENT CACHE + LITERAL AUTOPARAM (ASE 15.0.1)
ASE 15 QP FOR OLTP WORKLOAD
• For OLTP-style workload, the enhanced query processing capabilities in ASE 15 will likely provide little improvement
  – Because they are primarily aimed at DSS-style workload
• ASE 15 provides other OLTP-related performance features:
  – Statement Cache (+ Literal Autoparam)
  – Semantic Table Partitioning
STATEMENT CACHE (+ LITERAL AUTOPARAM)
• The following two ad-hoc queries are likely to use the same query plan:
  select * from my_table where k = 123
  select * from my_table where k = 124
• With literal autoparam enabled, the statement cache replaces constants in the WHERE-clause with placeholders:
  select * from my_table where k = @@@V0_INT
• The cached query plan is associated with this 'normalized' version of the query text
  – ...so all similar-looking queries (as above) with an integer as a search argument will use the cached plan
STATEMENT CACHE (+ LITERAL AUTOPARAM)
• Effect of Statement Cache (+ Literal Autoparam):
  – The query plan for an ad-hoc SQL query is cached and re-used for the same/similar queries
  – Avoids repeated query optimization overhead for same/similar queries; in busy systems, this overhead can be significant
• Statement Cache (+ Literal Autoparam) are aimed at ad-hoc SQL queries, generated/issued by the client
• NB: the Statement Cache (+ Literal Autoparam) can still be used when "compatibility mode" is active in ASE 15
  – Most other ASE 15-specific features are automatically disabled for statements where compatibility mode is active
• When not using compatibility mode, set the optgoal to allrows_oltp for OLTP workload
WHEN TO USE THE STATEMENT CACHE
• The Statement Cache is useful only when "similar" ad-hoc queries occur frequently. Examples:
  – OLTP workload: queries often vary only in the primary key (bank account numbers, customer IDs, article codes)
  – Replicate dataservers in replication systems: the queries RepServer sends to the replicate server typically hit one row and specify a unique primary key, which is ideal for the statement cache
    NB: with RS 15.1, use the statement cache for replicate ASE 15.0.1+ servers rather than the RS 15.1 dynamic_sql option for the replicate connection, since this appears to perform better
• Note: the statement cache does not apply to stored procedures
  – Plans for procedures are already cached in the procedure cache
WHEN TO USE THE STATEMENT CACHE
• Classic ASE problem: a query hits different numbers of rows for different search arguments
  select * from Orders where order_no between N and M
  – Hits few rows if N and M define a small range; the best query plan uses an index
  – Hits many rows for a large range; the best plan uses a table scan
• This problem is not solved by the statement cache!
  – The same problem exists for stored proc plans
  – The cached plan depends on the parameter values specified the first time this statement is executed
  – Assumption underlying query plan caching: similar-looking queries will have the same optimal query plan
  – Solution: disable the statement cache for this statement, or submit differently-formed queries for these two cases
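The session-level variant of the "disable for this statement" workaround can be sketched as follows, using the Orders example from above (the literal range values are illustrative):

```sql
-- Sketch only: bypass plan re-use for a range query whose optimal plan
-- depends on how wide the range is.
set statement_cache off   -- this query gets its own fresh optimization
select * from Orders where order_no between 1000 and 2000000
set statement_cache on    -- re-enable caching for subsequent ad-hoc queries
```

The wide-range query is then optimized on its own values instead of inheriting a cached index-based plan chosen for a narrow range.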
DEPLOYING THE STATEMENT CACHE
• Enabling the Statement Cache is application-transparent
  – Functionally neutral (no impact on query results); the only difference is overall performance
  – Enabled by server-side configuration settings only:
    sp_configure 'statement cache size', 1000  -- allocate 1000 * 2KB
    sp_configure 'enable literal autoparam', 1
  – Important: increase the config parameter 'number of open objects' by the number of statements expected to be cached
    Use sp_monitorconfig or sp_sysmon to monitor 'number of open objects'
• Sizing of the statement cache is important: too small a cache could have a negative performance impact on busy systems
  – Monitor statement cache effectiveness (i.e., are cached plans frequently re-used) with sp_sysmon or with the MDA tables monCachedStatements + monStatementCache
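The MDA-based monitoring can be sketched as below, assuming the MDA tables are enabled on the server; the exact set of columns varies by ASE version:

```sql
-- Sketch only: which cached statements are actually being re-used?
select SSQLID, UseCount, HasAutoParams
from   master..monCachedStatements
order  by UseCount desc

-- Sketch only: overall statement cache activity (hits vs. searches)
select * from master..monStatementCache
```

A large population of statements with a UseCount of 1 suggests the cache is being churned without delivering plan re-use.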
DEPLOYING THE STATEMENT CACHE
• Always enable literal autoparam when using the statement cache
  – The statement cache without literal autoparam is unlikely to be useful, since 100% identical queries are unlikely to occur much
• On busy systems, it may be helpful to disable the statement cache for sessions submitting ad-hoc queries which will not benefit from it
  – Non-OLTP queries, such as big updates
  – Such queries put additional load on the statement cache without delivering any gains, and could have adverse effects (high statement cache turnover)
  – To disable for a session, use set statement_cache off (can be done in a login trigger)
STATEMENT CACHE (+ LITERAL AUTOPARAM)
Summary
• For OLTP workload, and ad-hoc SQL queries only, enable the Statement Cache + Literal Autoparam
• Use an optgoal of allrows_oltp, or use compatibility mode
• Fully application-transparent: no SQL code changes needed
• Little effort required for deployment
  – The main effort is in monitoring
• Requires ASE 15.0.1 or later
SEMANTIC TABLE PARTITIONING (ASE 15.0)
ASE 15 SEMANTIC TABLE PARTITIONING
What is table partitioning?
• Splitting large tables into smaller, manageable chunks
  – A chunk is known as a "table partition"
• Data in each partition is processed independently of other partitions
  – For queries, functionally identical to an unpartitioned table
• DBA maintenance tasks can be broken down by partition
  – Divide big tasks into smaller sub-tasks
  – Sub-tasks can be processed in parallel
• A divide-and-conquer strategy to manage large and ever-increasing volumes of data
ASE 15 SEMANTIC TABLE PARTITIONING
• What is 'semantic' about table partitioning? Data is divided into partitions based on the value of a column
• Three partitioning schemes:
  – By range. Example: one partition for every week of new orders
  – By list. Example: one partition for every geography
  – By hash. Used when no natural partition key is available
• NB: pre-15 partitioning is still supported and is not subject to licensing
  – Also known as round-robin partitioning
  – Not semantic, but randomized
ASE 15 SEMANTIC TABLE PARTITIONING
• Financial stock trading data: a 115-million-row table
  – The table is range-partitioned by trading date
  – p1 holds 100 million rows (1996-2008); at 1 million new rows per month, each quarterly partition p2-p6 holds about 3 million rows

create table StockTrade (id int, order_date datetime,…)
partition by range (order_date)
(p1 values <= ('01-Jan-2009'),
 p2 values <= ('01-Apr-2009'),
 p3 values <= ('01-Jul-2009'),
 p4 values <= ('01-Oct-2009'),
 p5 values <= ('01-Jan-2010'),
 p6 values <= ('01-Apr-2010'))
ASE 15 SEMANTIC TABLE PARTITIONING
Benefits of table partitioning for performance
• The ASE 15 query optimizer takes advantage of semantic partitioning
  – ASE query processing eliminates partitions where the requested data cannot be, avoiding a search through the full-sized table
  – For the following query, ASE will search only in partition p6:
    select avg(order_amount) from StockTrade
    where order_date between '01-Feb-2010' and '01-Mar-2010'
• The ASE 15 query processing engine is designed to utilize parallelism for partitioned tables
PERFORMANCE ENHANCEMENTS WITH LOCAL INDEX
• Global index: a single index tree for the entire table
• Local index: a separate index tree for each partition
  – Reduced root page contention
  – Fewer index pages searched, so less I/O
  – Better performance
[Figure: on the 115-million-row table, a global (unpartitioned) index needs 6-7 index levels, while a local (partitioned) index needs only 3-4 levels per 3-million-row partition; queries A, B, and C get a shorter access path through the local index]
ASE 15 SEMANTIC TABLE PARTITIONING
Benefits of semantic table partitioning for maintenance
• Reduced maintenance window
  – Breaking up maintenance operations
  – Maintain only those partitions requiring maintenance
• Better mixing of DSS and OLTP workload
  – Improved intra-query parallel processing
• Offers a strategy for data "archiving"
PARTITIONING AND TABLE MAINTENANCE
• Stock trading data: 115-million-row table
  – update statistics StockTrade (full table): 1 hr 20 min
  – update statistics StockTrade partition p4: 45 seconds

create table StockTrade (id int, order_date datetime,…)
partition by range (order_date)
(p1 values <= ('01-Jan-2009'), -- 100 million rows
 p2 values <= ('01-Apr-2009'), -- 3 million rows
 p3 values <= ('01-Jul-2009'), -- 3 million rows
 p4 values <= ('01-Oct-2009'), -- 3 million rows
 p5 values <= ('01-Jan-2010'), -- 3 million rows
 p6 values <= ('01-Apr-2010')) -- 3 million rows
PARTITIONING AND TABLE MAINTENANCE
• When old data needs to be removed, just drop the oldest partition
• Add new partitions for new data

create table StockTrade (id int, order_date datetime,…)
partition by range (order_date)
(p1 values <= ('01-Jan-2006'),  -- drop this partition
 p2 values <= ('01-Apr-2006'),
 […]
 p7 values <= ('01-Jul-2007'),
 […]
 p12 values <= ('01-Oct-2008'),
 […]
 p16 values <= ('01-Oct-2009'),
 p17 values <= ('01-Jan-2010'),
 […]
 p22 values <= ('01-Apr-2011'))  -- add new partitions here
PARTITIONING AND DBA TASKS
• Various maintenance operations operate per partition:
  – update statistics can be run on a single partition
  – The reorg utility (compact, forwarded_rows, reclaim_space) can be run on a single partition
  – Online reorg rebuild index on a single index partition
  – Truncate individual partitions
  – Drop individual partitions (range & hash only)
• Partition-level utilities allow DBAs to periodically schedule maintenance tasks, cycling through all partitions of a table
• BCP into a specific partition
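The partition-level operations above can be sketched as follows, using the StockTrade example from the earlier slides; this is a sketch only, and the exact syntax may differ slightly between ASE 15.x versions:

```sql
-- Sketch only: partition-level maintenance on the StockTrade table
update statistics StockTrade partition p4   -- refresh stats for one partition
reorg compact StockTrade partition p4       -- reorg one partition only
truncate table StockTrade partition p1      -- empty the oldest partition

-- bcp into a specific partition (run from the OS command line):
-- bcp mydb..StockTrade partition p6 in trades.dat -c -Ulogin -Sserver
```

Cycling commands like these through the partitions turns one long maintenance window into several short ones.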
UNPARTITIONED vs. PARTITIONED TABLES
[Figure: on an unpartitioned large table, data loads, large deletes, DSS queries, OLTP apps, and update statistics all compete for the same table; on a partitioned table, each of these activities can target its own partition]
ASE 15 SEMANTIC TABLE PARTITIONING
Summary
• Aimed at large tables
  – Delivers performance gains for OLTP
  – Provides higher availability (less downtime) for database maintenance
• Not application-transparent: existing queries may need to be modified for best results
• Significant effort required for deployment
  – Design the partitioning scheme
  – Likely modify existing queries to specify a value for the partition key
  – Partition existing data
• Requires ASE 15.0 or later
• ASE license option
FUNCTION INDEXES (ASE 15.0)
FUNCTION INDEXES
• In an ideal world, developers ensure their queries use the well-thought-out indexes in the physical data model!
• In reality, queries may have WHERE-clauses that cannot use any of the existing indexes...
• Examples:
  – Index on (a); query: where isnull(a,0) = @variable
  – Index on (name); query: where upper(name) = 'SMITH'
  – Indexes on (units) and (price); query: where units*price > 1000
• In all of these cases, the queries cannot use the existing indexes, causing slow performance
  – Changing the queries is often cumbersome/convoluted/slow/expensive
FUNCTION INDEXES
• Solution in ASE 15.0: function indexes
• A function index is an index on an expression:
  create index a_isnull on mytable ( isnull(a,0) )
  create index upper_name on customers ( upper(name) )
  create index units_times_price on orders ( units*price )
• Queries referencing the same expression can use the function index
• Function indexes can be added by the DBA to optimize queries with 'sub-optimal' WHERE-clauses
  – Fully application-transparent
  – Always need to balance the added overhead for inserts/updates/deletes of each additional index
FUNCTION INDEXES
• The performance impact of adding the right function index can be huge
  – It can make the difference between a table scan and an index-based plan
• Example: 100,000-row table with customer data
  select * from customers where upper(name) = 'SMITH'
  – Without index on (name): 2-3 seconds; 16000 I/Os; table scan
  – With function index on ( upper(name) ): < 50 millisec; 4 I/Os; uses the function index
FUNCTION INDEXES
• A function index is shorthand for an index on a computed column:
  create index upper_name on customers ( upper(name) )
• Under the covers, this creates a (materialized) computed column for upper(name), and then creates an index on that column
  – Computed columns created for function indexes are 'hidden', so they don't show in select *
  – Computed columns are a bigger topic, not discussed here
• Function indexes can be created on any expression
  – But the expression can only refer to columns from the table itself...
  – ...unless SQL UDFs are used in the expression, which can perform arbitrary processing
• Function indexes can be used for easy access to complex data
  – XML, Java object, Text/Image data
FUNCTION INDEXES
Summary
• An additional tool for the DBA to fight performance issues, without developer involvement
• Fully application-transparent: no SQL code changes needed
  – Requires only adding indexes
• Relatively little effort required for deployment
  – Requires analysis of underperforming application queries to see if function indexes could be a solution
• Available in any ASE 15.x version
• Potential downsides:
  – Applications may generate more funny WHERE-clauses than function indexes can be created for
  – Additional indexes generate more overhead for insert/update/delete
SQL USER‐DEFINED FUNCTIONS (UDFs) (ASE 15.0.2)
SQL USER‐DEFINED FUNCTIONS
• A SQL UDF executes a series of SQL statements and returns a single value. Example:

create function greatest (@p1 int, @p2 int)
returns int
as
    return case when @p1 > @p2 then @p1 else @p2 end
go

select dbo.greatest(2,3)                          -- returns 3
select dbo.greatest( dbo.greatest(2,3), 4)        -- returns 4
select dbo.greatest(column1, column2) from table  -- returns the greatest value for every row in the table
SQL USER‐DEFINED FUNCTIONS
• Processing in a SQL UDF can be arbitrarily complex.
• Example: formatting an integer into binary format
  – Note that this UDF uses a loop-based algorithm!

create function int2bin (@n bigint) returns varchar(80)
as
    declare @i bigint, @temp bigint, @s varchar(80)
    select @i = @n, @s = ''
    while (@i > 0)
    begin
        set @temp = @i % 2
        set @i = @i / 2
        set @s = char(48 + @temp) + @s
    end
    return @s
go

select dbo.int2bin(1234567890123456789)
-- returns: '1000100100010000100001111010001111101111010011000000100010101'
SQL USER‐DEFINED FUNCTIONS
• How can SQL UDFs help improve query performance?
  – Sounds counterintuitive, since each SQL UDF has a small amount of overhead itself...
• With SQL UDFs, you can avoid loop-based algorithms, which are known to be slow
• Example: complex processing on every row of a table; without UDFs, this requires a loop-based algorithm and a stored procedure:

declare my_loop cursor for
    select column1, column2 from my_table

while ... -- process every row
begin
    fetch my_loop into @col1, @col2
    exec sp_complex_processing @col1, @col2
end
SQL USER‐DEFINED FUNCTIONS
• By putting the complex processing in a SQL UDF, the top-level algorithm can avoid a loop:

create function f_complex_processing(@col1 int, @col2 int)
returns int
as
    … arbitrary processing here, using loops and what not …
go

-- no loop here:
select dbo.f_complex_processing(column1, column2) from my_table

• Avoiding loop-based algorithms can easily be an order of magnitude faster
SQL USER‐DEFINED FUNCTIONS
• Complex join conditions may be simplified by using SQL UDFs in the WHERE-clause:

select * from table1 t1, table2 t2
where dbo.f_complex_processing(t1.col1, t1.col2) =
      dbo.f_other_processing(t2.col3, t2.col4)

• Use function indexes on these UDFs to optimize the query
• Without SQL UDFs, the above could only have been written as a complex loop-based algorithm
SQL USER‐DEFINED FUNCTIONS
Summary
• SQL UDFs can be used to restructure complex SQL code into more efficient queries
  – Especially useful to avoid loop-based processing algorithms
• Not application-transparent: requires changes to SQL code
• Effort required for deployment ranges from medium to high
  – May require analysis of complex existing SQL code
  – Least effort for newly developed functionality
• Requires ASE 15.0.2 or later
RELOCATED JOINS (ASE 15.0.2)
RELOCATED JOINS
• Applies only when using proxy tables
• Purpose: improve the performance of joins between large tables located in remote databases and smaller local tables
• Concept: the join is executed where most of the data is located
• Requires ASE 15.0.2 or later for all ASE servers involved
RELOCATED JOINS
Joining a small local table with a large remote table
• Without relocated joins:
  – The local ASE server manages the join between local data and the remote server
  – The large remote table is accessed through a proxy table; data is transferred to the local ASE server
  – Poor performance
  – The join could be faster if executed at the remote server
[Figure: the local ASE server holds table t_local (thousands of rows) and proxy table t_proxy mapping to t_remote; the remote ASE server holds table t_remote (millions of rows)]

select count(*) from t_local t1, t_proxy t2 where t1.id = t2.id
RELOCATED JOINS
• With relocated joins:
  – The join is pushed to the remote server
  – A proxy table is created in the remote server, mapping to the local table
  – Proxy tables are created dynamically if needed
[Figure: the local ASE server still holds t_local and t_proxy; the remote ASE server now also holds a temporary proxy table t2_proxy mapping to t_local, and executes the join locally]

-- query as submitted at the local server:
select count(*) from t_local t1, t_proxy t2 where t1.id = t2.id
-- query as executed at the remote server:
select count(*) from t2_proxy t1, t_remote t2 where t1.id = t2.id
RELOCATED JOINS — PERFORMANCE
• Internal performance test: performance improvement on a LAN

Table    Size
r1       500,000 rows (remote)
r2       5,000 rows (remote)
l1       5,000 rows (local)

Query                                              Before (seconds)   Relocated (seconds)
select * from l1,r1 where l1.id = r1.id                  28                  1
select * from l1,r1,r2
  where l1.id = r1.id and r1.id = r2.id                  40                  1
RELOCATED JOINS — CONFIGURATION
• Required configuration on the local ASE server
  – The relocated join strategy must be enabled for each remote server:
    sp_serveroption REMOTE_SERVER, 'relocated joins', true
• Required configuration on the remote ASE server
  – The remote ASE server must be capable of using a proxy table reference back to the local server:
    sp_addserver LOCAL_SERVER
    (also, external logins and the interfaces file may need to be configured)
  – Enable 'ddl in tran' for tempdb, since the proxy tables will be created in tempdb if they don't exist:
    sp_dboption tempdb, 'ddl in tran', true
RELOCATED JOINS — RESTRICTIONS
• For join relocation to be considered:
  – All ASE servers involved must be ASE 15.0.2 or above
  – No Text/Image datatypes may be part of the result set
  – Applies to DML only (not to DDL)
  – No client-side cursors can be involved
RELOCATED JOINS
Summary
• Apply when proxy tables are used; potential gains can be significant
• Fully application-transparent: server-side configuration changes only
• Little effort required for deployment
  – Few aspects to monitor
• Requires ASE 15.0.2 or later for both ASE servers involved
IN‐MEMORY DATABASES (ASE 15.5)
ASE IN‐MEMORY DATABASE (IMDB)
[Figure: a classic ASE disk-based database resides on an ASE database device on physical disk and is cached in an ASE cache in system memory; an ASE 15.5 in-memory database resides entirely in an 'imdb' named cache in system memory]
NEW FLAVORS OF ASE DATABASES
ASE 15.5 supports the following new types of databases
• IMDB (In-Memory Database)
  – Fully in-memory
  – No disk storage; no disk I/O; no transaction logging to disk
  – No transaction durability
  – Supports minimally logged DML (relaxed transaction atomicity)
  – No crash recovery; the database is always recreated upon ASE reboot; optional template DB
• RDDB (Reduced Durability Database)
  – Disk-based and not limited by memory size
  – Many of the same optimizations and features as IMDB
  – Optional crash recovery; optional template DB
• Temporary in-memory database
  – An ASE tempdb created as an IMDB
NO DISK READS OR WRITES IN ASE IMDB
• No crash recovery: on shutdown, IMDB contents are lost ("volatile database")
  – The database is always recreated upon restart (the only moment when disk reads occur)
  – A template database can be specified as a boot-time blueprint
• No transaction logging to disk: no transaction durability (the 'D' in ACID)
  – But IMDB data can be extracted and stored in a regular database (see later slides)
• With minimally-logged DML, transaction atomicity (the 'A' in ACID) can be relaxed
• Max. size is limited by the amount of available cache memory
  – Consider using RDDB if an IMDB would be too large
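For orientation, setting up an IMDB looks roughly like the sketch below. This is an approximate sketch only: the cache, device, and database names are illustrative, the step of creating an in-memory device on the cache is omitted, and the exact syntax should be checked against the ASE 15.5 documentation.

```sql
-- Sketch only: carve out an in-memory storage cache...
sp_cacheconfig 'imdb_cache', '2G', inmemory_storage
go
-- ...then (after creating an in-memory device on that cache; step omitted)
-- create the in-memory database without durability
create inmemory database pricing_imdb
    on pricing_dev = '1.8G'
    with durability = no_recovery
go
```

A reduced-durability disk-based database (RDDB) is created similarly, with a relaxed durability setting on a regular database.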
ASE 15: RELAXED ACID vs. PERFORMANCE
• ASE 15.5 IMDB offers better performance in exchange for relaxed ACID properties
  – Not suitable for all applications! Full ACID is required by many business applications
• Bottlenecks addressed in ASE 15.5 IMDB:
  – Disk writes
  – Transaction logging overhead
• Workload that may benefit from IMDB:
  – Write-oriented, transaction-logging intensive: high-frequency OLTP; large batch updates
  – Read-only workload will likely gain less from ASE 15.5 IMDB, unless disk reads happen often
ASE 15.5 IMDB USE CASES
Use case                                        Examples
Low latency and high concurrency access         • Reference data, compliance data (trading systems)
to read-mostly data                             • Customer data, inventory data (e-commerce operations)
High volume processing of data streams/         • Capital markets (quotes and trades, newsfeeds)
feeds, generating derived data                  • Monitoring systems, dashboards, key performance indicators
Staging/batch processing                        • Data cleansing of raw data for eventual persistence to disk
                                                • Long-running batch jobs operating on data copies
Distributed computing                           • Provision ASE data to the point of action
Diskless alternative for temporary objects      • Running DSS workloads that generate many ASE worktables
(e.g., ASE #temp tables, worktables)            • #temp tables for task co-ordination/synchronization
                                                • Transient application data (e.g., shopping carts)

Benefits:
• Low latency querying with high concurrency rates
• High volume data consumption/ingest rate
• Save on disk resources for transient data
• Integrates into existing ASE environments
ATTRACTIONS OF ASE IMDB
• Elegance: ASE IMDB is an ASE database, fully integrated in ASE
  – Additional system complexity is limited compared with IMDBs from other vendors
• Lower TCO: ASE IMDB is not another independent infrastructure component
  – Which would require integration in the overall system...
  – ...and configuration, startup/shutdown...
  – ...and monitoring errorlogs, software upgrades...
• Better performance
• Full T-SQL functionality, full connectivity support (OCS, ODBC, JDBC)
• Zero disk footprint
IMDB PERFORMANCE
• ASE IMDB offers significant performance benefits
• The potential for performance improvement may depend on:
  – Type of query/application
  – Application concurrency
  – Number of ASE engines
  – The application's data access patterns
• ASE IMDB scales better over an increasing number of ASE engines than classic ASE databases
  – Due to reduced contention around the transaction log
• Customer test results: factor 10 better
• Internal tests: factor 1.2 - 6 better, depending on the type of query
ASE 15.5 IMDB
Summary
• Apply for high-frequency OLTP and batches doing big updates
  – Primarily optimizes away disk writes and transaction logging
• Better performance at the expense of relaxed ACID transactional semantics
• Medium effort required for deployment
  – Analyze applications for functionality that can run on ASE IMDB
  – Modify applications to relocate that functionality to an ASE IMDB
• Requires ASE 15.5
• ASE license option
Type your question in the online Q&A box, or call:
1-866-803-2143 (United States & Canada)
0800-777-0476 (Argentina)
0800-8911992 (Brazil)
01800-9-156448 (Colombia)
001-866-888-0267 (Mexico)
Password: SYBASE
Press 1
Thank you.
For further information on ASE 15.5: www.sybase.com/ASEextreme
BUSINESS LEVEL
  – What’s New in ASE 15.5 – Overview
  – ASE In-Memory Database Flash Video
  – IDC White Paper – Breaking the Disk Barrier with In-Memory DBMS Technology
TECHNICAL LEVEL
  – Getting Started With In-Memory Databases in ASE 15.5
  – Performance & Tuning for IMDBs
  – Advanced Backup Services: Tivoli Storage Manager Option
  – On-demand Technical Webcasts