TRANSCRIPT
AM18: ASA Internals: Query Execution and Optimization
Glenn Paulley, Development Manager ([email protected]), 2005
2
Contents
Query execution
  Overview
  Discussion of join methods and other relational operators
  Subquery/function memoization
  Access plan caching
Query optimization
  Overview
  Semantic query optimization
  New in 9.0: cost-based query rewrite optimization
  Selectivity estimation
    New techniques for estimating LIKE selectivity
    Improvements to histogram maintenance
  Join enumeration and index selection
ASA Index Consultant
Forthcoming features in Jasper and beyond
3
Optimizer overview: role of the optimizer
Select an efficient access plan for the given DML request
Typically queries, but UPDATE and DELETE statements also require optimization
Try to do so in a reasonable amount of time
Requires tradeoffs between optimization and execution time
Finding the 'optimal' plan is a misnomer; we typically desire only an adequate strategy, hence the goal is to avoid selecting poor strategies
Simple DML statements bypass the cost-based optimizer altogether
Uses heuristics for index selection
4
Optimizer overview: role of the optimizer
For the most part, the 9.0 optimizer makes its decisions based on minimizing a query's overall resource consumption
In 7.x and prior releases, the default was to try to minimize response time
Controlled by the OPTIMIZATION_GOAL connection option
Default is ALL-ROWS
FIRST-ROW setting available via the option, or per query using the WITH FASTFIRSTROW table hint
FIRST-ROW assumed if the query contains SELECT FIRST or TOP 1
Desired cursor semantics (SENSITIVE, DYNAMIC, INSENSITIVE, KEYSET-DRIVEN) also play a role
Query syntax independent
At least, that's the goal; optimization is an intractable problem
However: optimization is independent of table or predicate order in the original statement
5
Underlying assumptions
Adaptive Server Anywhere is designed to operate with minimal administration
Assumption: relevant statistics are available
Saved from the execution of previous queries, or
Captured during a LOAD TABLE statement, or
Captured with explicit CREATE STATISTICS
Virtual memory is often a scarce resource
Most access plan operators have alternative low-memory strategies
Some operators (recursive joins, hash inner join) have adaptive strategies to revert to index nested loop join when conditions warrant
Cache requirements may be greater than those of older ASA releases
6
Query execution in 9.0.2: Overview
[Diagram: the self-tuning optimizer shown among SQL Anywhere's self-management features, alongside physical database design and management tools, surrounding the SQL Anywhere database]
7
Query execution in 9.0.2: Overview
ASA 9.0.2 supports different physical implementations of various relational operators
Various extensions over those available in 8.x
Optimizer chooses amongst the applicable operators depending on cost, application semantics, and the amount of cache available
More complex rewritings are performed, now based on cost analysis
Offers significant performance improvements over previous releases
8
Query execution enhancements in ASA 9.0.1
Improved sequential and index scan performance
Minimize latching of rows at isolation levels 0 and 1
Reduce the latency for reading a row into memory
Locking can be performed in a scan node, rather than only in a LOCK node
Predicate evaluation pushed into data store whenever possible on sequential scans
Predicates can now appear in SCAN nodes in a graphical plan, rather than in a FILTER node
Improved performance of processing simple LIKE predicates
New WINDOW relational operator
9
Query execution enhancements in ASA 9.0.1
Improved hash join performance with better memory management – reduce the number of writes to temporary storage
Sorting combined with SELECT TOP in a single operator
PINNED_CURSOR_PERCENT_OF_CACHE option
Public-only; default of 10%
Introduced in 8.0.1
10
Query execution enhancements in 9.0.1
Adaptive query optimization techniques:
Low-memory strategies for sort and hash-based operators, including hash join, recursive hash join, hash DISTINCT, hash GROUP BY, hash INTERSECT and hash EXCEPT join operators
Reimplemented to support partially-pipelined hash and sort
Nested-loop adaptive strategy for RECURSIVE UNION
Used when inputs are small and an index can be exploited
Nested-loop alternative strategy for inner and left-outer hash joins
Used when the "build" side is small, and the probe side is a base table with a suitable index
11
Query execution enhancements in 9.0.2
Derived table operator
Used to implement the materialization of a derived table (or view) to ensure correct semantics, particularly with outer joins
Corrected long-standing problems with constant values in the results of outer joins – now constants are also null-supplied to match ANSI semantics
Note differences here with other systems, including ASE and MS SQL Server
12
Join methods in SQL Anywhere 9.0
Nested loop ( JNL )
Hybrid-hash ( JH )
Sort-merge ( JM )
Block-nested-loop ( JNB )
All have Left Outer Join variants
JNL and JH have semijoin (JE and JHE) and anti-semijoin (JNE and JHNE ) variants
FULL OUTER JOIN supported with nested loop (JNLFO), hash (JHFO), and sort-merge (JMFO) methods
Recursive queries supported with Recursive Hash Join (JHR)
LEFT OUTER JOIN variant is JHRO
Recursive nested-loop join is chosen when advantageous at execution time, depending on the size of the inputs and the availability of an appropriate index
13
Join methods: Hybrid-hash
Very efficient, especially when cache-constrained
Can only be used for equi-joins (inner or left-outer)
New in 9.0: now requires (only) at least one equijoin predicate, other predicates handled as residuals in a two-phase process
Constructs an in-memory hash table of the smaller of the two inputs
Doesn't require an index to be efficient
New in 9.0: for large joins, partitions are evicted one at a time
Other input is used to probe the hash table to look for matching rows
Pipelining is possible depending on the operators above the join in the access plan
Materialization (e.g. work table) is necessary for scrollable cursors
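The build/probe flow described above can be sketched in Python. This is a minimal in-memory illustration, not ASA's implementation; the table and column names are made up, and the partitioning/eviction machinery for large joins is omitted:

```python
from collections import defaultdict

def hash_join(build_rows, probe_rows, build_key, probe_key):
    """Minimal in-memory equijoin: hash the smaller (build) input,
    then stream the probe input against the hash table."""
    table = defaultdict(list)
    for row in build_rows:                 # build phase
        table[row[build_key]].append(row)
    for row in probe_rows:                 # probe phase (pipelined)
        for match in table.get(row[probe_key], []):
            yield {**match, **row}

# hypothetical rows for illustration
orders = [{"cust_id": 1, "qty": 5}, {"cust_id": 2, "qty": 3}]
custs  = [{"cust_id": 1, "state": "NY"}]
result = list(hash_join(custs, orders, "cust_id", "cust_id"))
```

Note that the probe side is pipelined: rows flow out as they are probed, which is why pipelining depends only on the operators above the join.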
14
[Diagram: hash join (JH*) operator with a build input feeding a partition store and Bloom filter, and a probe input]
15
Join methods: Sort-merge
More efficient than hash join if inputs are already sorted on the join attribute(s)
As with hash join, can only be used for equi-joins (inner, left-outer, or full-outer)
Requires at least one equijoin predicate
Sorts the two inputs by join attribute (if necessary)
Merges the two inputs in the classical way, backtracking as necessary when encountering duplicates
Pipelining of merge joins is possible, but materialization (work table) is necessary for scrollable cursors
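The classical merge with backtracking over duplicate runs can be sketched as follows (illustrative only; the row data is made up):

```python
def merge_join(lhs, rhs, key):
    """Merge two inputs sorted on `key`, backtracking over
    duplicate runs on the right-hand side (inner equijoin)."""
    lhs = sorted(lhs, key=lambda r: r[key])
    rhs = sorted(rhs, key=lambda r: r[key])
    out, i, j = [], 0, 0
    while i < len(lhs) and j < len(rhs):
        lk, rk = lhs[i][key], rhs[j][key]
        if lk < rk:
            i += 1
        elif lk > rk:
            j += 1
        else:
            j0 = j                       # remember start of the RHS run
            while j < len(rhs) and rhs[j][key] == lk:
                out.append({**lhs[i], **rhs[j]})
                j += 1
            i += 1
            # backtrack if the next LHS row has the same key
            if i < len(lhs) and lhs[i][key] == lk:
                j = j0
    return out

L = [{"id": 1, "a": "x"}, {"id": 1, "a": "y"}, {"id": 2, "a": "z"}]
R = [{"id": 1, "b": "p"}, {"id": 2, "b": "q"}]
rows = merge_join(L, R, "id")
```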
16
Sort-merge join
[Diagram: sort-merge join (JM) with Sort operators and work files on the LHS and RHS inputs]
17
Join methods: Block-nested loop
Can be used for inner, left-outer joins with at least one equality predicate
JNB requires an index on the RHS to be worthwhile
Reads a block of rows from the LHS input
Sorts the block by join attribute, eliminating duplicates
Probes the RHS input with each distinct join attribute value
Outputs matching rows
Can be more efficient than plain nested loop, depending on duplication factor of the LHS
LHS input is partly materialized
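The steps above can be sketched as a small Python routine, assuming the RHS is represented as a simple value-to-rows index (the index contents and row values are hypothetical):

```python
def block_nested_loop_join(lhs_rows, rhs_index, key, block_size=2):
    """Sketch of JNB: read the LHS in blocks; probe an index on the
    RHS once per *distinct* join value in the block."""
    out = []
    for start in range(0, len(lhs_rows), block_size):
        block = lhs_rows[start:start + block_size]
        # sort the block's join values and probe each distinct value once
        for value in sorted({row[key] for row in block}):
            matches = rhs_index.get(value, [])
            for lrow in block:
                if lrow[key] == value:
                    for rrow in matches:
                        out.append({**lrow, **rrow})
    return out

index = {300: [{"name": "Tee Shirt"}], 400: [{"name": "Visor"}]}
lhs = [{"prod_id": 300}, {"prod_id": 300}, {"prod_id": 400}]
joined = block_nested_loop_join(lhs, index, "prod_id")
```

The duplicate elimination within the block is what makes this cheaper than plain nested loop when the LHS has a high duplication factor: each distinct value probes the RHS only once per block.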
18
Block Nested-loop
[Diagram: block nested-loop join (JNB) with a sorted block (SortBlk) on the LHS probing the RHS]
19
Full Outer Join
Three join methods are supported:
Hash full outer ( JHFO )
Nested-loop full outer ( JNLFO )
Sort-merge full outer ( JMFO )
JHFO is usually more efficient; JMFO is most efficient if the inputs are already sorted
Both require at least one equijoin condition
JNLFO will rarely perform well with large inputs
Operator of "last resort"
LHS input cannot utilize an indexed strategy
Only alternative if the ON condition does not contain at least one equijoin condition
20
Bloom Filters and “star” joins
9.0.2 optimizer includes plan operators to construct and evaluate Bloom filters
Useful in star-join situations or in join queries that involve correlated subqueries or derived tables
Main idea: prevent the processing and/or materialization of intermediate result set rows that will not satisfy the query’s WHERE clause in any event
Bloom Filters are built using a HashFilter (HF) operator
Tests for values matching a Bloom filter are hashmap predicates, typically found in Filter plan nodes
21
Bloom Filters
A bitmap that represents matching values
Filter size depends on the domain involved
A '1' represents a (possible) match (there can be false positives)
A HashFilter operator may precede any blocking operator (one that reads its entire input before returning a row)
The entire intermediate result must be read to create the Bloom filter
Subsequent Filter operators in other parts of the plan can introduce hashmap predicates
If the corresponding bit of the bitmap is '0', a match is impossible
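A toy Bloom filter illustrating the bitmap semantics described above (the sizes and hash scheme are arbitrary choices for the sketch, not ASA's):

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hash functions over an m-bit bitmap.
    A 0 bit proves absence; a 1 bit only *suggests* presence."""
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _positions(self, value):
        # derive k independent bit positions from a cryptographic hash
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{value}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, value):
        for p in self._positions(value):
            self.bits[p] = 1

    def might_contain(self, value):
        return all(self.bits[p] for p in self._positions(value))

bf = BloomFilter()
for key in (300, 301, 400):
    bf.add(key)
```

A miss (`might_contain` returning False) is guaranteed correct, which is exactly why the optimizer can safely discard non-matching intermediate rows early.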
22
Grouping/duplicate elimination
SQL Anywhere 9.0 supports three different physical implementations:
hash-based: most efficient, but output isn't sorted
sort-based: cheapest if the input is already sorted
index-based: fastest if all data is cached, uses less memory
Optimizer chooses from these three alternatives based on cost
New in 9.0.1: clustered hash GROUP BY
Uses index characteristics to determine grouping characteristics of the underlying table
Only enabled by setting the option OPTIMIZATION_WORKLOAD
Default is 'Mixed'
Clustered hash GROUP BY is considered if set to 'OLAP'
23
Sorting
Two implementations:
index-based
Intermediate result is first materialized, then an index is built on that work table; rows are read in index sequence using a complete index scan
Technique used in all releases prior to 8.0
disk-based physical sort
Sorting is done in memory
If the input is too large, sorted runs are formed from individual partitions
Used for:
result set ordering
sort-merge join
sort-based group-by and duplicate elimination
24
Subquery/UDF Memoization
9.0 memoizes (caches) results of subqueries and user-defined functions on a per-request basis
Prior to each invocation, cache is accessed to determine if the function has been called previously with the same parameters
If the hit ratio is lower than 0.0002, then caching is abandoned
Results are saved starting with the fourth invocation of the function/subquery
Size of cache is fixed at four pages
An LRU replacement policy ensures the most recent invocations are saved
Can significantly improve CPU consumption, cache pressure, and overall elapsed time
Caching is not used for NOT DETERMINISTIC functions
Caching is avoided for functions/subqueries with:
any parameter or result value whose declared length is greater than 254 bytes (e.g. BLOB strings)
total size of parameter and result values exceeding 2560 bytes
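The caching policy above can be sketched as follows. The hit-ratio threshold echoes the slide's number, but the warm-up and eviction details are simplified and should not be read as ASA's actual behaviour:

```python
class MemoCache:
    """Sketch of per-request memoization for a deterministic function:
    cache results keyed by arguments, and stop caching when the hit
    ratio stays too low to pay for itself. The warmup count here is
    an illustrative assumption, not ASA's value."""
    def __init__(self, min_hit_ratio=0.0002, warmup=1000):
        self.store, self.calls, self.hits = {}, 0, 0
        self.min_hit_ratio, self.warmup = min_hit_ratio, warmup
        self.enabled = True

    def call(self, fn, *args):
        self.calls += 1
        if self.enabled and args in self.store:
            self.hits += 1
            return self.store[args]
        result = fn(*args)
        if self.enabled:
            self.store[args] = result
            if self.calls > self.warmup and self.hits / self.calls < self.min_hit_ratio:
                self.enabled, self.store = False, {}   # abandon caching
        return result

cache = MemoCache()
calls = []
def expensive(x):
    calls.append(x)      # record each real invocation
    return x * x

for _ in range(3):
    cache.call(expensive, 7)
# expensive() ran once; the other two calls were served from the cache
</imports>```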
25
Access plan caching
Introduced in the 8.0.1 release
Access plans for DML statements within stored procedures may be cached
New in 9.0: plans are now cached for queries that return a result set
Improves stored procedure execution time by eliminating time spent during query optimization
Only those statements whose plans do not change during a "training period" are candidates for plan caching
Simple statements: training period is two queries
Other statements: training period is ten queries
Plans are re-tested periodically on a logarithmic scale, initially every 20 invocations and at least every 640 invocations
New in 9.0.1: plans must be *identical* to enable caching
26
Access plan caching: caveats
Controlled through a new connection option MAX_PLANS_CACHED
default is 20
Plan caching is turned off if MAX_PLANS_CACHED is set to 0
Plans are not cached for statements referencing local or global temporary tables
Useful indicators:
SELECT db_property('QueryOptimized'), db_property('QueryBypassed'), db_property('QueryReused')
27
Query optimization
[Diagram: the self-tuning optimizer shown among SQL Anywhere's self-management features, alongside physical database design and management tools, surrounding the SQL Anywhere database]
28
The query optimization process
Parser converts query syntax into a parse tree representation
New in 9.0: quantified subquery predicates are no longer normalized into EXISTS predicates, which facilitates cost-based subquery rewritings
Heuristic semantic (rewrite) optimization
Parse tree converted into a query optimization graph
Selectivity analysis
Join enumeration, group-by, and order optimization performed for each subquery
Includes index selection, physical operator choices, cost-based rewrite optimizations, placement of Bloom filter constructors and predicates
Post-processing of chosen plan
Conversion of optimization structures to execution plan
Construct execution plan
29
Optimizer bypass
Simple DML statements are optimized heuristically, rather than in a cost-based manner
Done for SELECT, UPDATE, DELETE, and INSERT
Not done for INSERT FROM SELECT
A simple statement:
Single-table request: no joins, no nested queries
Only for ASA tables with primary keys (not proxy tables)
No complex operators (DISTINCT, UNION, aggregation)
Equality conditions to constants on PK columns
DELETE statements: no triggers or articles of any kind; no ORDER BY (9.0.2 and up)
UPDATE statements: no articles, COMPUTEd columns, CHECK constraints, triggers, or ORDER BY
Each occurrence updates the QUERY_BYPASSED statistic
30
Semantic query optimization
Also known as query rewrite optimization
Goal is to transform the original query into a syntactically different, but semantically equivalent form
Two flavours of rewrite optimizations in 9.0
Heuristic-based: premise is that the rewritten query will almost always lead to a better access strategy
New in 9.0: cost-based subquery decorrelation
31
Heuristic rewrite optimizations performed by Adaptive Server Anywhere 9.0
Semantic query optimization
conversion of outer to inner joins
inner join elimination
MIN()/MAX() optimization
OR, IN-list optimization
LIKE optimizations
unnecessary ‘distinct’ elimination
subquery unnesting
predicate normalization, inference and subsumption analysis
predicate pushdown in UNION or GROUPed views
32
Unnecessary distinct elimination
Eliminates any unnecessary duplicate elimination (DISTINCT) from a query
Based on analysis of derived functional dependencies and candidate keys
Analysis involves computing the transitive closure of conjunctive equality conditions in the WHERE clause and in ON conditions
33
Unnecessary distinct elimination
‘Distinct’ is eliminated from the main query block, subquery blocks, views, and derived tables
Series of Distinct query expressions (UNION, EXCEPT, INTERSECT) are optimized and reduced where possible
Examples:

select distinct * from product p

select distinct o.id, o.quantity, o.cust_id
from sales_order o, customer c
where o.cust_id = c.id and c.state = 'NY'
34
Subquery unnesting
Heuristic rewrite of nested queries as joins
Done to avoid nested iteration execution strategies, and take advantage of highly selective conditions in the subquery’s where clause
Three cases to consider:
Subquery theta-comparisons of the form 'column θ (subquery)'
EXISTS subqueries that can match at most one row from the outer block
EXISTS subqueries that can match more than one row
35
Subquery theta-comparisons
e.g. T.x = ( select R.x from R join S where …)
Conversion to join performed only if table constraints can guarantee that the subquery can produce at most one row for each row in the outer block
Otherwise the server must generate a run-time error (SQLSTATE 21W01, cardinality violation)
36
Subquery theta-comparisons
select p.*
from product p, sales_order_items s
where p.id = s.prod_id and s.id = 2001 and s.line_id = 1

select *
from product p
where p.id =
  (select s.prod_id from sales_order_items s
   where s.id = 2001 and s.line_id = 1)

PLAN: p<seq>: s<sales_order_items>
PLAN: s<sales_order_items> JNB p<product>
37
Exists subqueries (1 row)
If table constraints can guarantee that a conjunctive exists subquery can produce at most one row for each row in the outer block
Then it is converted to a (simple) inner join (and possibly simplified further)
Example:

select s.*
from product p, sales_order_items s
where p.id = s.prod_id and p.id = 300

select *
from sales_order_items s
where exists (select * from product p
              where s.prod_id = p.id and p.id = 300)
PLAN: s<seq>: p<seq>
PLAN: s<seq>
38
Exists subqueries (> 1 row)
If table constraints cannot guarantee that a conjunctive exists subquery produces at most one row for each row in the outer block
Then it is converted to an inner join combined with duplicate elimination
Example:

select distinct p.*
from product p, sales_order_items s
where p.id = s.prod_id and s.id = 2001

select *
from product p
where exists (select * from sales_order_items s
              where s.prod_id = p.id and s.id = 2001)

PLAN: p<seq>: s<product>
PLAN: DistH[ p<seq> JH* s<sales_order_items> ]
39
Exists subqueries (> 1 row)
Transformation of subqueries to joins tries to exploit highly selective predicates in the subquery’s where clause (or ON condition)
if additional tuples could be generated, introduce duplicate elimination above the join
The server can use row identifiers, if necessary, to distinguish between rows when primary keys are unavailable
in cases where the additional tuples are generated by a join to the inner table, can consider using a semi-join to avoid generating the duplicates
Operator chosen will either be JE (nested loops) or JHE (hash semi-join), depending on estimated execution costs
40
Inner join elimination
Adaptive Server Anywhere’s optimizer will eliminate a FK-PK or PK-PK inner join if the primary table does not contribute any meaningful expressions to the query
Designed to assist the optimization of queries over views
Not performed for updateable cursors
Example:
Select s.*
From sales_order_items s, product p
Where s.prod_id = p.id

PLAN: s<seq> JNB p<product>

Select s.*
From sales_order_items s
Where s.prod_id is not NULL

PLAN: s<seq>
41
Predicate analysis and inference
Converts search conditions to CNF if feasible
Otherwise, appends useful implicates (usually atomic predicates) to the original expression
Eliminates trivial tautologies (1=1) and contradictions (1=0)
Transforms or eliminates syntactic sugar:
X+0 = Y becomes X = Y
ISNULL(X,X) becomes X
X = X becomes X IS NOT NULL
generates transitive equality conditions:
e.g. X = Y and Y = Z implies X = Z
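Generating the transitive closure of conjunctive equalities is a small union-find exercise; here is an illustrative sketch (not the server's algorithm):

```python
def equality_closure(pairs):
    """Compute the transitive closure of conjunctive equality
    conditions with union-find, then emit every implied equality."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:          # path-halving find
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in pairs:                 # union each stated equality
        parent[find(a)] = find(b)
    groups = {}
    for col in parent:
        groups.setdefault(find(col), []).append(col)
    inferred = set()
    for members in groups.values():    # all pairs within a group are equal
        members.sort()
        for i in range(len(members)):
            for j in range(i + 1, len(members)):
                inferred.add((members[i], members[j]))
    return inferred

# X = Y AND Y = Z implies X = Z
closure = equality_closure([("X", "Y"), ("Y", "Z")])
```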
42
Predicate analysis and inference
Additional improvements over those in SQL Anywhere 8.0.0:
algorithm now generates useful inequality conditions
eliminates IS NULL and IS NOT NULL predicates when they are redundant
performs predicate subsumption analysis to eliminate/transform sets of inequalities
e.g. X < 5 and X < 10 becomes X < 5
e.g. X < 5 or X > 3 becomes X IS NOT NULL (if X is nullable); otherwise the predicate is simply eliminated altogether
subsumption analysis works with predicates involving both literal constants and host variables
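The first subsumption example (X < 5 AND X < 10 reduces to X < 5) amounts to keeping the tightest bound; a minimal sketch for upper bounds only:

```python
def subsume_upper_bounds(predicates):
    """Subsumption over a conjunction of '<' predicates on one column:
    X < 5 AND X < 10 reduces to X < 5 (keep the tightest bound).
    A real implementation handles all comparison operators and mixed
    conjunctions; this sketch covers only the '<' case."""
    bound = None
    for op, value in predicates:
        assert op == "<"
        bound = value if bound is None else min(bound, value)
    return [("<", bound)] if bound is not None else []

reduced = subsume_upper_bounds([("<", 5), ("<", 10)])
```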
43
Example: TPC-H Query 19
SELECT SUM(L_EXTENDEDPRICE * (1 - L_DISCOUNT)) AS REVENUE
FROM LINEITEM, PART
WHERE
( P_PARTKEY = L_PARTKEY
  and P_BRAND = 'BRAND#12'
  and P_CONTAINER in ('SM CASE', 'SM BOX', 'SM PACK', 'SM PKG')
  and L_QUANTITY >= 1 and L_QUANTITY <= 1 + 10
  and P_SIZE between 1 and 5
  and L_SHIPMODE in ('AIR', 'AIR REG')
  and L_SHIPINSTRUCT = 'DELIVER IN PERSON' )
or
( P_PARTKEY = L_PARTKEY
  and P_BRAND = 'BRAND#23'
  and P_CONTAINER in ('MED BAG', 'MED BOX', 'MED PKG', 'MED PACK')
  and L_QUANTITY >= 10 and L_QUANTITY <= 10 + 10
  and P_SIZE between 1 and 10
  and L_SHIPMODE in ('AIR', 'AIR REG')
  and L_SHIPINSTRUCT = 'DELIVER IN PERSON' )
or
( P_PARTKEY = L_PARTKEY
  and P_BRAND = 'BRAND#34'
  and P_CONTAINER in ('LG CASE', 'LG BOX', 'LG PACK', 'LG PKG')
  and L_QUANTITY >= 20 and L_QUANTITY <= 20 + 10
  and P_SIZE between 1 and 15
  and L_SHIPMODE in ('AIR', 'AIR REG')
  and L_SHIPINSTRUCT = 'DELIVER IN PERSON' );
44
Example: TPC-H Query 19
Inferred additional conditions:
and P_PARTKEY = L_PARTKEY
and L_SHIPINSTRUCT = 'DELIVER IN PERSON'
and L_SHIPMODE in ('AIR', 'AIR REG')
and L_QUANTITY <= 30
and P_CONTAINER in ('MED BAG', 'MED BOX', 'MED PKG', 'MED PACK', 'SM CASE', 'SM BOX', 'SM PACK', 'SM PKG', 'LG CASE', 'LG BOX', 'LG PACK', 'LG PKG')
and P_BRAND in ('BRAND#23', 'BRAND#12', 'BRAND#34')
and P_SIZE between 1 and 15
45
Predicate pushdown
Idea is to push predicates past a Group-by or Union operation
Reduces the cardinality of the inputs, thereby reducing the computation necessary to compute the result
Enables alternative access strategies
Can pay handsome dividends with queries over views
46
Predicate pushdown
Example:
Create View V (prod_id, total) as
  (Select prod_id, count(*)
   From sales_order_items s
   Group by s.prod_id);

Select * from V
Where prod_id = 300;
Predicate pushdown duplicates the condition in the query’s WHERE clause to an equivalent clause in the view
Can be thought of as modifying the view definition on-the-fly for this specific invocation
Reduces the number of rows input to the GROUP-BY operation
47
Outer join conversion
Outer joins are automatically converted to inner joins when conditions in the query make an inner join semantically equivalent
Enlarges the class of join strategies available for selection by the optimizer
Again, targeted towards applications that utilize complex views
Example:

select e.*, o.cust_id, o.order_date
from employee e left outer join sales_order o
  on e.emp_id = o.sales_rep
where o.order_date > '1999-04-04'
PLAN: o<order_date> JNL e<employee>
48
Min()/Max() optimization
The optimizer will convert a simple aggregation query over a single table to an indexed retrieval of a single row
Example:
SELECT min(quantity)
FROM product

is converted to:

SELECT min(quantity)
FROM (SELECT FIRST quantity
      FROM product
      ORDER BY quantity ASC) as temp(quantity)
49
Rewriting IN-lists
Predicates of the form

T.X in (C1, C2, …, Cn)

or equivalently

T.X = C1 or T.X = C2 or … or T.X = Cn

can be converted to a join with a single-column virtual table whose rows consist of each value Cj
Requires the presence of an appropriate index on table T with X as the leading column
Permits indexed retrieval of rows in T through an index on column X
50
Rewriting LIKE predicates
If a LIKE pattern consists solely of '%':
rewritten as IS NOT NULL; eliminated if the column or expression is not nullable
If a LIKE pattern contains no wildcards:
converted to an equality predicate
estimate selectivity as usual for equality conditions
If a LIKE pattern has a prefix of non-wildcard characters:
add an equivalent BETWEEN predicate
retain the LIKE predicate
estimate selectivity through an index probe or the column's histogram, if available
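Deriving the inferred range from a non-wildcard prefix can be sketched as follows. This is a byte-wise simplification; real collation handling of the upper bound is more subtle:

```python
def like_prefix_range(pattern):
    """Derive the range implied by a LIKE pattern with a non-wildcard
    prefix: 'abc%' implies the column is >= 'abc' and < 'abd'.
    Sketch only: ignores collations and the edge case where the last
    prefix character is already the maximum code point."""
    prefix = ""
    for ch in pattern:
        if ch in ("%", "_"):
            break
        prefix += ch
    if not prefix:
        return None
    # increment the last character to form an exclusive upper bound
    upper = prefix[:-1] + chr(ord(prefix[-1]) + 1)
    return prefix, upper

lo, hi = like_prefix_range("abc%")
# rows with column >= 'abc' AND column < 'abd' are the only candidates
```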
51
Cost-based query rewrite optimization
Cost-based transformation of nested queries
Performed for all types of subqueries contained in a predicate
Optimizer evaluates the difference between:
the cost of naïve nested iteration
decorrelation of the subquery and conversion to a join, semi-join, or anti-semi-join, possibly involving a WINDOW operator
Both nested-loop and hash-based variants of semijoins are considered
Decorrelation cannot be done in all cases; with multiple outer references the subquery is left alone
decorrelation combined with a Bloom filter predicate added to reduce the size of the intermediate result
52
Cost-based subquery optimization
Example: (TPC-H schema)
SELECT *
FROM supplier, partsupp
WHERE s_suppkey = ps_suppkey
  AND ps_availqty > ( SELECT 0.5 * sum(l_quantity)
                      FROM lineitem
                      WHERE l_partkey = ps_partkey
                        AND l_suppkey = ps_suppkey
                        AND l_shipdate BETWEEN date('1994-01-01') and date('1995-01-01') )
53
Cost-based subquery optimization
Alternative 1
The predicate containing the subselect is retained
However, it can be placed in the plan after the scan of the partsupp table, or after the join; the optimizer considers both options, and places the subquery where it will be evaluated the fewest number of times

SELECT *
FROM supplier, partsupp
WHERE s_suppkey = ps_suppkey
  AND ps_availqty > ( SELECT 0.5 * sum(l_quantity)
                      FROM lineitem
                      WHERE l_partkey = ps_partkey
                        AND l_suppkey = ps_suppkey
                        AND l_shipdate BETWEEN date('1994-01-01') and date('1995-01-01') )
54
Cost-based subquery optimization
Alternative 2:
Automatic subquery decorrelation
Subquery result computed for all possible rows
Any of four join methods can now be used to compute the final result

SELECT *
FROM supplier, partsupp,
     ( SELECT 0.5 * sum(l_quantity), l_partkey, l_suppkey
       FROM lineitem
       WHERE l_shipdate BETWEEN date('1994-01-01') and date('1995-01-01')
       GROUP BY l_partkey, l_suppkey ) as DT( asum, l_partkey, l_suppkey )
WHERE s_suppkey = ps_suppkey
  AND ps_partkey = DT.l_partkey
  AND ps_suppkey = DT.l_suppkey
  AND ps_availqty > DT.asum
55
Selectivity Estimation
Is the core of query optimization
56
Selectivity estimation
Accurate estimates usually lead to better access strategies
Perfection is infeasible, and really not required anyway
The selectivity of a predicate measures how often the predicate evaluates to TRUE
Typically expressed as a percentage; if highly selective, the percentage is low; if not, the percentage is high
Predicate selectivities are estimated using:
User estimates provided in the original SQL statement
Self-tuning column histograms
Run-time sampling of an index
"Magic" (default) values: nothing but a raw guess
We tend to prefer over-estimation to under-estimation
57
Selectivity estimation: Graphical plans
The source of selectivity estimates is shown in graphical plans:
GUESS (magic value)
STATISTICS (column histogram)
USER (user estimate)
INDEX: estimate derived from sampling the index at optimization time
ALWAYS: 100%, used with inferred predicates added to the original statement, e.g. X LIKE 'ABC' implies X = 'ABC'
COLUMN-COLUMN (join predicate using base table columns: join histogram)
COLUMN: for a unique column, selectivity is 1/cardinality; for non-unique columns, uses density information
COMPUTED (predicate combinations using AND or OR)
BOUNDED: upper and/or lower bound applied to an estimate, based on table cardinalities, the existence of referential integrity constraints, and/or estimates of NULL values
58
Selectivity estimation
Each atomic condition in a predicate is assumed to be independent
Column statistics are persistently stored in the database catalog
Histograms are built and maintained automatically during query processing
Different host variable values can also affect the choice of plan
Values are determined at optimization time (OPEN CURSOR) and will affect selectivity estimation
In 8.0.1 and above, plans are cached for DML statements in procedures
In 9.0, this includes queries in procedures that return result sets
59
User estimates of selectivity
Any predicate or group of predicates can be modified to explicitly specify a selectivity estimate
To group predicates, use parentheses
Overrides any estimate determined by the optimizer
With complex conditions, user estimates are distributed across the individual conditions
Actual selectivities of comparison conditions are still computed and saved in each column's histogram during query processing
User estimates are respected or ignored through the connection option USER_ESTIMATES
User estimates should be necessary only in relatively rare situations, or as workarounds
Use for:
predicates involving complex scalar or user-defined functions
join conditions over derived tables
60
Column histograms
"Self-tuning" histogram implementation
Combines support for frequent-value statistics with a self-tuning histogram implementation able to handle highly skewed distributions
Stored in the database catalog, in table SYSCOLSTAT
Flushed by an idle task (not logged) and at shutdown
The sa_flush_statistics() procedure guarantees persistence
Statistics are updated during processing of search conditions (WHERE, ON, HAVING)
Statistics are also created during LOAD TABLE and CREATE STATISTICS
Handles all data types, including strings
NULL selectivities are also saved
61
Column histograms
Histograms can be created on-the-fly during query processing
Can be explicitly created with the CREATE STATISTICS statement, but this should rarely be necessary if LOAD TABLE is used
CREATE STATISTICS is useful after bulk loads or other build operations
By default, the server will create histograms only for base tables of significant size, controlled by the option MIN_TABLE_SIZE_FOR_HISTOGRAM
In 9.x, the default changed to 100 rows from 1000 (8.x)
The option is deprecated in the 9.0.2 release
The DROP STATISTICS statement will drop a histogram, but the histogram may be recreated during subsequent processing
DROP OPTIMIZER STATISTICS is deprecated
62
Column histograms
New in 9.0: connection option UPDATE_STATISTICS to disable statistics collection on a per-connection basis
New in 9.0: histograms are modified as appropriate to mirror changes to column distributions due to INSERT, UPDATE or DELETE statements
Histograms are automatically preserved through UNLOAD/RELOAD (only with the 8.0 database format and higher)
This retains frequent-value statistics discovered during server activity
New in 9.0.1: options to avoid histogram creation on certain columns
New in 9.0.2: option to turn off modifying histograms with update DML statements
63
Column histogram internals
Histograms begin with 20 buckets; the number of buckets can grow or shrink depending on the characteristics of the distribution
Numeric values: traditional equi-depth histograms with bucket ranges, coupled with frequent-value statistics
Interpolation used to compute selectivities
If a value is one of the top N in the distribution, then that value warrants a frequent-value statistic
N varies between 10 and 100, depending on table size
String values: frequent-value stats only
Can only support equality, IS NULL, and LIKE predicates
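Interpolation over equi-depth buckets can be illustrated as follows (a simplified numeric sketch; bucket layout and frequent-value handling in ASA are more involved):

```python
def range_selectivity(buckets, lo, hi):
    """Estimate sel(lo <= X < hi) from an equi-depth histogram.
    Each bucket is (low, high, fraction_of_rows); within a bucket,
    values are assumed uniformly distributed (linear interpolation)."""
    sel = 0.0
    for b_lo, b_hi, frac in buckets:
        overlap = max(0.0, min(hi, b_hi) - max(lo, b_lo))
        if overlap > 0:
            sel += frac * overlap / (b_hi - b_lo)
    return sel

# four hypothetical buckets, each holding 25% of the rows
hist = [(0, 10, 0.25), (10, 20, 0.25), (20, 30, 0.25), (30, 40, 0.25)]
est = range_selectivity(hist, 5, 15)   # half of two buckets -> 0.25
```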
64
Column histogram maintenance
Tradeoff of overhead versus accuracy
Histogram modifications are not transactional
Histogram modifications are computed for each request independently
Histograms are created or modified by the following server activities:
the execution of a CREATE STATISTICS, CREATE INDEX, or LOAD TABLE statement
the modification (INSERT, UPDATE, or DELETE) of a minimum threshold of rows in a table
Minimum threshold: 5 x log10( table cardinality + 1 )
the evaluation of a predicate in any DML statement (INSERT, UPDATE, DELETE, SELECT) with which column distribution information can be refined
65
Column histogram maintenance
Histogram maintenance due to predicate evaluation:
a comparison { <, <=, =, >, >=, BETWEEN, IN } predicate that compares a column reference to a constant (literal constant, host variable, or outer reference if in a subquery)
an IS NULL predicate on a table column
a LIKE predicate on string and binary columns whose declared length is greater than 7 bytes
an equi-join predicate between the columns of two tables
Histogram modifications follow the data being queried
Greater accuracy with frequently-accessed data
Not every predicate will or can result in histogram modifications
Depends on the order of predicate evaluation in the access plan
66
Column histogram diagnosis
DBHIST utility can be used to display a histogram in an Excel spreadsheet for problem determination
New in 9.0: better representation of date, timestamp, string values in DBHIST output
Useful procedure:
sa_get_histogram( column, table, userid )
Outputs a column histogram as text
Specific predicate selectivities can still be queried through the ESTIMATE() functions
May be necessary for longer string data
A technical whitepaper is forthcoming
67
DBHIST Histogram output
Histogram for column prod_id in table sales_order_items
[Chart: per-bucket selectivity for prod_id over the domain (NULL, 300, 300-301, 301, 301-302, 302, ..., 700), distinguishing range buckets from singleton buckets; the range buckets show selectivity 0.0000, while the singleton values run from roughly 0.09 to 0.12 on a 0.0000-0.1400 selectivity axis]
68
Selectivity estimation: index probing
Essentially a form of run-time sampling
Used when:
the predicate references an indexed column and compares it to a constant value or host variable, and
a column histogram does not exist
Especially useful in cases where the server is started with the -k command line switch (do not update column statistics)
A maximum of two levels of the index are probed to determine the approximate selectivity
Can only be done if the column is first in the index
Index column order is important not only for processing, but for optimization as well
69
LIKE predicate selectivity – new in 9.0
Sampling technique used for all string domains < 255 bytes
For strings that contain no white space, frequency-value statistics are created
Otherwise, the engine collects LIKE statistics based on "word" boundaries
If a column value is "w1 w2 w3", statistics are kept on the patterns 'w1%', '%w1%', '%w2%', '%w3%' and '%w3'
String values (or patterns) are hashed; the most frequently occurring values are retained
If a value is not found, a small selectivity value (1%) is assumed
The "magic" (default) selectivity for LIKE remains at 0.5, used when the above conditions are not met
LIKE predicates with a non-wildcard prefix generate inferred range predicates, which can give more accurate estimates
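The word-boundary pattern generation described above can be sketched as follows. This is an illustration of the technique, not ASA's actual code: for a stored value "w1 w2 w3", statistics are kept on a prefix pattern for the first word, a contains-pattern for every word, and a suffix pattern for the last word.

```python
def like_patterns(value: str) -> list[str]:
    words = value.split()
    if len(words) <= 1:
        # no white space: a frequency statistic is kept on the value itself
        return [value]
    patterns = [words[0] + "%"]                 # prefix pattern for first word
    patterns += ["%" + w + "%" for w in words]  # contains-pattern for every word
    patterns.append("%" + words[-1])            # suffix pattern for last word
    return patterns

print(like_patterns("w1 w2 w3"))
# ['w1%', '%w1%', '%w2%', '%w3%', '%w3']
```

The output matches the pattern list on the slide for the value "w1 w2 w3".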
70
“Magic” (default) selectivity estimates
Varies by type of predicate and operator:
5% for equality
25% for inequalities
6% for IS NULL
6% for BETWEEN
1% for LIKE if a histogram exists and the pattern is unknown
50% otherwise (changed in 8.0.2)
50% for quantified subquery predicates (EXISTS, NOT EXISTS)
50% for subquery comparison predicates
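The default-selectivity table above can be encoded as a simple lookup. The values are from the slide; the predicate-type names are illustrative, not ASA's internal identifiers:

```python
MAGIC_SELECTIVITY = {
    "equality": 0.05,
    "inequality": 0.25,
    "is_null": 0.06,
    "between": 0.06,
    "like_with_histogram": 0.01,   # histogram exists, pattern unknown
    "like": 0.50,                  # otherwise (changed in 8.0.2)
    "quantified_subquery": 0.50,   # EXISTS, NOT EXISTS
    "subquery_comparison": 0.50,
}

def magic_selectivity(predicate_type: str) -> float:
    return MAGIC_SELECTIVITY[predicate_type]

print(magic_selectivity("between"))  # 0.06
```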
71
Estimating a single-table predicate
If the predicate involves a base column and a constant:
look up the statistic using a histogram, if available
If the histogram is not available, or was unable to compute an estimate, and there exists a matching index:
sample the data using index probes
Otherwise:
use the magic value estimate for this predicate
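The fallback chain above can be sketched as a small function. The interfaces are hypothetical, not ASA's: histogram lookup first, then index-probe sampling, then the magic default.

```python
def estimate_selectivity(histogram_lookup, index_probe, magic_default):
    """Each of the first two arguments is a callable returning a
    selectivity in [0, 1], or None if no estimate is available."""
    for source in (histogram_lookup, index_probe):
        estimate = source()
        if estimate is not None:
            return estimate
    return magic_default

# Example: no histogram, but an index probe reports 3% selectivity
sel = estimate_selectivity(lambda: None, lambda: 0.03, magic_default=0.05)
print(sel)  # 0.03
```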
72
Equijoin selectivity estimation
Join selectivity is defined as:

    Card( R join S ) / ( Card(R) * Card(S) )

For KEY joins, the optimizer assumes a uniform distribution of values in the referencing (foreign) table; the same estimate is used for both inner and outer KEY joins
Only takes effect if the USER_ESTIMATES option is set to ON
If the join is over base columns, and histograms are available for both, estimate the join selectivity using the distributions in the two histograms
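A worked example of the join-selectivity definition above, with illustrative numbers (not from the slides):

```python
def join_selectivity(card_join: int, card_r: int, card_s: int) -> float:
    # selectivity = Card(R join S) / (Card(R) * Card(S))
    return card_join / (card_r * card_s)

# A key/foreign-key join: every one of S's 50,000 rows matches exactly
# one of R's 1,000 rows, so Card(R join S) = 50,000 and the selectivity
# works out to 1 / Card(R).
print(join_selectivity(50_000, 1_000, 50_000))  # 0.001
```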
73
Join histograms – new in 9.0
Built as the product of two base-table column histograms
Contains buckets corresponding to overlapping regions of the two domains
Takes into account any singleton buckets, the size of the buckets, and the density information for each column
Bucket counts are aggregated to determine the "final" join selectivity
Much more accurate join selectivity estimates than in previous releases
Used for join predicates in an ON condition and those in a WHERE or HAVING clause
Otherwise a join predicate estimate defaults to 5%
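A simplified sketch of the join-histogram idea above: intersect the bucket boundaries of two column histograms, and within each overlapping region assume uniformity to estimate the number of matching row pairs. This illustrates the technique only; it is not ASA's implementation, and the distinct-value density model here is a deliberate simplification.

```python
def join_cardinality(hist_r, hist_s):
    """Each histogram is a list of (low, high, row_count) buckets over a
    numeric domain; returns an estimate of Card(R join S) on that column."""
    total = 0.0
    for (rl, rh, rn) in hist_r:
        for (sl, sh, sn) in hist_s:
            lo, hi = max(rl, sl), min(rh, sh)
            if lo >= hi:
                continue  # buckets do not overlap
            # fraction of each bucket's rows falling in the overlap (uniformity)
            rows_r = rn * (hi - lo) / (rh - rl)
            rows_s = sn * (hi - lo) / (sh - sl)
            # assume the number of distinct join values in the overlap
            # equals its width (a crude density assumption)
            total += rows_r * rows_s / (hi - lo)
    return total

hist_r = [(0, 10, 100)]   # 100 rows of R uniformly over [0, 10)
hist_s = [(5, 15, 200)]   # 200 rows of S uniformly over [5, 15)
print(round(join_cardinality(hist_r, hist_s), 1))  # 1000.0
```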
74
Diagnosis of selectivity problems
Many problems with poor join strategies can be traced to an inaccurate selectivity estimate
Regenerating the histogram with CREATE STATISTICS may correct out-of-date statistics
Also captures the number of distinct values in a column
One can always override the optimizer’s estimate with a user estimate when investigating a problem with any join strategy
DROP STATISTICS can temporarily restore defaults by deleting a column’s histogram, but subsequent processing will likely only restore the same values
Tables with sufficient “churn” may exceed the capabilities of dynamic statistics management – it may be necessary to explicitly CREATE STATISTICS in some cases
75
Diagnosis of selectivity problems
One can determine the optimizer’s estimate of the selectivity of a predicate through several means:
The ESTIMATE() and ESTIMATE_SOURCE() built-in functions
e.g. SELECT ESTIMATE( color, 'white', '=' ) FROM Product
DBHIST utility
Procedure sa_get_histogram()
Long or graphical plan output
GRAPHICAL PLAN WITH STATISTICS in DBISQL can illustrate differences between actual selectivities and their estimates
76
Graphical access plan display
Part of DBISQL (Java version only)
Access plan is displayed graphically as an operator tree
Base tables are at the leaves
Subqueries are displayed separately in their own pane
User can navigate through the tree, displaying *both* estimates and actual performance statistics
Enhancements in 9.0
Printing ability
Greater customization
More output detail
Operators/edges annotated with line thickness and colour to denote expensive operators
77
Join enumeration and index selection
78
Join enumeration
SQL Anywhere uses a depth-first search join enumeration algorithm to search for an optimal access plan
modified from that used in the 7.x releases – now performs join method selection and cost-based index selection
chooses between hash-, index-, and sort-based GROUP BY and duplicate elimination
Subqueries are placed in the plan in a fashion similar to that for tables
Cost-based rewriting of nested queries is also completely integrated into the 9.0 enumeration algorithm
79
Join enumeration
Access plans can vary depending on:
OPTIMIZATION_GOAL setting
Cursor type
Amount of buffer pool available at optimization time
Percentage of table data already in the database cache (buffer pool) (introduced in 8.0.1)
OPTIMIZATION_LEVEL setting (introduced in 8.0.1)
OPTIMIZATION_WORKLOAD setting (introduced in 9.0.1)
Isolation level of cursor or table
Capabilities of the underlying hardware
80
Join enumeration
In the worst case, for N tables there are N! possible linear (left-deep) join strategies
With 4 join methods and k indexes per table on average, we have
4^(N-1) * N! * (k+1)^N
possible access plans
Plans for views containing GROUP BY, DISTINCT, OR UNION operations are determined separately
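The worst-case formula above, 4^(N-1) * N! * (k+1)^N, can be evaluated for a few table counts to show how quickly the search space explodes (k = 2 indexes per table is an illustrative choice):

```python
from math import factorial

def plan_count(n_tables: int, k_indexes: int = 2, join_methods: int = 4) -> int:
    # join methods for each of the N-1 joins, times join orders,
    # times index choices (including "no index") per table
    return (join_methods ** (n_tables - 1)
            * factorial(n_tables)
            * (k_indexes + 1) ** n_tables)

for n in (2, 4, 8):
    print(n, plan_count(n))
# 2 tables -> 72 plans; 4 tables -> 124,416; 8 tables -> trillions
```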
81
Join enumeration
“Bushy” strategies are developed for queries containing FULL OUTER JOINs, or right-deep nested LEFT OUTER JOINs.
Pruning of the search space is cost-based, subject to invocation of the optimizer governor
Optimizer will choose a plan from those that contain the minimum number of Cartesian products
82
Predicate placement
Simple predicates are placed in the plan as soon as they can be evaluated (predicate pushdown)
New in 9.0 – predicate evaluation pushed into the scan operators to reduce the work involved with rejected rows
Predicates are ordered by selectivity within groups, as follows:
simple comparison conditions
LIKE predicates
predicates involving functions or subselects
quantified subquery predicates
idea is to eliminate unqualifying rows as quickly as possible
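The ordering rule above can be sketched as a two-level sort: cheap predicate classes first, and estimated selectivity within each class, so unqualifying rows are eliminated as early as possible. The group ranks come from the slide; the data structures are illustrative.

```python
GROUP_RANK = {
    "comparison": 0,              # simple comparison conditions
    "like": 1,                    # LIKE predicates
    "function_or_subselect": 2,   # functions or subselects
    "quantified_subquery": 3,     # quantified subquery predicates
}

def order_predicates(predicates):
    """predicates: list of (text, group, selectivity) tuples."""
    return sorted(predicates, key=lambda p: (GROUP_RANK[p[1]], p[2]))

preds = [
    ("name LIKE 'a%'", "like", 0.10),
    ("qty = 5", "comparison", 0.05),
    ("EXISTS(...)", "quantified_subquery", 0.50),
    ("price < 10", "comparison", 0.30),
]
print([p[0] for p in order_predicates(preds)])
# ['qty = 5', 'price < 10', "name LIKE 'a%'", 'EXISTS(...)']
```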
83
Optimizer governor
OPTIMIZATION_LEVEL connection option
Introduced in the 8.0.1 release
Permitted values: 0-15; default is 9
Controls the governor's quota of plan nodes that it considers
Allows a tradeoff of optimization time versus better execution strategies
The higher the number, the more plans are attempted
Level 0: no cost-based optimization; only one plan is considered
Does not affect heuristic optimization of simple requests that bypass cost-based optimization
84
Optimizer governor
In some cases the optimizer may have to choose between many possible plans that offer little, if any, cost improvement
The optimizer’s governor sets a maximum number of plan nodes that will be visited during enumeration
Based on the current setting of OPTIMIZATION_LEVEL
The maximum number of nodes will be reduced automatically if the optimizer discovers a sufficiently cheap access plan
Optimizer spends less effort on optimizing inexpensive queries
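A toy sketch of the governor behaviour above (illustrative only, not ASA's algorithm): enumeration charges each candidate plan against a node quota, and discovering a sufficiently cheap plan shrinks the remaining quota so little further effort is spent on an inexpensive query.

```python
def enumerate_with_governor(candidate_plans, node_quota, cheap_threshold):
    """candidate_plans: list of (cost, node_count) in enumeration order;
    returns the best cost found before the quota is exhausted."""
    best = float("inf")
    nodes_left = node_quota
    for cost, nodes in candidate_plans:
        if nodes_left <= 0:
            break                                    # governor halts enumeration
        nodes_left -= nodes
        if cost < best:
            best = cost
            if best < cheap_threshold:
                nodes_left = min(nodes_left, nodes)  # shrink the quota
    return best

plans = [(100, 10), (5, 10), (90, 10), (80, 10), (3, 10)]
# the cheap plan (cost 5) shrinks the quota, so plan (3, 10) is never visited
print(enumerate_with_governor(plans, node_quota=100, cheap_threshold=10))  # 5
```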
85
Index selection
Index selection on each table is re-evaluated for each join strategy permutation and join method chosen
Index selection is decided on the basis of cost, not simply selectivity estimates
Optimizer ignores duplicate indexes over the same set of columns
Look for performance warnings in the server console window
Bit maps and large block I/O substantially improve the performance of sequential scans
Hash joins are often significantly faster than simple nested loop or block nested loop joins
Pay-off increases with smaller cache sizes
86
Index selection
For older (5.X or 6.X) databases
Index selection decisions are based on cost, so large combined PK indexes will be avoided if another alternative is less expensive
Combined FK-PK indexes have lower fanout, due to the existence of additional FK entries in the B+tree
The optimizer exploits referential integrity constraints during join enumeration and plan costing
Primary key indexes are assumed to be approximately clustered
New in 9.0.1: WITH INDEX hints to force the use of an index with any table in a query’s FROM clause
87
Cost estimation
88
Cost model
ASA uses a mix of metrics to estimate the cost of an access plan:
Expected number of rows
Need for full or partial materialization
Amount of cache available at the time the request is optimized
Anticipated amount of CPU and I/O to compute the result for any one access plan
89
Cost model
A typical, modern machine’s capabilities are built-in to ASA’s default cost model
Cost model can be recomputed on demand using ALTER DATABASE CALIBRATE SERVER
Calibration is costly, but can be very useful when running ASA on a Windows CE device
Test application performance before making any changes permanent
If altered, the model can be restored to default settings
Stored in SYSOPTSTAT; can be viewed using procedure sa_get_dtt()
I/O characteristics can be evaluated separately for each dbspace
90
Optimization goal
User-specified choice of cost model
Enables the application to customize the optimization process to fit application requirements
Two mechanisms:
Database/connection option setting (OPTIMIZATION_GOAL)
New in 8.0.2: FASTFIRSTROW table hint in the SQL FROM clause
New in 8.0.2: ALL-ROWS is the default
FIRST-ROW may be better for online applications that FETCH very few rows of a result set
Recommended for most applications:
Use ALL-ROWS for most queries
Use FIRST-ROW on a per-query basis when conditions warrant
Make your choice based on application requirements
Use other tuning mechanisms (when necessary) to coerce the optimizer to choose a specific access plan
91
Optimization goal
Effect of FIRST-ROW/FASTFIRSTROW:
Optimizer tries to avoid strategies with materializing operators such as hash join
Optimizer tries to satisfy an ORDER BY clause with an index as much as possible
Overall cost of executing the query is less of a factor in choosing the access plan
In fact, overall execution time may be much greater when the entire result set is FETCHed
Especially when the database cache is cold and a significant number of pages must be read from disk (40:1 sequential-to-random speed ratio)
92
Costing table access
Number of rows in a table is always accurately known to the optimizer
Number of pages in each arena of a table is also accurately known
BLOB values are stored in an arena separate from other table pages
New in 8.0.1: server keeps track of table pages currently in cache when optimizing DML statements
plan can change considerably if cache is "hot"
usually involves more indexed retrieval
percentage of table in cache displayed in graphical plan
New in 9.0: sa_table_stats() is an undocumented procedure to return the number of table pages in cache and the number of page reads for each table
93
Scan factors and indexed retrieval
Scan factor: a measure of how much data must be retrieved to compute the result
Example: range predicate on a 300,000-row table, 4K pages, totaling 7928 pages (37 rows per page)
Assume cache is "cold"
Assume selectivity is 3.77%; corresponds to 11,328 rows
With a non-clustered index, the optimizer assumes a uniform distribution: 11,328 rows scattered over 7928 pages
Scan factor is 100% - likely considerably cheaper to perform a sequential scan
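A back-of-the-envelope check of the example above: with qualifying rows scattered uniformly, what fraction of pages hold at least one qualifying row? Treating each row placement independently (a simplification in the spirit of Yao's formula, not ASA's actual model), the fraction is 1 - (1 - s)^rows_per_page:

```python
def touched_page_fraction(selectivity: float, rows_per_page: int) -> float:
    # probability a page contains at least one qualifying row,
    # assuming each row qualifies independently with the given selectivity
    return 1.0 - (1.0 - selectivity) ** rows_per_page

frac = touched_page_fraction(0.0377, 37)
print(round(frac, 2))  # 0.76
```

Even this conservative model says roughly three quarters of all pages must be fetched, so the effective scan factor approaches 100% and a sequential scan is likely cheaper than scattered index lookups.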
94
Costing index access
A scan factor is computed for each table accessed via an index, based on selectivity estimates, the fanout of the index, its depth, and whether or not it is clustered
New in 8.0.2: physical characteristics (value clustering, distinct values) of the index are determined dynamically and kept up-to-date in real time
Database property "IndexStatistics"
Disabled by default in 8.0.2; need to use special syntax with the CREATE DATABASE statement
Otherwise: statistics are estimated based on sampling
Enabled in 9.0.0 and above
User estimates are used if specified and the setting of the USER_ESTIMATES option permits it
95
Costing index access
If the index is a key index, assumes uniform distribution of primary to foreign key values
Tempered by number of distinct values
If the index is clustered, the optimizer assumes that rows in table pages are in index sequence
Different table page access cost (fewer disk seeks)
Uniqueness and clustering of an index can be inferred from other indexes on the same table
ASA does not perform index intersection or index union to reduce the need to retrieve rows from base tables
96
Temporary tables
With a FIRST-ROW optimization goal, ASA will try to use an index whenever possible to avoid the need to materialize intermediate results
Example: duplicate elimination via use of a primary key index scan
Optimizer attempts to accurately cost the use of temporary tables through the use of the cost model
Histograms are created and maintained on temporary tables
Accuracy will depend on how the temporary tables are populated, and their level of "churn"
Again, explicit CREATE STATISTICS may be necessary
Cost is based on both the size of each row, and the number of expected rows in the temporary table
97
Order optimization
SQL Anywhere 9.0 will attempt to satisfy an ORDER BY clause through an index if FIRST-ROW optimization is desired
GROUP BY list is rearranged to match the ORDER BY clause if necessary
Redundant or literal columns are ignored
98
Order optimization
A physical sort will be necessary if:
The first table T of a join strategy is processed using an index scan, but the order property of the index scan does not match the ORDER BY clause
The ORDER BY clause cannot be matched with any index of T
The ORDER BY clause references more than one table
Indexes are traversed in either direction to match ASC or DESC qualifiers
99
ASA Index Consultant
Recommends indexes to improve query performance
Main idea is to improve application performance
Particularly useful when the DBA has limited experience
Permits "what-if" analysis on an existing query load and database instance
Allows DBA to find statements that are the most sensitive to the presence of an index
Can be used to find indexes that are unnecessary, i.e. those that are not utilized by the optimizer
Can be used to estimate disk storage overhead of additional indexes
Workload capture enables analysis of total application activity
Contains textual form of access plan chosen for each occurrence of a DML statement
100
ASA Index Consultant
Supports three different analysis approaches:
Automatic tuning from Sybase Central, using a workload
Automatic tuning from ISQL, using a single statement
Manual tuning from ISQL
101
Overview of the workload approach
Workload is captured during normal application execution
Goal 1 – Allow the application to execute without requiring significant intervention and without degrading performance
Goal 2 – Capture the important context in which the query is run
Includes cache size, optimization goal, and so on
Workloads are independent of an Analysis
Can analyze a captured workload multiple times, with different server configurations if desired
Different cache size, create new indexes, add referential constraints
During an analysis, the index consultant initiates the re-optimization of each DML statement, over multiple phases
Analysis results can persist in the database for review at a later time
102
Overview of workload approach
Phase 1: ASA query optimizer analyzes each statement during optimization, searching for plausible indexes that supply either
A better physical access path to a base table to satisfy a sargable predicate
An ordering of the rows in a base table that can be exploited by other operators in the plan
Advantageous virtual indexes are created accordingly
Patent-pending technique
Phase 2 through Phase M
Index consultant re-optimizes each statement, reducing the set of virtual indexes in each phase until the space constraint is met
Process can be halted after any phase
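The multi-phase reduction above can be sketched as a greedy loop. This is a simplified illustration, not the patent-pending algorithm: start from the full set of virtual indexes, and in each phase drop the least-beneficial index until the total size fits the space constraint.

```python
def reduce_virtual_indexes(indexes, space_limit):
    """indexes: dict of name -> (size, estimated_benefit); returns the
    set of surviving index names once total size meets the constraint."""
    surviving = dict(indexes)
    while sum(size for size, _ in surviving.values()) > space_limit:
        # each phase would re-optimize the workload (omitted here),
        # then drop the index with the lowest estimated benefit
        worst = min(surviving, key=lambda name: surviving[name][1])
        del surviving[worst]
    return set(surviving)

idx = {"ix_a": (40, 900), "ix_b": (30, 100), "ix_c": (50, 500)}
print(sorted(reduce_virtual_indexes(idx, space_limit=90)))  # ['ix_a', 'ix_c']
```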
103
Caveats
Index consultant deals only with secondary indexes
Indexes that embody RI constraints are retained, used where appropriate
Recommendations are based on estimates of virtual index characteristics and the quality of the workload
Not all server state is captured
Index Consultant assumes a stable state for the server, including the contents and size of the database cache
Not every parameter to the optimization process is saved with each query
Example: amount of available cache at optimization time for this specific request, based on current server activity
Recommendations for queries over global temp tables will likely be erroneous
Not currently possible to consider queries over local temporary tables
Local temporary tables referred to in captured queries will not exist for the connection running the Index Consultant
Query will get a syntax error; flagged in analysis results
104
Workload capture
Workload is captured during normal application execution
Only one capturing session can be active at one time
Periodic messages in the server window indicate that capturing is active
Can affect server performance to some degree
Server shutdown or abnormal termination ends capturing
Capturing is done by a separate internal connection to the server, and hence is independent of COMMITs or ROLLBACKs performed by the application
Statements are saved to dbo.ix_consultant_capture along with their state; state includes:
Timestamp
User name
Query text (with variables substituted)
Current option settings for OPTIMIZATION_GOAL, OPTIMIZATION_LEVEL, USER_ESTIMATES
Textual (short) access plan used for that statement
105
Workload capture
Statements include those executed in any context, for any connection – SELECT, INSERT, UPDATE, DELETE
Update DML statements are important – need to assess the impact of index maintenance for an application
Statements that utilize host variables or procedure variables are modified to include actual literal values
Similar queries are amalgamated; annotated with their collective frequency
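The amalgamation step above can be sketched as literal normalization plus counting: queries that differ only in their literal values collapse to a common template, giving each captured statement a collective frequency. The regex normalization here is illustrative, not ASA's actual matching.

```python
import re
from collections import Counter

def normalize(sql: str) -> str:
    sql = re.sub(r"'[^']*'", "?", sql)   # string literals -> placeholder
    sql = re.sub(r"\b\d+\b", "?", sql)   # numeric literals -> placeholder
    return re.sub(r"\s+", " ", sql).strip().lower()

captured = [
    "SELECT * FROM sales WHERE id = 1",
    "SELECT * FROM sales WHERE id = 42",
    "SELECT * FROM product WHERE color = 'white'",
]
freq = Counter(normalize(q) for q in captured)
print(freq["select * from sales where id = ?"])  # 2
```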
106
Using the Recommendations
All analysis results generated by the index consultant are persistently stored in the database by name
Analysis output:
Information on individual queries – user can assess individual plans with and without virtual indexes in place
Characteristics of proposed indexes, including which queries they benefit
Effect of updates on proposed indexes
Existing physical indexes that are not useful for the workload
Index Consultant can generate a script to create recommended indexes, and drop useless existing ones
Indexes should probably be renamed by the DBA
Index Consultant provides a measure of importance for each index, taking into account the total cost savings, the number of queries affected, and the size of the underlying table
Attempt to deal with the separability problem
107
Query processing in Jasper
Statements concerning iAnywhere Solutions' new products are forward-looking statements that involve a number of uncertainties and risks and cannot be guaranteed. Factors that could ultimately affect such statements are detailed from time to time in Sybase's Securities and Exchange Commission filings, including but not limited to its annual report on Form 10-K and its quarterly reports on Form 10-Q (copies of which can be viewed on the Company's website).
All of the information in this presentation are forward-looking statements, as defined above. As such, there is uncertainty associated with if or when any of these features will be added to the product.
108
Intra-query parallelism
In the Jasper release, the server will be able to exploit multiple CPUs to execute a single query
Parallelization is performed as a post-processing step of the left-deep optimization plan chosen by the optimizer
Option MAX_QUERY_TASKS controls the maximum degree of parallelism
Default is 0, leaving the decision up to the optimizer
Otherwise, sets the maximum number of threads for any query
Can be higher than the number of processors
109
Intra-query parallelism
Parallelism introduces a new operator, termed EXCHANGE, that provides a point within the plan that can be scheduled on a separate thread
Visible in the query's graphical plan
Parallel versions of relational operators:
parallel single-row group by
parallel hash group by
parallel hash join
parallel Bloom filter
Operators that can appear below an EXCHANGE and hence be executed in parallel with other portions of the plan:
nested loop join
table (sequential) scan
index scan
materialize (population of a work table)
110
Intra-query parallelism
Restrictions:
only SELECT statements are parallelized
only certain types of operators in a plan are parallelized
parallel operators have overhead that needs to be justified by the expected benefit
some constructs like subqueries, UDFs, and outer references limit how parallel plans can be constructed
queries will only be parallelized when there are enough tasks available
111
Materialized views
Precomputed result of a view stored as an ordinary base table
Offered by other vendors as materialized views (Oracle), indexed views (Microsoft), summary tables or materialized query tables (IBM)
CREATE MATERIALIZED VIEW statement
Base table used for the view instance can be queried but not updated directly
Materialized view is a redundant copy of the actual base table data
Must be refreshed
SQL Anywhere Jasper will support only batch refresh
Applications must be aware of the staleness of the view
Optimizer will automatically rewrite the query to use a view, rather than the base tables, if it is advantageous on a cost basis
A materialized view cannot be disabled nor invalidated
Cannot modify a view's dependent objects if it affects the view
112
Materialized views
CREATE MATERIALIZED VIEW [owner.]view-name [ ( column-name, ... ) ]
    [ IN dbspace-name ]
    [ BUILD { IMMEDIATE | DEFERRED } ]
    [ REFRESH { IMMEDIATE | DEFERRED } ]
    [ PCTFREE nn ]
    [ { DISABLE | ENABLE } USE IN OPTIMIZATION ]
    AS select-statement
    [ WITH CHECK OPTION ] [ SET HIDDEN ]

DROP { MATERIALIZED VIEW | TABLE } [owner.]mat-view-name
REFRESH { MATERIALIZED VIEW | TABLE } [owner.]mat-view-name

REFRESH processing:
Get an exclusive lock on the materialized view being populated, using the connection BLOCKING option
Get exclusive locks, without blocking, on all tables referenced by the materialized view
Truncate the view using the delete-by-page option, i.e., no logging of rows being deleted (CHECKPOINTs disabled)
Execute the query for the view and insert the result set into the view
Perform an implicit COMMIT
Does page-level undo
Does a CHECKPOINT before starting
113
Other Jasper changes
Reimplementation of all expressions to use a virtual machine to perform evaluation
Better performance, more opportunities for optimization
Many changes to options
Some default settings changed, other options eliminated outright
Deprecation of T-SQL outer joins
Not permitted by default; can be enabled with a new option
Support (and the option) will be eliminated entirely in the next major release after Jasper
114
After Jasper
Additional work on automatic statistics collection and discovery of inaccurate statistics
Immediate refresh for materialized view maintenance
Additional SQL/OLAP extensions: sequences, additional WINDOW functions (FIRST, LAST, LEAD), Boolean functions, GROUP BY DISTINCT support
Improved global query optimization for queries involving proxy tables (Remote Data Access)
Support for full-text searching of data in the database
115
iAnywhere at TechWave 2005
Ask the iAnywhere Experts on the Technology Boardwalk (exhibit hall)
• Drop in during exhibit hall hours and have all your questions answered by our technical experts!
• Appointments outside of exhibit hall hours are also available to speak one-on-one with our Senior Engineers. Ask questions or get your yearly technical review – ask us for details!
TechWave ToGo Channel
• TechWave ToGo, an AvantGo channel providing up-to-date information about TechWave classes, events, maps and more – now available via your handheld device!
• www.ianywhere.com/techwavetogo
iAnywhere Developer Community – A one-stop source for technical information!
• Access to newsgroups, new betas and code samples
• Monthly technical newsletters
• Technical whitepapers, tips and online product documentation
• Current webcast, class, conference and seminar listings
• Excellent resources for commonly asked questions
• All available express bug fixes and patches
• Network with thousands of industry experts
http://www.ianywhere.com/developer/
116
SQL Anywhere ‘Jasper’ Release
Learn more about 'Jasper', the upcoming SQL Anywhere release, loaded with features focused on:
• Enhanced data management including performance, data protection, and developer productivity
• Innovative data movement including manageability, flexibility and performance, and messaging
Attend the following sessions:
SQL Anywhere 'Jasper' New Feature Overview – Session SQL512 will be held Monday, August 22nd, 1:30pm
MobiLink 'Jasper' New Feature Overview – Session SQL515 will be held Wednesday, August 24th, 1:30pm
... and remember to look for sneak peeks in other sessions and morning education courses!
Register for the Jasper Beta program: www.ianywhere.com/jasper
117
Questions
?