PostgreSQL Portland
Performance Practice Project
Database Test 2 (DBT-2) Tuning
Mark Wong
markwkm@postgresql.org
Portland State University
May 14, 2009
Review from last time
__ __
/ \~~~/ \ . o O ( Questions? )
,----( oo )
/ \__ __/
/| (\ |(
^ \ /___\ /\ |
|__| |__|-"
Contents
◮ pgtune
◮ Physical disk layout
◮ Covering a few PostgreSQL configuration options - Grand Unified Configuration (GUC)
Next step!
__ __
/ \~~~/ \ . o O ( Run pgtune! )
,----( oo )
/ \__ __/
/| (\ |(
^ \ /___\ /\ |
|__| |__|-"
◮ http://pgfoundry.org/projects/pgtune/
◮ http://notemagnet.blogspot.com/2008/11/automating-initial-postgresqlconf.html
Test Parameters
For illustrative purposes, we’ll look at one thing first...
◮ PostgreSQL 8.3.5
◮ 1000 warehouses
Out of the Box - From a 25 disk RAID 0 device1
Response Time (s)
Transaction % Average : 90th % Total Rollbacks %
------------ ----- --------------------- ----------- --------------- -----
Delivery 3.99 11.433 : 12.647 45757 0 0.00
New Order 45.24 10.257 : 11.236 518945 5224 1.02
Order Status 4.00 9.998 : 11.023 45926 0 0.00
Payment 42.81 9.983 : 11.022 491102 0 0.00
Stock Level 3.95 9.855 : 10.837 45344 0 0.00
------------ ----- --------------------- ----------- --------------- -----
8574.99 new-order transactions per minute (NOTPM)
59.3 minute duration
0 total unknown errors
1041 second(s) ramping up
This result is from before running pgtune, to show whether it helps.
1http://207.173.203.223/~markwkm/community6/dbt2/baseline.1000.2/
pgtune - From a 25 disk RAID 0 device2
Response Time (s)
Transaction % Average : 90th % Total Rollbacks %
------------ ----- --------------------- ----------- --------------- -----
Delivery 3.99 8.715 : 10.553 48961 0 0.00
New Order 45.22 8.237 : 9.949 554565 5425 0.99
Order Status 3.95 8.037 : 9.828 48493 0 0.00
Payment 42.84 8.026 : 9.795 525387 0 0.00
Stock Level 3.99 7.829 : 9.563 48879 0 0.00
------------ ----- --------------------- ----------- --------------- -----
9171.46 new-order transactions per minute (NOTPM)
59.3 minute duration
0 total unknown errors
1041 second(s) ramping up
This result is from after running pgtune 0.3.
2http://207.173.203.223/~markwkm/community6/dbt2/pgtune.1000.100.3/
__ __
/ \~~~/ \ . o O ( Yaay, 7% improvement! )
,----( oo )
/ \__ __/
/| (\ |(
^ \ /___\ /\ |
|__| |__|-"
__ __
/ \~~~/ \ . o O ( Cut it in half! )
,----( oo )
/ \__ __/
/| (\ |(
^ \ /___\ /\ |
|__| |__|-"
Using 2 12-disk RAID 0 devices3
Response Time (s)
Transaction % Average : 90th % Total Rollbacks %
------------ ----- --------------------- ----------- --------------- -----
Delivery 3.99 27.093 : 30.836 30529 0 0.00
New Order 45.19 25.820 : 29.066 345845 3440 1.00
Order Status 3.97 25.536 : 28.976 30403 0 0.00
Payment 42.83 25.530 : 28.950 327761 0 0.00
Stock Level 4.01 25.147 : 28.516 30705 0 0.00
------------ ----- --------------------- ----------- --------------- -----
5717.71 new-order transactions per minute (NOTPM)
59.3 minute duration
0 total unknown errors
1041 second(s) ramping up
3http://207.173.203.223/~markwkm/community6/dbt2/split/split.1/
__ __
/ \~~~/ \ . o O ( Oops, 38% loss! )
,----( oo )
/ \__ __/
/| (\ |(
^ \ /___\ /\ |
|__| |__|-"
Using 1 disk for logs, 24-disk RAID 0 device for data4
Response Time (s)
Transaction % Average : 90th % Total Rollbacks %
------------ ----- --------------------- ----------- --------------- -----
Delivery 3.97 8.599 : 10.304 49125 0 0.00
New Order 45.27 7.978 : 9.553 560094 5596 1.01
Order Status 3.97 7.820 : 9.448 49138 0 0.00
Payment 42.78 7.770 : 9.383 529181 0 0.00
Stock Level 4.01 7.578 : 9.163 49558 0 0.00
------------ ----- --------------------- ----------- --------------- -----
9256.36 new-order transactions per minute (NOTPM)
59.3 minute duration
0 total unknown errors
1041 second(s) ramping up
4http://207.173.203.223/~markwkm/community6/dbt2/split/split.9/
__ __
/ \~~~/ \ . o O ( Umm, 1% improvement? )
,----( oo )
/ \__ __/
/| (\ |(
^ \ /___\ /\ |
|__| |__|-"
Lost in the noise...
Physical Disk Layout
◮ Not enough drives to do an interesting example...
◮ Entertain some old data? (Mostly lost now.)
Back in 2005 at a place called OSDL...
Systems donated by HP:
◮ HP Integrity rx4640
◮ 4x 1.5GHz Itanium 2
◮ 16 GB RAM
◮ 6 x 14-disk 15,000 RPM 3.5” SCSI Disk Arrays
◮ 6 SCSI disk controllers, 12 channels
Disks configured as single LUNs (1 disk RAID 0 or JBOD?), arrays in split bus configuration. Finer details may be lost forever at this point...
How much did physical disk layouts help back then?
PostgreSQL v8.0, guessing a 400 or 500 Warehouse database:
◮ Single 84 disk LVM2 striped volume: 3000 NOTPM
◮ Separated logs, tables, indexes using LVM2: 4078 NOTPM, a 35% increase in throughput.
◮ Only 35 drives used.
If memory serves, system was processor bound.
So what do I do now?
Actually have additional SCSI disk enclosures donated by Hi5.com. Attached 28 more drives, but some are used for storing results and data files.
MSA 70 Disk Configuration
◮ 1 disk for transaction log
◮ 9 disks for customer table
◮ 9 disks for stock table
◮ 3 disks for stock table primary key index
◮ 2 disks for customer table index
◮ 2 disks for orders table index
◮ 2 disks for order line table primary key index
◮ 2 disks for customer table primary key
◮ 1 disk for warehouse, district, history, item tables plus primary key indexes for warehouse, district and item tables
SCSI Disk Configuration
◮ 1 disk for history table
◮ 2 disk LVM2 striped volume for orders table
◮ 2 disk LVM2 striped volume for orders table primary key index
◮ 2 disk LVM2 striped volume for new order
◮ 2 disk LVM2 striped volume for new order table primary key index
◮ 2 disk LVM2 striped volume for order line table
What did pgtune do?
◮ Increase maintenance_work_mem from 16 MB to 1 GB - Sets the maximum memory to be used for maintenance operations.
◮ Increase checkpoint_completion_target from 0.5 to 0.9 - Time spent flushing dirty buffers during checkpoint, as fraction of checkpoint interval.
◮ Increase effective_cache_size from 128 MB to 22 GB - Sets the planner's assumption about the size of the disk cache.
◮ Increase work_mem from 1 MB to 104 MB - Sets the maximum memory to be used for query workspaces.
◮ Increase checkpoint_segments from 3 to 16 - Sets the maximum distance in log segments between automatic WAL checkpoints.
◮ Increase shared_buffers from 24 MB to 7680 MB - Sets the number of shared memory buffers used by the server.
◮ Increase max_connections from 250 [5] to 300 - Sets the maximum number of concurrent connections.
5 max_connections was already increased in order to run the test.
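Collected into postgresql.conf form for reference, the changes listed above look roughly like this (transcribed from the list; the exact file pgtune 0.3 emits may differ):

```
maintenance_work_mem = 1GB            # was 16MB
checkpoint_completion_target = 0.9    # was 0.5
effective_cache_size = 22GB           # was 128MB
work_mem = 104MB                      # was 1MB
checkpoint_segments = 16              # was 3
shared_buffers = 7680MB               # was 24MB
max_connections = 300                 # was 250
```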
effective_cache_size6
effective_cache_size should be set to an estimate of how much memory is available for disk caching by the operating system, after taking into account what's used by the OS itself, dedicated PostgreSQL memory, and other applications. This is a guideline for how much memory you expect to be available in the OS buffer cache, not an allocation! This value is used only by the PostgreSQL query planner to figure out whether plans it's considering would be expected to fit in RAM or not. If it's set too low, indexes may not be used for executing queries the way you'd expect.
Setting effective_cache_size to 1/2 of total memory would be a normal conservative setting, and 3/4 of memory is a more aggressive but still reasonable amount. You might find a better estimate by looking at your operating system's statistics. On UNIX-like systems, add the free+cached numbers from free or top to get an estimate. On Windows see the "System Cache" size in the Windows Task Manager's Performance tab. Changing this setting does not require restarting the database (HUP is enough).
6http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
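The two rules of thumb in the passage above can be sketched as a quick calculation (the 32 GB total RAM here is a made-up example figure, not the benchmark machine's actual memory):

```python
# effective_cache_size guideline from the text: 1/2 of RAM is a
# conservative starting point, 3/4 is aggressive but still reasonable.
def effective_cache_size_gb(total_ram_gb, aggressive=False):
    """Return a starting estimate for effective_cache_size in GB."""
    fraction = 3 / 4 if aggressive else 1 / 2
    return total_ram_gb * fraction

print(effective_cache_size_gb(32))                   # 16.0 (conservative)
print(effective_cache_size_gb(32, aggressive=True))  # 24.0 (aggressive)
```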
Effects of changing effective_cache_size

effective_cache_size   notpm
  3 GB                 12281.66 [7]
  9 GB                 12280.69 [8]
 12 GB                 12317.83 [9]
 15 GB                 12380.83 [10]
 18 GB                 12241.41 [11]
 21 GB                 12165.30 [12]
 22 GB                 12249.55 [13]
7http://207.173.203.223/~markwkm/community6/dbt2/m1500/m1500.ecs.3/
8http://207.173.203.223/~markwkm/community6/dbt2/m1500/m1500.ecs.9/
9http://207.173.203.223/~markwkm/community6/dbt2/m1500/m1500.ecs.12/
10http://207.173.203.223/~markwkm/community6/dbt2/m1500/m1500.ecs.15/
11http://207.173.203.223/~markwkm/community6/dbt2/merge/m1500.ecs.18/
12http://207.173.203.223/~markwkm/community6/dbt2/m1500/m1500.cs.1000/
13http://207.173.203.223/~markwkm/community6/dbt2/m1500/m1500.cs.1000/
Analysis of changing effective_cache_size
It doesn't appear that DBT-2 benefits from having a large effective_cache_size.
work_mem14
If you do a lot of complex sorts, and have a lot of memory, then increasing the work_mem parameter allows PostgreSQL to do larger in-memory sorts which, unsurprisingly, will be faster than disk-based equivalents.
This size is applied to each and every sort done by each user, and complex queries can use multiple working memory sort buffers. Set it to 50MB, and have 30 users submitting queries, and you are soon using 1.5GB of real memory. Furthermore, if a query involves doing merge sorts of 8 tables, that requires 8 times work_mem. You need to consider what you set max_connections to in order to size this parameter correctly. This is a setting where data warehouse systems, where users are submitting very large queries, can readily make use of many gigabytes of memory.
14http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
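The worked example from the passage, as a quick calculation (numbers straight from the text: 50MB × 30 users, and an 8-way merge sort):

```python
# work_mem is per sort, per backend, so worst-case memory is
# work_mem x concurrent users x sorts per query.
def worst_case_sort_mb(work_mem_mb, connections, sorts_per_query=1):
    return work_mem_mb * connections * sorts_per_query

print(worst_case_sort_mb(50, 30))    # 1500 MB, ~1.5 GB of real memory
print(worst_case_sort_mb(50, 1, 8))  # 400 MB for one 8-table merge sort
```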
Back to the SQL statements...
Run EXPLAIN against all of the SQL statements:
◮ Try to remove any table scans.
◮ Identify any joins that can benefit from indexes.
Note: Most SQL statements in this workload are not complex and benefit from primary key indexes.
__ __ / \
/ \~~~/ \ . o O | What are other obvious |
,----( oo ) | optimizations? |
/ \__ __/ \ /
/| (\ |(
^ \ /___\ /\ |
|__| |__|-"
In order to get good plans
__ __
/ \~~~/ \ . o O ( VACUUM ANALYZE first! )
,----( oo )
/ \__ __/
/| (\ |(
^ \ /___\ /\ |
|__| |__|-"
VACUUM ANALYZE
◮ VACUUM ANALYZE performs a VACUUM and then an ANALYZE for each selected table. This is a handy combination form for routine maintenance scripts. See ANALYZE for more details about its processing.15
◮ VACUUM reclaims storage occupied by dead tuples. In normal PostgreSQL operation, tuples that are deleted or obsoleted by an update are not physically removed from their table; they remain present until a VACUUM is done. Therefore it's necessary to do VACUUM periodically, especially on frequently-updated tables.
◮ ANALYZE collects statistics about the contents of tables in the database, and stores the results in the pg_statistic system catalog. Subsequently, the query planner uses these statistics to help determine the most efficient execution plans for queries.16
15http://www.postgresql.org/docs/8.3/interactive/sql-vacuum.html
16http://www.postgresql.org/docs/8.3/interactive/sql-analyze.html
EXPLAIN Crash Course17
EXPLAIN SELECT * FROM tenk1;
QUERY PLAN
-------------------------------------------------------------
Seq Scan on tenk1 (cost=0.00..458.00 rows=10000 width=244)
The numbers that are quoted by EXPLAIN are:
◮ Estimated start-up cost (Time expended before output scan can start, e.g., time to do the sorting in a sort node.)
◮ Estimated total cost (If all rows were to be retrieved, though they might not be: for example, a query with a LIMIT clause will stop short of paying the total cost of the Limit plan node's input node.)
◮ Estimated number of rows output by this plan node (Again, only if executed to completion.)
◮ Estimated average width (in bytes) of rows output by this plan node
17http://www.postgresql.org/docs/8.3/static/using-explain.html
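A quick sketch of pulling those four numbers out of an EXPLAIN line, using the tenk1 example above (this only handles the `(cost=A..B rows=N width=W)` suffix of a single node, not whole plan trees):

```python
import re

# Match the "(cost=A..B rows=N width=W)" annotation EXPLAIN prints
# after each plan node.
NODE_RE = re.compile(
    r"\(cost=(?P<startup>[\d.]+)\.\.(?P<total>[\d.]+) "
    r"rows=(?P<rows>\d+) width=(?P<width>\d+)\)"
)

def parse_node(line):
    """Extract start-up cost, total cost, row estimate, and row width."""
    m = NODE_RE.search(line)
    return {
        "startup_cost": float(m.group("startup")),
        "total_cost": float(m.group("total")),
        "rows": int(m.group("rows")),
        "width": int(m.group("width")),
    }

node = parse_node("Seq Scan on tenk1  (cost=0.00..458.00 rows=10000 width=244)")
print(node)
# {'startup_cost': 0.0, 'total_cost': 458.0, 'rows': 10000, 'width': 244}
```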
Delivery SQL Statements
EXPLAIN
SELECT no_o_id
FROM new_order
WHERE no_w_id = 1
AND no_d_id = 1;
QUERY PLAN
------------------------------------------------------------------------------------
Index Scan using pk_new_order on new_order (cost=0.00..1876.57 rows=1056 width=4)
Index Cond: ((no_w_id = 1) AND (no_d_id = 1))
(2 rows)
EXPLAIN
DELETE FROM new_order
WHERE no_o_id = 1
AND no_w_id = 1
AND no_d_id = 1;
QUERY PLAN
------------------------------------------------------------------------------
Index Scan using pk_new_order on new_order (cost=0.00..9.66 rows=1 width=6)
Index Cond: ((no_w_id = 1) AND (no_d_id = 1) AND (no_o_id = 1))
(2 rows)
Delivery SQL Statements
EXPLAIN
SELECT o_c_id
FROM orders
WHERE o_id = 1
AND o_w_id = 1
AND o_d_id = 1;
QUERY PLAN
-------------------------------------------------------------------------
Index Scan using pk_orders on orders (cost=0.00..12.89 rows=1 width=4)
Index Cond: ((o_w_id = 1) AND (o_d_id = 1) AND (o_id = 1))
(2 rows)
EXPLAIN
UPDATE orders
SET o_carrier_id = 1
WHERE o_id = 1
AND o_w_id = 1
AND o_d_id = 1;
QUERY PLAN
--------------------------------------------------------------------------
Index Scan using pk_orders on orders (cost=0.00..12.89 rows=1 width=38)
Index Cond: ((o_w_id = 1) AND (o_d_id = 1) AND (o_id = 1))
(2 rows)
Delivery SQL Statements
EXPLAIN
UPDATE order_line
SET ol_delivery_d = current_timestamp
WHERE ol_o_id = 1
AND ol_w_id = 1
AND ol_d_id = 1;
QUERY PLAN
-----------------------------------------------------------------------------------
Index Scan using pk_order_line on order_line (cost=0.00..69.95 rows=11 width=63)
Index Cond: ((ol_w_id = 1) AND (ol_d_id = 1) AND (ol_o_id = 1))
(2 rows)
EXPLAIN
SELECT SUM(ol_amount * ol_quantity)
FROM order_line
WHERE ol_o_id = 1
AND ol_w_id = 1
AND ol_d_id = 1;
QUERY PLAN
----------------------------------------------------------------------------------------
Aggregate (cost=69.92..69.94 rows=1 width=8)
-> Index Scan using pk_order_line on order_line (cost=0.00..69.89 rows=11 width=8)
Index Cond: ((ol_w_id = 1) AND (ol_d_id = 1) AND (ol_o_id = 1))
(3 rows)
Delivery SQL Statements
EXPLAIN
UPDATE customer
SET c_delivery_cnt = c_delivery_cnt + 1,
c_balance = c_balance + 1
WHERE c_id = 1
AND c_w_id = 1
AND c_d_id = 1;
QUERY PLAN
-------------------------------------------------------------------------------
Index Scan using pk_customer on customer (cost=0.00..12.90 rows=1 width=571)
Index Cond: ((c_w_id = 1) AND (c_d_id = 1) AND (c_id = 1))
(2 rows)
New Order SQL Statements
EXPLAIN
SELECT w_tax
FROM warehouse
WHERE w_id = 1;
QUERY PLAN
------------------------------------------------------------------------------
Index Scan using pk_warehouse on warehouse (cost=0.00..8.27 rows=1 width=4)
Index Cond: (w_id = 1)
(2 rows)
EXPLAIN
SELECT d_tax, d_next_o_id
FROM district
WHERE d_w_id = 1
AND d_id = 1;
QUERY PLAN
----------------------------------------------------------------------------
Index Scan using pk_district on district (cost=0.00..8.27 rows=1 width=8)
Index Cond: ((d_w_id = 1) AND (d_id = 1))
(2 rows)
New Order SQL Statements
EXPLAIN
UPDATE district
SET d_next_o_id = d_next_o_id + 1
WHERE d_w_id = 1
AND d_id = 1;
QUERY PLAN
-----------------------------------------------------------------------------
Index Scan using pk_district on district (cost=0.00..8.27 rows=1 width=98)
Index Cond: ((d_w_id = 1) AND (d_id = 1))
(2 rows)
EXPLAIN
SELECT c_discount, c_last, c_credit
FROM customer
WHERE c_w_id = 1
AND c_d_id = 1
AND c_id = 1;
QUERY PLAN
------------------------------------------------------------------------------
Index Scan using pk_customer on customer (cost=0.00..12.89 rows=1 width=19)
Index Cond: ((c_w_id = 1) AND (c_d_id = 1) AND (c_id = 1))
(2 rows)
New Order SQL Statements
EXPLAIN
INSERT INTO new_order (no_o_id, no_d_id, no_w_id)
VALUES (-1, 1, 1);
QUERY PLAN
------------------------------------------
Result (cost=0.00..0.01 rows=1 width=0)
(1 row)
EXPLAIN
INSERT INTO orders (o_id, o_d_id, o_w_id, o_c_id, o_entry_d, o_carrier_id, o_ol_cnt, o_all_local)
VALUES (-1, 1, 1, 1, current_timestamp, NULL, 1, 1);
QUERY PLAN
------------------------------------------
Result (cost=0.00..0.02 rows=1 width=0)
(1 row)
EXPLAIN
SELECT i_price, i_name, i_data
FROM item
WHERE i_id = 1;
QUERY PLAN
---------------------------------------------------------------------
Index Scan using pk_item on item (cost=0.00..8.28 rows=1 width=62)
Index Cond: (i_id = 1)
(2 rows)
New Order SQL Statements
EXPLAIN
SELECT s_quantity, s_dist_01, s_data
FROM stock
WHERE s_i_id = 1
AND s_w_id = 1;
QUERY PLAN
------------------------------------------------------------------------
Index Scan using pk_stock on stock (cost=0.00..23.67 rows=1 width=67)
Index Cond: ((s_w_id = 1) AND (s_i_id = 1))
(2 rows)
EXPLAIN
UPDATE stock
SET s_quantity = s_quantity - 10
WHERE s_i_id = 1
AND s_w_id = 1;
QUERY PLAN
-------------------------------------------------------------------------
Index Scan using pk_stock on stock (cost=0.00..23.68 rows=1 width=319)
Index Cond: ((s_w_id = 1) AND (s_i_id = 1))
(2 rows)
New Order SQL Statements
EXPLAIN
INSERT INTO order_line (ol_o_id, ol_d_id, ol_w_id, ol_number, ol_i_id, ol_supply_w_id, ol_delivery_d,
ol_quantity, ol_amount, ol_dist_info)
VALUES (-1, 1, 1, 1, 1, 1, NULL, 1, 1.0, ’hello kitty’);
QUERY PLAN
------------------------------------------
Result (cost=0.00..0.02 rows=1 width=0)
(1 row)
Order Status SQL Statements
EXPLAIN
SELECT c_id
FROM customer
WHERE c_w_id = 1
AND c_d_id = 1
AND c_last = ’BARBARBAR’
ORDER BY c_first ASC;
QUERY PLAN
--------------------------------------------------------------------------------------
Sort (cost=6659.64..6659.65 rows=4 width=16)
Sort Key: c_first
-> Index Scan using pk_customer on customer (cost=0.00..6659.60 rows=4 width=16)
Index Cond: ((c_w_id = 1) AND (c_d_id = 1))
Filter: ((c_last)::text = ’BARBARBAR’::text)
(5 rows)
EXPLAIN
SELECT c_first, c_middle, c_last, c_balance
FROM customer
WHERE c_w_id = 1
AND c_d_id = 1
AND c_id = 1;
QUERY PLAN
------------------------------------------------------------------------------
Index Scan using pk_customer on customer (cost=0.00..12.89 rows=1 width=34)
Index Cond: ((c_w_id = 1) AND (c_d_id = 1) AND (c_id = 1))
(2 rows)
Order Status SQL Statements
EXPLAIN
SELECT o_id, o_carrier_id, o_entry_d, o_ol_cnt
FROM orders
WHERE o_w_id = 1
AND o_d_id = 1
AND o_c_id = 1
ORDER BY o_id DESC;
QUERY PLAN
-------------------------------------------------------------------------------------
Index Scan Backward using pk_orders on orders (cost=0.00..5635.74 rows=1 width=20)
Index Cond: ((o_w_id = 1) AND (o_d_id = 1))
Filter: (o_c_id = 1)
(3 rows)
EXPLAIN
SELECT ol_i_id, ol_supply_w_id, ol_quantity, ol_amount, ol_delivery_d
FROM order_line
WHERE ol_w_id = 1
AND ol_d_id = 1
AND ol_o_id = 1;
QUERY PLAN
-----------------------------------------------------------------------------------
Index Scan using pk_order_line on order_line (cost=0.00..69.89 rows=11 width=24)
Index Cond: ((ol_w_id = 1) AND (ol_d_id = 1) AND (ol_o_id = 1))
(2 rows)
Payment SQL Statements
EXPLAIN
SELECT w_name, w_street_1, w_street_2, w_city, w_state, w_zip
FROM warehouse
WHERE w_id = 1;
QUERY PLAN
-------------------------------------------------------------------------------
Index Scan using pk_warehouse on warehouse (cost=0.00..8.27 rows=1 width=67)
Index Cond: (w_id = 1)
(2 rows)
EXPLAIN
UPDATE warehouse
SET w_ytd = w_ytd + 1.0
WHERE w_id = 1;
QUERY PLAN
-------------------------------------------------------------------------------
Index Scan using pk_warehouse on warehouse (cost=0.00..8.27 rows=1 width=88)
Index Cond: (w_id = 1)
(2 rows)
Payment SQL Statements
EXPLAIN
SELECT d_name, d_street_1, d_street_2, d_city, d_state, d_zip
FROM district
WHERE d_id = 1
AND d_w_id = 1;
QUERY PLAN
-----------------------------------------------------------------------------
Index Scan using pk_district on district (cost=0.00..8.27 rows=1 width=69)
Index Cond: ((d_w_id = 1) AND (d_id = 1))
(2 rows)
EXPLAIN
UPDATE district
SET d_ytd = d_ytd + 1.0
WHERE d_id = 1
AND d_w_id = 1;
QUERY PLAN
-----------------------------------------------------------------------------
Index Scan using pk_district on district (cost=0.00..8.28 rows=1 width=98)
Index Cond: ((d_w_id = 1) AND (d_id = 1))
(2 rows)
Payment SQL Statements
EXPLAIN
SELECT c_id
FROM customer
WHERE c_w_id = 1
AND c_d_id = 1
AND c_last = ’BARBARBAR’
ORDER BY c_first ASC;
QUERY PLAN
--------------------------------------------------------------------------------------
Sort (cost=6659.64..6659.65 rows=4 width=16)
Sort Key: c_first
-> Index Scan using pk_customer on customer (cost=0.00..6659.60 rows=4 width=16)
Index Cond: ((c_w_id = 1) AND (c_d_id = 1))
Filter: ((c_last)::text = ’BARBARBAR’::text)
(5 rows)
EXPLAIN
SELECT c_first, c_middle, c_last, c_street_1, c_street_2, c_city, c_state, c_zip, c_phone, c_since,
c_credit, c_credit_lim, c_discount, c_balance, c_data, c_ytd_payment
FROM customer
WHERE c_w_id = 1
AND c_d_id = 1
AND c_id = 1;
QUERY PLAN
-------------------------------------------------------------------------------
Index Scan using pk_customer on customer (cost=0.00..12.89 rows=1 width=545)
Index Cond: ((c_w_id = 1) AND (c_d_id = 1) AND (c_id = 1))
(2 rows)
Payment SQL Statements
EXPLAIN
UPDATE customer
SET c_balance = c_balance - 1.0,
c_ytd_payment = c_ytd_payment + 1
WHERE c_id = 1
AND c_w_id = 1
AND c_d_id = 1;
QUERY PLAN
-------------------------------------------------------------------------------
Index Scan using pk_customer on customer (cost=0.00..12.90 rows=1 width=571)
Index Cond: ((c_w_id = 1) AND (c_d_id = 1) AND (c_id = 1))
(2 rows)
EXPLAIN
UPDATE customer
SET c_balance = c_balance - 1.0,
c_ytd_payment = c_ytd_payment + 1,
c_data = ’hello dogger’
WHERE c_id = 1
AND c_w_id = 1 AND c_d_id = 1;
QUERY PLAN
-------------------------------------------------------------------------------
Index Scan using pk_customer on customer (cost=0.00..12.90 rows=1 width=167)
Index Cond: ((c_w_id = 1) AND (c_d_id = 1) AND (c_id = 1))
(2 rows)
Payment SQL Statements
EXPLAIN
INSERT INTO history (h_c_id, h_c_d_id, h_c_w_id, h_d_id, h_w_id, h_date, h_amount, h_data)
VALUES (1, 1, 1, 1, 1, current_timestamp, 1.0, ’ab cd’);
QUERY PLAN
------------------------------------------
Result (cost=0.00..0.02 rows=1 width=0)
(1 row)
Stock Level SQL Statements
EXPLAIN
SELECT d_next_o_id
FROM district
WHERE d_w_id = 1 AND d_id = 1;
QUERY PLAN
----------------------------------------------------------------------------
Index Scan using pk_district on district (cost=0.00..8.27 rows=1 width=4)
Index Cond: ((d_w_id = 1) AND (d_id = 1))
(2 rows)
Stock Level SQL Statements
EXPLAIN
SELECT count(*)
FROM order_line, stock, district
WHERE d_id = 1
AND d_w_id = 1
AND d_id = ol_d_id
AND d_w_id = ol_w_id
AND ol_i_id = s_i_id
AND ol_w_id = s_w_id
AND s_quantity < 15
AND ol_o_id BETWEEN (1) AND (20);
QUERY PLAN
-----------------------------------------------------------------------------------------------------------
Aggregate (cost=5458.89..5458.90 rows=1 width=0)
-> Nested Loop (cost=0.00..5458.86 rows=13 width=0)
-> Index Scan using pk_district on district (cost=0.00..8.27 rows=1 width=8)
Index Cond: ((d_w_id = 1) AND (d_id = 1))
-> Nested Loop (cost=0.00..5450.46 rows=13 width=8)
-> Index Scan using pk_order_line on order_line (cost=0.00..427.82 rows=212 width=12)
Index Cond: ((ol_w_id = 1) AND (ol_d_id = 1) AND (ol_o_id >= 1) AND (ol_o_id <= 20))
-> Index Scan using pk_stock on stock (cost=0.00..23.68 rows=1 width=8)
Index Cond: ((stock.s_w_id = 1) AND (stock.s_i_id = order_line.ol_i_id) AND
(stock.s_quantity < 15::double precision))
Indexes to Create
CREATE INDEX i_orders
ON orders (o_w_id, o_d_id, o_c_id)
CREATE INDEX i_customer
ON customer (c_w_id, c_d_id, c_last, c_first, c_id)
Indexing effect on Payment and Order Status Transactions
EXPLAIN
SELECT c_id FROM customer
WHERE c_w_id = 1
AND c_d_id = 1
AND c_last = ’BARBARBAR’
ORDER BY c_first ASC;
QUERY PLAN
--------------------------------------------------------------------------------------
Sort (cost=6659.64..6659.65 rows=4 width=16)
Sort Key: c_first
-> Index Scan using pk_customer on customer (cost=0.00..6659.60 rows=4 width=16)
Index Cond: ((c_w_id = 1) AND (c_d_id = 1))
Filter: ((c_last)::text = ’BARBARBAR’::text)
(5 rows)
QUERY PLAN
----------------------------------------------------------------------------------------
Index Scan using i_customer on customer (cost=0.00..22.49 rows=4 width=17)
Index Cond: ((c_w_id = 1) AND (c_d_id = 1) AND ((c_last)::text = ’BARBARBAR’::text))
(2 rows)
Indexing effect on Order Status Transaction
SELECT o_id, o_carrier_id, o_entry_d, o_ol_cnt
FROM orders
WHERE o_w_id = 1
AND o_d_id = 1
AND o_c_id = 1
ORDER BY o_id DESC;
QUERY PLAN
-------------------------------------------------------------------------------------
Index Scan Backward using pk_orders on orders (cost=0.00..5635.74 rows=1 width=20)
Index Cond: ((o_w_id = 1) AND (o_d_id = 1))
Filter: (o_c_id = 1)
(3 rows)
QUERY PLAN
-------------------------------------------------------------------------------
Sort (cost=12.90..12.91 rows=1 width=20)
Sort Key: o_id
-> Index Scan using i_orders on orders (cost=0.00..12.89 rows=1 width=20)
Index Cond: ((o_w_id = 1) AND (o_d_id = 1) AND (o_c_id = 1))
(4 rows)
work_mem impact on DBT-2
EXPLAIN indicates that none of the SQL statements calls for sorts. The impact on DBT-2 is likely minimal.
maintenance_work_mem18
maintenance_work_mem is used for operations like vacuum. Using extremely large values here doesn't help very much, because you essentially need to reserve that memory for when vacuum kicks in, which takes it away from more useful purposes. Something in the 256MB range has anecdotally been a reasonably large setting here.
In 8.3 you can use log_temp_files to figure out if sorts are using disk instead of fitting in memory. In earlier versions you might instead just monitor the size of them by looking at how much space is being used in the various $PGDATA/base/<db oid>/pgsql_tmp files. You can see sorts to disk happen in EXPLAIN ANALYZE plans as well. For example, if you see a line like "Sort Method: external merge Disk: 7526kB" in there, you'd know a work_mem of at least 8MB would really improve how fast that query executed, by sorting in RAM instead of swapping to disk.
18http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
maintenance_work_mem impact on DBT-2
Vacuuming is a frequent event in DBT-2 because of the large number of INSERT, UPDATE and DELETE statements being run. I didn't know about using log_temp_files; I will have to experiment and see what the case is for this workload.
checkpoint_segments19
PostgreSQL writes new transactions to the database in files called WAL segments that are 16MB in size. Every time checkpoint_segments worth of them have been written, by default 3, a checkpoint occurs. Checkpoints can be resource intensive, and on a modern system doing one every 48MB will be a serious performance bottleneck. Setting checkpoint_segments to a much larger value improves that. Unless you're running on a very small configuration, you'll almost certainly be better setting this to at least 10, which also allows usefully increasing the completion target.
For more write-heavy systems, values from 32 (checkpoint every 512MB) to 256 (every 4GB) are popular nowadays. Very large settings use a lot more disk and will cause your database to take longer to recover, so make sure you're comfortable with both those things before large increases. Normally the large settings (>64/1GB) are only used for bulk loading. Note that whatever you choose for the segments, you'll still get a checkpoint at least every 5 minutes unless you also increase checkpoint_timeout (which isn't necessary on most systems).
19http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
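As a sanity check on the numbers quoted above, the checkpoint interval implied by a given checkpoint_segments setting is just segments × 16MB (a sketch; this ignores the extra recycled segments PostgreSQL keeps around between checkpoints):

```python
# Each WAL segment is 16 MB, so the amount of WAL written between
# checkpoints scales linearly with checkpoint_segments.
SEGMENT_MB = 16

def checkpoint_interval_mb(checkpoint_segments):
    return checkpoint_segments * SEGMENT_MB

print(checkpoint_interval_mb(3))    # 48 MB (the default)
print(checkpoint_interval_mb(32))   # 512 MB
print(checkpoint_interval_mb(256))  # 4096 MB = 4 GB
```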
Notes regarding checkpoint_segments
PostgreSQL settings to help:
◮ Time stamp log messages: log_line_prefix = "%t "
◮ Log checkpoints: log_checkpoints = on
PostgreSQL log when checkpoint_segments is 500
2009-05-11 13:58:51 PDT LOG: checkpoint starting: xlog
2009-05-11 14:02:02 PDT LOG: checkpoint complete: wrote 517024 buffers (49.3%); 1 transaction log file(s) added, 0 removed, 0 recycled; write=153.927 s, sync=36.297 s, total=190.880 s
Effects of changing checkpoint_segments

checkpoint_segments   notpm           Approximate       Approximate
                                      Frequency (min)   Duration (min)
    3                  8544.25 [20]    .25               .25
   10                  8478.93 [21]    .5                .5
  100                  9114.32 [22]   2                 2
  250                  9457.60 [23]   3                 3
  500                 10820.49 [24]   5                 3
 1000                 12249.55 [25]  10                 5
10000                  9252.43 [26]   FAIL              FAIL
20000                  9349.90 [27]   FAIL              FAIL
20http://207.173.203.223/~markwkm/community6/dbt2/m1500/m1500.cs.3/
21http://207.173.203.223/~markwkm/community6/dbt2/m1500/m1500.cs.10/
22http://207.173.203.223/~markwkm/community6/dbt2/m1500/m1500.cs.100/
23http://207.173.203.223/~markwkm/community6/dbt2/m1500/m1500.cs.250/
24http://207.173.203.223/~markwkm/community6/dbt2/merge/merge.15/
25http://207.173.203.223/~markwkm/community6/dbt2/m1500/m1500.cs.1000/
26http://207.173.203.223/~markwkm/community6/dbt2/merge/merge.16/
27http://207.173.203.223/~markwkm/community6/dbt2/merge/merge.17/
Learning about checkpoint_segments the hard way
Reminder: I have one 72 GB drive for the xlog, amounting to 68 GB of usable space with ext2.

10000 checkpoint_segments × 16MB = 160000MB = 156.25GB (1)
20000 checkpoint_segments × 16MB = 320000MB = 312.50GB (2)

PANIC: could not write to file "pg_xlog/xlogtemp.28123": No space left on device
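The arithmetic behind the PANIC is simple to reproduce: with 16MB segments, the two settings tried here imply a pg_xlog far larger than the 68GB log drive (this is a lower bound; pending and recycled segments make actual usage somewhat higher):

```python
# Minimum pg_xlog footprint for a checkpoint_segments setting,
# ignoring the extra segments kept around checkpoints.
SEGMENT_MB = 16

def min_xlog_gb(checkpoint_segments):
    """Lower bound on pg_xlog size in GB for a given setting."""
    return checkpoint_segments * SEGMENT_MB / 1024

print(min_xlog_gb(10000))  # 156.25 GB -- already past the 68 GB drive
print(min_xlog_gb(20000))  # 312.5 GB
```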
checkpoint_completion_target28
Starting with PostgreSQL 8.3, the checkpoint writes are spread out a bit while the system starts working toward the next checkpoint. You can spread those writes out further, lowering the average write overhead, by increasing the checkpoint_completion_target parameter to its useful maximum of 0.9 (aim to finish by the time 90% of the next checkpoint is here) rather than the default of 0.5 (aim to finish when the next one is 50% done). A setting of 0 gives something similar to the behavior of the earlier versions. The main reason the default isn't just 0.9 is that you need a larger checkpoint_segments value than the default for broader spreading to work well. For lots more information on checkpoint tuning, see Checkpoints and the Background Writer (where you'll also learn why tuning the background writer parameters, particularly those in 8.2 and below, is challenging to do usefully).
28http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
shared_buffers29
The shared_buffers configuration parameter determines how much memory is dedicated to PostgreSQL use for caching data. The defaults are low because on some platforms (like older Solaris versions and SGI) having large values requires invasive action like recompiling the kernel. If you have a system with 1GB or more of RAM, a reasonable starting value for shared_buffers is 1/4 of the memory in your system. If you have less RAM you'll have to account more carefully for how much RAM the OS is taking up; closer to 15% is more typical there.
29http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
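Checking the 1/4-of-RAM starting point against pgtune's earlier choice of 7680MB suggests a machine with roughly 30GB of RAM; that figure is an inference for illustration, not a stated spec of the benchmark host:

```python
# "1/4 of system memory" starting point for shared_buffers.
def shared_buffers_mb(total_ram_gb):
    return total_ram_gb * 1024 // 4

print(shared_buffers_mb(30))  # 7680 MB, matching pgtune's choice above
```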
Effects of changing shared_buffers on DBT-2
Ranged from approximately 10% to 90% of total physical memory.30
30http://207.173.203.223/~markwkm/community6/dbt2/m-sb/
shared_buffers FAIL when more than 23 GB
2009-03-28 14:20:13 PDT LOG: background writer process (PID 11001) was terminated by signal 9: Killed
2009-03-28 14:20:13 PDT LOG: terminating any other active server processes
2009-03-28 14:20:13 PDT WARNING: terminating connection because of crash of another server process
2009-03-28 14:20:13 PDT DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2009-03-28 14:20:13 PDT HINT: In a moment you should be able to reconnect to the database and repeat your command.
Final Notes
If something wasn't covered, it means its relevance was not explored, not that it wasn't relevant.
Materials Are Freely Available
◮ http://www.slideshare.net/markwkm
LaTeX Beamer (source)
◮ http://git.postgresql.org/gitweb?p=performance-tuning.git
Time and Location
When: 2nd Thursday of the month
Location: Portland State University
Room: FAB 86-01 (Fourth Avenue Building)
Map: http://www.pdx.edu/map.html
Coming up next time. . .
Tuning!
__ __
/ \~~~/ \ . o O ( Thank you! )
,----( oo )
/ \__ __/
/| (\ |(
^ \ /___\ /\ |
|__| |__|-"
Acknowledgements
Haley Jane Wakenshaw
__ __
/ \~~~/ \
,----( oo )
/ \__ __/
/| (\ |(
^ \ /___\ /\ |
|__| |__|-"
License
This work is licensed under a Creative Commons Attribution 3.0 Unported License. To view a copy of this license, (a) visit http://creativecommons.org/licenses/by/3.0/us/; or, (b) send a letter to Creative Commons, 171 2nd Street, Suite 300, San Francisco, California, 94105, USA.