Odds and ends about PostgreSQL's analytics features

This post is day 8 of the PostgreSQL Advent Calendar 2022.
Yesterday's post, by @quiver, was about how deadlocks increase with UPSERT in PostgreSQL.

~ A disclaimer ~

Although I deal with PostgreSQL every day, I almost never write business SQL; I stick to the design and operations side. I recently re-studied my SQL skills (mainly window functions), so here I'd like to share a look at PostgreSQL's recent analytics features, plus a few other things that caught my interest.

It's a "so what?" kind of topic, with no conclusion and no practical payoff, and I spent about 3 seconds wondering whether to post it, but I also want to keep it as a memo of things I'd like to pursue from here on.

Window functions: completely understood.


When I read 《达人学SQL彻底指南书 第2版》 (Learn SQL from a Master: The Definitive Guide, 2nd edition), the syntax rules finally became clear. The next hurdle on the way to meaningful analysis is choosing the right aggregate functions and understanding their mathematical and statistical meaning. The book uses moving averages as its example, and reading up on the topic made the world feel a little clearer: the kind of knowledge behind what the clever people were saying when they analyzed coronavirus data.
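For instance, the shape of a moving average using a window frame is roughly the following (a minimal sketch; the daily_sales table and its columns are hypothetical stand-ins, not taken from the book):

-- 7-day moving average: the current row plus the 6 preceding days
SELECT day,
       amount,
       avg(amount) OVER (ORDER BY day
                         ROWS BETWEEN 6 PRECEDING AND CURRENT ROW) AS moving_avg_7d
FROM daily_sales;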

I'll keep plodding along like this. That's fine, right?

PostgreSQL's analytics features

Having studied window functions and finally reached the entrance to writing analytic queries, I wanted to compare them alongside PostgreSQL's other analytics features. The title says "analytics features", but that spans both SQL-syntax topics such as window functions and server-side configuration and tuning topics such as parallel query. What they share is aggregate-style processing over an entire table or a wide range of one.

Category               Features
SQL syntax             GROUP BY and HAVING; window functions; GROUPING SETS, ROLLUP, CUBE; TABLESAMPLE; various aggregate functions; etc.
Server-side features   Parallel query; table partitioning; BRIN indexes; etc.

If you said GROUP BY is used all the time, that's true. But if the standard OLTP usage is an index narrowing things down to the qualifying rows, which are then aggregated for display, what I'm describing here is a GROUP BY that targets the whole table with no index involved. The same goes for BETWEEN conditions spanning a wide range. A sketch of the contrast follows.
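To make the contrast concrete, here is a rough sketch of the two access patterns against lineitem (assuming, hypothetically, an index on l_orderkey for the OLTP case):

-- OLTP pattern: an index narrows the scan to a handful of rows
SELECT * FROM lineitem WHERE l_orderkey = 12345;

-- Analytic pattern: no index helps; the whole table feeds the aggregate
SELECT date_part('year', l_shipdate), count(*)
FROM lineitem
GROUP BY 1;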

So I've personally long wondered: do these features actually work well together? For someone writing window functions, what do the parallel query improvements mean? What about partitioning? I wanted to dig a little deeper.

Trying it out on a simple topic

Recently one of my jobs was designing the partitioning of a huge table. First I combed through the frequently executed SQL, pulled out the WHERE conditions, and identified the columns suited to being partition keys. Then I studied which partitioning type fit each column (range, list, or hash), and for range partitioning, which boundaries would work best. To think that through, I investigated what data the column held and how it was distributed. I was told it was fine to throw ad-hoc SQL at the huge table, so I nervously wrote my queries.

As the running example, let's take the largest table in HammerDB's TPROC-H schema, lineitem, and query it while combining the various features. The table holds line-item sales history, several million to about ten million rows per year as shown below (at scale factor 10, lineitem is roughly 10 GB).

Year   Rows
1992   7606292
1993   9110910
1994   9116961
1995   9100951
1996   9116904
1997   9121710
1998   6825492

If the data is split by year, it looks like it should come out fairly even. Let me confirm that against the actual data in SQL.

The straightforward SQL

Since lineitem's l_shipdate column holds timestamps (YYYY-MM-DD HH:MI:SS.ususus), we can extract the year with the date_part function and GROUP BY that result.

SELECT date_part('year',l_shipdate),count(*)
FROM lineitem
GROUP BY 1;

 date_part |  count
-----------+---------
      1992 | 7606292
      1993 | 9110910
      1994 | 9116961
      1995 | 9100951
      1996 | 9116904
      1997 | 9121710
      1998 | 6825492

The execution plan at this point:

                                                                       QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------
 Finalize GroupAggregate  (cost=1479646.64..1481921.74 rows=2507 width=16) (actual time=4152.351..4154.872 rows=7 loops=1)
   Group Key: (date_part('year'::text, l_shipdate))
   ->  Gather Merge  (cost=1479646.64..1481802.66 rows=17549 width=16) (actual time=4152.340..4154.857 rows=56 loops=1)
         Workers Planned: 7
         Workers Launched: 7
         ->  Sort  (cost=1478646.52..1478652.79 rows=2507 width=16) (actual time=4147.755..4147.756 rows=7 loops=8)
               Sort Key: (date_part('year'::text, l_shipdate))
               Sort Method: quicksort  Memory: 25kB
               Worker 0:  Sort Method: quicksort  Memory: 25kB
          (snip)
           ☆  ->  Partial HashAggregate  (cost=1478473.64..1478504.98 rows=2507 width=16) (actual time=4147.702..4147.723 rows=7 loops=8)
                     Group Key: date_part('year'::text, l_shipdate)
                     Batches: 1  Memory Usage: 121kB
                     Worker 0:  Batches: 1  Memory Usage: 121kB
            (snip)
                ★   ->  Parallel Seq Scan on lineitem  (cost=0.00..1435620.89 rows=8570551 width=8) (actual time=0.098..2939.771 rows=7499902 loops=8)
 Planning Time: 0.128 ms
 Execution Time: 4154.936 ms

Scanning the 10 GB table split across 8 parallel workers takes only about 3 seconds. Each worker then runs GROUP BY on its own chunk, shrinking its result set down to 7 rows.

Above the ☆ node, Gather Merge combines the 7-row sets held by the workers into 56 rows, and Finalize GroupAggregate performs the real aggregation; but because ☆ has already whittled the data down so well, those steps take only a few milliseconds.

The parallel query parameters had been raised to allow roughly 10 cores, so simply writing a GROUP BY query kicks off a parallel query without any special care, which I think is a great result.
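For reference, the knobs involved are roughly the following postgresql.conf settings (values here are illustrative, not the exact configuration used for these tests):

# postgresql.conf -- illustrative values
max_worker_processes = 16             # pool of background workers (restart required)
max_parallel_workers = 10             # cap on parallel workers across the instance
max_parallel_workers_per_gather = 8   # workers per Gather / Gather Merge node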

Unreasonable SQL (1)

At first I forgot the date_part function existed and wrote some pretty unreasonable SQL lol. I tried hard to extract "the data contained in 1992" with subqueries and the like.

Here it is.

WITH "1992" AS ( SELECT '1992' AS year,count(*) AS count FROM lineitem WHERE l_shipdate between '1992-01-02 00:00:00' and '1992-12-31 23:59:59')
    ,"1993" AS ( SELECT '1993' AS year,count(*) AS count FROM lineitem WHERE l_shipdate between '1993-01-01 00:00:00' and '1993-12-31 23:59:59')
    ,"1994" AS ( SELECT '1994' AS year,count(*) AS count FROM lineitem WHERE l_shipdate between '1994-01-01 00:00:00' and '1994-12-31 23:59:59')
    ,"1995" AS ( SELECT '1995' AS year,count(*) AS count FROM lineitem WHERE l_shipdate between '1995-01-01 00:00:00' and '1995-12-31 23:59:59')
    ,"1996" AS ( SELECT '1996' AS year,count(*) AS count FROM lineitem WHERE l_shipdate between '1996-01-01 00:00:00' and '1996-12-31 23:59:59')
    ,"1997" AS ( SELECT '1997' AS year,count(*) AS count FROM lineitem WHERE l_shipdate between '1997-01-01 00:00:00' and '1997-12-31 23:59:59')
    ,"1998" AS ( SELECT '1998' AS year,count(*) AS count FROM lineitem WHERE l_shipdate between '1998-01-01 00:00:00' and '1998-12-31 23:59:59')

SELECT year,count from "1992"
UNION ALL SELECT year,count from "1993"
UNION ALL SELECT year,count from "1994"
UNION ALL SELECT year,count from "1995"
UNION ALL SELECT year,count from "1996"
UNION ALL SELECT year,count from "1997"
UNION ALL SELECT year,count from "1998"
ORDER BY year
;

The one virtue of this SQL is the satisfaction of neatly lined-up strings. Its drawbacks:

    • You can't write it unless you already know the data covers only 1992-1998
    • If the timestamps carry sub-second precision, BETWEEN doesn't work (inequality operators are fine; see the sketch after this list)
    • It's long
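Here is that sketch: a half-open range with inequalities is safe at any timestamp precision, whereas BETWEEN silently drops rows in the last second of the year.

-- BETWEEN ... '1992-12-31 23:59:59' misses a row shipped at 23:59:59.5;
-- the half-open form below is precision-proof:
SELECT count(*)
FROM lineitem
WHERE l_shipdate >= '1992-01-01 00:00:00'
  AND l_shipdate <  '1993-01-01 00:00:00';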

The execution plan this time is here.

                                                                                  QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Gather Merge  (cost=6745842.65..6745843.00 rows=3 width=40) (actual time=48330.190..48330.328 rows=7 loops=1)
   Workers Planned: 3
   Workers Launched: 3
   ->  Sort  (cost=6744842.61..6744842.61 rows=1 width=40) (actual time=42049.744..42049.750 rows=2 loops=4)
         Sort Key: ('1996'::text)
         Sort Method: quicksort  Memory: 25kB
         Worker 0:  Sort Method: quicksort  Memory: 25kB
         Worker 1:  Sort Method: quicksort  Memory: 25kB
         Worker 2:  Sort Method: quicksort  Memory: 25kB
     ☆  ->  Parallel Append  (cost=2251415.81..6744842.60 rows=1 width=40) (actual time=25118.415..42049.641 rows=2 loops=4)
               ->  Aggregate  (cost=2251999.42..2251999.43 rows=1 width=40) (actual time=21549.800..21549.801 rows=1 loops=1)
                 ★   ->  Seq Scan on lineitem  (cost=0.00..2228396.84 rows=9441033 width=0) (actual time=0.234..20920.489 rows=9116904 loops=1)
                           Filter: ((l_shipdate >= '1996-01-01 00:00:00'::timestamp without time zone) AND (l_shipdate <= '1996-12-31 23:59:59'::timestamp without time zone))
                           Rows Removed by Filter: 50882316
               ->  Aggregate  (cost=2251424.16..2251424.17 rows=1 width=40) (actual time=26309.258..26309.259 rows=1 loops=1)
                     ->  Seq Scan on lineitem lineitem_1  (cost=0.00..2228396.84 rows=9210928 width=0) (actual time=0.225..25713.021 rows=9121710 loops=1)
                           Filter: ((l_shipdate >= '1997-01-01 00:00:00'::timestamp without time zone) AND (l_shipdate <= '1997-12-31 23:59:59'::timestamp without time zone))
                           Rows Removed by Filter: 50877510
                       :
                     (the same Seq Scan and Aggregate block repeats for all 7 years)
 Planning Time: 0.594 ms
 Execution Time: 48330.426 ms

☆ Parallel Append, a relatively recent feature, executes in parallel the branches being appended, whether UNION arms or partition scans.
However, as ★ shows, each scan of the 10 GB table is handled by a single worker and takes over 20 seconds, whereas the earlier plain GROUP BY scan finished in about 3 seconds with 8 workers.
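As an aside, Parallel Append can be switched off with the planner parameter enable_parallel_append, which gives a quick way to see how much the node itself contributes (a sketch using two of the arms above):

SET enable_parallel_append = off;
EXPLAIN (ANALYZE)
SELECT '1997' AS year, count(*) FROM lineitem
 WHERE l_shipdate BETWEEN '1997-01-01 00:00:00' AND '1997-12-31 23:59:59'
UNION ALL
SELECT '1998', count(*) FROM lineitem
 WHERE l_shipdate BETWEEN '1998-01-01 00:00:00' AND '1998-12-31 23:59:59';
RESET enable_parallel_append;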

Unreasonable SQL (2)

Muttering "surely there's a better way to write this~", I made a trivial tweak that took all of 3 minutes.

SELECT year,count FROM ( SELECT '1992' AS year,count(*) AS count FROM lineitem WHERE l_shipdate between '1992-01-02 00:00:00' and '1992-12-31 23:59:59') AS "1992"
UNION
SELECT year,count FROM ( SELECT '1993' AS year,count(*) AS count FROM lineitem WHERE l_shipdate between '1993-01-01 00:00:00' and '1993-12-31 23:59:59') AS "1993"
UNION
SELECT year,count FROM ( SELECT '1994' AS year,count(*) AS count FROM lineitem WHERE l_shipdate between '1994-01-01 00:00:00' and '1994-12-31 23:59:59') AS "1994"
UNION
SELECT year,count FROM ( SELECT '1995' AS year,count(*) AS count FROM lineitem WHERE l_shipdate between '1995-01-01 00:00:00' and '1995-12-31 23:59:59') AS "1995"
UNION
SELECT year,count FROM ( SELECT '1996' AS year,count(*) AS count FROM lineitem WHERE l_shipdate between '1996-01-01 00:00:00' and '1996-12-31 23:59:59') AS "1996"
UNION
SELECT year,count FROM ( SELECT '1997' AS year,count(*) AS count FROM lineitem WHERE l_shipdate between '1997-01-01 00:00:00' and '1997-12-31 23:59:59') AS "1997"
UNION
SELECT year,count FROM ( SELECT '1998' AS year,count(*) AS count FROM lineitem WHERE l_shipdate between '1998-01-01 00:00:00' and '1998-12-31 23:59:59') AS "1998"
;

The idea is unchanged, but this execution plan surprised me.

                                                                                     QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=10227809.26..10227809.33 rows=7 width=40) (actual time=16130.753..16133.169 rows=7 loops=1)
   Group Key: ('1992'::text), (count(*))
   Batches: 1  Memory Usage: 24kB
☆  ->  Append  (cost=1460720.90..10227809.23 rows=7 width=40) (actual time=2308.403..16133.125 rows=7 loops=1)
         ->  Finalize Aggregate  (cost=1460720.90..1460720.91 rows=1 width=40) (actual time=2308.402..2308.539 rows=1 loops=1)
               ->  Gather  (cost=1460720.17..1460720.88 rows=7 width=8) (actual time=2308.299..2308.527 rows=8 loops=1)
                     Workers Planned: 7
                     Workers Launched: 7
                     ->  Partial Aggregate  (cost=1459720.17..1459720.18 rows=1 width=8) (actual time=2303.714..2303.715 rows=1 loops=8)
                      ★   ->  Parallel Seq Scan on lineitem  (cost=0.00..1457047.26 rows=1069164 width=0) (actual time=0.054..2240.762 rows=950786 loops=8)
                                 Filter: ((l_shipdate >= '1992-01-02 00:00:00'::timestamp without time zone) AND (l_shipdate <= '1992-12-31 23:59:59'::timestamp without time zone))
                                 Rows Removed by Filter: 6549116
         ->  Finalize Aggregate  (cost=1461336.42..1461336.43 rows=1 width=40) (actual time=2323.858..2323.961 rows=1 loops=1)
               ->  Gather  (cost=1461335.69..1461336.40 rows=7 width=8) (actual time=2323.751..2323.950 rows=8 loops=1)
                     Workers Planned: 7
                     Workers Launched: 7
                     ->  Partial Aggregate  (cost=1460335.69..1460335.70 rows=1 width=8) (actual time=2318.852..2318.853 rows=1 loops=8)
                           ->  Parallel Seq Scan on lineitem lineitem_1  (cost=0.00..1457047.26 rows=1315370 width=0) (actual time=0.051..2244.974 rows=1138864 loops=8)
                                 Filter: ((l_shipdate >= '1993-01-01 00:00:00'::timestamp without time zone) AND (l_shipdate <= '1993-12-31 23:59:59'::timestamp without time zone))
                                 Rows Removed by Filter: 6361039
                       :
          (the same Parallel Seq Scan and Partial Aggregate block repeats for all 7 years)
 Planning Time: 0.519 ms
 Execution Time: 16133.327 ms
(62 rows)

Now each UNION arm is scanned by 8 processes in parallel, with the workers' partial aggregates combined per arm, rather than the arms merely being appended in parallel. As a result, elapsed time fell from 48 seconds to 16, roughly a 3x speedup.

Many of us were taught, practically as part of compulsory education, that "UNION deduplicates the combined results and costs extra; if you just want to stack results side by side, UNION ALL is enough", but once parallel query enters the picture that may no longer hold. It deserves deeper investigation. Incidentally, SQL (1) and (2) above use a WITH clause and subqueries respectively, but internally they should be equivalent: swap the UNION and UNION ALL between them and whichever uses UNION still gets the UNION-style plan, with comparable elapsed time.
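The textbook point itself is easy to see in miniature: UNION inserts a deduplication node (HashAggregate or Unique) above the Append, while UNION ALL does not.

-- Quick check on dummy rows; compare the two plan shapes
EXPLAIN SELECT 1 AS x UNION     SELECT 2;
EXPLAIN SELECT 1 AS x UNION ALL SELECT 2;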

Unreasonable SQL (3)

This is actually the point where GROUP BY finally occurred to me, but having just learned window functions I wanted to write it in that style, and ended up with this nearly meaningless calculation.

SELECT year,max(count) FROM
    (
     SELECT date_part('year',l_shipdate) AS year,
            count(*) OVER (PARTITION BY date_part('year',l_shipdate)) AS count
     FROM lineitem
    ) AS li
GROUP BY year;

I hoped it could finish in a single scan, so maybe not much different from GROUP BY?

                                                                       QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=2423560.94..11744037.00 rows=200 width=16) (actual time=13321.240..44203.630 rows=7 loops=1)
   Group Key: (date_part('year'::text, lineitem.l_shipdate))
☆ ->  WindowAgg  (cost=2423560.94..10844127.16 rows=59993856 width=16) (actual time=7647.706..39173.654 rows=59999220 loops=1)
         ->  Gather Merge  (cost=2423560.94..9794234.68 rows=59993856 width=8) (actual time=4386.070..17289.370 rows=59999220 loops=1)
               Workers Planned: 7
               Workers Launched: 7
               ->  Sort  (cost=2422560.82..2443987.20 rows=8570551 width=8) (actual time=4362.319..5116.062 rows=7499902 loops=8)
                     Sort Key: (date_part('year'::text, lineitem.l_shipdate))
                     Sort Method: quicksort  Memory: 551372kB
                     Worker 0:  Sort Method: quicksort  Memory: 543433kB
                     Worker 1:  Sort Method: quicksort  Memory: 549387kB
                 (snip)
                 ★  ->  Parallel Seq Scan on lineitem  (cost=0.00..1435620.89 rows=8570551 width=8) (actual time=0.087..2978.888 rows=7499902 loops=8)
 Planning Time: 0.159 ms
 Execution Time: 44270.792 ms

The scan is fine; parallel query is working as expected.
However, a window function that targets all rows cannot run its aggregation in parallel, so the ☆ WindowAgg has to process the gathered result of roughly 60 million rows. Painful!

This time the window was so coarse that it pointlessly computed over all rows what a plain GROUP BY would produce, but considering how window functions are really meant to behave, sliding a window frame along to produce each result, I can accept the outcome.

Beyond that, after changing parameters and re-testing, I found that without parallel query the ★ scan takes tens of seconds and the window-function step tens more, roughly doubling the total time, so parallel query clearly earns its keep here.
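A session-local way to reproduce the no-parallel case, assuming this is roughly how such a comparison is done:

-- Forbid parallel plans in this session, then re-run the window query
SET max_parallel_workers_per_gather = 0;
EXPLAIN (ANALYZE)
SELECT year, max(count)
FROM (SELECT date_part('year', l_shipdate) AS year,
             count(*) OVER (PARTITION BY date_part('year', l_shipdate)) AS count
      FROM lineitem) AS li
GROUP BY year;
RESET max_parallel_workers_per_gather;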

Splitting the table into partitions

Here I'll also attach the results after the partitioning mentioned earlier.

Reusing unreasonable SQL (2) (the UNION version)

After partitioning by year, I set enable_partitionwise_aggregate to on and executed the query.
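A minimal sketch of what that layout might look like; the table name lineitem_part and the abbreviated column list are stand-ins, and the half-year boundaries are inferred from partition names such as lineitem_1992_1 / lineitem_1992_2 in the plan below:

-- Range partitioning on the shipping date (columns abbreviated)
CREATE TABLE lineitem_part (
    l_orderkey  bigint,
    l_shipdate  timestamp NOT NULL
    -- ... remaining lineitem columns ...
) PARTITION BY RANGE (l_shipdate);

CREATE TABLE lineitem_1992_1 PARTITION OF lineitem_part
    FOR VALUES FROM ('1992-01-01') TO ('1992-07-01');
CREATE TABLE lineitem_1992_2 PARTITION OF lineitem_part
    FOR VALUES FROM ('1992-07-01') TO ('1993-01-01');
-- ... and so on through 1998 ...

SET enable_partitionwise_aggregate = on;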

WITH "1992" AS ( SELECT '1992' AS year,count(*) AS count FROM lineitem WHERE l_shipdate between '1992-01-02 00:00:00' and '1992-12-31 23:59:59')
    ,"1993" AS ( SELECT '1993' AS year,count(*) AS count FROM lineitem WHERE l_shipdate between '1993-01-01 00:00:00' and '1993-12-31 23:59:59')
    ,"1994" AS ( SELECT '1994' AS year,count(*) AS count FROM lineitem WHERE l_shipdate between '1994-01-01 00:00:00' and '1994-12-31 23:59:59')
    ,"1995" AS ( SELECT '1995' AS year,count(*) AS count FROM lineitem WHERE l_shipdate between '1995-01-01 00:00:00' and '1995-12-31 23:59:59')
    ,"1996" AS ( SELECT '1996' AS year,count(*) AS count FROM lineitem WHERE l_shipdate between '1996-01-01 00:00:00' and '1996-12-31 23:59:59')
    ,"1997" AS ( SELECT '1997' AS year,count(*) AS count FROM lineitem WHERE l_shipdate between '1997-01-01 00:00:00' and '1997-12-31 23:59:59')
    ,"1998" AS ( SELECT '1998' AS year,count(*) AS count FROM lineitem WHERE l_shipdate between '1998-01-01 00:00:00' and '1998-12-31 23:59:59')

SELECT year,count from "1992"
UNION SELECT year,count from "1993"
UNION SELECT year,count from "1994"
UNION SELECT year,count from "1995"
UNION SELECT year,count from "1996"
UNION SELECT year,count from "1997"
UNION SELECT year,count from "1998"
ORDER BY year
;

The execution plan and elapsed time:

                                                                                        QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=1551507.59..1551507.66 rows=7 width=40) (actual time=3801.693..3803.776 rows=7 loops=1)
   Group Key: ('1992'::text), (count(*))
   Batches: 1  Memory Usage: 24kB
   ->  Append  (cost=196045.59..1551507.56 rows=7 width=40) (actual time=460.998..3803.733 rows=7 loops=1)
         ->  Finalize Aggregate  (cost=196045.59..196045.60 rows=1 width=40) (actual time=460.997..461.136 rows=1 loops=1)
               ->  Gather  (cost=98420.61..196045.57 rows=10 width=8) (actual time=460.895..461.128 rows=9 loops=1)
                     Workers Planned: 5
                     Workers Launched: 5
                     ->  Parallel Append  (cost=97420.61..195044.57 rows=2 width=8) (actual time=451.896..456.650 rows=2 loops=6)
                           ->  Partial Aggregate  (cost=97623.92..97623.93 rows=1 width=8) (actual time=446.380..446.381 rows=1 loops=3)
                                 ->  Parallel Seq Scan on lineitem_1992_2 lineitem_1  (cost=0.00..95720.57 rows=761341 width=0) (actual time=0.051..380.260 rows=1269063 loops=3)
                                       Filter: ((l_shipdate >= '1992-01-02 00:00:00'::timestamp without time zone) AND (l_shipdate <= '1992-12-31 23:59:59'::timestamp without time zone))
                           ->  Partial Aggregate  (cost=97420.61..97420.62 rows=1 width=8) (actual time=233.455..233.456 rows=1 loops=6)
                                 ->  Parallel Seq Scan on lineitem_1992_1 lineitem  (cost=0.00..95521.31 rows=759723 width=0) (actual time=0.063..199.148 rows=633184 loops=6)
                                       Filter: ((l_shipdate >= '1992-01-02 00:00:00'::timestamp without time zone) AND (l_shipdate <= '1992-12-31 23:59:59'::timestamp without time zone))
         ->  Finalize Aggregate  (cost=234621.36..234621.37 rows=1 width=40) (actual time=564.965..565.065 rows=1 loops=1)
               ->  Gather  (cost=117750.26..234621.34 rows=10 width=8) (actual time=564.411..565.056 rows=9 loops=1)
                     Workers Planned: 5
                     Workers Launched: 5
                     ->  Parallel Append  (cost=116750.26..233620.34 rows=2 width=8) (actual time=559.713..560.105 rows=2 loops=6)
                           ->  Partial Aggregate  (cost=116870.05..116870.06 rows=1 width=8) (actual time=558.645..558.646 rows=1 loops=3)
                                 ->  Parallel Seq Scan on lineitem_1993_2 lineitem_3  (cost=0.00..114591.56 rows=911394 width=0) (actual time=0.050..471.799 rows=1519285 loops=3)
                                       Filter: ((l_shipdate >= '1993-01-01 00:00:00'::timestamp without time zone) AND (l_shipdate <= '1993-12-31 23:59:59'::timestamp without time zone))
                           ->  Partial Aggregate  (cost=116750.26..116750.27 rows=1 width=8) (actual time=280.777..280.778 rows=1 loops=6)
                                 ->  Parallel Seq Scan on lineitem_1993_1 lineitem_2  (cost=0.00..114474.17 rows=910436 width=0) (actual time=0.034..237.637 rows=758843 loops=6)
                                       Filter: ((l_shipdate >= '1993-01-01 00:00:00'::timestamp without time zone) AND (l_shipdate <= '1993-12-31 23:59:59'::timestamp without time zone))

     (rest omitted)
 Planning Time: 1.643 ms
 Execution Time: 3814.569 ms

Because the yearly partitions line up with the subqueries' ranges, each table's scan time is cut down.

Total elapsed time dropped from 16 seconds to 3.8 seconds.

Meanwhile, the plain GROUP BY:

SELECT date_part('year',l_shipdate),count(*) FROM lineitem
GROUP BY 1;

                                                                                  QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------
 Finalize GroupAggregate  (cost=1545894.53..1549019.19 rows=2506 width=16) (actual time=5340.199..5342.802 rows=7 loops=1)
   Group Key: (date_part('year'::text, lineitem_10.l_shipdate))
   ->  Gather Merge  (cost=1545894.53..1548865.04 rows=24565 width=16) (actual time=5340.187..5342.789 rows=22 loops=1)
         Workers Planned: 5
         Workers Launched: 5
         ->  Sort  (cost=1544894.45..1544906.73 rows=4913 width=16) (actual time=5334.671..5334.679 rows=4 loops=6)
               Sort Key: (date_part('year'::text, lineitem_10.l_shipdate))
               Sort Method: quicksort  Memory: 25kB
               Worker 0:  Sort Method: quicksort  Memory: 25kB
               Worker 1:  Sort Method: quicksort  Memory: 25kB
               Worker 2:  Sort Method: quicksort  Memory: 25kB
               Worker 3:  Sort Method: quicksort  Memory: 25kB
               Worker 4:  Sort Method: quicksort  Memory: 25kB
               ->  Parallel Append  (cost=116887.63..1544593.22 rows=4913 width=16) (actual time=2321.327..5334.633 rows=4 loops=6)
                     ->  Partial HashAggregate  (cost=117018.51..117023.08 rows=365 width=16) (actual time=2417.284..2417.286 rows=1 loops=1)
                           Group Key: date_part('year'::text, lineitem_10.l_shipdate)
                           Worker 4:  Batches: 1  Memory Usage: 37kB
                           ->  Parallel Seq Scan on lineitem_1997_1 lineitem_10  (cost=0.00..112455.22 rows=912658 width=8) (actual time=0.049..1687.392 rows=4563289 loops=1)
                     ->  Partial HashAggregate  (cost=116907.82..116912.38 rows=365 width=16) (actual time=2432.840..2432.842 rows=1 loops=1)
                           Group Key: date_part('year'::text, lineitem_5.l_shipdate)
                           Worker 3:  Batches: 1  Memory Usage: 37kB
                           ->  Parallel Seq Scan on lineitem_1994_2 lineitem_5  (cost=0.00..112348.73 rows=911818 width=8) (actual time=0.058..1706.500 rows=4559091 loops=1)

             (snip)

 Planning Time: 0.953 ms
 Execution Time: 5343.265 ms

With the table partitioned, the parallel degree per scan comes out lower (5 workers planned here, versus 7 before), and having to walk every partition adds some overhead, so the elapsed time rose from about 4 seconds to 5.

Table sampling: the TABLESAMPLE clause

This has gotten long, so briefly: when aggregating over a huge table, the TABLESAMPLE clause lets you specify what percentage of the table to visit and compute only an approximate result. It cuts the number of blocks accessed, and if the sample is large enough, the effect on the aggregates becomes negligible. The COUNT used in this article's examples simply reflects the fraction of rows fetched, so with a 10 % sample the real count is about 10 times the result, and metrics like averages can be used as-is. But once parallel query is involved, things may not work out so well.
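For instance, scaling a sampled COUNT back up looks like this (a sketch; SYSTEM sampling picks whole blocks, so the result is approximate and varies from run to run):

-- Read ~10 % of the blocks, then multiply the counts by 10 as an estimate
SELECT date_part('year', l_shipdate) AS year,
       count(*) * 10 AS approx_rows
FROM lineitem TABLESAMPLE SYSTEM (10)
GROUP BY 1;

The query actually measured below samples 50 %: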

SELECT date_part('year',l_shipdate),count(*) FROM lineitem TABLESAMPLE SYSTEM(50)
GROUP BY 1;

                                                                                QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Finalize GroupAggregate  (cost=3482483.56..3482560.06 rows=200 width=16) (actual time=17152.488..17154.426 rows=7 loops=1)
   Group Key: (date_part('year'::text, lineitem.l_shipdate))
   ->  Gather Merge  (cost=3482483.56..3482554.56 rows=600 width=16) (actual time=17152.475..17154.412 rows=10 loops=1)
         Workers Planned: 3
         Workers Launched: 3
         ->  Sort  (cost=3481483.52..3481484.02 rows=200 width=16) (actual time=14360.652..14360.657 rows=2 loops=4)
               Sort Key: (date_part('year'::text, lineitem.l_shipdate))
               Sort Method: quicksort  Memory: 25kB
               Worker 0:  Sort Method: quicksort  Memory: 25kB
               Worker 1:  Sort Method: quicksort  Memory: 25kB
               Worker 2:  Sort Method: quicksort  Memory: 25kB
               ->  Partial HashAggregate  (cost=3481473.37..3481475.87 rows=200 width=16) (actual time=14360.621..14360.627 rows=2 loops=4)
                     Group Key: (date_part('year'::text, lineitem.l_shipdate))
                     Batches: 1  Memory Usage: 40kB
                     Worker 0:  Batches: 1  Memory Usage: 40kB
                     Worker 1:  Batches: 1  Memory Usage: 40kB
                     Worker 2:  Batches: 1  Memory Usage: 40kB
                     ->  Parallel Append  (cost=0.00..3336313.64 rows=29031946 width=8) (actual time=0.093..10723.944 rows=22505707 loops=4)
                           ->  Sample Scan on lineitem_1996_2 lineitem_10  (cost=0.00..696327.44 rows=6839955 width=8) (actual time=0.112..2840.457 rows=6823816 loops=1)
                                 Sampling: system ('50'::real)
                           ->  Sample Scan on lineitem_1993_1 lineitem_3  (cost=0.00..696236.45 rows=6838756 width=8) (actual time=0.110..2843.729 rows=6849432 loops=1)
                                 Sampling: system ('50'::real)
                           ->  Sample Scan on lineitem_1997_2 lineitem_12  (cost=0.00..696226.70 rows=6837976 width=8) (actual time=0.110..2833.507 rows=6838093 loops=1)
                                 Sampling: system ('50'::real)
                           ->  Sample Scan on lineitem_1993_2 lineitem_4  (cost=0.00..696020.82 rows=6836546 width=8) (actual time=0.122..2834.469 rows=6845848 loops=1)
                                 Sampling: system ('50'::real)
                           ->  Sample Scan on lineitem_1996_1 lineitem_9  (cost=0.00..695851.40 rows=6834512 width=8) (actual time=0.120..2802.484 rows=6837681 loops=1)
                                 Sampling: system ('50'::real)
                           ->  Sample Scan on lineitem_1994_2 lineitem_6  (cost=0.00..695848.88 rows=6833670 width=8) (actual time=0.092..2807.648 rows=6823304 loops=1)

This plan node is called a Sample Scan, and the scan portion gets no benefit from parallelism. In the GROUP BY SQL in the previous section, scanning one partition took only about 1 second, yet here it takes 2.8 seconds. The total elapsed time also swelled to 17 seconds.

Is that simply because, given the number of sample blocks that have to be read, parallelism wouldn't pay off, or is there some restriction that keeps Sample Scans from running in parallel? Something I'd like to dig into going forward.
