SQL join against a range of values (integer ranges, date ranges, etc.)

Posted 2024-07-26 12:51:18


I have two tables, the first is a big table (millions of rows), with the most interesting column being an integer I'll just call "key." I believe this solution would be identical for date or datetime ranges as well though.

The second table is much smaller (thousands of rows) with a bunch of attributes that are interesting to me which are defined over a range of keys. It has the following structure:

key_lower_bound : int
key_upper_bound : int
interesting_value1 : float
interesting_value2 : int
interesting_value3 : varchar(50)
...

I want to look up all of the values in the first table and "join" them with the second table based on whether the key in the first table falls inside the interval [key_lower_bound, key_upper_bound).

This is sort of like a sparse inner product or sparse dot product mathematically, but it's a little weird since there are these ranges involved in the second table. Still, if I were to write this up in code it would be an O(|first table| + |second table|) algorithm. I would keep a pointer into both (sorted) lists and walk through them each in order to determine if each key in the first table belonged in the range of the second table. The trick is that I am not iterating through the second list each time I examine a key in the first table because both lists are sorted.

When I construct the most obvious SQL query (involving checking that key is >= key_lower_bound and < key_upper_bound) it takes WAY too long.
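
Concretely, the naive query I mean is along these lines (table names here are placeholders for my actual schema):

-- placeholder schema; [key] is bracketed only because KEY is a reserved word
SELECT f.[key], s.interesting_value1, s.interesting_value2, s.interesting_value3
FROM FirstTable f
JOIN SecondTable s
  ON f.[key] >= s.key_lower_bound
 AND f.[key] <  s.key_upper_bound;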

There is some kind of quadratic behavior going on with that naive query, because I think the query engine is comparing each key against each row in the second table, when in reality, if the second table is sorted by key_lower_bound, this shouldn't be necessary. So I'm getting O(|first table| x |second table|) kind of behavior instead of the desired O(|first table| + |second table|) behavior.

Is it possible to get a linear SQL query to do this?


4 Answers

明媚如初 2024-08-02 12:51:18


Well I have played with the problem and have a couple of suggestions.
But first, let's populate a helper table:

-- helper table of sequential integers 1..1,048,576, built by repeated doubling
CREATE TABLE dbo.Numbers(n INT NOT NULL PRIMARY KEY)
GO
DECLARE @i INT;
SET @i = 1;
INSERT INTO dbo.Numbers(n) SELECT 1;
WHILE @i<1024000 BEGIN
  -- each pass doubles the number of rows in dbo.Numbers
  INSERT INTO dbo.Numbers(n)
    SELECT n + @i FROM dbo.Numbers;
  SET @i = @i * 2;
END;
GO

and test data, one minute commercials every minute for one year, and one customer call per minute for the same year:

CREATE TABLE dbo.Commercials(
  StartedAt DATETIME NOT NULL 
    CONSTRAINT PK_Commercials PRIMARY KEY,
  EndedAt DATETIME NOT NULL,
  CommercialName VARCHAR(30) NOT NULL);
GO
INSERT INTO dbo.Commercials(StartedAt, EndedAt, CommercialName)
SELECT DATEADD(minute, n - 1, '20080101')
    ,DATEADD(minute, n, '20080101')
    ,'Show #'+CAST(n AS VARCHAR(6))
  FROM dbo.Numbers
  WHERE n<=24*365*60;
GO
CREATE TABLE dbo.Calls(CallID INT 
  CONSTRAINT PK_Calls NOT NULL PRIMARY KEY,
  AirTime DATETIME NOT NULL,
  SomeInfo CHAR(300));
GO
INSERT INTO dbo.Calls(CallID,
  AirTime,
  SomeInfo)
SELECT n 
    ,DATEADD(minute, n - 1, '20080101')
    ,'Call during Commercial #'+CAST(n AS VARCHAR(6))
  FROM dbo.Numbers
  WHERE n<=24*365*60;
GO
CREATE UNIQUE INDEX Calls_AirTime
  ON dbo.Calls(AirTime) INCLUDE(SomeInfo);
GO

The original attempt to select all the calls made during commercials for three hours in the middle of the year is terribly slow:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;
GO

SELECT COUNT(*) FROM(
SELECT s.StartedAt, s.EndedAt, c.AirTime
FROM dbo.Commercials s JOIN dbo.Calls c 
  ON c.AirTime >= s.StartedAt AND c.AirTime < s.EndedAt
WHERE c.AirTime BETWEEN '20080701' AND '20080701 03:00'
) AS t;

SQL Server parse and compile time: 
   CPU time = 15 ms, elapsed time = 30 ms.

(1 row(s) affected)
Table 'Calls'. Scan count 1, logical reads 11, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 2, logical reads 3338264, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Commercials'. Scan count 2, logical reads 7166, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
   CPU time = 71704 ms,  elapsed time = 36316 ms.

The reason is simple: we know that commercials do not overlap, so one call
fits into at most one commercial, but the optimizer does not know that.
We know that commercials are short, but the optimizer does not know that either.
Both assumptions can be enforced as constraints, but the optimizer still will not make use of them.
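
For illustration, the duration assumption could be written as a check constraint like the sketch below, though as noted the optimizer still will not use it to narrow the range join:

-- sketch only, not part of the test script above
ALTER TABLE dbo.Commercials ADD CONSTRAINT CK_Commercials_Duration
  CHECK (EndedAt > StartedAt AND EndedAt <= DATEADD(minute, 15, StartedAt));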

Assuming that commercials are no longer than 15 minutes, we can tell
that to the optimizer, and the query is very fast:

SELECT COUNT(*) FROM(
SELECT s.StartedAt, s.EndedAt, c.AirTime
FROM dbo.Commercials s JOIN dbo.Calls c 
  ON c.AirTime >= s.StartedAt AND c.AirTime < s.EndedAt
WHERE c.AirTime BETWEEN '20080701' AND '20080701 03:00'
AND s.StartedAt BETWEEN '20080630 23:45' AND '20080701 03:00'
) AS t;

SQL Server parse and compile time: 
   CPU time = 15 ms, elapsed time = 15 ms.

(1 row(s) affected)
Table 'Worktable'. Scan count 1, logical reads 753, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Calls'. Scan count 1, logical reads 11, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Commercials'. Scan count 1, logical reads 4, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
   CPU time = 31 ms,  elapsed time = 24 ms.

Assuming that commercials do not overlap, so that one call
fits into at most one commercial, we can tell
that to the optimizer, and the query is again very fast:

SELECT COUNT(*) FROM(
SELECT s.StartedAt, s.EndedAt, c.AirTime
FROM dbo.Calls c CROSS APPLY(
  SELECT TOP 1 s.StartedAt, s.EndedAt FROM dbo.Commercials s 
  WHERE c.AirTime >= s.StartedAt AND c.AirTime < s.EndedAt
  ORDER BY s.StartedAt DESC) AS s
WHERE c.AirTime BETWEEN '20080701' AND '20080701 03:00'
) AS t;

SQL Server parse and compile time: 
   CPU time = 0 ms, elapsed time = 7 ms.

(1 row(s) affected)
Table 'Commercials'. Scan count 181, logical reads 1327, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Calls'. Scan count 1, logical reads 11, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
   CPU time = 31 ms,  elapsed time = 31 ms.
绅刃 2024-08-02 12:51:18


For the first table I would put a clustered index on "key". For the second table I would put a clustered index on "key_lower_bound". Then I would try:

select *
from FirstTable f
inner join SecondTable s 
    on f.key between s.key_lower_bound and s.key_upper_bound

I would then add a second non-clustered index on "key_upper_bound" to see if that improved the performance.
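
As a sketch, the indexing I have in mind would look something like this (assuming neither table already has a clustered index, and using the column names from the question):

-- sketch; [key] is bracketed since KEY is a reserved word
create clustered index IX_FirstTable_Key on FirstTable([key]);
create clustered index IX_SecondTable_Lower on SecondTable(key_lower_bound);
-- the optional second index to try afterwards
create nonclustered index IX_SecondTable_Upper on SecondTable(key_upper_bound);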

我不会写诗 2024-08-02 12:51:18


In my experience there is no easy and robust solution. I have successfully used denormalization in many similar cases, copying key_lower_bound and key_upper_bound to the big table, and having a foreign key refer from the big table to the one with intervals. You also create a check constraint to make sure that (key > key_lower_bound and key < key_upper_bound), but this check only involves columns in one table, so it works OK. This is definitely denormalization, but the data never gets out of sync, because the FK constraint ensures that (key_lower_bound, key_upper_bound) in the big table matches the interval in the parent one. Because you don't need a join, your select performs very fast.

Similar problem solved by denormalization:

http://sqlblog.com/blogs/alexander_kuznetsov/archive/2009/03/08/storing-intervals-of-time-with-no-overlaps.aspx

Let me know if you need full DDL, it is very easy to write up.
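
A minimal sketch of that denormalization, with illustrative names (not the exact DDL, which would depend on your schema):

CREATE TABLE dbo.Intervals(
  key_lower_bound INT NOT NULL,
  key_upper_bound INT NOT NULL,
  interesting_value1 FLOAT NULL,
  interesting_value2 INT NULL,
  interesting_value3 VARCHAR(50) NULL,
  CONSTRAINT PK_Intervals PRIMARY KEY(key_lower_bound, key_upper_bound),
  CONSTRAINT CK_Intervals_Valid CHECK(key_lower_bound < key_upper_bound));

CREATE TABLE dbo.BigTable(
  [key] INT NOT NULL PRIMARY KEY,
  -- denormalized copy of the interval bounds; the FK keeps them in sync with Intervals
  key_lower_bound INT NOT NULL,
  key_upper_bound INT NOT NULL,
  CONSTRAINT FK_BigTable_Intervals FOREIGN KEY(key_lower_bound, key_upper_bound)
    REFERENCES dbo.Intervals(key_lower_bound, key_upper_bound),
  -- single-table check: the key must fall inside its own copied interval
  CONSTRAINT CK_BigTable_KeyInRange CHECK([key] >= key_lower_bound AND [key] < key_upper_bound));

With the bounds copied down, looking up the interval attributes for a given key becomes a plain equality join on (key_lower_bound, key_upper_bound), with no range predicate involved.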

似梦非梦 2024-08-02 12:51:18


To perform the linear algorithm that you describe would require 2 things that the database doesn't have:

  1. The ability to understand that each row in your small table contains a distinct (disjoint) mapping of many LargeTable.Key values to a single SmallTable.Range, and
  2. The tables would be required to be stored as linked lists or arrays (which they are not).

I believe the closest you will get to the behavior you are describing is a merge join:

select t1.[key]
from largeTable t1
inner merge join smallTable t2
  on t1.[key] >= t2.key_lower_bound
 and t1.[key] < t2.key_upper_bound;

You should understand that a table is stored as a B-tree or heap, so it is optimized for seeking to particular nodes, not for scanning. Scanning means you must keep up to log_B(N) pointers (e.g. in a stack) to remember your place in the tree without having to traverse back up, and that is before even considering disk access patterns.

As a secondary performance idea, you should try defining a single value that represents the range and using that as the primary key of the smallTable, which can be referenced from the largeTable as a foreign key. This is more efficient than a compound key (which is essentially what your lower_bound and upper_bound columns represent).
Perhaps a hashed or packed value, such as PK = lower_bound & (upper_bound << a certain number of bits).
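
For instance, one way to sketch that idea is to pack the two INT bounds into a single BIGINT computed column and key on it (the exact packing scheme here is just an assumption for illustration):

-- sketch: pack both bounds into one surrogate value
alter table smallTable add RangeID as
  (cast(key_lower_bound as bigint) * 4294967296 + cast(key_upper_bound as bigint))
  persisted;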


Just another reference that should illustrate why it's difficult for SQL to put this algorithm together. If you can use Matlab to process your stuff - that's probably a better bet :)
