How to replace leading null values with the first non-null value in a row?
I'll try to avoid describing the background here. I now have a query result (not a table) which contains rows like this:
ID   SP1    SP2    SP3    SP4    SP5   SP6    SP7    SP8
1    null   null   2500   1400   700   null   null   null
There may be leading and/or trailing null values around a section of non-null values (which represents a decreasing process actually). And what I want is like this:
ID   SP1    SP2    SP3    SP4    SP5   SP6    SP7    SP8
1    2500   2500   2500   1400   700   0      0      0
That means: replace leading nulls with the first non-null value, and trailing nulls with 0.
Please advise. I'm working on SQL Server 2000.
Comments (2)
COALESCE takes a number of expressions, and returns the first non-null value.
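One way to apply that here is to wrap each SPn in a COALESCE over itself, every column to its right, and a final 0. A minimal sketch, with the question's sample row standing in for the real query:

```sql
-- Sketch only: the derived table q stands in for the actual query result.
SELECT
    ID,
    COALESCE(SP1, SP2, SP3, SP4, SP5, SP6, SP7, SP8, 0) AS SP1,
    COALESCE(SP2, SP3, SP4, SP5, SP6, SP7, SP8, 0)      AS SP2,
    COALESCE(SP3, SP4, SP5, SP6, SP7, SP8, 0)           AS SP3,
    COALESCE(SP4, SP5, SP6, SP7, SP8, 0)                AS SP4,
    COALESCE(SP5, SP6, SP7, SP8, 0)                     AS SP5,
    COALESCE(SP6, SP7, SP8, 0)                          AS SP6,
    COALESCE(SP7, SP8, 0)                               AS SP7,
    COALESCE(SP8, 0)                                    AS SP8
FROM (SELECT 1                 AS ID,
             CAST(NULL AS int) AS SP1,
             CAST(NULL AS int) AS SP2,
             2500              AS SP3,
             1400              AS SP4,
             700               AS SP5,
             CAST(NULL AS int) AS SP6,
             CAST(NULL AS int) AS SP7,
             CAST(NULL AS int) AS SP8) AS q
```

Leading nulls pick up the first non-null value to their right, and trailing nulls fall through to the final 0, giving 2500 2500 2500 1400 700 0 0 0 for the sample row.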
You should probably re-do your schema. Whenever you have something that looks like an array in a single row, it's often better to split it out to separate rows.
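For instance, a normalized layout might look something like this (all names here are purely illustrative):

```sql
-- Illustrative only: one row per (ID, position) instead of eight SPn columns.
CREATE TABLE SPValues (
    ID  int NOT NULL,   -- identifier of the original row
    Seq int NOT NULL,   -- position 1..8, formerly encoded in the column name SPn
    Val int NULL,       -- the value that used to sit in column SPn
    PRIMARY KEY (ID, Seq)
)
```

With one row per (ID, position), the leading/trailing-null problem becomes an ordinary query over rows rather than a column-by-column exercise.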
But, assuming you're stuck with the current schema, I'd just go for a simple cascade of updates along the lines of the sketch below (substituting in the correct SQL Server transaction syntax if need be).
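A sketch of that idea, assuming the query result has first been put somewhere updatable (a temp table named #Result here, since the question starts from a query result rather than a table):

```sql
-- Sketch: #Result is assumed to hold the query result (ID, SP1..SP8).
-- Working from SP8 down to SP1 lets each fix cascade to the left.
BEGIN TRANSACTION

UPDATE #Result SET SP8 = COALESCE(SP8, 0)
UPDATE #Result SET SP7 = COALESCE(SP7, SP8)
UPDATE #Result SET SP6 = COALESCE(SP6, SP7)
UPDATE #Result SET SP5 = COALESCE(SP5, SP6)
UPDATE #Result SET SP4 = COALESCE(SP4, SP5)
UPDATE #Result SET SP3 = COALESCE(SP3, SP4)
UPDATE #Result SET SP2 = COALESCE(SP2, SP3)
UPDATE #Result SET SP1 = COALESCE(SP1, SP2)

COMMIT TRANSACTION
```

Updating SP8 first and walking left means each trailing null falls through to 0 and each leading null picks up the value just to its right.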
If you don't want to actually change the underlying data, you can use a view, but it's likely to be hideous, and you may want to opt for doing the transformation in whatever application you're using to execute the SQL instead.
One possibility, but I strongly urge you not to do this, and there may be a better vendor-specific way:
1. Create a view view8 over the table which leaves all columns untouched except for SP8, which becomes coalesce(sp8,0) (or whatever the SQL Server equivalent is: SP8 if it's not NULL, otherwise 0).
2. Create a view view7 over the view view8 which leaves all columns untouched except for SP7, which becomes coalesce(sp7,sp8).
3. Create a view view6 over the view view7 which leaves all columns untouched except for SP6, which becomes coalesce(sp6,sp7).
4. Continue the same pattern down to a view view1 over the view view2 which leaves all columns untouched except for SP1, which becomes coalesce(sp1,sp2).
5. Use view1.
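A sketch of what those views could look like, assuming the raw data sits in a table called SourceTable (each CREATE VIEW needs its own batch in SQL Server 2000, hence the GO separators):

```sql
-- Sketch only: SourceTable stands in for whatever holds the raw data.
CREATE VIEW view8 AS
    SELECT ID, SP1, SP2, SP3, SP4, SP5, SP6, SP7,
           COALESCE(SP8, 0) AS SP8
    FROM SourceTable
GO

CREATE VIEW view7 AS
    SELECT ID, SP1, SP2, SP3, SP4, SP5, SP6,
           COALESCE(SP7, SP8) AS SP7, SP8
    FROM view8
GO

-- ...view6 through view2 follow exactly the same pattern...

CREATE VIEW view1 AS
    SELECT ID, COALESCE(SP1, SP2) AS SP1,
           SP2, SP3, SP4, SP5, SP6, SP7, SP8
    FROM view2
GO
```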
As I said, a massive kludge and please, for the love of whatever gods you believe in, don't use it. But sometimes needs dictate our actions, so I'm putting it out there just in case.
All care, no responsibility, test (and profile) it yourself.
And, having posted that and discovered Damien has a more compact version, I'd also like to offer the following.
It's sometimes useful to sacrifice space for time (make things faster at the expense of more disk space taken up).
You can create another 8 columns, MORPHSP1 through MORPHSP8, to store the morphed values that I suggested in my first solution. This would normally violate 3NF, but that's actually okay if you do two things: (1) understand the ramifications; and (2) mitigate the chance of inconsistent data.
By the use of insert/update triggers, you can actually guarantee that the data will remain consistent.
Have your trigger do the following whenever a row changes.
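A sketch of such a trigger, assuming the base table is called SPTable with primary key ID plus the MORPHSP1 to MORPHSP8 columns (every name beyond those in the question is illustrative):

```sql
-- Sketch: recompute the morphed columns for any inserted or updated rows.
CREATE TRIGGER trg_SPTable_Morph
ON SPTable
FOR INSERT, UPDATE
AS
BEGIN
    UPDATE t
    SET MORPHSP8 = COALESCE(i.SP8, 0),
        MORPHSP7 = COALESCE(i.SP7, i.SP8, 0),
        MORPHSP6 = COALESCE(i.SP6, i.SP7, i.SP8, 0),
        MORPHSP5 = COALESCE(i.SP5, i.SP6, i.SP7, i.SP8, 0),
        MORPHSP4 = COALESCE(i.SP4, i.SP5, i.SP6, i.SP7, i.SP8, 0),
        MORPHSP3 = COALESCE(i.SP3, i.SP4, i.SP5, i.SP6, i.SP7, i.SP8, 0),
        MORPHSP2 = COALESCE(i.SP2, i.SP3, i.SP4, i.SP5, i.SP6, i.SP7, i.SP8, 0),
        MORPHSP1 = COALESCE(i.SP1, i.SP2, i.SP3, i.SP4, i.SP5, i.SP6, i.SP7, i.SP8, 0)
    FROM SPTable t
    INNER JOIN inserted i ON i.ID = t.ID
    -- Note: this re-touches the base table; with the default database
    -- settings (recursive triggers off) the trigger will not fire again.
END
```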
That way, you only incur the cost when the data changes, not every single time you use the data. On a table where reads outnumber writes (and that's the vast majority), this can lead to an impressive performance improvement.