How much performance do I lose by increasing the number of trips to SQL Server?
I have a web application where the web server and SQL Server 2008 database sit on different boxes in the same server farm.
If I take a monolithic stored procedure and break it up into several smaller stored procs, thus making the client code responsible for calls to multiple stored procedures instead of just one, am I going to notice a significant performance hit in my web application?
Additional Background Info:
I have a stored procedure with several hundred lines of code containing decision logic, update statements, and finally a select statement that returns a set of data to the client.
I need to insert a piece of functionality into my client code (in this sense, the client code is the ASP web server that is calling the database server) that calls a component DLL. However, the stored procedure is updating a recordset and returning the updated data in the same call, and my code ideally needs to be called after the decision logic and update statements run, but before the data is returned to the client.
To get this functionality to work, I'm probably going to have to split the existing stored proc into at least two parts: one stored proc that updates the database and another that retrieves data from the database. I would then insert my new code between these stored proc calls.
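One way to picture that split (all proc, table, and column names here are hypothetical placeholders, not from the original code):

```sql
-- Hypothetical part 1: decision logic and updates only.
CREATE PROCEDURE dbo.Order_ApplyUpdates
    @OrderId INT
AS
BEGIN
    SET NOCOUNT ON;
    -- ...decision logic from the original monolithic proc...
    UPDATE dbo.Orders
    SET Status = 'Processed'
    WHERE OrderId = @OrderId;
END
GO

-- Hypothetical part 2: retrieval only, called after the client-side DLL runs.
CREATE PROCEDURE dbo.Order_GetResult
    @OrderId INT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT OrderId, Status, UpdatedAt
    FROM dbo.Orders
    WHERE OrderId = @OrderId;
END
```

The client would then call the update proc, invoke the component DLL, and call the retrieval proc, ideally on the same connection, and inside one transaction if the read must see a consistent view of the update.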
When I look at this problem, I can't help but think that, from a code maintenance point of view, it would be much better to isolate all of my update and select statements into thin stored procs and leave the business logic to the client code. That way whenever I need to insert functionality or decision logic into my client code, all I need to do is change the client code instead of modifying a huge stored proc.
Although using thin stored procs might be better from a code maintenance point-of-view, how much performance pain will I experience by increasing the number of trips to the database? The net result to the data is the same, but I'm touching the database more frequently. How does this approach affect performance when the application is scaled up to handle demand?
I'm not one to place performance optimization above everything else, especially when it affects code maintenance, but I don't want to shoot myself in the foot and create headaches when the web application has to scale.
4 Answers
In general, as a rule of thumb, you should keep roundtrips to SQL Server to a minimum.
The "hit" on the server is very expensive; it's actually more expensive to divide the same operation into 3 parts than to do 1 hit and everything else on the server.
Regarding maintenance, you can call 1 stored proc from the client and have that proc call another 2 procs.
I had an application with extreme search logic, and that's how I implemented it.
Some benchmarking results...
I had a client a while back whose servers were falling down and crumbling; when we checked for the problem, it was many roundtrips to SQL Server, and when we minimized them, the servers got back to normal.
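The wrapper approach this answer suggests can be sketched like this (proc names are hypothetical):

```sql
-- One client roundtrip: the wrapper calls the smaller procs server-side,
-- so the client still issues a single EXEC.
CREATE PROCEDURE dbo.Order_Process
    @OrderId INT
AS
BEGIN
    SET NOCOUNT ON;
    EXEC dbo.Order_ApplyUpdates @OrderId = @OrderId;
    EXEC dbo.Order_GetResult    @OrderId = @OrderId;
END
```

Note this keeps the maintenance benefit of small procs and a single roundtrip, but it does not solve the asker's specific problem of running client-side DLL code between the update and the select; for that, the calls have to be separated at the client.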
It will affect it. We use a Weblogic server where all the business logic is in the AppServer connected to a DB/2 database. We mostly use entity beans in our project, and most business service calls make several trips to the DB with no visible side effects. (We do tune some queries to be multi-table when needed).
It really depends on your app. You are going to need to benchmark.
A well setup SQL Server on good hardware can process many thousands of transactions per second.
In fact breaking up a large stored procedure can be beneficial, because you can only have one cached query plan per batch. Breaking into several batches means they will each get their own query plan.
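You can observe this effect through SQL Server's plan-cache DMVs; a query along these lines (requires VIEW SERVER STATE permission) lists one cached plan per compiled stored procedure:

```sql
-- One row (and one cached plan) per compiled stored procedure.
SELECT OBJECT_NAME(st.objectid, st.dbid) AS proc_name,
       cp.usecounts,
       cp.size_in_bytes
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE cp.objtype = 'Proc';
```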
You should definitely err on the side of code-maintenance, but benchmark to be sure.
Given that the query plan landscape will change, you should also be prepared to update your indexes, perhaps creating different covering indexes.
In essence, this question is closely related to tight vs. loose coupling.
At the outset: you could always take the monolithic stored procedure and break it up into several smaller stored procs that are all called by one wrapper stored procedure, thus leaving the client code responsible for calling only one stored procedure.
Unless the client will do something (change the data or provide status to user) I would probably not recommend moving multiple calls to the client, since you would be more tightly coupling the client to the order of operations for the stored procedure without a significant performance increase.
Either way, I would benchmark it and adjust from there.