How can I "freeze" a dataset in the DataSnap server?
For a dataset which takes a very long time to open (it is a stored procedure), I would like to implement some kind of caching on the DataSnap server.
So if this dataset is loaded for the first time and transferred to the client (TClientDataSet), it should not be closed and reopened for the following requests unless the server restarts or a "reload" procedure on the server is called.
So after the first open, every new client would only receive a copy (clone) of the dataset, without refreshing / reloading the server-side dataset.
For a simple implementation of this dataset 'cache', the DataSnap server data module must not be created per session (because for each new session, the server-side dataset would be closed until the client sends a request to open the DatasetProvider). Maybe I can also find a solution to clone the dataset for per-session data modules, but my basic question is:
Is there a way to override methods in the DatasetProvider so that the client can still open, but not close the server-side dataset?
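To make the intent concrete, here is a minimal sketch of the kind of setup I have in mind; all component and method names (cdsCache, prvInternal, prvReport, ReloadCache) are placeholders, and I have not verified that the provider never closes the cached dataset on its own:

    // Server data module created once for the whole server, not per session.
    // spSlowReport is the slow stored procedure, prvInternal a provider on it,
    // cdsCache an in-memory TClientDataSet, and prvReport (DataSet = cdsCache)
    // is the provider the clients actually talk to.
    procedure TServerModule.EnsureCacheLoaded;
    begin
      if not cdsCache.Active then
        cdsCache.Data := prvInternal.Data;  // runs the stored procedure once
    end;

    procedure TServerModule.ReloadCache;  // called explicitly, never by clients
    begin
      cdsCache.Close;
      spSlowReport.Close;
      EnsureCacheLoaded;
    end;

    procedure TServerModule.prvReportBeforeGetRecords(Sender: TObject;
      var OwnerData: OleVariant);
    begin
      EnsureCacheLoaded;  // every client request is served from cdsCache
    end;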
1 Answer
A few years ago, a DataSnap server I worked on had to pull data from a very, very slow SQL Server 7 server. I worked out a server cache "toy" based on TClientDataSets, where "cached providers" are connected to "server ClientDataSets", which in turn read data from a file cache or from the database.
The cache was refreshed based on a set of hard-coded rules specific to each dataset. When the cache needs to be refreshed, the server ClientDataSet uses a provider to pull data from the database via an ADOQuery, and the data is then saved to the app-server's disk in TClientDataSet's binary format (this enables cache sharing between server instances).
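From what I remember, the save/load part boiled down to the standard TClientDataSet calls; something like this, with cdsServer, prvADO and CacheFileName standing in for the real names:

    // refresh: pull through the internal provider sitting on the TADOQuery,
    // then persist the data packet in TClientDataSet's binary format
    cdsServer.Data := prvADO.Data;
    cdsServer.SaveToFile(CacheFileName, dfBinary);

    // later, any server instance can reuse the same cache file
    cdsServer.LoadFromFile(CacheFileName);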
To prevent different instances from pulling information from the database at the same time when they decide it is time to update the cache, a very basic synchronization method was developed. A "control file" is created on disk during the data-retrieval operation and deleted upon completion or failure. Before a pull-data operation starts, the server instance checks whether that file exists. If it does, it enters a wait loop until the file is gone, then checks for valid data in the associated .cds file... and acts according to that.
If the file does not exist, the instance tries to create it, covering the case where two instances attempt this in the very same millisecond.
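I no longer have the original code, but the "try to create it" part can be done atomically with CREATE_NEW, so that at most one instance wins even in that same-millisecond case; a sketch (TryCreateControlFile is a made-up name):

    // requires the Windows unit
    function TryCreateControlFile(const AFileName: string): Boolean;
    var
      H: THandle;
    begin
      // CREATE_NEW fails if the file already exists, so only one server
      // instance can succeed even if several try in the very same millisecond.
      H := CreateFile(PChar(AFileName), GENERIC_WRITE, 0, nil,
             CREATE_NEW, FILE_ATTRIBUTE_NORMAL, 0);
      Result := H <> INVALID_HANDLE_VALUE;
      if Result then
        CloseHandle(H);
    end;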
This was not a 24x7 application, just a kind of 12x6 :D. The method proved to be very good; I can't remember a single failure of this crude synchronization during the almost 3 years I maintained that code... but you may want to create a more robust mechanism.
When there's no need to refresh the cache, data is just loaded from disk.
All the cache work was done using the provider methods.
So, the relation was something like this:
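The original drawing is lost, but reconstructed from the description above, the chain was roughly:

    client TClientDataSet
      -> (DataSnap) cached TDataSetProvider on the server
         -> server TClientDataSet
            -> .cds file cache on the app-server disk
            -> internal TDataSetProvider -> TADOQuery -> SQL Server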
Pseudo-code for the need-to-update check and OpenDataSet was like this:
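The original listing is gone as well, so the following is only a reconstruction of its general shape; NeedsRefresh, ControlFileName and the parameter names are placeholders, and the real code also validated the .cds file before trusting it:

    // uses SysUtils, Windows, DBClient (plus the unit with TryCreateControlFile)
    procedure OpenDataSet(cdsServer: TClientDataSet; prvADO: TDataSetProvider;
      const CacheFileName: string);
    begin
      if NeedsRefresh(CacheFileName) then  // per-dataset hard-coded rules
      begin
        if TryCreateControlFile(ControlFileName(CacheFileName)) then
          try
            cdsServer.Data := prvADO.Data;  // the slow pull via the ADOQuery
            cdsServer.SaveToFile(CacheFileName, dfBinary);
          finally
            DeleteFile(ControlFileName(CacheFileName));
          end
        else
        begin
          // another instance is already refreshing: wait until its control
          // file disappears, then use whatever it produced
          while FileExists(ControlFileName(CacheFileName)) do
            Sleep(500);
          cdsServer.LoadFromFile(CacheFileName);
        end;
      end
      else
        cdsServer.LoadFromFile(CacheFileName);
    end;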
I don't have access to the code anymore, and I surely can't remember every detail; happily, the current servers are good and fast enough that there is no need to think about this now... I hope this explains the mechanism and the way it worked. If you need clarification or further help, please comment.