Resource locking while waiting for user interaction in a client-server solution
I have a client-server process where, after a long-running operation (2-5 minutes) on the server side, the user is asked to confirm or change the operation results.
The user can take an hour or more to check the work done by the service, make changes, and send them back to the service.
In a perfect world nobody would change the underlying source data the service used to build the operation results... but this isn't a perfect world!
How can I lock things to prevent my source data from being corrupted? I don't want SQL table locks... I'm thinking of a software mechanism such as an in-memory table holding all my operation requests, plus some interlock condition that puts any operation that could corrupt another operation's data into a wait state.
Any other hints?
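To make that idea a bit more concrete, below is a rough sketch of the kind of in-memory interlock I have in mind: an operation registers the entity keys it depends on, and a second operation touching any of those keys is turned away (or could be queued) until the first one is confirmed or cancelled. All class and member names here are invented for illustration, not an existing API.

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

// Sketch of an in-memory interlock registry (hypothetical names).
public class OperationLockRegistry
{
    // entity key -> id of the operation currently holding it
    private readonly ConcurrentDictionary<string, Guid> _locks =
        new ConcurrentDictionary<string, Guid>();

    // Tries to reserve every entity the operation depends on; returns false
    // (and rolls back any partial reservation) if one is already held.
    public bool TryAcquire(Guid operationId, IEnumerable<string> entityKeys)
    {
        var acquired = new List<string>();
        foreach (var key in entityKeys)
        {
            if (!_locks.TryAdd(key, operationId))
            {
                foreach (var k in acquired)
                    _locks.TryRemove(k, out _);
                return false;
            }
            acquired.Add(key);
        }
        return true;
    }

    // Called when the user confirms or cancels the pending operation.
    public void Release(Guid operationId)
    {
        foreach (var pair in _locks)
        {
            if (pair.Value == operationId)
                _locks.TryRemove(pair.Key, out _);
        }
    }
}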
Edit
More info about the process is probably necessary.
I have a timestamped entity that represents a point in time of an electrical network topology.
The entity contains a list of all elements that are not energized.
When called, the server process must take all entities not already processed and, for each element, create a list of:
public class ElementRecord
{
    public string ElementName { get; set; }
    public DateTime OffTimeStamp { get; set; }
    public DateTime OnTimeStamp { get; set; }
}
Based on some business rules the server process aggregates the elements and waits for the user's acknowledgement or changes.
The problem is that after the entities are loaded, the real network can change and the table changes with it; also, at a single point in time more elements can become de-energized, so the server process must invalidate its pending results.
If the user is already changing data in the UI, I have to alert them as soon as possible that some of that data is probably invalid.
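For example, staleness could be detected by capturing a version token (here, a last-modified timestamp) for every source entity when the results are built and re-checking it later; any mismatch means the pending operation should be invalidated and the user warned. This is only a sketch under that assumption, and ITopologyRepository / PendingOperation are hypothetical names, not existing types.

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical repository that exposes the current version of each entity.
public interface ITopologyRepository
{
    IDictionary<string, DateTime> GetCurrentVersions(IEnumerable<string> entityKeys);
}

public class PendingOperation
{
    public Guid Id { get; set; }
    // Source entity key -> version captured when the result was built.
    public Dictionary<string, DateTime> SnapshotVersions { get; set; }
}

public static class StalenessChecker
{
    // Returns the entities whose source data changed since the snapshot;
    // a non-empty result means: invalidate the operation and alert the UI.
    public static IReadOnlyList<string> FindStaleEntities(
        PendingOperation op, ITopologyRepository repo)
    {
        var current = repo.GetCurrentVersions(op.SnapshotVersions.Keys);
        return op.SnapshotVersions
            .Where(kv => !current.TryGetValue(kv.Key, out var v) || v != kv.Value)
            .Select(kv => kv.Key)
            .ToList();
    }
}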
What would you do?
2 Answers
I encountered a similar requirement in a past project.
Our solution was: after the server finished all its operations, we copied all the data that needed user confirmation into temporary data tables, with a timestamp on every row. We then sent a mail asking the user to confirm, and after the confirmation we merged the temp tables back into our real data tables. We could not lock any data table, because other users needed to change those tables online.
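In code the flow was roughly like the sketch below. The table and column names (ElementRecords, ElementRecords_Staging, BatchId, CopiedAtUtc, Processed) are made up for illustration; the point is simply: copy the rows out with a timestamp, let the user review at leisure, and merge back only on confirmation.

using System;
using System.Data.SqlClient;

// Rough sketch of the staging-table approach (invented schema names).
public class StagingWorkflow
{
    private readonly string _connectionString;

    public StagingWorkflow(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Copies the rows awaiting confirmation into a staging table,
    // stamping each row with the copy time and a batch id.
    public void CopyToStaging(Guid batchId)
    {
        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand(
            @"INSERT INTO ElementRecords_Staging
                  (BatchId, ElementName, OffTimeStamp, OnTimeStamp, CopiedAtUtc)
              SELECT @batchId, ElementName, OffTimeStamp, OnTimeStamp, SYSUTCDATETIME()
              FROM ElementRecords WHERE Processed = 0", conn))
        {
            cmd.Parameters.AddWithValue("@batchId", batchId);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
        // At this point a mail/notification would be sent to the reviewer.
    }

    // After the user confirms, the reviewed rows are merged back.
    public void MergeOnConfirm(Guid batchId)
    {
        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand(
            @"MERGE ElementRecords AS target
              USING (SELECT * FROM ElementRecords_Staging WHERE BatchId = @batchId) AS src
              ON target.ElementName = src.ElementName
              WHEN MATCHED THEN UPDATE SET target.OffTimeStamp = src.OffTimeStamp,
                                           target.OnTimeStamp  = src.OnTimeStamp
              WHEN NOT MATCHED THEN INSERT (ElementName, OffTimeStamp, OnTimeStamp)
                                    VALUES (src.ElementName, src.OffTimeStamp, src.OnTimeStamp);", conn))
        {
            cmd.Parameters.AddWithValue("@batchId", batchId);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}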
How about adding an extra column to your tables to show that the data has been verified?
Data that doesn't have the verified field set is then ignored. In a legacy system this would mean creating new tables and migrating the data over, but after that all you need to do is ensure that only rows where the verified field is set are returned to your code.
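For example, the read side could be restricted to verified rows only, roughly like this (table and column names are illustrative, not from the original schema):

using System.Collections.Generic;
using System.Data.SqlClient;

// Readers only ever see rows whose Verified flag is set.
public static class VerifiedReader
{
    public static IEnumerable<string> GetVerifiedElementNames(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT ElementName FROM ElementRecords WHERE Verified = 1", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    yield return reader.GetString(0);
            }
        }
    }
}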
Edit
What I was suggesting is that processes reading the data can only use data that has been verified. If your user process is using and changing verified data, it must set that data to unverified on retrieval, and it cannot retrieve data that has already been marked as unverified.
For new data you just create a new row with default data and get the row index back before the new data is verified. Your code then has the index, so the user can take as long as they like to verify the data, and if they never do, you still have the index and can delete the row that was created.
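A rough sketch of that check-out flow, again with invented table/column names and assuming the table has an identity primary key:

using System;
using System.Data.SqlClient;

// Retrieval for editing atomically flips the row to unverified, and new work
// starts as an unverified placeholder row whose identity is handed back.
public static class VerificationCheckout
{
    // Returns true if the row was verified and is now checked out; false if
    // it was already unverified (someone else is working on it).
    public static bool TryCheckOut(string connectionString, string elementName)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "UPDATE ElementRecords SET Verified = 0 " +
            "WHERE ElementName = @name AND Verified = 1", conn))
        {
            cmd.Parameters.AddWithValue("@name", elementName);
            conn.Open();
            return cmd.ExecuteNonQuery() == 1;   // 0 rows => already checked out
        }
    }

    // Creates a placeholder row with default data and returns its identity,
    // so it can be verified later or deleted if the user never finishes.
    public static int CreatePlaceholder(string connectionString, string elementName)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO ElementRecords (ElementName, Verified) VALUES (@name, 0); " +
            "SELECT CAST(SCOPE_IDENTITY() AS int);", conn))
        {
            cmd.Parameters.AddWithValue("@name", elementName);
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}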