Preferable way to write Delphi database applications with transactions and data-aware components

Posted 2024-09-25 18:24:47


What is the preferable way to write Delphi database applications using transactions and also data-aware components?

I have to write a client app that accesses InnoDB tables and does some master-detail kind of operations inside transactions. After doing some research on transactions (from a general point of view), I humbly concluded that non-data-aware components and hand-coded SQL would be the "perfect match" for transactions, but data-aware components would not. They don't seem to be made for each other.

I have a real need to use transactions, but on the other hand I can't just throw the data-aware components away, because they greatly simplify things.

Could somebody please enlighten me? I have been Googling, but I haven't found any useful answer. Perhaps my English isn't good enough, so my keywords are limited.

BTW, I'm using Delphi 7 and currently evaluating UniDAC as the data access library.

Thank you.

EDIT

Example to describe an aspect of my question:

Imagine a form with 2 DBGrids on it. The first grid is MasterGrid, and above it are these buttons: Add, Edit & Delete. The second grid is DetailGrid. If the user clicks Add, it goes like this:

  • Connection.StartTransaction
  • Master.Append then Master.Post then Master.Edit (so the master dataset has the autoincrement primary key, and it is editable now)
  • Show the editing form modally, in which the user fills in the master record and also adds some detail records using another form.
  • If the user clicks OK, the app does Master.Post and Connection.Commit. If the user clicks Cancel, the app does Connection.Rollback.
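For concreteness, the steps above might look roughly like this in code. This is only a sketch of the problematic long-transaction flow being described; `Connection`, `Master`, and `EditForm` are assumed names, with `Connection` standing in for a UniDAC `TUniConnection`:

```delphi
// Sketch of the Add flow: the transaction stays open while the
// user edits, which is exactly the problem described above.
procedure TMainForm.btnAddClick(Sender: TObject);
begin
  Connection.StartTransaction;
  try
    Master.Append;
    Master.Post;   // the autoincrement primary key is now assigned
    Master.Edit;   // reopen the new row so the edit form can fill it
    if EditForm.ShowModal = mrOk then
    begin
      if Master.State in [dsEdit, dsInsert] then
        Master.Post;
      Connection.Commit;    // transaction was open the whole time
    end
    else
      Connection.Rollback;  // discard master and detail changes
  except
    Connection.Rollback;
    raise;
  end;
end;
```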

I know that transactions should be as short as possible, but you can see above that the transaction lasts as long as it takes the user to fill in the form.

If I were using non-data-aware components, I would build custom INSERT SQL from the user input, then execute it between StartTransaction and Commit. That way I could achieve a very short transaction.
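A minimal sketch of that non-data-aware alternative with UniDAC against MySQL/InnoDB. Table, field, and control names are assumptions, and `LAST_INSERT_ID()` is MySQL-specific:

```delphi
// The INSERTs are built from plain edit controls and executed in one
// short transaction; nothing is locked while the user types.
procedure TMainForm.SaveMasterDetail;
var
  Q: TUniQuery;
begin
  Q := TUniQuery.Create(nil);
  try
    Q.Connection := Connection;
    Connection.StartTransaction;
    try
      Q.SQL.Text := 'INSERT INTO master (name) VALUES (:name)';
      Q.ParamByName('name').AsString := edtName.Text;
      Q.Execute;
      // LAST_INSERT_ID() returns the master key on this connection
      Q.SQL.Text := 'INSERT INTO detail (master_id, qty) ' +
                    'VALUES (LAST_INSERT_ID(), :qty)';
      Q.ParamByName('qty').AsInteger := StrToInt(edtQty.Text);
      Q.Execute;
      Connection.Commit;  // transaction lives only for these statements
    except
      Connection.Rollback;
      raise;
    end;
  finally
    Q.Free;
  end;
end;
```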

EDIT 2

I thank all of you for your kind participation. I pick the answer from vcldeveloper because it is the closest solution to my current need.


绮烟 2024-10-02 18:24:47


I understand your question, I think. When opening a TADODataSet with e.g. 10 rows of data to be edited on a form with data-aware components, there are situations in which you would want to cache all the changes made to all of the 10 rows (and possibly deletions and insertions) and commit them as one batch. You cannot open the transaction on the first change, for that would block other users changing the same data. Transactions should be as short as possible.

What I do in sketched scenario is use the following components in a chain:

TADOConnection >> TADODataSet >> TDataSetProvider >> TClientDataSet >> TDataSource >> TDBEdits etc.

Now all changes are cached in the TClientDataSet and you can call its ApplyUpdates method to post all changes in one fast transaction. Mind that it's also possible to use multiple TADODataSets and multiple TClientDataSets for a master-detail(-detail-etc) structure with nested datasets. All master-detail changes may also be cached and applied in one batch in one transaction. See the help and resources elsewhere for all the details about implementing this. At first it's not easy, but once you figure it out it's easy and offers tons of possibilities (offline editing, examining changes before applying them, etc.).
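The save step with this chain might be sketched as follows, assuming the components are wired as described (names like `ClientDataSet1` are illustrative, and the design-time links are shown in the comments):

```delphi
// Assumed design-time wiring for the chain above:
//   ADODataSet1.Connection      := ADOConnection1;
//   DataSetProvider1.DataSet    := ADODataSet1;
//   ClientDataSet1.ProviderName := 'DataSetProvider1';
//   DataSource1.DataSet         := ClientDataSet1;
procedure TMainForm.btnSaveClick(Sender: TObject);
begin
  // Until now, every edit lives only in ClientDataSet1's change log.
  if ClientDataSet1.ChangeCount > 0 then
    // Argument 0 = tolerate no errors; the provider posts the whole
    // batch inside one short transaction of its own.
    if ClientDataSet1.ApplyUpdates(0) > 0 then
      ShowMessage('Some updates failed - handle them in OnReconcileError');
end;
```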

云裳 2024-10-02 18:24:47


Others mentioned using a combination of DatasetProvider and ClientDataset to have a batch update, but in case of using ADO or UniDAC components, you do not need the extra layer of DatasetProvider + ClientDataset, because both ADO and UniDAC support batch updates.

For ADO, what you should do is set the LockType of your dataset to ltBatchOptimistic. For UniDAC, you should set the CachedUpdates property to True.

This change makes your dataset cache all the changes you make on its in-memory recordset, and send them all together to the database only when you call the UpdateBatch method (ADO) or the ApplyUpdates method (UniDAC).

Now what you should do is let your user insert/edit a record in the master dataset, and whatever records he/she wants in the details dataset, using whatever data-aware components you like. All the changes will be cached. When your user is done, you can start a new transaction and call UpdateBatch (or ApplyUpdates in the case of UniDAC) first for the master dataset and then for the details dataset, and if everything goes fine, commit the transaction.

This will make your transactions short without needing the extra layer of ClientDataset.
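The UniDAC variant of this flow might look roughly like the sketch below. Component names (`Connection`, `MasterQuery`, `DetailQuery`) are assumptions, and CachedUpdates is assumed set to True on both datasets:

```delphi
// Both datasets have CachedUpdates = True, so edits accumulate in
// memory; only this procedure opens a (short) transaction.
procedure TMainForm.btnOKClick(Sender: TObject);
begin
  Connection.StartTransaction;
  try
    MasterQuery.ApplyUpdates;   // send cached master changes first
    DetailQuery.ApplyUpdates;   // then cached detail changes
    Connection.Commit;
    MasterQuery.CommitUpdates;  // clear the update caches on success
    DetailQuery.CommitUpdates;
  except
    Connection.Rollback;
    MasterQuery.RestoreUpdates; // keep cached changes so the user
    DetailQuery.RestoreUpdates; // can fix the problem and retry
    raise;
  end;
end;
```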

Regards

嗳卜坏 2024-10-02 18:24:47


To avoid the need to perform large transactions I use DataSetProviders and ClientDatasets (even locally).

Consider using this as a kind of cache and it gives you the best of both worlds. You can use data-aware controls to simplify things while working on the UI. User actions over the datasets are "recorded" by the ClientDataSets (kind of database cache).

When your user is ready to save the changes to the database (for example, the invoice data is all in place), you call the ApplyUpdates method for the dataset(s).

In the simplest scenario, where all datasets are in a master-detail relationship (nested by the provider), the provider starts and commits/rollbacks the transaction by itself, so you're in an all-or-nothing situation automatically.

If you have more complex relationships, you can call StartTransaction before you start applying updates for each involved ClientDataSet, and at the end call Commit or Rollback as needed. The logic of the provider is: if the connection has an active transaction when ApplyUpdates is called, it does nothing with the transaction but simply posts the changes to the database, assuming you are in control of the transaction.
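This "more complex relationships" case might be sketched like this with ADO, where `cdsCustomers` and `cdsInvoices` are hypothetical ClientDataSets, each wired to its own provider on the same connection:

```delphi
// One explicit transaction spans the updates of several ClientDataSets;
// the providers detect the active transaction and leave commit/rollback
// to this code.
procedure TMainForm.SaveAll;
begin
  ADOConnection1.BeginTrans;
  try
    if cdsCustomers.ApplyUpdates(0) > 0 then
      raise Exception.Create('Customer updates failed');
    if cdsInvoices.ApplyUpdates(0) > 0 then
      raise Exception.Create('Invoice updates failed');
    ADOConnection1.CommitTrans;   // all or nothing across both datasets
  except
    ADOConnection1.RollbackTrans;
    raise;
  end;
end;
```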

You have to read about TClientDataSet and how to handle OnReconcileError, and experiment with the technology before putting it in a production environment, but it works very, very well for me.

My 2 cents.

千秋岁 2024-10-02 18:24:47


You are absolutely right that a write transaction should be as short as possible, and it should not be alive all the time while a user is filling the form.

The general solution, as already answered, is to use an intermediate tier (a ClientDataSet). But the real problem with your scenario is that you cannot obtain an autoincrement value for the Master table without Master.Append and Master.Post, and consequently you start a write transaction long before it is actually required.

So if you don't want to use the intermediate tier and still want to use data-aware components with short write transactions, you should consider a database that supports obtaining an autoincrement value without executing an INSERT (into the master table). An example is the Firebird database, and the FIBPlus data access components for Firebird fully support this feature.
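For illustration, fetching the next generator value in Firebird without any INSERT might look like this. The query component (`qryGen`) and generator name (`GEN_MASTER_ID`) are assumptions; any Firebird-capable query component would do:

```delphi
// Firebird generators are incremented outside transaction control, so
// the master key can be obtained before any write transaction starts.
function TMainForm.NextMasterID: Integer;
begin
  qryGen.SQL.Text :=
    'SELECT GEN_ID(GEN_MASTER_ID, 1) FROM RDB$DATABASE';
  qryGen.Open;
  try
    Result := qryGen.Fields[0].AsInteger;
  finally
    qryGen.Close;
  end;
end;
```

With the key known up front, the master and detail rows can be inserted later in one short transaction, as the answer suggests.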

花开雨落又逢春i 2024-10-02 18:24:47


Transactions should be as short as needed. The problem is how different databases handle locking. Databases that perform only row-level locking and can return immediately from a lock without waiting have far less probability of deadlock. Usually inserts are less problematic (although other users won't see new rows until they are committed, depending on the isolation level); updates and deletes are more problematic.
Committing too often could be "bad" as well. Caching changes and applying them in a single operation is another possibility - but you have to handle issues due to other users changing records in the meantime. There is no "better" solution - everything depends on the actual needs. For some applications (and some databases) keeping records locked as long as they are being changed is OK, for others it may not be. Batch updates may be OK in some scenarios and not in others. You have to select the model that works best for your application and database.
