Update only the changed fields, or all fields?

Posted on 2024-07-11 19:18:58

I'm wondering if it's worth the server time, when updating a record, to retrieve the existing record, loop through the fields checking for changes, and put only the changed fields in the update query? (I am using MySQL & PHP.)

The main reason for doing this is to reduce the size of the update query for change log purposes. Normally the query might have 15 fields, but only 2 fields are actually being changed. This query can then also be used for logging as it will only contain the changed fields and therefore is easier to parse.

My concern is the time that it takes to retrieve the existing record.

Or is there a way to retrieve from MySQL which fields it updated?

3 Answers

一念一轮回 2024-07-18 19:18:58

I think it's worth changing - but probably not worth doing a select before insert.

I only update the fields that have changed; it's part of the operation of my DbEntity class, which follows an ActiveRecord pattern. It costs little extra to do this because I hold both the current record and the original record - the original is simply copied whenever a record is loaded.

The reason is brevity - not really performance. You can also check for concurrent modification by adding a WHERE clause on the old values of the updated fields and throw the appropriate error.

In the write/update method:

$s1 = "";
$comma = "";   // separator; empty before the first changed field is appended

foreach ($this->record as $key => $value)
{
    // only update fields that have been changed
    if ($value != $this->orig_record[$key])
    {
        $s1 .= $comma."`$key`='".mysql_real_escape_string($value)."'";
        $comma = ", ";
    }
}

$query = "UPDATE ".$this->table." SET $s1 where {$this->id_field}='".$this->get_keyfield()."'";
$query .= $this->extra_sql_update;
mysql_query($query);

$ar = mysql_affected_rows();
//
// the number of affected rows is actually those changed by the update operation, which will 
// either be zero, or 1. If the query affects more than one row then we have a problem.
if ($ar < 0 || $ar > 1)
{
    cbf_error("cbf_dbentity: {$this->table} :: only one row (not $ar) must be affected by an insert operation. $query",
      E_USER_ERROR);
}
else
{
    $new_id = $this->get_keyfield();

    GlobalEventBus::notify_all(new AuditLogSQL($this->table, "update", $query));

}

$this->orig_record = Array();

foreach ($this->record as $key => $value)
    $this->orig_record[$key] = $value;


//
// sanity check - ensure that what we have just written is actually there.

$this->load($new_id);

foreach ($this->orig_record as $key => $value)
    // flag a difference unless an empty value has simply come back as "0"
    if (trim($this->record[$key]) != trim($value)
        && !($this->record[$key] == "0" && $value == ""))
        cbf_error("cbf_dbentity: {$this->table} :: record differs during write after reload: field $key was \"$value\", after write it is now \"".
              $this->record[$key]."\"",E_USER_ERROR);

In the load method:

$this->orig_record = Array();
foreach ($this->record as $key => $value)
    $this->orig_record[$key] = $value;
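
The class above relies on the legacy mysql_* extension, which was removed in PHP 7. As a rough sketch of the same idea on a current stack - build the SET clause from just the changed fields and, as mentioned above, also match the old values in the WHERE clause to detect concurrent edits - here is a standalone PDO version; the function name, the table/column names in the usage comment and the DSN are illustrative assumptions, not part of the original class:

<?php
// Sketch only: update just the changed fields via a PDO prepared statement.
// The old values of those fields are also matched in the WHERE clause, so a
// concurrent edit by another session makes this UPDATE affect zero rows.
// (NULL old values would need IS NULL handling, omitted here for brevity.)
function update_changed_fields(PDO $pdo, $table, $idField, $id,
                               array $record, array $origRecord)
{
    $set         = array();
    $setParams   = array();
    $where       = array("`$idField` = ?");
    $whereParams = array($id);

    foreach ($record as $field => $value) {
        // loose comparison, mirroring the original class
        if ($value != $origRecord[$field]) {
            $set[]         = "`$field` = ?";
            $setParams[]   = $value;
            $where[]       = "`$field` = ?";
            $whereParams[] = $origRecord[$field];
        }
    }

    if (empty($set)) {
        return true;               // nothing changed, nothing to write
    }

    $sql = "UPDATE `$table` SET " . implode(', ', $set)
         . " WHERE " . implode(' AND ', $where);
    $stmt = $pdo->prepare($sql);
    $stmt->execute(array_merge($setParams, $whereParams));

    // Zero affected rows means either the id did not match, or another session
    // changed one of these fields after we loaded the record.
    return $stmt->rowCount() === 1;
}

// Hypothetical usage:
// $pdo = new PDO('mysql:host=localhost;dbname=app', 'appuser', 'secret');
// $ok  = update_changed_fields($pdo, 'users', 'id', 42, $record, $orig_record);
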
波浪屿的海角声 2024-07-18 19:18:58

At the most basic level, if I'm reading your question right, you generally don't want to blindly update the entire record in case another user has already updated parts of that record that you've not actually changed. You would blindly and needlessly revert their updates.

I believe your current algorithm may lead to dirty writes if you're going to read the current record once for the update, allow updates to be made in memory, then read the record again to figure out which fields have been updated. What happens if another user updated that record behind your back, leading your algorithm to believe that you were the one to update that field? But primarily, you shouldn't have to read every record twice to perform a single update.

If your data does not often result in conflicts, you may benefit from reading about optimistic locking, even if you don't choose to implement it.

We've implemented one method here whereby you add an update-timestamp or an incremental update-number column to your table. Then in your sandbox/memory you can keep track of which fields you have modified (old value/new value), and you can freely issue your update SQL for those fields of that record: "UPDATE ... WHERE UPDATENUM = the-original-number" (or WHERE UPDATETS = the-original-timestamp), making sure that your update SQL also increments the UPDATENUM or UPDATETS, as appropriate. If the rows affected by that update SQL is 0, you know that someone else has already modified that record in the background and you now have a conflict. But at least you didn't overwrite someone else's changes, and you can then reread the new data or have your user resolve the conflict.
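
A minimal sketch of that counter-based check in PHP with PDO; the orders table, its version column and the connection details are hypothetical, not taken from the answer:

<?php
// Optimistic-lock style update: only succeed if the version we read earlier
// is still current, and bump the version in the same statement.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'appuser', 'secret');

$orderId         = 42;   // values read earlier together with the record
$originalVersion = 7;

$stmt = $pdo->prepare(
    "UPDATE orders
        SET status = ?, version = version + 1
      WHERE id = ? AND version = ?"
);
$stmt->execute(array('shipped', $orderId, $originalVersion));

// version is always incremented, so a matched row is always counted as
// changed: zero affected rows therefore means someone else updated the
// record in the background and we have a conflict.
if ($stmt->rowCount() === 0) {
    // re-read the record, or hand the conflict back to the user to resolve
}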

死开点丶别碍眼 2024-07-18 19:18:58

The slowest point in your application is always going to be your database access, so if you can speed that up, it's a good idea. That said, whether it's worth the effort to programmatically check which items have changed really depends on how big your database and its records are, and how big they are likely to grow.
If your database is small and access is already quite quick, it may not be worth the time. But if speed can be improved, and it gives you an added benefit for your logging, then go for it.
