LVM snapshot of a mounted filesystem


I'd like to programmatically make a snapshot of a live filesystem in Linux, preferably using LVM. I'd like not to unmount it because I've got lots of files opened (my most common scenario is that I've got a busy desktop with lots of programs).

I understand that because of kernel buffers and general filesystem activity, data on disk might be in some more-or-less undefined state.

Is there any way to "atomically" unmount an FS, make an LVM snapshot and mount it back? It would be OK if the OS blocked all activity for a few seconds to do this task. Or maybe some kind of atomic "sync+snapshot"? A kernel call?

I don't know if it is even possible...


Comments (6)

鹤舞 2024-08-22 18:14:43


You shouldn't have to do anything for most Linux filesystems. It should just work without any effort at all on your part. The snapshot command itself hunts down mounted filesystems using the volume being snapshotted and calls a special hook that checkpoints them in a consistent, mountable state and does the snapshot atomically.

Older versions of LVM came with a set of VFS lock patches that would patch various filesystems so that they could be checkpointed for a snapshot. But with new kernels that should already be built into most Linux filesystems.

This intro on snapshots claims as much.

And a little more research reveals that for kernels in the 2.6 series the ext series of filesystems should all support this. ReiserFS probably also. And if I know the btrfs people, that one probably does as well.
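
For reference, a minimal sketch of what that looks like with the plain LVM tools. The volume group vg0, the logical volume home and the 5G COW size are placeholders, not taken from the answer above:

# lvcreate --size 5G --snapshot --name home_snap /dev/vg0/home
# mkdir -p /mnt/home_snap
# mount -o ro /dev/vg0/home_snap /mnt/home_snap
  ... back up the consistent copy from /mnt/home_snap ...
# umount /mnt/home_snap
# lvremove /dev/vg0/home_snap

The snapshot is taken while /home stays mounted read-write; the snapshot size only has to be large enough to hold the blocks that change while it exists.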

彩扇题诗 2024-08-22 18:14:43


Is there any way to "atomically" unmount an FS, make an LVM snapshot and mount it back?

It is possible to snapshot a mounted filesystem, even when the filesystem is not on an LVM volume. If the filesystem is on LVM, or it has built-in snapshot facilities (e.g. btrfs or ZFS), then use those instead.

The below instructions are fairly low-level, but they can be useful if you want the ability to snapshot a filesystem that is not on an LVM volume, and can't move it to a new LVM volume. Still, they're not for the faint-hearted: if you make a mistake, you may corrupt your filesystem. Make sure to consult the official documentation and dmsetup man page, triple-check the commands you're running, and have backups!

The Linux kernel has an awesome facility called the Device Mapper, which can do nice things such as create block devices that are "views" of other block devices, and of course snapshots. It is also what LVM uses under the hood to do the heavy lifting.

In the below examples I'll assume you want to snapshot /home, which is an ext4 filesystem located on /dev/sda2.

First, find the name of the device mapper device that the partition is mounted on:

# mount | grep home
/dev/mapper/home on /home type ext4 (rw,relatime,data=ordered)

Here, the device mapper device name is home. If the path to the block device does not start with /dev/mapper/, then you will need to create a device mapper device, and remount the filesystem to use that device instead of the HDD partition. You'll only need to do this once.

# dmsetup create home --table "0 $(blockdev --getsz /dev/sda2) linear /dev/sda2 0"
# umount /home
# mount -t ext4 /dev/mapper/home /home

Next, get the block device's device mapper table:

# dmsetup table home
home: 0 3864024960 linear 9:2 0

Your numbers will probably be different. The device target should be linear; if yours isn't, you may need to take special considerations. If the last number (start offset) is not 0, you will need to create an intermediate block device (with the same table as the current one) and use that as the base instead of /dev/sda2.
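
If your table does show a nonzero start offset, the intermediate device might look roughly like this (the name home-base and the 2048-sector offset are made up for illustration):

# dmsetup create home-base --table "0 3864024960 linear /dev/sda2 2048"

You would then use /dev/mapper/home-base in place of /dev/sda2 in the commands that follow.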

In the dmsetup table output above, home is using a single-entry table with the linear target. You will need to replace this table with a new one, which uses the snapshot target.

Device mapper provides three targets for snapshotting:

  • The snapshot target, which saves writes to the specified COW device. (Note that even though it's called a snapshot, the term is a little misleading: the snapshot itself is writable, while the underlying device remains unchanged.)

  • The snapshot-origin target, which sends writes to the underlying device, but also saves the old data that those writes overwrote to the specified COW device.

  • The snapshot-merge target, which takes the same parameters as snapshot but merges the chunks saved in the COW device back into the underlying device; it is used below to commit the snapshot.

Typically, you would make home a snapshot-origin target, then create some snapshot targets on top of it. This is what LVM does. However, a simpler method would be to simply create a snapshot target directly, which is what I'll show below.
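
For completeness, a rough sketch of that LVM-style layout, assuming a COW device such as the /dev/loop0 prepared further below (the device name home-snap is made up, and this is not what the remaining steps use):

# dmsetup suspend home && \
  dmsetup reload home --table \
    "0 $(blockdev --getsz /dev/sda2) snapshot-origin /dev/sda2" && \
  dmsetup resume home
# dmsetup create home-snap --table \
    "0 $(blockdev --getsz /dev/sda2) snapshot /dev/sda2 /dev/loop0 P 8"

/dev/mapper/home-snap could then be mounted read-only somewhere else for the backup, while /home keeps receiving writes through the snapshot-origin device.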

Regardless of the method you choose, you must not write to the underlying device (/dev/sda2), or the snapshots will see a corrupted view of the filesystem. So, as a precaution, you should mark the underlying block device as read-only:

# blockdev --setro /dev/sda2

This won't affect device-mapper devices backed by it, so if you've already re-mounted /home on /dev/mapper/home, it should not have a noticeable effect.

Next, you will need to prepare the COW device, which will store changes since the snapshot was made. This has to be a block device, but can be backed by a sparse file. If you want to use a sparse file of e.g. 32GB:

# dd if=/dev/zero bs=1M count=0 seek=32768 of=/home_cow
# losetup --find --show /home_cow
/dev/loop0

Obviously, the sparse file shouldn't be on the filesystem you're snapshotting :)

Now you can reload the device's table and turn it into a snapshot device:

# dmsetup suspend home && \
  dmsetup reload home --table \
    "0 $(blockdev --getsz /dev/sda2) snapshot /dev/sda2 /dev/loop0 PO 8" && \
  dmsetup resume home

If that succeeds, new writes to /home should now be recorded in the /home_cow file, instead of being written to /dev/sda2. Make sure to monitor the size of the COW file, as well as the free space on the filesystem it's on, to avoid running out of COW space.
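
One way to keep an eye on that (a suggestion, not part of the original answer): for a snapshot target, dmsetup status reports allocated/total sectors followed by the metadata sectors, and du shows how much space the sparse COW file actually occupies:

# watch -n 10 'dmsetup status home; du -h /home_cow; df -h /home_cow'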

Once you no longer need the snapshot, you can merge it (to permanently commit the changes in the COW file to the underlying device), or discard it.

  • To merge it:

    1. Replace the table with a snapshot-merge target instead of a snapshot target:

      # dmsetup suspend home && \
        dmsetup reload home --table \
          "0 $(blockdev --getsz /dev/sda2) snapshot-merge /dev/sda2 /dev/loop0 P 8" && \
        dmsetup resume home
      
    2. Next, monitor the status of the merge until all non-metadata blocks are merged:

      # watch dmsetup status home
      ...
      0 3864024960 snapshot-merge 281688/2097152 1104
      

      Note the three numbers at the end (X/Y Z). The merge is complete when X = Z; a scripted version of this wait is sketched after this list.

    3. Next, replace the table with a linear target again:

      # dmsetup suspend home && \
        dmsetup reload home --table \
          "0 $(blockdev --getsz /dev/sda2) linear /dev/sda2 0" && \
        dmsetup resume home
      
    4. Now you can dismantle the loop device:

      # losetup -d /dev/loop0
      
    5. Finally, you can delete the COW file.

      # rm /home_cow
      
  • To discard the snapshot, unmount /home, follow steps 3-5 above, and remount /home. Although Device Mapper will allow you to do this without unmounting /home, it doesn't make sense (since the running programs' state in memory won't correspond to the filesystem state any more), and it will likely corrupt your filesystem.
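
Since the question asks for a programmatic approach, step 2 above can also be scripted rather than watched by hand. A hedged sketch, assuming the status line format shown above (start, length, target, X/Y, Z):

# while true; do
    set -- $(dmsetup status home)   # e.g.: 0 3864024960 snapshot-merge 281688/2097152 1104
    x=${4%%/*}                      # X: COW sectors still allocated (data + metadata)
    z=$5                            # Z: metadata sectors
    [ "$x" = "$z" ] && break        # merge is done when only metadata remains (X = Z)
    sleep 5
  done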

长途伴 2024-08-22 18:14:43


I know that ext3 and ext4 in RedHat Enterprise, Fedora and CentOS automatically checkpoint when an LVM snapshot is created. That means there is never any problem mounting the snapshot, because it is always clean.

I believe XFS has the same support. I am not sure about other filesystems.

死开点丶别碍眼 2024-08-22 18:14:43


It depends on the filesystem you are using. With XFS you can use xfs_freeze -f to sync and freeze the FS, and xfs_freeze -u to unfreeze it again, so you can create your snapshot from the frozen volume, which should be in a safe state.
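
Put together, a minimal sketch of that sequence, assuming /home is an XFS filesystem on an LVM volume named vg0/home (both names are placeholders):

# xfs_freeze -f /home
# lvcreate --size 5G --snapshot --name home_snap /dev/vg0/home
# xfs_freeze -u /home

Keep the frozen window short: every write to /home blocks until the filesystem is thawed.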

ま柒月 2024-08-22 18:14:43


I'm not sure if this will do the trick for you, but you can remount a file system as read-only. mount -o remount,ro /lvm (or something similar) will do the trick. After you are done with your snapshot, you can remount it read-write using mount -o remount,rw /lvm.
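
As a sketch, the whole sequence might look like the following; vg0/lvm and the snapshot name are placeholders, and note that the read-only remount can fail with "mount point is busy" if any file is open for writing:

# mount -o remount,ro /lvm
# lvcreate --size 5G --snapshot --name lvm_snap /dev/vg0/lvm
# mount -o remount,rw /lvm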

孤独患者 2024-08-22 18:14:43


FS corruption is "highly unlikely" as long as you never work in any kind of professional environment. Otherwise you'll meet reality, and you might try blaming "bit rot" or "hardware" or whatever, but it all comes down to having been irresponsible. Freeze/thaw (as mentioned a few times, and only if called properly) is sufficient outside of database environments. For databases you still won't have a transaction-complete backup, and if you think a backup that rolls back some transactions is fine when restored: see the opening sentence.
Depending on the activity, you might just have added another 5-10 minutes of downtime if you ever need that backup.
Most of us can easily afford that, but it cannot be general advice.
Be honest about the downsides, guys.
