Message-ID: <164846619932.251310.3668540533992131988.stgit@pro>
Date: Mon, 28 Mar 2022 14:18:16 +0300
From: Kirill Tkhai <kirill.tkhai@...nvz.org>
To: agk@...hat.com, snitzer@...hat.com, dm-devel@...hat.com,
song@...nel.org, linux-kernel@...r.kernel.org,
khorenko@...tuozzo.com, kirill.tkhai@...nvz.org
Subject: [PATCH 0/4] dm: Introduce dm-qcow2 driver to attach QCOW2 files as
block device
This patchset adds a new driver that allows attaching QCOW2 files
as block devices. The idea is to implement in the kernel only the
features that affect runtime IO performance (IO request processing),
while maintenance operations are processed synchronously in userspace
when the device is suspended. Userspace is only allowed to perform
operations that never modify the virtual disk's data; it may only
modify the QCOW2 metadata that describes that data. Examples of
allowed operations are snapshot creation and resize. The userspace
part is handled by already existing tools (qemu-img).
For instance, snapshot creation on an attached dm-qcow2 device looks like:
# dmsetup suspend $device
# qemu-img snapshot -c <snapshot_name> $device.qcow2
# dmsetup resume $device
1) Suspend flushes all pending IO and related metadata to the file,
   leaving the file in consistent QCOW2 format.
   The driver's .postsuspend then throws away all of the image's cached metadata.
2) qemu-img creates the snapshot: it changes/moves metadata inside the QCOW2 file.
3) The driver's .preresume reads the new version of the metadata
   from the file (a single page is enough), and the device is ready
   to continue handling IO requests.
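After resume, the result can be checked with stock qemu-img tooling
(an ordinary qemu-img invocation, not part of this patchset), e.g.:
# qemu-img snapshot -l $device.qcow2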
This example shows how the device-mapper infrastructure allows
implementing drivers that follow this idea of kernel/userspace
demarcation: the driver reuses the advantages of device-mapper
instead of implementing its own suspend/resume engine.
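Roughly, the hooks mentioned above plug into the generic dm target
callbacks. The sketch below is purely illustrative (hypothetical
names, IO-path callbacks omitted); the real code is in patch [3]:

#include <linux/module.h>
#include <linux/device-mapper.h>

static int example_qcow2_ctr(struct dm_target *ti, unsigned int argc,
			     char **argv)
{
	/* Open the QCOW2 file given in the dm table and read its metadata. */
	return 0;
}

static void example_qcow2_dtr(struct dm_target *ti)
{
	/* Release the file and free any cached metadata. */
}

static void example_qcow2_postsuspend(struct dm_target *ti)
{
	/*
	 * All in-flight IO has already been flushed by dm core; drop the
	 * cached QCOW2 metadata so userspace may safely rewrite it.
	 */
}

static int example_qcow2_preresume(struct dm_target *ti)
{
	/* Re-read the (possibly changed) metadata before IO handling restarts. */
	return 0;
}

static struct target_type example_qcow2_target = {
	.name        = "qcow2-example",
	.version     = {1, 0, 0},
	.module      = THIS_MODULE,
	.ctr         = example_qcow2_ctr,
	.dtr         = example_qcow2_dtr,
	/* The actual IO path (request mapping) is omitted in this sketch. */
	.postsuspend = example_qcow2_postsuspend,
	.preresume   = example_qcow2_preresume,
};

static int __init example_qcow2_init(void)
{
	return dm_register_target(&example_qcow2_target);
}

static void __exit example_qcow2_exit(void)
{
	dm_unregister_target(&example_qcow2_target);
}

module_init(example_qcow2_init);
module_exit(example_qcow2_exit);
MODULE_LICENSE("GPL");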
The following fio test was used to measure performance:
# fio --name=test --ioengine=libaio --direct=1 --bs=$bs --filename=$dev
--readwrite=$rw --runtime=60 --numjobs=2 --iodepth=8
The collected results consist of both the fio measurements and the
system load taken from /proc/loadavg. Since the minimum loadavg
period is 60 seconds, fio's runtime is 60 seconds as well.
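(One simple way to sample that value is to read the 1-minute field of
/proc/loadavg right after each run finishes, e.g.
# fio ... ; awk '{print $1}' /proc/loadavg
This is just an illustration, not necessarily the exact script used here.)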
Below are the average results of 5 runs (IO/loadavg is also the
average of the per-run IO/loadavg values):
-----+-------+--------------------------------+--------------------------------+----------------------------+
     |       |     qemu-nbd (native aio)      |            dm-qcow2            |          diff, %           |
 bs  |  RW   | IO,MiB/s  loadavg  IO/loadavg  | IO,MiB/s  loadavg  IO/loadavg  |      IO  loadavg IO/loadavg|
-----+-------+--------------------------------+--------------------------------+----------------------------+
 4K  | READ  |      279    1.986         147  |      512    2.088         248  |   +83.7     +5.1      +68.4|
 4K  | WRITE |      242     2.31         105  |      770    2.172         357  |  +217.9     -5.9     +239.7|
-----+-------+--------------------------------+--------------------------------+----------------------------+
 64K | READ  |     1199    1.794         691  |     1218    1.118        1217  |    +1.6    -37.7        +76|
 64K | WRITE |      946    1.084         877  |     1003    0.466        2144  |    +6.1      -57     +144.5|
-----+-------+--------------------------------+--------------------------------+----------------------------+
 512K| READ  |     1741    1.142        1526  |     2196    0.546        4197  |   +26.1    -52.2     +175.1|
 512K| WRITE |     1016    1.084         941  |      993    0.306        3267  |    -2.2    -71.7     +246.9|
-----+-------+--------------------------------+--------------------------------+----------------------------+
 1M  | READ  |     1793    1.174        1542  |     2373    0.566        4384  |   +32.4    -51.8     +184.2|
 1M  | WRITE |     1037    0.894        1165  |     1068    0.892        1196  |    +2.9     -0.2       +2.7|
-----+-------+--------------------------------+--------------------------------+----------------------------+
 2M  | READ  |     1784    1.084        1654  |     2431    0.788        3090  |   +36.3    -27.3      +86.8|
 2M  | WRITE |     1027    0.878        1172  |     1063    0.878        1212  |    +3.6        0       +3.4|
-----+-------+--------------------------------+--------------------------------+----------------------------+
(The qemu-nbd device was attached with: qemu-nbd -c $dev --aio=native --nocache file.qcow2)
As the diff columns show, the dm-qcow2 driver has the best throughput
(the only exception is 512K WRITE) and the smallest loadavg (the only
exception is 4K READ). The density (throughput per unit of load) of
dm-qcow2 is significantly better.
(Note that the tests were made on preallocated images, i.e. with all
L2 tables allocated, since QEMU has a lazy L2 allocation feature that
is not implemented in dm-qcow2 yet.)
So, one of the reasons for implementing the driver is to provide
better performance and density than qemu-nbd does. The second reason
is the possibility of unifying the virtual disk format for VMs and
containers, so that the same disk image can be used to start either
of them.
This patchset consists of 4 patches. Patches [1-2] make small
changes to the dm code: [1] exports a function, while [2] makes
.io_hints be called for targets that do not have .iterate_devices.
Patch [3] adds dm-qcow2 itself, while patch [4] adds a userspace
wrapper for attaching such devices.
---
Kirill Tkhai (4):
dm: Export dm_complete_request()
dm: Process .io_hints for drivers not having underlying devices
dm-qcow2: Introduce driver to create block devices over QCOW2 files
dm-qcow2: Add helper for working with dm-qcow2 devices
drivers/md/Kconfig | 17 +
drivers/md/Makefile | 2 +
drivers/md/dm-qcow2-cmd.c | 383 +++
drivers/md/dm-qcow2-map.c | 4256 ++++++++++++++++++++++++++++++++++
drivers/md/dm-qcow2-target.c | 1026 ++++++++
drivers/md/dm-qcow2.h | 368 +++
drivers/md/dm-rq.c | 3 +-
drivers/md/dm-rq.h | 2 +
drivers/md/dm-table.c | 5 +-
scripts/qcow2-dm.sh | 249 ++
10 files changed, 6309 insertions(+), 2 deletions(-)
create mode 100644 drivers/md/dm-qcow2-cmd.c
create mode 100644 drivers/md/dm-qcow2-map.c
create mode 100644 drivers/md/dm-qcow2-target.c
create mode 100644 drivers/md/dm-qcow2.h
create mode 100755 scripts/qcow2-dm.sh
--
Signed-off-by: Kirill Tkhai <kirill.tkhai@...nvz.org>