Message-ID: <7e7c19f3-ec6a-9a82-5b6f-04edbdc7040d@openvz.org>
Date:   Wed, 30 Mar 2022 01:30:15 +0300
From:   Kirill Tkhai <kirill.tkhai@...nvz.org>
To:     Christoph Hellwig <hch@...radead.org>
Cc:     agk@...hat.com, snitzer@...hat.com, dm-devel@...hat.com,
        song@...nel.org, linux-kernel@...r.kernel.org,
        khorenko@...tuozzo.com, axboe@...nel.dk,
        linux-block@...r.kernel.org
Subject: Re: [dm-devel] [PATCH 3/4] dm-qcow2: Introduce driver to create block
 devices over QCOW2 files

On 29.03.2022 18:24, Kirill Tkhai wrote:
> On 29.03.2022 16:34, Christoph Hellwig wrote:
>> On Mon, Mar 28, 2022 at 02:18:35PM +0300, Kirill Tkhai wrote:
>>> The driver is request based, since this allows it to use blk-mq
>>> merging of requests. The driver splits requests itself, and after
>>> splitting every request (i.e., qio) fits within a single cluster.
>>> (In some cases it is worth creating bigger splits; this is a
>>> subject of further optimization.)
>>
>> Nak, please don't do that.  If you want finer grained processing use
>> a bio based driver, not a request based one.  This is just getting us
>> into tons of problems.
> 
> Could you explain what you mean? Why shouldn't I use the generic merging
> code instead of implementing my own merging? Which problems are you
> pointing at?
> 
> Generic blk-mq is well tested and lots of people work on its performance.
> There is no obvious reason why I should implement my own version instead.

In addition to my previous message: it's possible I confused you with the
patch description, so let me explain with an example.

There are many cases where a block device receives sequential bios that
consist of a single page. The driver wants these bios to be merged together,
because it's not a good idea to call call_write_iter() for each such 1-page bio.
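
To illustrate (this is only a rough sketch, not the real dm-qcow2 code: the
qcow2_dev struct, the function name and the dropped error handling are mine,
similar in spirit to what drivers/block/loop.c does in lo_rw_aio()), a
request-based driver can turn one merged request into a single ->write_iter()
call roughly like this:

static ssize_t qio_submit_write(struct qcow2_dev *dev, struct request *rq,
				loff_t file_pos)
{
	unsigned int i = 0, nr_segs = blk_rq_nr_phys_segments(rq);
	struct req_iterator rq_iter;
	struct bio_vec bv, *bvecs;
	struct iov_iter iter;
	struct kiocb kiocb;
	ssize_t ret;

	bvecs = kmalloc_array(nr_segs, sizeof(*bvecs), GFP_NOIO);
	if (!bvecs)
		return -ENOMEM;

	/* Collect the segments that blk-mq has already merged for us. */
	rq_for_each_bvec(bv, rq, rq_iter)
		bvecs[i++] = bv;

	iov_iter_bvec(&iter, WRITE, bvecs, i, blk_rq_bytes(rq));
	init_sync_kiocb(&kiocb, dev->file);
	kiocb.ki_pos = file_pos;

	/* One call_write_iter() for the whole merged request,
	 * not one per 1-page bio.
	 */
	ret = call_write_iter(dev->file, &kiocb, &iter);
	kfree(bvecs);
	return ret;
}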

At the same time, in the QCOW2 format two sequential clusters may be placed
non-contiguously in the file. For instance, suppose we have a 1Mb cluster
size and the following mapping:

virtual device range      range inside QCOW2 file
[0Mb, 1Mb]            ->  [20Mb, 21Mb]
[1Mb, 2Mb]            ->  [30Mb, 31Mb].

This may happen because cluster #1 was allocated later than cluster #0.
If a request covers [512K, 1.5Mb], it crosses the cluster boundary, and we
want to split it in two: otherwise it is impossible to call call_write_iter(),
since the two halves map to non-contiguous ranges of the file.
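
To make the arithmetic concrete, here is a tiny standalone sketch (plain C,
nothing taken from the real driver; the mapping table just repeats the example
above) that cuts the [512K, 1.5Mb] request at the 1Mb boundary and remaps each
piece to its contiguous range in the file:

#include <stdio.h>
#include <stdint.h>

#define MB		(1024 * 1024ULL)
#define CLU_SIZE	(1 * MB)		/* 1Mb cluster size */

/* virtual cluster index -> offset of that cluster inside the QCOW2 file */
static const uint64_t cluster_map[] = { 20 * MB, 30 * MB };

int main(void)
{
	uint64_t start = 512 * 1024, len = 1 * MB;	/* request [512K, 1.5Mb] */

	while (len) {
		uint64_t in_clu = start % CLU_SIZE;	/* offset inside the cluster */
		uint64_t chunk = CLU_SIZE - in_clu;	/* bytes up to the boundary  */
		uint64_t file_off;

		if (chunk > len)
			chunk = len;
		file_off = cluster_map[start / CLU_SIZE] + in_clu;

		printf("piece: virt [%llu, %llu) -> file [%llu, %llu)\n",
		       (unsigned long long)start,
		       (unsigned long long)(start + chunk),
		       (unsigned long long)file_off,
		       (unsigned long long)(file_off + chunk));

		start += chunk;
		len -= chunk;
	}
	return 0;
}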

Solving the 1-page bio problem by implementing our own elevator just for this
driver does not look like a good idea. Making an elevator whose performance is
comparable to the generic block layer's is not a trivial task; at the very
least it would require about the same amount of code that the block layer
already has. That is not a solution for a single driver.

From the driver side, it tells the block layer its optimal IO size, so the
number of splits should gravitate toward zero.
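
For example (again only a sketch: the callback body and the qcow2_target
struct are illustrative, not the real driver code), a device-mapper target can
report the cluster size as its optimal IO size through the io_hints hook:

static void qcow2_io_hints(struct dm_target *ti, struct queue_limits *limits)
{
	struct qcow2_target *tgt = ti->private;

	/* Hint that cluster-sized, cluster-aligned IO is optimal. */
	blk_limits_io_opt(limits, tgt->cluster_size);
}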

Looking at other drivers like loop and nbd, both are request based. The need
to split a request to fit a filesystem extent is not a reason to stop merging
loop requests, is it?! Also, nbd merges bios even though qemu-nbd splits them
again in userspace to fit a 1-cluster boundary, just like my driver does,
because this is the only way. I already tried the other approach when I
started writing this driver as bio-based, but found that it is not a good
solution and converted it to request-based.

Kirill
