Message-ID: <93fbcd21-14e3-a46e-91ab-8cc7ca9ed550@veeam.com>
Date: Fri, 14 Apr 2023 15:39:59 +0200
From: Sergei Shtepa <sergei.shtepa@...am.com>
To: Donald Buczek <buczek@...gen.mpg.de>,
Christoph Hellwig <hch@...radead.org>
CC: <axboe@...nel.dk>, <corbet@....net>, <snitzer@...nel.org>,
<viro@...iv.linux.org.uk>, <brauner@...nel.org>,
<willy@...radead.org>, <kch@...dia.com>,
<martin.petersen@...cle.com>, <vkoul@...nel.org>,
<ming.lei@...hat.com>, <gregkh@...uxfoundation.org>,
<linux-block@...r.kernel.org>, <linux-doc@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <linux-fsdevel@...r.kernel.org>
Subject: Re: [PATCH v3 02/11] block: Block Device Filtering Mechanism
On 4/12/23 21:59, Donald Buczek wrote:
> On 4/12/23 12:43, Sergei Shtepa wrote:
>>
>>
>> On 4/11/23 08:25, Christoph Hellwig wrote:
>>> On Sat, Apr 08, 2023 at 05:30:19PM +0200, Donald Buczek wrote:
>>>> Maybe detach the old filter and attach the new one instead? An atomic replace might be useful, and it wouldn't complicate the code to do that instead. If it's the same filter, maybe just return success and don't go through ops->detach and ops->attach?
>>> I don't think a replace makes any sense. We might want multiple
>>> filters eventually, but unless we have a good use case for even just
>>> more than a single driver we can deal with that once needed. The
>>> interface is prepared to support multiple attached filters already.
>>>
>>
>>
>> Thank you, Donald, for your comment. It got me thinking.
>>
>> Although only one filter is currently proposed for the kernel, I think
>> that out-of-tree block device filters may appear very soon. It would be
>> good to think about this in advance.
>> And I agree with Christoph: we would not want to redo the blk-filter
>> interface when new filters appear in the tree.
>>
>> We can consider a block device as a resource that two actors want to take
>> over. There are two possible strategies:
>> 1. If one actor occupies the resource, ownership requests from other
>> actors are refused. The owner never loses its resource.
>> 2. Any actor can take the resource away from the current owner, who is
>> informed of the loss via a callback.
>>
>> I think the first strategy is safer. When the BLKFILTER_ATTACH ioctl is
>> called on a busy device, the kernel informs the actor that the resource
>> is occupied. Of course, it is still possible to grab a resource occupied
>> by someone else: the actor has to call the BLKFILTER_DETACH ioctl,
>> specifying the name of the filter to be detached. The assumption is that
>> such a detach is performed by the same actor that attached the filter.
>>
>> If we replaced the owner on each BLKFILTER_ATTACH ioctl, two actors
>> could end up competing for the same device. Worse, neither would even
>> get a message that something is going wrong.
>>
>> A real-life example: a user compares different backup tools, installing
>> first one and then another. Each uses its own filter (and why not? This
>> is technically possible).
>> With the first strategy, the second tool makes it clear to the user that
>> it cannot work, since the resource is already occupied by another. The
>> user has to experiment with one tool first, uninstall it, and then
>> experiment with the other.
>> With the second strategy, the two tools unload each other's filters. At
>> best, this merely disrupts their work. At a minimum, blksnap resets its
>> change tracker when detached, so each backup performs a full read of the
>> block device. As a result, the user receives distorted data and the
>> system does not work as planned, although there is no error message.
>
> I had a more complicated scenario in mind. For example, some kind of live migration
> from one block device to another, when you switch from the filter which clones from the
> source device to the target device to the filter which just redirects from the source
> device to the target device as the last step.
'Live migration' - I've heard that idea before.
I think it makes sense to write up what 'live migration' should look like
for Linux: describe its purpose, the cases where it would be useful, and
the general algorithm. It seems to me that an implementation of 'live
migration' could be similar to what is already implemented in blksnap.
Perhaps I could be useful in such a project.
> OTOH, that may be a very distant vision. Plus, one single and simple filter, which
> redirects I/O into a DM stack, would be enough or better anyway to do the more
> complicated things using the DM features, which include atomic replacement and
> stacking and everything.
>
> I don't have a strong opinion.
About a single, simple filter that redirects I/O into a DM stack:
yes, at first glance such an implementation looks simple and obvious.
I tried to go this way and failed. Maybe someone else can do it. I'll be glad.