Message-ID: <3be63d9f-d8eb-7657-86dc-8d57187e5940@suse.de>
Date:   Wed, 23 Jun 2021 16:01:40 +0200
From:   Hannes Reinecke <hare@...e.de>
To:     Lennart Poettering <mzxreary@...inter.de>,
        Matteo Croce <mcroce@...ux.microsoft.com>
Cc:     Christoph Hellwig <hch@...radead.org>, linux-block@...r.kernel.org,
        linux-fsdevel@...r.kernel.org, Jens Axboe <axboe@...nel.dk>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Luca Boccassi <bluca@...ian.org>,
        Alexander Viro <viro@...iv.linux.org.uk>,
        Damien Le Moal <damien.lemoal@....com>,
        Tejun Heo <tj@...nel.org>,
        Javier González <javier@...igon.com>,
        Niklas Cassel <niklas.cassel@....com>,
        Johannes Thumshirn <johannes.thumshirn@....com>,
        Matthew Wilcox <willy@...radead.org>,
        JeffleXu <jefflexu@...ux.alibaba.com>
Subject: Re: [PATCH v3 1/6] block: add disk sequence number

On 6/23/21 3:51 PM, Lennart Poettering wrote:
> On Mi, 23.06.21 15:10, Matteo Croce (mcroce@...ux.microsoft.com) wrote:
> 
>> On Wed, Jun 23, 2021 at 1:49 PM Christoph Hellwig <hch@...radead.org> wrote:
>>>
>>> On Wed, Jun 23, 2021 at 12:58:53PM +0200, Matteo Croce wrote:
>>>> +void inc_diskseq(struct gendisk *disk)
>>>> +{
>>>> +     static atomic64_t diskseq;
>>>
>>> Please don't hide file scope variables in functions.
>>>
>>
>> I just didn't want to clobber that file namespace, as that is the only
>> point where it's used.
>>
>>> Can you explain a little more why we need a global sequence count vs
>>> a per-disk one here?
>>
>> The point of the whole series is to have a unique sequence number for
>> all the disks.
>> Events can arrive in userspace delayed or out of order, so this
>> helps correlate events with the disk they came from.
>> It might seem strange, but there isn't a way to do this yet, so I came
>> up with a global, monotonically increasing number.
> 
> To extend on this and give an example of why the *global* sequence
> number matters:
> 
> Consider you plug in a USB storage key, and it gets named
> /dev/sda. You unplug it, and the kernel structures for that device all
> disappear. Then you plug in a different USB storage key, and since
> it's the only one it too will be called /dev/sda.
> 
> With the global sequence number we can still distinguish these two
> devices even though otherwise they can look pretty much identical. If
> we had per-device counters then this would fall flat because the
> counter would be flushed out when the device disappears and when a device
> reappears under the same generic name we couldn't assign it a
> different sequence number than before.
> 
> Thus: a global rather than per-device sequence counter is absolutely
> *key* to the problem this is supposed to solve.
> 
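
For reference, the approach described above boils down to something
like the sketch below. This is only a reconstruction of the quoted
hunk, not the actual patch; in particular the disk->diskseq member
used here is an assumption, as the quote only shows the first two
lines.

/* Sketch: one global, monotonically increasing counter shared by all
 * disks, so a number is never reused even when a device node
 * (e.g. /dev/sda) is.
 */
#include <linux/genhd.h>
#include <linux/atomic.h>

void inc_diskseq(struct gendisk *disk)
{
	static atomic64_t diskseq;

	/* Assumed field; stamp the disk with the next global value. */
	disk->diskseq = atomic64_inc_return(&diskseq);
}
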
Well ... except that you'll need to keep track of the numbers (otherwise 
you wouldn't know if the numbers changed, right?).
And if you keep track of the numbers you will probably have to implement
an uevent listener to get the events in time.
But if you have an uevent listener you will also get the add/remove
events for those devices.
And if you get add and remove events you might as well implement sequence
numbers in your application, seeing that you already have all the
information you need to do so.
So why burden the kernel with it?
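
A minimal sketch of what such an application-side listener might look
like (not from the thread; it uses libudev, link with -ludev, and the
sequence number is purely local to the process):

/* Hypothetical userspace alternative: assign our own sequence number
 * to every block device "add" event seen via a udev monitor.
 */
#include <poll.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <libudev.h>

int main(void)
{
	struct udev *udev = udev_new();
	struct udev_monitor *mon = udev_monitor_new_from_netlink(udev, "udev");
	struct pollfd pfd = { .events = POLLIN };
	uint64_t seq = 0;	/* process-local sequence number */

	udev_monitor_filter_add_match_subsystem_devtype(mon, "block", NULL);
	udev_monitor_enable_receiving(mon);
	pfd.fd = udev_monitor_get_fd(mon);

	for (;;) {
		struct udev_device *dev;
		const char *action, *node;

		if (poll(&pfd, 1, -1) <= 0)
			continue;
		dev = udev_monitor_receive_device(mon);
		if (!dev)
			continue;
		action = udev_device_get_action(dev);
		node = udev_device_get_devnode(dev);
		if (action && node && !strcmp(action, "add"))
			printf("%s -> seq %llu\n", node,
			       (unsigned long long)++seq);
		udev_device_unref(dev);
	}
}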

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@...e.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer
