Message-ID: <YekhXENAEYJJNy7e@infradead.org>
Date:   Thu, 20 Jan 2022 00:46:20 -0800
From:   Christoph Hellwig <hch@...radead.org>
To:     Dan Williams <dan.j.williams@...el.com>
Cc:     "Darrick J. Wong" <djwong@...nel.org>,
        Shiyang Ruan <ruansy.fnst@...itsu.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        linux-xfs <linux-xfs@...r.kernel.org>,
        Linux NVDIMM <nvdimm@...ts.linux.dev>,
        Linux MM <linux-mm@...ck.org>,
        linux-fsdevel <linux-fsdevel@...r.kernel.org>,
        david <david@...morbit.com>,
        Christoph Hellwig <hch@...radead.org>,
        Jane Chu <jane.chu@...cle.com>
Subject: Re: [PATCH v9 02/10] dax: Introduce holder for dax_device

On Wed, Jan 05, 2022 at 04:12:04PM -0800, Dan Williams wrote:
> We ended up with explicit callbacks after hch balked at a notifier
> call-chain, but I think we're back to that now. The partition mistake
> might be unfixable, but at least bdev_dax_pgoff() is dead. Notifier
> call chains have their own locking so, Ruan, this still does not need
> to touch dax_read_lock().
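
On the locking point: a blocking notifier chain serializes on its own
rw_semaphore, so neither registering a handler nor firing the chain
needs dax_read_lock().  A minimal sketch (made-up chain and handler
names, nothing from the series):

#include <linux/notifier.h>

/* a blocking notifier head embeds its own rw_semaphore, so
 * registration and notification are serialized internally */
static BLOCKING_NOTIFIER_HEAD(dax_failure_chain);

static int fs_dax_failure_cb(struct notifier_block *nb,
			     unsigned long action, void *data)
{
	/* e.g. shut down the file system for the failed range */
	return NOTIFY_OK;
}

static struct notifier_block fs_dax_nb = {
	.notifier_call = fs_dax_failure_cb,
};

static int fs_dax_register_notifier(void)
{
	/* covered by the chain's internal rwsem */
	return blocking_notifier_chain_register(&dax_failure_chain,
						&fs_dax_nb);
}

static void fs_dax_notify_failure(void *info)
{
	/* likewise, no dax_read_lock() needed here */
	blocking_notifier_call_chain(&dax_failure_chain, 0, info);
}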

I think we have a few options here:

 (1) don't allow error notifications on partitions.  An error return
     from the holder registration, with proper error handling in the
     file system, would give us that
 (2) extend the holder mechanism to cover a range
 (3) bite the bullet and create a new stacked dax_device for each
     partition

I think (1) is the best option for now.  If people really do need
partitions we'll have to go for (3).
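
A very rough sketch of what (1) could look like; the helper, the
struct fields and the error codes below are made up for illustration,
not taken from Ruan's patches:

/* refuse holder registration on anything that is not a whole device;
 * the file system then either fails the mount or runs without memory
 * failure notification */
int dax_register_holder(struct dax_device *dax_dev, void *holder,
		const struct dax_holder_operations *ops)
{
	if (dax_covers_partition(dax_dev))	/* made-up helper */
		return -EOPNOTSUPP;
	/* one holder per dax_device */
	if (cmpxchg(&dax_dev->holder_data, NULL, holder))
		return -EBUSY;
	dax_dev->holder_ops = ops;
	return 0;
}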
