Message-ID: <ZhYQANQATz82ytl1@casper.infradead.org>
Date: Wed, 10 Apr 2024 05:05:20 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Luis Chamberlain <mcgrof@...nel.org>
Cc: John Garry <john.g.garry@...cle.com>,
Pankaj Raghav <p.raghav@...sung.com>,
Daniel Gomez <da.gomez@...sung.com>,
Javier González <javier.gonz@...sung.com>,
axboe@...nel.dk, kbusch@...nel.org, hch@....de, sagi@...mberg.me,
jejb@...ux.ibm.com, martin.petersen@...cle.com, djwong@...nel.org,
viro@...iv.linux.org.uk, brauner@...nel.org, dchinner@...hat.com,
jack@...e.cz, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-nvme@...ts.infradead.org,
linux-fsdevel@...r.kernel.org, tytso@....edu, jbongio@...gle.com,
linux-scsi@...r.kernel.org, ojaswin@...ux.ibm.com,
linux-aio@...ck.org, linux-btrfs@...r.kernel.org,
io-uring@...r.kernel.org, nilay@...ux.ibm.com,
ritesh.list@...il.com
Subject: Re: [PATCH v6 00/10] block atomic writes
On Mon, Apr 08, 2024 at 10:50:47AM -0700, Luis Chamberlain wrote:
> On Fri, Apr 05, 2024 at 11:06:00AM +0100, John Garry wrote:
> > On 04/04/2024 17:48, Matthew Wilcox wrote:
> > > > > The thing is that there's no requirement for an interface as complex as
> > > > > the one you're proposing here. I've talked to a few database people
> > > > > and all they want is to increase the untorn write boundary from "one
> > > > > disc block" to one database block, typically 8kB or 16kB.
> > > > >
> > > > > So they would be quite happy with a much simpler interface where they
> > > > > set the inode block size at inode creation time,
> > > > We want to support untorn writes for bdev file operations - how can we set
> > > > the inode block size there? Currently it is based on logical block size.
> > > ioctl(BLKBSZSET), I guess? That currently limits to PAGE_SIZE, but I
> > > think we can remove that limitation with the bs>PS patches.
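For concreteness, a minimal user-space sketch of driving BLKBSZSET; the
device path and the 16k size here are illustrative, and today the kernel
rejects sizes above PAGE_SIZE with EINVAL, which is exactly the limit
under discussion:

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <unistd.h>
	#include <linux/fs.h>		/* BLKBSZSET */

	int main(void)
	{
		int bs = 16384;		/* desired block size; illustrative */
		int fd = open("/dev/nvme0n1", O_RDWR);	/* illustrative device */

		if (fd < 0) {
			perror("open");
			return 1;
		}
		/* Sets the bdev inode's soft block size; needs CAP_SYS_ADMIN.
		 * Today this fails with EINVAL for bs > PAGE_SIZE. */
		if (ioctl(fd, BLKBSZSET, &bs) < 0)
			perror("BLKBSZSET");
		close(fd);
		return 0;
	}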
>
> I can say a bit more on this, as I explored it. Essentially, Matthew,
> yes, I got that to work, but it requires a set of different patches. We
> have what we tried, and based on feedback from Chinner we have a
> direction on what to try next. The last effort on that front was having
> the bdev use the iomap aops and lifting the PAGE_SIZE limit up to the
> page cache limits. The crux there was that we end up requiring
> BUFFER_HEAD to be disabled, which is pretty limiting, so my old
> implementation had dynamic aops to let us use the buffer-head aops only
> for filesystems which require them and the iomap aops otherwise. But as
> Chinner noted, we learned through the DAX experience that that's not a
> route we want to try again, so the real solution is to extend the iomap
> bdev aops code with buffer-head compatibility.
Have you tried just using the buffer_head code? I think you heard bad
advice at the last LSFMM. Since then I've landed a bunch of patches which
remove PAGE_SIZE assumptions throughout the buffer_head code, and while
I haven't tried it, it might work. And it might be easier to make work
than adding more BH hacks to the iomap code.
A quick audit for problems ...
__getblk_slow:
	if (unlikely(size & (bdev_logical_block_size(bdev)-1) ||
			(size < 512 || size > PAGE_SIZE))) {
cont_expand_zero (not used by bdev code)
cont_write_begin (ditto)
That's all I spot from a quick grep for PAGE, offset_in_page() and kmap.
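For illustration only, that size check could presumably be relaxed along
these lines once the bdev page cache supports large folios (a sketch, not
a tested patch; MAX_PAGECACHE_ORDER is the page cache's folio-order cap):

	/* Sketch: permit block sizes up to the largest folio the page
	 * cache can hold, instead of capping at PAGE_SIZE. */
	if (unlikely(size & (bdev_logical_block_size(bdev) - 1) ||
		     size < 512 ||
		     size > (PAGE_SIZE << MAX_PAGECACHE_ORDER)))
		return NULL;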
You can't do a lot of buffer_heads per folio, because you'll overrun
	struct buffer_head *bh, *head, *arr[MAX_BUF_PER_PAGE];
in block_read_full_folio(), but you can certainly do _one_ buffer_head
per folio, and that's all you need for bs>PS.
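To put numbers on that (a sketch of the invariant, not code from the
tree):

	/* Buffers needed for one folio at a given block size: */
	unsigned int nr = folio_size(folio) >> blkbits;

	/* bs == folio size  =>  nr == 1, which always fits in
	 * arr[MAX_BUF_PER_PAGE].  A large folio of small blocks would
	 * not: a 64k folio of 512-byte blocks needs 128 entries, while
	 * MAX_BUF_PER_PAGE is PAGE_SIZE / 512 (8 with 4k pages). */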
> I suspect this is a use case where perhaps the max folio order could be
> set for the bdev in the future: the logical block size would set the min
> order, and the large atomic write size the max order.
No, that's not what we want to do at all! Minimum writeback size needs
to be the atomic size, otherwise we have to keep track of which writes
are atomic and which ones aren't. So, just set the logical block size
to the atomic size, and we're done.
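In page cache terms that looks something like the sketch below, assuming
a helper in the spirit of the min/max folio order interface from the
bs>PS series (the helper name and the bd_inode access are assumptions,
not merged API):

	/* Sketch: a 16k atomic write unit exposed as the logical block
	 * size pins the minimum folio order, so every folio, and hence
	 * every writeback I/O, covers at least one atomic unit. */
	unsigned int min_order = get_order(atomic_write_unit); /* 16k -> 2 */

	mapping_set_folio_order_range(bdev->bd_inode->i_mapping,
				      min_order, MAX_PAGECACHE_ORDER);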