Message-ID: <ZXBm95si+j7lmalf@infradead.org>
Date: Wed, 6 Dec 2023 04:20:07 -0800
From: Christoph Hellwig <hch@...radead.org>
To: Dave Chinner <david@...morbit.com>
Cc: Christoph Hellwig <hch@...radead.org>, Baokun Li <libaokun1@...wei.com>,
Jan Kara <jack@...e.cz>, linux-mm@...ck.org,
linux-ext4@...r.kernel.org, tytso@....edu, adilger.kernel@...ger.ca,
willy@...radead.org, akpm@...ux-foundation.org,
ritesh.list@...il.com, linux-kernel@...r.kernel.org,
yi.zhang@...wei.com, yangerkun@...wei.com, yukuai3@...wei.com
Subject: Re: [PATCH -RFC 0/2] mm/ext4: avoid data corruption when extending
DIO write race with buffered read

On Wed, Dec 06, 2023 at 09:34:49PM +1100, Dave Chinner wrote:
> Largely they were performance problems - unpredictable IO latency
> and CPU overhead for IO meant applications would randomly miss SLAs.
> The application would see IO suddenly lose all concurrency, go real
> slow and/or burn lots more CPU when the inode switched to buffered
> mode.
>
> I'm not sure that's a particularly viable model given the raw IO
> throughput even cheap modern SSDs largely exceeds the capability of
> buffered IO through the page cache. The differences in concurrency,
> latency and throughput between buffered and DIO modes will be even
> more stark today than they were 20 years ago....

The question is what's worse: random performance drops or random
corruption.  I suspect the former is less bad, especially if we have
good tracepoints to pin it down.
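
For illustration, a tracepoint along the lines sketched below could make
the fallback visible.  This is purely hypothetical: the event name, the
fields, and the idea that ext4 would emit it when a DIO write falls back
to buffered mode are assumptions for the sketch, not an existing ext4
tracepoint:

/*
 * Hypothetical tracepoint sketch: fires when a direct IO write falls
 * back to buffered mode, recording enough state to tie a latency or
 * CPU-usage change to a specific inode.  Event name and fields are
 * illustrative only.
 */
#include <linux/tracepoint.h>

TRACE_EVENT(ext4_dio_buffered_fallback,
	TP_PROTO(struct inode *inode, loff_t pos, size_t count),
	TP_ARGS(inode, pos, count),

	TP_STRUCT__entry(
		__field(dev_t,  dev)
		__field(ino_t,  ino)
		__field(loff_t, pos)
		__field(size_t, count)
	),

	TP_fast_assign(
		__entry->dev   = inode->i_sb->s_dev;
		__entry->ino   = inode->i_ino;
		__entry->pos   = pos;
		__entry->count = count;
	),

	TP_printk("dev %d,%d ino %lu pos %lld count %zu",
		  MAJOR(__entry->dev), MINOR(__entry->dev),
		  (unsigned long)__entry->ino,
		  (long long)__entry->pos, __entry->count)
);

An application hitting the slow path could then be caught in the act
with something like "trace-cmd record -e ext4:ext4_dio_buffered_fallback"
(again assuming the hypothetical event above existed).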