Message-ID: <74cceb67-2e71-455f-a4d4-6c5185ef775b@meta.com>
Date: Mon, 16 Sep 2024 10:47:10 +0200
From: Chris Mason <clm@...a.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>,
Dave Chinner <david@...morbit.com>
Cc: Jens Axboe <axboe@...nel.dk>, Matthew Wilcox <willy@...radead.org>,
Christian Theune <ct@...ingcircus.io>, linux-mm@...ck.org,
"linux-xfs@...r.kernel.org" <linux-xfs@...r.kernel.org>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
Daniel Dao <dqminh@...udflare.com>, regressions@...ts.linux.dev,
regressions@...mhuis.info
Subject: Re: Known and unfixed active data loss bug in MM + XFS with large
folios since Dec 2021 (any kernel from 6.1 upwards)

On 9/16/24 12:20 AM, Linus Torvalds wrote:
> On Mon, 16 Sept 2024 at 02:00, Dave Chinner <david@...morbit.com> wrote:
>>
>> I don't think this is a data corruption/loss problem - it certainly
>> hasn't ever appeared that way to me. The "data loss" appeared to be
>> in incomplete postgres dump files after the system was rebooted and
>> this is exactly what would happen when you randomly crash the
>> system.
>
> Ok, that sounds better, indeed.

I think Dave is right: in practice most filesystems have enough files
of various sizes that we're likely to run into the lockups or BUGs
already mentioned first.

But if the impacted files are relatively small (say 16K) and all
exactly the same size, we could probably end up sharing pages between
them and handing the wrong data to applications.

It should crash eventually; those are probably the nrpages > 0
assertions we hit during inode eviction on 6.9. But it seems like
there's a window where we return the wrong data first.
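
The eviction-time check is, if I remember right, the BUG_ON in
clear_inode(), something like:

	xa_lock_irq(&inode->i_data.i_pages);
	BUG_ON(inode->i_data.nrpages);
	xa_unlock_irq(&inode->i_data.i_pages);

so any folio still attached to the mapping when the inode is torn down
blows up right there instead of lingering.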

filemap_fault() has:

	if (unlikely(folio->mapping != mapping)) {

So I think we're probably in better shape on mmap.
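
For reference, the full hunk there is roughly this, modulo kernel
version:

	/* Did it get truncated? */
	if (unlikely(folio->mapping != mapping)) {
		folio_unlock(folio);
		folio_put(folio);
		goto retry_find;
	}

i.e. once the fault path has the folio locked it re-checks the mapping
and just retries the whole lookup if it changed, instead of mapping
stale data into userspace.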
>
> Of course, "hang due to internal xarray corruption" isn't _much_
> better, but still..
>
>> All the hangs seem to be caused by folio lookup getting stuck
>> on a rogue xarray entry in truncate or readahead. If we find an
>> invalid entry or a folio from a different mapping or with an
>> unexpected index, we skip it and try again.
>
> We *could* perhaps change the "retry the optimistic lookup forever" to
> be a "retry and take lock after optimistic failure". At least in the
> common paths.
>
> That's what we do with some dcache locking, because the "retry on
> race" caused some potential latency issues under ridiculous loads.
>
> And if we retry with the lock, at that point we can actually notice
> corruption, because at that point we can say "we have the lock, and we
> see a bad folio with the wrong mapping pointer, and now it's not some
> possible race condition due to RCU".
>
> That, in turn, might then result in better bug reports. Which would at
> least be forward progress rather than "we have this bug".
>
> Let me think about it. Unless somebody else gets to it before I do
> (hint hint to anybody who is comfy with that filemap_read() path etc).

I've got a bunch of assertions around incorrect folio->mapping, and I'm
trying to bash on the ENOMEM-during-readahead case. There's a
GFP_NOWARN on those allocations, and our systems do run pretty short on
RAM, so it feels like the right place to look, at least. We'll see.
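
For the curious, the readahead allocations go through
readahead_gfp_mask(), which I believe is just:

	#define readahead_gfp_mask(x) \
		(mapping_gfp_mask(x) | __GFP_NORETRY | __GFP_NOWARN)

so a failed folio allocation in readahead is completely silent, which
is easy to miss on boxes that run close to the edge on memory.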
-chris