Message-ID: <9e8f8872-f51b-4a09-a92c-49218748dd62@meta.com>
Date: Fri, 13 Sep 2024 12:37:49 -0400
From: Chris Mason <clm@...a.com>
To: David Howells <dhowells@...hat.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Jens Axboe <axboe@...nel.dk>, Matthew Wilcox <willy@...radead.org>,
Christian Theune <ct@...ingcircus.io>, linux-mm@...ck.org,
"linux-xfs@...r.kernel.org" <linux-xfs@...r.kernel.org>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
Daniel Dao <dqminh@...udflare.com>, Dave Chinner <david@...morbit.com>,
regressions@...ts.linux.dev, regressions@...mhuis.info
Subject: Re: Known and unfixed active data loss bug in MM + XFS with large
folios since Dec 2021 (any kernel from 6.1 upwards)

On 9/13/24 12:04 PM, David Howells wrote:
> Chris Mason <clm@...a.com> wrote:
>
>> I've mentioned this in the past to both Willy and Dave Chinner, but so
>> far all of my attempts to reproduce it on purpose have failed.
>
> Could it be a splice bug?

I really wanted it to be a splice bug, but I believe the 6.9 workload I
mentioned isn't using splice. I haven't verified that 100%, though.

-chris