lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite for Android: free password hash cracker in your pocket
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <ZuuqPEtIliUJejvw@casper.infradead.org>
Date: Thu, 19 Sep 2024 05:36:12 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Jens Axboe <axboe@...nel.dk>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
	Dave Chinner <david@...morbit.com>, Chris Mason <clm@...a.com>,
	Christian Theune <ct@...ingcircus.io>, linux-mm@...ck.org,
	"linux-xfs@...r.kernel.org" <linux-xfs@...r.kernel.org>,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
	Daniel Dao <dqminh@...udflare.com>, regressions@...ts.linux.dev,
	regressions@...mhuis.info
Subject: Re: Known and unfixed active data loss bug in MM + XFS with large
 folios since Dec 2021 (any kernel from 6.1 upwards)

On Wed, Sep 18, 2024 at 09:38:41PM -0600, Jens Axboe wrote:
> On 9/18/24 9:12 PM, Linus Torvalds wrote:
> > On Thu, 19 Sept 2024 at 05:03, Linus Torvalds
> > <torvalds@...ux-foundation.org> wrote:
> >>
> >> I think we should just do the simple one-liner of adding a
> >> "xas_reset()" to after doing xas_split_alloc() (or do it inside the
> >> xas_split_alloc()).
> > 
> > .. and obviously that should be actually *verified* to fix the issue
> > not just with the test-case that Chris and Jens have been using, but
> > on Christian's real PostgreSQL load.
> > 
> > Christian?
> > 
> > Note that the xas_reset() needs to be done after the check for errors
> > - or like Willy suggested, xas_split_alloc() needs to be re-organized.
> > 
> > So the simplest fix is probably to just add a
> > 
> >                         if (xas_error(&xas))
> >                                 goto error;
> >                 }
> > +               xas_reset(&xas);
> >                 xas_lock_irq(&xas);
> >                 xas_for_each_conflict(&xas, entry) {
> >                         old = entry;
> > 
> > in __filemap_add_folio() in mm/filemap.c
> > 
> > (The above is obviously a whitespace-damaged pseudo-patch for the
> > pre-6758c1128ceb state. I don't actually carry a stable tree around on
> > my laptop, but I hope it's clear enough what I'm rambling about)
> 
> I kicked off a quick run with this on 6.9 with my debug patch as well,
> and it still fails for me... I'll double check everything is sane. For
> reference, below is the 6.9 filemap patch.
> 
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 30de18c4fd28..88093e2b7256 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -883,6 +883,7 @@ noinline int __filemap_add_folio(struct address_space *mapping,
>  		if (order > folio_order(folio))
>  			xas_split_alloc(&xas, xa_load(xas.xa, xas.xa_index),
>  					order, gfp);
> +		xas_reset(&xas);
>  		xas_lock_irq(&xas);
>  		xas_for_each_conflict(&xas, entry) {
>  			old = entry;

My brain is still mushy, but I think there is still a problem (both with
the simple fix for 6.9 and indeed with 6.10).

For splitting a folio, we have the folio locked, so we know it's not
going anywhere.  The tree may get rearranged around it while we don't
have the xa_lock, but we're somewhat protected.

In this case we're splitting something that was, at one point, a shadow
entry.  There's no struct there to lock.  So I think we can have a
situation where we replicate 'old' (in 6.10) or xa_load() (in 6.9)
into the nodes we allocate in xas_split_alloc().  In 6.10, that's at
least guaranteed to be a shadow entry, but in 6.9, it might already be a
folio by this point because we've raced with something else also doing a
split.

Probably xas_split_alloc() needs to just do the alloc, like the name
says, and drop the 'entry' argument.  ICBW, but I think it explains
what you're seeing?  Maybe it doesn't?

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ