Message-ID: <20240529190059.GM8631@suse.cz>
Date: Wed, 29 May 2024 21:00:59 +0200
From: David Sterba <dsterba@...e.cz>
To: David Hildenbrand <david@...hat.com>
Cc: Mikhail Gavrilov <mikhail.v.gavrilov@...il.com>,
Chris Mason <clm@...com>, Josef Bacik <josef@...icpanda.com>,
David Sterba <dsterba@...e.com>,
Linux List Kernel Mailing <linux-kernel@...r.kernel.org>,
Linux Memory Management List <linux-mm@...ck.org>,
Matthew Wilcox <willy@...radead.org>,
linux-btrfs <linux-btrfs@...r.kernel.org>
Subject: Re: 6.9/BUG: Bad page state in process kswapd0 pfn:d6e840

On Wed, May 29, 2024 at 08:57:48AM +0200, David Hildenbrand wrote:
> On 28.05.24 16:24, David Hildenbrand wrote:
> > Hmm, your original report mentions kswapd, so I'm getting the feeling someone
> > does one folio_put() too much and we are freeing a pagecache folio that is still
> > in the pagecache and, therefore, has folio->mapping set ... bisecting would
> > really help.
>
> A little bird just told me that I missed an important piece in the dmesg
> output: "aops:btree_aops ino:1" from dump_mapping():
>
> This is btrfs, i_ino is 1, and we don't have a dentry. Is that
> BTRFS_BTREE_INODE_OBJECTID?
Yes, that's right, inode number 1 represents the metadata.
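
For reference, roughly how that inode gets set up (paraphrased from memory,
not verbatim for any particular version; see fs/btrfs/disk-io.c and
include/uapi/linux/btrfs_tree.h for the real thing):

    /* the reserved objectid of the in-memory btree inode */
    #define BTRFS_BTREE_INODE_OBJECTID 1

    /* btrfs_init_btree_inode(), simplified */
    inode->i_ino = BTRFS_BTREE_INODE_OBJECTID;
    inode->i_mapping->a_ops = &btree_aops;

which is why dump_mapping() reports "aops:btree_aops ino:1" and there's no
dentry to print.
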
> Summarizing what we know so far:
> (1) Freeing an order-0 btrfs folio where folio->mapping
> is still set
> (2) Triggered by kswapd and kcompactd; not triggered by other means of
> page freeing so far
>
> Possible theories:
> (A) folio->mapping not cleared when freeing the folio. But shouldn't
> this also happen on other freeing paths? Or are we simply lucky to
> never trigger that for that folio?
> (B) Messed-up refcounting: freeing a folio that is still in use (and
> therefore has folio->mapping still set)
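
For context on where the splat comes from: the freeing path sanity checks
reject a page that still has a mapping. A simplified sketch, paraphrasing
mm/page_alloc.c from memory and not exact for any particular version:

    /* free_pages_prepare() -> free_page_is_bad(), simplified */
    if (unlikely(atomic_read(&page->_mapcount) != -1) ||
        unlikely(page->mapping != NULL) ||      /* <- what we hit here */
        unlikely(page_ref_count(page) != 0) ||
        unlikely(page->flags & PAGE_FLAGS_CHECK_AT_FREE))
            bad_page(page, "non-NULL mapping / bad state");

So either the mapping was never cleared (A) or the page was freed while the
page cache still owned it (B); both end up tripping the same check.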
>
> I was briefly wondering if large folio splitting could be involved.
We do not have large folios enabled for btrfs; the conversion from pages
to folios is still ongoing.
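
Large folios are an explicit per-mapping opt-in; roughly (paraphrasing
include/linux/pagemap.h from memory, details may differ between versions):

    /* a filesystem opts in per address_space, usually at inode init time */
    mapping_set_large_folios(inode->i_mapping);  /* sets AS_LARGE_FOLIO_SUPPORT */

    /* the page cache only allocates order > 0 folios when this is true */
    if (mapping_large_folio_support(mapping))
            ...

btrfs does not call mapping_set_large_folios() yet, so its page cache folios
should all be order-0 and splitting should not be involved.
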
With the increased number of strange reports, either from syzbot or from
others, it seems that something went wrong in the 6.10-rc update, or maybe
earlier.