Message-Id: <20220312222351.89844f74d3cf10212f308caf@linux-foundation.org>
Date: Sat, 12 Mar 2022 22:23:51 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Ryusuke Konishi <konishi.ryusuke@...il.com>
Cc: Matthew Wilcox <willy@...radead.org>,
David Hildenbrand <david@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
linux-nilfs <linux-nilfs@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: nilfs: WARNING: CPU: 2 PID: 1510 at
include/linux/backing-dev.h:269 __folio_mark_dirty+0x31d/0x3b0
On Sun, 13 Mar 2022 15:09:27 +0900 Ryusuke Konishi <konishi.ryusuke@...il.com> wrote:
> Hi Matthew, and Andrew,
>
> On Sat, Mar 12, 2022 at 7:56 AM Matthew Wilcox <willy@...radead.org> wrote:
> >
> > On Fri, Mar 11, 2022 at 08:43:57PM +0100, David Hildenbrand wrote:
> > > Hi,
> > >
> > > playing with swapfiles on random file systems, I stumbled over the
> > > following nilfs issue (and reproduced it on latest greatest
> > > linux/master -- v5.17-rc7+). I did not try finding out when this
> > > was introduced and I did not run into this issue on other file
> > > systems I tried.
> >
> > It's a known bug in NILFS, and I think yours is the fifth report
> > of it dating back eight months.
>
> The root cause of this issue is that NILFS uses two page caches
> per inode, one for data blocks and another for b-tree node blocks.
>
> Even though __folio_end_writeback(), __folio_start_writeback(), and
> __folio_mark_dirty() acquire the lock on mapping->i_pages, the
> inode_to_wb(inode) call inside them performs its lockdep test against
> the former (i.e. inode->i_mapping->i_pages.xa_lock, the data-block cache's lock).
>
> So, mark_buffer_dirty(), end_page_writeback(), and set_page_writeback()
> on pages in the latter, NILFS-specific page cache trigger the lockdep warning.
>
> I have tried to find a way to resolve this, but have no good idea so far.

If things are set up appropriately, inode_to_wb() should be able to
test inode->i_mapping->host->i_mapping->i_pages.xa_lock and get the
desired result.

At least, that's the case with blockdevs.  I don't know if nilfs2 sets
things up that way.