Message-ID: <156093866933.31375.12797765093948100374@skylake-alporthouse-com>
Date: Wed, 19 Jun 2019 11:04:29 +0100
From: Chris Wilson <chris@...is-wilson.co.uk>
To: "Kirill A. Shutemov" <kirill@...temov.name>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Matthew Wilcox <willy@...radead.org>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Hugh Dickins <hughd@...gle.com>,
Jan Kara <jack@...e.cz>, Song Liu <liu.song.a23@...il.com>
Subject: Re: [PATCH v4] page cache: Store only head pages in i_pages

Quoting Chris Wilson (2019-06-12 08:42:05)
> Quoting Kirill A. Shutemov (2019-06-12 02:46:34)
> > On Sun, Jun 02, 2019 at 10:47:35PM +0100, Chris Wilson wrote:
> > > Quoting Matthew Wilcox (2019-03-07 15:30:51)
> > > > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > > > index 404acdcd0455..aaf88f85d492 100644
> > > > --- a/mm/huge_memory.c
> > > > +++ b/mm/huge_memory.c
> > > > @@ -2456,6 +2456,9 @@ static void __split_huge_page(struct page *page, struct list_head *list,
> > > >  			if (IS_ENABLED(CONFIG_SHMEM) && PageSwapBacked(head))
> > > >  				shmem_uncharge(head->mapping->host, 1);
> > > >  			put_page(head + i);
> > > > +		} else if (!PageAnon(page)) {
> > > > +			__xa_store(&head->mapping->i_pages, head[i].index,
> > > > +					head + i, 0);
> > >
> > > Forgiving the ignorant copy'n'paste, this is required:
> > >
> > > +		} else if (PageSwapCache(page)) {
> > > +			swp_entry_t entry = { .val = page_private(head + i) };
> > > +			__xa_store(&swap_address_space(entry)->i_pages,
> > > +				   swp_offset(entry),
> > > +				   head + i, 0);
> > >  		}
> > >  	}
> > >
> > > The locking is definitely wrong.
> >
> > Does it help with the problem, or is it just a possible lead?
>
> It definitely solves the problem we encountered of the bad
> VM_BUG_ON_PAGE() leading to RCU stalls in khugepaged. The locking is
> definitely wrong though :)

I notice I'm not the only one to have bisected a swap-related
VM_BUG_ON_PAGE() to this patch. Do we have a real fix I can put through
our CI to confirm the issue is resolved before 5.2?
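
For reference, the combined band-aid we are carrying locally looks
roughly like the sketch below. It is only a sketch, the two hunks
above glued together in one else-if chain, and it still carries the
locking problem: __xa_store() expects the XArray lock to be held, but
in the swap-cache branch swap_address_space(entry)->i_pages is a
different XArray from head->mapping->i_pages, whose lock is the one
actually held here.

	} else if (!PageAnon(page)) {
		/* File-backed tails go back into the file's page
		 * cache at their own index. */
		__xa_store(&head->mapping->i_pages, head[i].index,
			   head + i, 0);
	} else if (PageSwapCache(page)) {
		swp_entry_t entry = { .val = page_private(head + i) };

		/* Anon tails of a swap-backed THP live in the swap
		 * address space, keyed by swap offset, not in
		 * head->mapping. NB: we do not hold this XArray's
		 * lock, hence "the locking is definitely wrong". */
		__xa_store(&swap_address_space(entry)->i_pages,
			   swp_offset(entry), head + i, 0);
	}
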
-Chris