Message-ID: <ZnD_mcxk_PCyWNmQ@casper.infradead.org>
Date: Tue, 18 Jun 2024 04:31:37 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Zhaoyang Huang <huangzhaoyang@...il.com>
Cc: "zhaoyang.huang" <zhaoyang.huang@...soc.com>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, steve.kang@...soc.com
Subject: Re: [PATCH] mm: fix hard lockup in __split_huge_page
On Tue, Jun 18, 2024 at 11:27:12AM +0800, Zhaoyang Huang wrote:
> On Tue, Jun 18, 2024 at 11:19 AM Matthew Wilcox <willy@...radead.org> wrote:
> >
> > On Tue, Jun 18, 2024 at 10:09:26AM +0800, zhaoyang.huang wrote:
> > > Hard lockup[2] is reported which should be caused by recursive
> > > lock contention of lruvec->lru_lock[1] within __split_huge_page.
> > >
> > > [1]
> > > static void __split_huge_page(struct page *page, struct list_head *list,
> > > 		pgoff_t end, unsigned int new_order)
> > > {
> > > 	/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
> > > 	lruvec = folio_lruvec_lock(folio);		/* 1st lock here */
> > >
> > > 	for (i = nr - new_nr; i >= new_nr; i -= new_nr) {
> > > 		__split_huge_page_tail(folio, i, lruvec, list, new_order);
> > > 		/* Some pages can be beyond EOF: drop them from page cache */
> > > 		if (head[i].index >= end) {
> > > 			folio_put(tail);
> > > 			  __page_cache_release
> > > 			    folio_lruvec_relock_irqsave	/* 2nd lock here */
> >
> > Why doesn't lockdep catch this?
> It was reported by a regression test of the fix patch that addresses the
> find_get_entry livelock issue linked below. I don't know the details of
> the kernel configuration.
>
> https://lore.kernel.org/linux-mm/5f989315-e380-46aa-80d1-ce8608889e5f@marcinwanat.pl/
Go away.