Message-Id: <20240703202842.e9e50fbeba1ea0cd3a4605f1@linux-foundation.org>
Date: Wed, 3 Jul 2024 20:28:42 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Hugh Dickins <hughd@...gle.com>
Cc: Baolin Wang <baolin.wang@...ux.alibaba.com>, Nhat Pham
<nphamcs@...il.com>, Yang Shi <shy828301@...il.com>, Zi Yan
<ziy@...dia.com>, Barry Song <baohua@...nel.org>, Kefeng Wang
<wangkefeng.wang@...wei.com>, David Hildenbrand <david@...hat.com>, Matthew
Wilcox <willy@...radead.org>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH hotfix] mm: fix crashes from deferred split racing folio
migration
On Wed, 3 Jul 2024 20:21:22 -0700 (PDT) Hugh Dickins <hughd@...gle.com> wrote:
> On Wed, 3 Jul 2024, Andrew Morton wrote:
> > On Tue, 2 Jul 2024 00:40:55 -0700 (PDT) Hugh Dickins <hughd@...gle.com> wrote:
> >
> > > Even on 6.10-rc6, I've been seeing elusive "Bad page state"s (often on
> > > flags when freeing, yet the flags shown are not bad: PG_locked had been
> > > set and cleared??), and VM_BUG_ON_PAGE(page_ref_count(page) == 0)s from
> > > deferred_split_scan()'s folio_put(), and a variety of other BUG and WARN
> > > symptoms implying double free by deferred split and large folio migration.
> > >
> > > 6.7 commit 9bcef5973e31 ("mm: memcg: fix split queue list crash when large
> > > folio migration") was right to fix the memcg-dependent locking broken in
> > > 85ce2c517ade ("memcontrol: only transfer the memcg data for migration"),
> > > but missed a subtlety of deferred_split_scan(): it moves folios to its own
> > > local list to work on them without split_queue_lock, during which time
> > > folio->_deferred_list is not empty, but even the "right" lock does nothing
> > > to secure the folio and the list it is on.
> > >
> > > Fortunately, deferred_split_scan() is careful to use folio_try_get(): so
> > > folio_migrate_mapping() can avoid the race by calling
> > > folio_undo_large_rmappable() while the old folio's reference count is
> > > temporarily frozen to 0 - adding such a freeze in the !mapping case too
> > > (originally, folio lock and unmapping and no swap cache left an anon
> > > folio unreachable, so no freezing was needed there: but the deferred
> > > split queue offers a way to reach it).
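
For readers less used to the folio refcount games: the interaction above -
folio_try_get() succeeding only while the count is non-zero, so a freeze
to 0 shuts the scanner out - can be modelled with plain C11 atomics. The
following is a hypothetical userspace sketch, not kernel code:
model_try_get() and model_ref_freeze() are illustrative stand-ins for
folio_try_get() and folio_ref_freeze().

/*
 * Userspace model of the freeze-vs-try_get protocol described above.
 * Illustrative only: names and structure are not the kernel's.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Take a reference only if the count is non-zero (the folio_try_get() idea). */
static bool model_try_get(atomic_int *ref)
{
	int old = atomic_load(ref);

	do {
		if (old == 0)
			return false;	/* frozen, or already being freed */
	} while (!atomic_compare_exchange_weak(ref, &old, old + 1));
	return true;
}

/* Freeze the count to 0 iff it holds exactly the expected references. */
static bool model_ref_freeze(atomic_int *ref, int expected)
{
	int old = expected;

	return atomic_compare_exchange_strong(ref, &old, 0);
}

int main(void)
{
	atomic_int ref = 2;	/* e.g. migration's own ref + the mapping's */

	if (model_ref_freeze(&ref, 2)) {
		/*
		 * Count is 0: a concurrent model_try_get() must fail now,
		 * so this is the one safe window to unhook the object from
		 * a shared list - the deferred split queue, in the kernel.
		 */
		printf("while frozen, try_get: %d\n", model_try_get(&ref));
		atomic_store(&ref, 2);	/* unfreeze */
	}
	printf("after unfreeze, try_get: %d\n", model_try_get(&ref));
	return 0;
}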
> >
> > There's a conflict when applying Kefeng's "mm: refactor
> > folio_undo_large_rmappable()"
> > (https://lkml.kernel.org/r/20240521130315.46072-1-wangkefeng.wang@huawei.com)
> > on top of this hotfix.
>
> Yes, anticipated in my "below the --- line" comments:
> sorry for giving you this nuisance.
np
> And perhaps a conflict with another one of Kefeng's, which deletes a hunk
> in mm/migrate.c just above where I add a hunk: and that's indeed how it
> should end up, hunk deleted by Kefeng, hunk added by me.
Sorted, I hope.
> >
> > --- mm/memcontrol.c~mm-refactor-folio_undo_large_rmappable
> > +++ mm/memcontrol.c
> > @@ -7832,8 +7832,7 @@ void mem_cgroup_migrate(struct folio *ol
> > * In addition, the old folio is about to be freed after migration, so
> > * removing from the split queue a bit earlier seems reasonable.
> > */
> > - if (folio_test_large(old) && folio_test_large_rmappable(old))
> > - folio_undo_large_rmappable(old);
> > + folio_undo_large_rmappable(old);
> > old->memcg_data = 0;
> > }
> >
> > I'm resolving this by simply dropping the above hunk. So Kefeng's
> > patch is now as below. Please check.
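
For anyone following the refactor itself: the caller-side test can be
dropped because Kefeng's patch moves it into the helper, so callers may
invoke it unconditionally. Roughly - a sketch of the intent, not the
verbatim patch:

void folio_undo_large_rmappable(struct folio *folio)
{
	/* Checks hoisted in from the callers by the refactor. */
	if (!folio_test_large(folio) || !folio_test_large_rmappable(folio))
		return;

	/* ... then remove the folio from its deferred split queue ... */
}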
>
> Checked, and that is correct, thank you Andrew.
great.
> Correct, but not quite complete: I'm sure that if Kefeng had written
> his patch after mine, he would have made the equivalent change in
> mm/migrate.c:
>
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -443,8 +443,7 @@ int folio_migrate_mapping(struct address_space *mapping,
> }
>
> /* Take off deferred split queue while frozen and memcg set */
> - if (folio_test_large(folio) && folio_test_large_rmappable(folio))
> - folio_undo_large_rmappable(folio);
> + folio_undo_large_rmappable(folio);
>
> /*
> * Now we know that no one else is looking at the folio:
>
> But there's no harm done if you push out a tree without that additional
> mod: we can add it as a fixup afterwards, it's no more than a cleanup.
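
And for context on why that unconditional undo is safe at all during
migration's refcount freeze: it relies on the scanner's try-get
discipline that the changelog describes. Roughly this shape - a sketch
from memory of deferred_split_scan()'s list handling, not verbatim
6.10 code:

	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
	list_for_each_entry_safe(folio, next, &ds_queue->split_queue,
				 _deferred_list) {
		if (folio_try_get(folio)) {
			/* Work on it off-lock, via a local list. */
			list_move(&folio->_deferred_list, &list);
		} else {
			/* Lost a race with the final folio_put(). */
			list_del_init(&folio->_deferred_list);
			ds_queue->split_queue_len--;
		}
	}
	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);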
OK, someone please send that along? I'll queue it as a -fix so a
single line of changelog is all that I shall retain (but more is
welcome! People can follow the Link:)
> (I'm on the lookout for an mm.git update, hope to give it a try when it
> appears.)
12 seconds ago.