Message-ID: <3a3ee697-591d-dc3c-7c53-5965da219062@redhat.com>
Date: Thu, 16 Feb 2023 18:04:53 +0100
From: David Hildenbrand <david@...hat.com>
To: Peter Xu <peterx@...hat.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Cc: Andrew Morton <akpm@...ux-foundation.org>,
regressions@...mhuis.info, Nick Bowler <nbowler@...conx.ca>
Subject: Re: [PATCH] mm/migrate: Fix wrongly applied write bit after mkdirty on sparc64
On 16.02.23 16:30, Peter Xu wrote:
> Nick Bowler reported another sparc64 breakage after the young/dirty bit
> persistence work for page migration (per "Link:" below). That's after a
> similar report [1].
>
> It turns out page migration was overlooked, and it wasn't failing before
> because page migration was not enabled in the test environment of the
> initial report.
>
> David proposed another way [2] to fix this from the sparc64 side, but that
> patch hasn't landed. I also haven't checked whether any other arch has
> similar issues.
>
> Let's fix it for now with the simple approach of moving the write bit
> handling to after the dirty bit handling, like the code did before.
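>
> To see why the ordering matters: on sparc64, pte_mkdirty() also sets the
> hardware write bit, so write protection applied before mkdirty gets
> undone. A standalone toy model of just that interaction (illustrative bit
> values, not the real sparc64 PTE layout or the kernel helpers):
>
>   #include <stdio.h>
>
>   #define PTE_WRITE 0x1UL
>   #define PTE_DIRTY 0x2UL
>
>   typedef unsigned long pte_t;
>
>   /* Models the sparc64 quirk: dirtying also grants HW write access. */
>   static pte_t pte_mkdirty(pte_t pte)   { return pte | PTE_DIRTY | PTE_WRITE; }
>   static pte_t pte_wrprotect(pte_t pte) { return pte & ~PTE_WRITE; }
>
>   int main(void)
>   {
>       /* Old order: decide the write bit first, then mkdirty. */
>       pte_t pte = pte_mkdirty(pte_wrprotect(0));
>       printf("old order writable: %lu (wrong, expected 0)\n", pte & PTE_WRITE);
>
>       /* Fixed order: mkdirty first, write bit decided last. */
>       pte = pte_wrprotect(pte_mkdirty(0));
>       printf("new order writable: %lu\n", pte & PTE_WRITE);
>       return 0;
>   }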
>
> Note: this is based on mm-unstable. The breakage has been there since 6.1
> and we're at a very late stage of 6.2 (-rc8), so I assume for this specific
> case we should target 6.3.
>
> [1] https://lore.kernel.org/all/20221021160603.GA23307@u164.east.ru/
> [2] https://lore.kernel.org/all/20221212130213.136267-1-david@redhat.com/
>
> Cc: regressions@...mhuis.info
> Fixes: 2e3468778dbe ("mm: remember young/dirty bit for page migrations")
> Link: https://lore.kernel.org/all/CADyTPExpEqaJiMGoV+Z6xVgL50ZoMJg49B10LcZ=8eg19u34BA@mail.gmail.com/
> Reported-by: Nick Bowler <nbowler@...conx.ca>
> Tested-by: Nick Bowler <nbowler@...conx.ca>
> Signed-off-by: Peter Xu <peterx@...hat.com>
> ---
> mm/huge_memory.c | 6 ++++--
> mm/migrate.c | 2 ++
> 2 files changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 1343a7d88299..4fc43859e59a 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3274,8 +3274,6 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
> pmde = mk_huge_pmd(new, READ_ONCE(vma->vm_page_prot));
> if (pmd_swp_soft_dirty(*pvmw->pmd))
> pmde = pmd_mksoft_dirty(pmde);
> - if (is_writable_migration_entry(entry))
> - pmde = maybe_pmd_mkwrite(pmde, vma);
> if (pmd_swp_uffd_wp(*pvmw->pmd))
> pmde = pmd_mkuffd_wp(pmde);
> if (!is_migration_entry_young(entry))
> @@ -3283,6 +3281,10 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
> /* NOTE: this may contain setting soft-dirty on some archs */
> if (PageDirty(new) && is_migration_entry_dirty(entry))
> pmde = pmd_mkdirty(pmde);
> + if (is_writable_migration_entry(entry))
> + pmde = maybe_pmd_mkwrite(pmde, vma);
> + else
> + pmde = pmd_wrprotect(pmde);
>
> if (PageAnon(new)) {
> rmap_t rmap_flags = RMAP_COMPOUND;
> diff --git a/mm/migrate.c b/mm/migrate.c
> index ef68a1aff35c..40c63e77e91f 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -225,6 +225,8 @@ static bool remove_migration_pte(struct folio *folio,
> pte = maybe_mkwrite(pte, vma);
> else if (pte_swp_uffd_wp(*pvmw.pte))
> pte = pte_mkuffd_wp(pte);
> + else
> + pte = pte_wrprotect(pte);
>
> if (folio_test_anon(folio) && !is_readable_migration_entry(entry))
> rmap_flags |= RMAP_EXCLUSIVE;
I'd really rather focus on fixing the root cause instead. Anyhow, in case
my patch won't make it:
Acked-by: David Hildenbrand <david@...hat.com>
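
For reference, the idea on the sparc64 side [2] is to stop pte_mkdirty()
from unconditionally granting the hardware write bit, only setting it when
the PTE is software-writable. A rough sketch of that idea in the same kind
of toy model (made-up bit names, not the actual patch):

  #define PTE_HW_WRITE 0x1UL  /* hardware write-enable bit */
  #define PTE_SW_WRITE 0x2UL  /* software "may write" bit */
  #define PTE_DIRTY    0x4UL

  typedef unsigned long pte_t;

  static pte_t pte_mkdirty(pte_t pte)
  {
      pte |= PTE_DIRTY;
      /* Grant HW write access only if the PTE is SW-writable. */
      if (pte & PTE_SW_WRITE)
          pte |= PTE_HW_WRITE;
      return pte;
  }

With something like that, the ordering of mkdirty vs. the write bit
handling in the migration code would no longer matter.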
--
Thanks,
David / dhildenb