Message-ID: <ef40d6bf-f471-430f-972d-2e88dc167032@redhat.com>
Date: Thu, 17 Apr 2025 10:09:55 +0200
From: David Hildenbrand <david@...hat.com>
To: Hugh Dickins <hughd@...gle.com>
Cc: Gavin Guo <gavinguo@...lia.com>, linux-mm@...ck.org,
akpm@...ux-foundation.org, willy@...radead.org, ziy@...dia.com,
linmiaohe@...wei.com, revest@...gle.com, kernel-dev@...lia.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/huge_memory: fix dereferencing invalid pmd migration
entry
On 17.04.25 10:07, David Hildenbrand wrote:
> On 17.04.25 09:18, David Hildenbrand wrote:
>> On 17.04.25 07:36, Hugh Dickins wrote:
>>> On Wed, 16 Apr 2025, David Hildenbrand wrote:
>>>>
>>>> Why not something like
>>>>
>>>> struct folio *entry_folio;
>>>>
>>>> if (folio) {
>>>> if (is_pmd_migration_entry(*pmd))
>>>> 			entry_folio = pfn_swap_entry_folio(pmd_to_swp_entry(*pmd));
>>>> else
>>>> 			entry_folio = pmd_folio(*pmd);
>>>>
>>>> if (folio != entry_folio)
>>>> return;
>>>> }
>>>
>>> My own preference is to not add unnecessary code:
>>> if folio and pmd_migration entry, we're not interested in entry_folio.
>>> But yes it could be written in lots of other ways.
>>
>> While I don't disagree about "not adding unnecessary code" in general,
>> in this particular case just looking the folio up properly might be the
>> better alternative to reasoning about locking rules with conditional
>> input parameters :)
>>
>
> FWIW, I was wondering if we can rework that code, letting the caller do
> the checking and getting rid of the folio parameter. Something like this
> (incomplete, just to discuss whether we could move the TTU_SPLIT_HUGE_PMD
> handling).
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 2a47682d1ab77..754aa3103e8bf 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3075,22 +3075,11 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
> void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
> pmd_t *pmd, bool freeze, struct folio *folio)
> {
> - VM_WARN_ON_ONCE(folio && !folio_test_pmd_mappable(folio));
> VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PMD_SIZE));
> - VM_WARN_ON_ONCE(folio && !folio_test_locked(folio));
> - VM_BUG_ON(freeze && !folio);
>
> - /*
> - * When the caller requests to set up a migration entry, we
> - * require a folio to check the PMD against. Otherwise, there
> - * is a risk of replacing the wrong folio.
> - */
> if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
> - is_pmd_migration_entry(*pmd)) {
> - if (folio && folio != pmd_folio(*pmd))
> - return;
> + is_pmd_migration_entry(*pmd))
> __split_huge_pmd_locked(vma, pmd, address, freeze);
> - }
> }
>
> void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 67bb273dfb80d..bf0320b03d615 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -2291,13 +2291,6 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
> if (flags & TTU_SYNC)
> pvmw.flags = PVMW_SYNC;
>
> - /*
> - * unmap_page() in mm/huge_memory.c is the only user of migration with
> - * TTU_SPLIT_HUGE_PMD and it wants to freeze.
> - */
> - if (flags & TTU_SPLIT_HUGE_PMD)
> - split_huge_pmd_address(vma, address, true, folio);
> -
> /*
> * For THP, we have to assume the worse case ie pmd for invalidation.
> * For hugetlb, it could be much worse if we need to do pud
> @@ -2326,6 +2319,14 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
> #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
> /* PMD-mapped THP migration entry */
> if (!pvmw.pte) {
> + if (flags & TTU_SPLIT_HUGE_PMD) {
> + split_huge_pmd_locked(vma, pvmw.address, pvmw.pmd,
> + true, NULL);
> + ret = false;
> + page_vma_mapped_walk_done(&pvmw);
> + break;
> + }
> +
> subpage = folio_page(folio,
> pmd_pfn(*pvmw.pmd) - folio_pfn(folio));
> VM_BUG_ON_FOLIO(folio_test_hugetlb(folio) ||
>
>
Likely, we'd have to adjust the CONFIG_ARCH_ENABLE_THP_MIGRATION coverage
here, so that TTU_SPLIT_HUGE_PMD still gets handled when that config is
disabled. Just an idea.
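
Concretely, one way to do that (untested sketch, only rearranging the hunk
from the diff above) would be to hoist the TTU_SPLIT_HUGE_PMD check out of
the #ifdef, so the !pvmw.pte case is still reached without THP migration
support:

	/* PMD-mapped: split the PMD in place instead of unmapping */
	if (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)) {
		split_huge_pmd_locked(vma, pvmw.address, pvmw.pmd,
				      true, NULL);
		ret = false;
		page_vma_mapped_walk_done(&pvmw);
		break;
	}

#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
	/* PMD-mapped THP migration entry */
	if (!pvmw.pte) {
		...
	}
#endif

That way the #ifdef only guards the actual migration-entry installation,
not the split path.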
--
Cheers,
David / dhildenb