Message-ID: <CAG48ez2yPVjpPoAPmitrdaig-dF7j9THN=CZd6QD7to=tF2=NQ@mail.gmail.com>
Date: Thu, 17 Oct 2024 20:00:32 +0200
From: Jann Horn <jannh@...gle.com>
To: Qi Zheng <zhengqi.arch@...edance.com>
Cc: david@...hat.com, hughd@...gle.com, willy@...radead.org, mgorman@...e.de, 
	muchun.song@...ux.dev, vbabka@...nel.org, akpm@...ux-foundation.org, 
	zokeefe@...gle.com, rientjes@...gle.com, peterx@...hat.com, 
	linux-mm@...ck.org, linux-kernel@...r.kernel.org, x86@...nel.org
Subject: Re: [PATCH v1 1/7] mm: khugepaged: retract_page_tables() use pte_offset_map_lock()

On Thu, Oct 17, 2024 at 11:47 AM Qi Zheng <zhengqi.arch@...edance.com> wrote:
> In retract_page_tables(), we may modify the pmd entry after acquiring
> the pml and ptl, so we should also check whether the pmd entry is stable.
> Use pte_offset_map_lock() to do this; we can then also remove the call
> to pte_lockptr().
>
> Signed-off-by: Qi Zheng <zhengqi.arch@...edance.com>
> ---
>  mm/khugepaged.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 94feb85ce996c..b4f49d323c8d9 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1721,6 +1721,7 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
>                 spinlock_t *pml;
>                 spinlock_t *ptl;
>                 bool skipped_uffd = false;
> +               pte_t *pte;
>
>                 /*
>                  * Check vma->anon_vma to exclude MAP_PRIVATE mappings that
> @@ -1757,9 +1758,15 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
>                 mmu_notifier_invalidate_range_start(&range);
>
>                 pml = pmd_lock(mm, pmd);
> -               ptl = pte_lockptr(mm, pmd);
> +               pte = pte_offset_map_lock(mm, pmd, addr, &ptl);

This takes the lock "ptl" on the success path...

> +               if (!pte) {
> +                       spin_unlock(pml);
> +                       mmu_notifier_invalidate_range_end(&range);
> +                       continue;
> +               }
>                 if (ptl != pml)
>                         spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);

... and this takes the same lock again, right? I think this will
deadlock on kernels with CONFIG_SPLIT_PTE_PTLOCKS=y. Did you test this
on a machine with less than 4 CPU cores, or something like that? Or am
I missing something?
