Message-ID: <c7000523-7b2b-06ed-6273-886978efaab5@suse.cz>
Date:   Thu, 8 Jun 2017 11:04:05 +0200
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Will Deacon <will.deacon@....com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Cc:     mark.rutland@....com, akpm@...ux-foundation.org,
        kirill.shutemov@...ux.intel.com, Punit.Agrawal@....com,
        mgorman@...e.de, steve.capper@....com
Subject: Re: [PATCH 1/3] mm: numa: avoid waiting on freed migrated pages

On 06/06/2017 07:58 PM, Will Deacon wrote:
> From: Mark Rutland <mark.rutland@....com>
> 
> In do_huge_pmd_numa_page(), we attempt to handle a migrating thp pmd by
> waiting until the pmd is unlocked before we return and retry. However,
> we can race with migrate_misplaced_transhuge_page():
> 
> // do_huge_pmd_numa_page                // migrate_misplaced_transhuge_page()
> // Holds 0 refs on page                 // Holds 2 refs on page
> 
> vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
> /* ... */
> if (pmd_trans_migrating(*vmf->pmd)) {
>         page = pmd_page(*vmf->pmd);
>         spin_unlock(vmf->ptl);
>                                         ptl = pmd_lock(mm, pmd);
>                                         if (page_count(page) != 2) {
>                                                 /* roll back */
>                                         }
>                                         /* ... */
>                                         mlock_migrate_page(new_page, page);
>                                         /* ... */
>                                         spin_unlock(ptl);
>                                         put_page(page);
>                                         put_page(page); // page freed here
>         wait_on_page_locked(page);
>         goto out;
> }
> 
> This can result in the freed page having its waiters flag set
> unexpectedly, which trips the PAGE_FLAGS_CHECK_AT_PREP checks in the
> page alloc/free functions. This has been observed on arm64 KVM guests.
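> 
> Roughly, the failure mode looks like this (illustrative sketch only;
> the real check sits in the allocator's page state validation, with
> bad_page() as the reporting path):
> 
> 	/* waiter: sets PG_waiters on a page the migration path
> 	 * has already freed */
> 	wait_on_page_locked(page);
> 
> 	/* allocator: next time the page is prepped for allocation,
> 	 * the stale flag trips the sanity check */
> 	if (page->flags & PAGE_FLAGS_CHECK_AT_PREP)
> 		bad_page(page);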
> 
> We can avoid this by having do_huge_pmd_numa_page() take a reference on
> the page before dropping the pmd lock, mirroring what we do in
> __migration_entry_wait().
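> 
> For reference, the pattern being mirrored from __migration_entry_wait()
> looks roughly like this (simplified sketch, not the verbatim code):
> 
> 	/* pin the page so it cannot be freed while we sleep */
> 	if (!get_page_unless_zero(page))
> 		goto out;
> 	pte_unmap_unlock(ptep, ptl);
> 	wait_on_page_locked(page);
> 	put_page(page);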
> 
> When we hit the race, migrate_misplaced_transhuge_page() will see the
> reference and abort the migration, as it may do today in other cases.
> 
> Acked-by: Steve Capper <steve.capper@....com>
> Signed-off-by: Mark Rutland <mark.rutland@....com>
> Signed-off-by: Will Deacon <will.deacon@....com>

Nice catch! Stable candidate? Should this carry a Fixes: tag for the
commit that added the waiters flag, assuming the race was harmless
before that?

Acked-by: Vlastimil Babka <vbabka@...e.cz>

> ---
>  mm/huge_memory.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index a84909cf20d3..88c6167f194d 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1426,8 +1426,11 @@ int do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t pmd)
>  	 */
>  	if (unlikely(pmd_trans_migrating(*vmf->pmd))) {
>  		page = pmd_page(*vmf->pmd);
> +		if (!get_page_unless_zero(page))
> +			goto out_unlock;
>  		spin_unlock(vmf->ptl);
>  		wait_on_page_locked(page);
> +		put_page(page);
>  		goto out;
>  	}
>  
> @@ -1459,9 +1462,12 @@ int do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t pmd)
>  
>  	/* Migration could have started since the pmd_trans_migrating check */
>  	if (!page_locked) {
> +		page_nid = -1;
> +		if (!get_page_unless_zero(page))
> +			goto out_unlock;
>  		spin_unlock(vmf->ptl);
>  		wait_on_page_locked(page);
> -		page_nid = -1;
> +		put_page(page);
>  		goto out;
>  	}
>  
> 
