Message-ID: <YMImP5fXTDns47jn@t490s>
Date:   Thu, 10 Jun 2021 10:48:31 -0400
From:   Peter Xu <peterx@...hat.com>
To:     Hugh Dickins <hughd@...gle.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
        Yang Shi <shy828301@...il.com>,
        Wang Yugui <wangyugui@...-tech.com>,
        Matthew Wilcox <willy@...radead.org>,
        Alistair Popple <apopple@...dia.com>,
        Ralph Campbell <rcampbell@...dia.com>, Zi Yan <ziy@...dia.com>,
        Will Deacon <will@...nel.org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 05/11] mm: page_vma_mapped_walk(): prettify
 PVMW_MIGRATION block

On Wed, Jun 09, 2021 at 11:42:12PM -0700, Hugh Dickins wrote:
> page_vma_mapped_walk() cleanup: rearrange the !pmd_present() block to
> follow the same "return not_found, return not_found, return true" pattern
> as the block above it (note: returning not_found there is never premature,
> since existence or prior existence of huge pmd guarantees good alignment).
> 
> Signed-off-by: Hugh Dickins <hughd@...gle.com>
> Cc: <stable@...r.kernel.org>
> ---
>  mm/page_vma_mapped.c | 30 ++++++++++++++----------------
>  1 file changed, 14 insertions(+), 16 deletions(-)
> 
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index 81000dd0b5da..b96fae568bc2 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -201,24 +201,22 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>  			if (pmd_page(pmde) != page)
>  				return not_found(pvmw);
>  			return true;
> -		} else if (!pmd_present(pmde)) {
> -			if (thp_migration_supported()) {
> -				if (!(pvmw->flags & PVMW_MIGRATION))
> -					return not_found(pvmw);
> -				if (is_migration_entry(pmd_to_swp_entry(pmde))) {
> -					swp_entry_t entry = pmd_to_swp_entry(pmde);
> +		}
> +		if (!pmd_present(pmde)) {
> +			swp_entry_t entry;
>  
> -					if (migration_entry_to_page(entry) != page)
> -						return not_found(pvmw);
> -					return true;
> -				}
> -			}
> -			return not_found(pvmw);
> -		} else {
> -			/* THP pmd was split under us: handle on pte level */
> -			spin_unlock(pvmw->ptl);
> -			pvmw->ptl = NULL;
> +			if (!thp_migration_supported() ||
> +			    !(pvmw->flags & PVMW_MIGRATION))
> +				return not_found(pvmw);
> +			entry = pmd_to_swp_entry(pmde);
> +			if (!is_migration_entry(entry) ||
> +			    migration_entry_to_page(entry) != page)

We'll need to do s/migration_entry_to_page/pfn_swap_entry_to_page/, depending
on whether Alistair's series lands first or not, I guess (as you mentioned in
the cover letter).
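
For reference, assuming Alistair's rename does go in first, I'd expect the
hunk to become the purely mechanical substitution (untested sketch):

			/* assuming pfn_swap_entry_to_page() has replaced
			 * migration_entry_to_page() */
			entry = pmd_to_swp_entry(pmde);
			if (!is_migration_entry(entry) ||
			    pfn_swap_entry_to_page(entry) != page)
				return not_found(pvmw);
			return true;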

Thanks for the change; it does look much better.

Reviewed-by: Peter Xu <peterx@...hat.com>

> +				return not_found(pvmw);
> +			return true;
>  		}
> +		/* THP pmd was split under us: handle on pte level */
> +		spin_unlock(pvmw->ptl);
> +		pvmw->ptl = NULL;
>  	} else if (!pmd_present(pmde)) {
>  		/*
>  		 * If PVMW_SYNC, take and drop THP pmd lock so that we
> -- 
> 2.26.2
> 

-- 
Peter Xu
