Date:   Tue, 5 May 2020 13:18:55 -0700
From:   Yang Shi <yang.shi@...ux.alibaba.com>
To:     Dave Hansen <dave.hansen@...ux.intel.com>,
        linux-kernel@...r.kernel.org
Cc:     npiggin@...il.com, akpm@...ux-foundation.org, willy@...radead.org,
        linux-mm@...ck.org
Subject: Re: [RFC][PATCH 1/2] mm/migrate: remove extra page_count() check
On 5/1/20 2:05 PM, Dave Hansen wrote:
> From: Dave Hansen <dave.hansen@...ux.intel.com>
>
> This is not a bug fix.  The extra check was found by inspection, but I
> believe it is confusing as it stands.
>
> First, page_ref_freeze() is implemented internally with:
>
> 	atomic_cmpxchg(&page->_refcount, expected, 0) == expected
>
> The "cmp" part of cmpxchg is making sure that _refcount==expected
> which means that there's an implicit check here, equivalent to:
>
> 	page_count(page) == expected_count
>
> This appears to have originated in "e286781: mm: speculative page
> references", which is pretty ancient.  This check is also somewhat
> dangerous to have here because it might lead someone to think that
> page_ref_freeze() *doesn't* do its own page_count() checking.
>
> Remove the unnecessary check.

Makes sense to me.

Acked-by: Yang Shi <yang.shi@...ux.alibaba.com>
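
For anyone reading along, here is a minimal userspace sketch of the point
above -- C11 atomics standing in for the kernel's atomic_t, and ref_freeze()
is only an illustration of the freeze semantics, not the real
page_ref_freeze():

#include <stdatomic.h>
#include <stdio.h>

/* Stand-in for page->_refcount; the kernel uses atomic_t. */
static atomic_int refcount;

/*
 * Sketch of the freeze semantics: succeed (and drop the count to 0)
 * only if the current count equals 'expected'.  The "cmp" half of the
 * compare-and-exchange is the implicit page_count() == expected_count
 * check described above.
 */
static int ref_freeze(atomic_int *ref, int expected)
{
        int old = expected;
        return atomic_compare_exchange_strong(ref, &old, 0);
}

int main(void)
{
        atomic_store(&refcount, 2);
        printf("freeze, expected=3: %d\n", ref_freeze(&refcount, 3)); /* 0: count mismatch */
        printf("freeze, expected=2: %d\n", ref_freeze(&refcount, 2)); /* 1: frozen to zero */
        return 0;
}

So a failed page_ref_freeze() already reports exactly what the removed
page_count(page) != expected_count test was reporting.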

>
> Signed-off-by: Dave Hansen <dave.hansen@...ux.intel.com>
> Cc: Nicholas Piggin <npiggin@...il.com>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Matthew Wilcox (Oracle) <willy@...radead.org>
> Cc: Yang Shi <yang.shi@...ux.alibaba.com>
> Cc: linux-mm@...ck.org
> Cc: linux-kernel@...r.kernel.org
> ---
>
>   b/mm/migrate.c |    3 ++-
>   1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff -puN mm/migrate.c~remove_extra_page_count_check mm/migrate.c
> --- a/mm/migrate.c~remove_extra_page_count_check	2020-05-01 14:00:42.331525924 -0700
> +++ b/mm/migrate.c	2020-05-01 14:00:42.336525924 -0700
> @@ -425,11 +425,12 @@ int migrate_page_move_mapping(struct add
>   	newzone = page_zone(newpage);
>   
>   	xas_lock_irq(&xas);
> -	if (page_count(page) != expected_count || xas_load(&xas) != page) {
> +	if (xas_load(&xas) != page) {
>   		xas_unlock_irq(&xas);
>   		return -EAGAIN;
>   	}
>   
> +	/* Freezing will fail if page_count()!=expected_count */
>   	if (!page_ref_freeze(page, expected_count)) {
>   		xas_unlock_irq(&xas);
>   		return -EAGAIN;
> _
