Message-ID: <87k0p5sh7h.fsf@yhuang6-desk1.ccr.corp.intel.com>
Date:   Wed, 14 Apr 2021 11:00:50 +0800
From:   "Huang, Ying" <ying.huang@...el.com>
To:     Yang Shi <shy828301@...il.com>
Cc:     mgorman@...e.de, kirill.shutemov@...ux.intel.com, ziy@...dia.com,
        mhocko@...e.com, hughd@...gle.com, gerald.schaefer@...ux.ibm.com,
        hca@...ux.ibm.com, gor@...ux.ibm.com, borntraeger@...ibm.com,
        akpm@...ux-foundation.org, linux-mm@...ck.org,
        linux-s390@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [v2 PATCH 6/7] mm: migrate: check mapcount for THP instead of
 ref count

Yang Shi <shy828301@...il.com> writes:

> The generic migration path will check the refcount, so there is no need to
> check it here. But the old code actually prevents shared THP (mapped by
> multiple processes) from being migrated, so bail out early if the mapcount
> is > 1 to keep that behavior.

What prevents us from migrating shared THP?  If nothing does, why not just
remove the old refcount check entirely?
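
For reference, a rough userspace model (my own sketch, not kernel code) of the
arithmetic behind the old check: a THP mapped by N processes is expected to
hold N references for its mappings, plus one for the caller's pin and one taken
by isolate_lru_page(), so the old "page_count(page) != 3" test only passed for
N == 1.  The new "page_mapcount(page) > 1" test makes the rejection of shared
THP explicit, while extra pins are left to the generic migration path.

#include <stdio.h>

int main(void)
{
	for (int mapcount = 1; mapcount <= 4; mapcount++) {
		/* mappings + caller's pin + isolate_lru_page() reference */
		int expected_count = mapcount + 1 + 1;
		int old_check_bails = (expected_count != 3);
		int new_check_bails = (mapcount > 1);

		printf("mapcount=%d expected page_count=%d old bails=%d new bails=%d\n",
		       mapcount, expected_count, old_check_bails, new_check_bails);
	}
	return 0;
}

Built with "gcc model.c", this prints that both checks bail for any mapcount
above 1; the difference is that the old check also bailed when extra pins
pushed page_count above the expected value, which the new code relies on the
generic migration path to catch.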

Best Regards,
Huang, Ying

> Signed-off-by: Yang Shi <shy828301@...il.com>
> ---
>  mm/migrate.c | 16 ++++------------
>  1 file changed, 4 insertions(+), 12 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index a72994c68ec6..dc7cc7f3a124 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2067,6 +2067,10 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>  
>  	VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page);
>  
> +	/* Do not migrate THP mapped by multiple processes */
> +	if (PageTransHuge(page) && page_mapcount(page) > 1)
> +		return 0;
> +
>  	/* Avoid migrating to a node that is nearly full */
>  	if (!migrate_balanced_pgdat(pgdat, compound_nr(page)))
>  		return 0;
> @@ -2074,18 +2078,6 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>  	if (isolate_lru_page(page))
>  		return 0;
>  
> -	/*
> -	 * migrate_misplaced_transhuge_page() skips page migration's usual
> -	 * check on page_count(), so we must do it here, now that the page
> -	 * has been isolated: a GUP pin, or any other pin, prevents migration.
> -	 * The expected page count is 3: 1 for page's mapcount and 1 for the
> -	 * caller's pin and 1 for the reference taken by isolate_lru_page().
> -	 */
> -	if (PageTransHuge(page) && page_count(page) != 3) {
> -		putback_lru_page(page);
> -		return 0;
> -	}
> -
>  	page_lru = page_is_file_lru(page);
>  	mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON + page_lru,
>  				thp_nr_pages(page));
