Message-ID: <05027092-a43e-756f-4fee-78f29a048ca1@suse.cz>
Date:   Mon, 24 Feb 2020 16:29:09 +0100
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Rik van Riel <riel@...riel.com>, linux-kernel@...r.kernel.org,
        riel@...com
Cc:     kernel-team@...com, akpm@...ux-foundation.org, linux-mm@...ck.org,
        mhocko@...nel.org, mgorman@...hsingularity.net,
        rientjes@...gle.com, aarcange@...hat.com
Subject: Re: [PATCH 2/2] mm,thp,compaction,cma: allow THP migration for CMA
 allocations

On 2/21/20 10:53 PM, Rik van Riel wrote:
> @@ -981,7 +981,9 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>  		if (__isolate_lru_page(page, isolate_mode) != 0)
>  			goto isolate_fail;
>  
> -		VM_BUG_ON_PAGE(PageCompound(page), page);
> +		/* The whole page is taken off the LRU; skip the tail pages. */
> +		if (PageCompound(page))
> +			low_pfn += compound_nr(page) - 1;
>  
>  		/* Successfully isolated */
>  		del_page_from_lru_list(page, lruvec, page_lru(page));

This is followed by:

inc_node_page_state(page, NR_ISOLATED_ANON + page_is_file_cache(page));

I think it now needs to use mod_node_page_state() with hpage_nr_pages(page),
otherwise the counter will underflow after the migration?
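I.e. something along these lines (just an untested sketch):

	/*
	 * Account all the subpages of the isolated compound page, so the
	 * matching NR_ISOLATED_* decrement after migration balances out.
	 */
	mod_node_page_state(page_pgdat(page),
			    NR_ISOLATED_ANON + page_is_file_cache(page),
			    hpage_nr_pages(page));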

> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index a36736812596..38c8ddfcecc8 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -8253,14 +8253,16 @@ struct page *has_unmovable_pages(struct zone *zone, struct page *page,
>  
>  		/*
>  		 * Hugepages are not in LRU lists, but they're movable.
> +		 * THPs are on the LRU, but need to be counted as #small pages.
>  		 * We need not scan over tail pages because we don't
>  		 * handle each tail page individually in migration.
>  		 */
> -		if (PageHuge(page)) {
> +		if (PageTransHuge(page)) {

Hmm, PageTransHuge() has a VM_BUG_ON() for tail pages, while this code is
written so that it can encounter a tail page and skip the rest of the
compound page properly. So I would be worried about that.
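For reference, PageTransHuge() is roughly the following (from memory, so
worth double-checking include/linux/huge_mm.h):

	static inline int PageTransHuge(struct page *page)
	{
		/* A tail page trips this check. */
		VM_BUG_ON_PAGE(PageTail(page), page);
		return PageHead(page);
	}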

Also, PageTransHuge() is basically just PageHead(), so for each
non-hugetlbfs compound page this will assume it's a THP, while correctly
it should fall through to the __PageMovable() || PageLRU(page) tests below.

So probably this should do something like:

if (PageHuge(page) || PageTransCompound(page)) {
	...
	if (PageHuge(page) &&
	    !hugepage_migration_supported(page_hstate(head)))
		return page;

	if (!PageLRU(head) && !__PageMovable(head))
		return page;
	...

>  			struct page *head = compound_head(page);
>  			unsigned int skip_pages;
>  
> -			if (!hugepage_migration_supported(page_hstate(head)))
> +			if (PageHuge(page) &&
> +			    !hugepage_migration_supported(page_hstate(head)))
>  				return page;
>  
>  			skip_pages = compound_nr(head) - (page - head);
> 
