Message-ID: <a61312d7-8235-fe4d-6411-d3143d965f81@suse.cz>
Date: Tue, 15 Jan 2019 13:39:28 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Mel Gorman <mgorman@...hsingularity.net>,
Linux-MM <linux-mm@...ck.org>
Cc: David Rientjes <rientjes@...gle.com>,
Andrea Arcangeli <aarcange@...hat.com>, ying.huang@...el.com,
kirill@...temov.name, Andrew Morton <akpm@...ux-foundation.org>,
Linux List Kernel Mailing <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 09/25] mm, compaction: Use the page allocator bulk-free
helper for lists of pages
On 1/4/19 1:49 PM, Mel Gorman wrote:
> release_pages() is a simpler version of free_unref_page_list() but it
> tracks the highest PFN for caching the restart point of the compaction
> free scanner. This patch optionally tracks the highest PFN in the core
> helper and converts compaction to use it. The performance impact is
> limited but it should reduce lock contention slightly in some cases.
> The main benefit is removing some partially duplicated code.
>
> Signed-off-by: Mel Gorman <mgorman@...hsingularity.net>
...
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2876,18 +2876,26 @@ void free_unref_page(struct page *page)
>  /*
>   * Free a list of 0-order pages
>   */
> -void free_unref_page_list(struct list_head *list)
> +void __free_page_list(struct list_head *list, bool dropref,
> +				unsigned long *highest_pfn)
>  {
>  	struct page *page, *next;
>  	unsigned long flags, pfn;
>  	int batch_count = 0;
>
> +	if (highest_pfn)
> +		*highest_pfn = 0;
> +
>  	/* Prepare pages for freeing */
>  	list_for_each_entry_safe(page, next, list, lru) {
> +		if (dropref)
> +			WARN_ON_ONCE(!put_page_testzero(page));

I've thought about it again and still think it can cause spurious
warnings. We enter this function with one page pin, which means somebody
else might be doing pfn scanning and succeed with
get_page_unless_zero(), so there are two pins. Then we do the
put_page_testzero() above, go back to one pin, and warn. You said "this
function simply does not expect it and the callers do not violate the
rule", but this is about potential parallel pfn scanning activity, not
about this function's callers. Maybe there really is no parallel pfn
scanner that would try to pin a page in the state it's in while this
function processes it, but I wouldn't bet on it (any state checks
preceding the pin might themselves be racy etc.).
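
To make the interleaving I have in mind concrete, here is a minimal
userspace simulation with C11 atomics standing in for the page refcount
(all names are made up for illustration, this is obviously not the
kernel code):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int refcount = 1;	/* the single pin the freeing path holds */

/* analogous to get_page_unless_zero(): take a ref only if it's nonzero */
static bool get_unless_zero(void)
{
	int old = atomic_load(&refcount);

	while (old != 0)
		if (atomic_compare_exchange_weak(&refcount, &old, old + 1))
			return true;
	return false;
}

/* analogous to put_page_testzero(): drop a ref, report whether it hit 0 */
static bool put_testzero(void)
{
	return atomic_fetch_sub(&refcount, 1) == 1;
}

int main(void)
{
	/* a parallel pfn scanner pins the page first... */
	bool pinned = get_unless_zero();	/* refcount 1 -> 2 */

	/* ...so the freeing path sees 2 -> 1, put_testzero() returns
	 * false and the WARN_ON_ONCE() would fire although nothing is
	 * actually wrong */
	if (!put_testzero())
		printf("spurious warning, refcount is now %d\n",
		       atomic_load(&refcount));

	if (pinned)
		put_testzero();		/* the scanner's put_page(), 1 -> 0 */

	return 0;
}
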
>  		pfn = page_to_pfn(page);
>  		if (!free_unref_page_prepare(page, pfn))
>  			list_del(&page->lru);
>  		set_page_private(page, pfn);
> +		if (highest_pfn && pfn > *highest_pfn)
> +			*highest_pfn = pfn;
>  	}
>
>  	local_irq_save(flags);
>
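
FWIW, the way I read the rest of the conversion (a sketch only, assuming
the old entry point stays as a thin wrapper and that compaction is the
caller passing dropref and highest_pfn; the actual hunks may spell this
differently):

/* sketch, not copied from the patch */
void free_unref_page_list(struct list_head *list)
{
	__free_page_list(list, false, NULL);
}

/* compaction-style caller (hypothetical name): pages on the local
 * freelist still hold a reference, so it would pass dropref=true and
 * collect the highest freed pfn to restart the free scanner from */
static unsigned long release_free_list(struct list_head *freepages)
{
	unsigned long highest_pfn;

	__free_page_list(freepages, true, &highest_pfn);
	return highest_pfn;
}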