Message-ID: <YV9eueky+lBfSWA3@casper.infradead.org>
Date: Thu, 7 Oct 2021 21:55:21 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Mel Gorman <mgorman@...hsingularity.net>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] mm: Optimise put_pages_list()
On Thu, Oct 07, 2021 at 12:31:09PM -0700, Andrew Morton wrote:
> On Thu, 7 Oct 2021 20:21:37 +0100 "Matthew Wilcox (Oracle)" <willy@...radead.org> wrote:
>
> > Instead of calling put_page() one page at a time, pop pages off
> > the list if their refcount was too high and pass the remainder to
> > free_unref_page_list(). This should be a speed improvement, but I have
> > no measurements to support that. Current callers do not care about
> > performance, but I hope to add some which do.
>
> Don't you think it would actually be slower to take an additional pass
> across the list, if the list is long enough to cause cache thrashing?
> Maybe it's faster for small lists.
My first response is an appeal to authority -- release_pages() does
this same thing.  Only it takes an array, constructs a list and passes
that to free_unref_page_list().  So if that's slower (and lists _are_
slower than arrays), we should have a free_unref_page_array().
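
For reference, the shape of release_pages() is roughly this (a heavily
simplified sketch of mm/swap.c; compound-page, LRU and zone-device
handling all omitted, so don't read it as the exact code):

    void release_pages(struct page **pages, int nr)
    {
        int i;
        LIST_HEAD(pages_to_free);       /* local batch for the allocator */

        for (i = 0; i < nr; i++) {
            struct page *page = compound_head(pages[i]);

            /* only pages whose refcount drops to zero get freed */
            if (!put_page_testzero(page))
                continue;

            /* batch on a local list instead of freeing one at a time */
            list_add(&page->lru, &pages_to_free);
        }

        /* one batched call for the entire list */
        free_unref_page_list(&pages_to_free);
    }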
Second, we can trace through the code paths and reason about it.
Before:

    while (!list_empty(pages)) {
        put_page(victim);
          page = compound_head(page);
          if (put_page_testzero(page))
            __put_page(page);
              __put_single_page(page)
                __page_cache_release(page);
                mem_cgroup_uncharge(page);
    <---
                free_unref_page(page, 0);
                  free_unref_page_prepare()
                  local_lock_irqsave(&pagesets.lock, flags);
                  free_unref_page_commit(page, pfn, migratetype, order);
                  local_unlock_irqrestore(&pagesets.lock, flags);
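
(For context, the current put_pages_list() is little more than this
loop -- a from-memory sketch, not a verbatim copy of mm/swap.c:)

    void put_pages_list(struct list_head *pages)
    {
        while (!list_empty(pages)) {
            struct page *victim;

            /* take the first page off the list... */
            victim = lru_to_page(pages);
            list_del(&victim->lru);
            /* ...and drop its reference, one page at a time */
            put_page(victim);
        }
    }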
After:

    free_unref_page_list(pages);
      list_for_each_entry_safe(page, next, list, lru) {
        if (!free_unref_page_prepare(page, pfn, 0)) {
        }
      }
      local_lock_irqsave(&pagesets.lock, flags);
      list_for_each_entry_safe(page, next, list, lru) {
        free_unref_page_commit()
      }
      local_unlock_irqrestore(&pagesets.lock, flags);
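
For completeness, the patched put_pages_list() boils down to roughly
this (a sketch of v2 from memory -- the actual diff is authoritative):

    void put_pages_list(struct list_head *pages)
    {
        struct page *page, *next;

        list_for_each_entry_safe(page, next, pages, lru) {
            /* pages still referenced elsewhere are popped off the list */
            if (!put_page_testzero(page)) {
                list_del(&page->lru);
                continue;
            }
            /* compound pages take the slow path individually */
            if (PageHead(page)) {
                list_del(&page->lru);
                __put_compound_page(page);
                continue;
            }
        }

        /* everything left on the list is freed in one batch */
        free_unref_page_list(pages);
        INIT_LIST_HEAD(pages);
    }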
So the major win here is that we disable/enable interrupts once per
batch rather than once per page.
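
A caller that wants the batched behaviour then looks something like
this (illustrative only):

    LIST_HEAD(pages);

    /* collect pages while tearing things down... */
    list_add(&page->lru, &pages);

    /* ...then free the whole batch with one irq disable/enable */
    put_pages_list(&pages);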