Message-ID: <20171018101547.mjycw7zreb66jzpa@techsingularity.net>
Date: Wed, 18 Oct 2017 11:15:47 +0100
From: Mel Gorman <mgorman@...hsingularity.net>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Linux-MM <linux-mm@...ck.org>,
Linux-FSDevel <linux-fsdevel@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>, Jan Kara <jack@...e.cz>,
Andi Kleen <ak@...ux.intel.com>,
Dave Hansen <dave.hansen@...el.com>,
Dave Chinner <david@...morbit.com>
Subject: Re: [PATCH 1/8] mm, page_alloc: Enable/disable IRQs once when
freeing a list of pages
On Wed, Oct 18, 2017 at 11:02:18AM +0200, Vlastimil Babka wrote:
> On 10/18/2017 09:59 AM, Mel Gorman wrote:
> > Freeing a list of pages currently enables/disables IRQs for each page freed.
> > This patch splits freeing a list of pages into two operations -- preparing
> > the pages for freeing and the actual freeing. This is a tradeoff - we're
> > taking two passes of the list to free in exchange for avoiding multiple
> > enable/disable of IRQs.
>
> There's also some overhead of storing pfn in page->private, but all that
> seems negligible compared to irq disable/enable...
>
Exactly, and it's cheaper than doing a second page-to-pfn lookup.
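For readers following the thread, the shape of the tradeoff can be sketched in plain userspace C (every name below is a hypothetical stand-in, not the kernel's API): pass 1 walks the list and caches a derived value per node outside the critical section; pass 2 disables "IRQs" once and commits the whole list, instead of toggling once per node.

```c
#include <assert.h>
#include <stddef.h>

struct node {
	struct node *next;
	unsigned long id;
	unsigned long cached;	/* plays the role of page->private caching the pfn */
};

static int irq_toggles;		/* counts irq_save() calls, for comparison */
static unsigned long committed;	/* accumulates "freed" work */

static void irq_save(void)    { irq_toggles++; }
static void irq_restore(void) { }

/* Stand-in for page_to_pfn(): a lookup we only want to do once per node. */
static unsigned long derive(const struct node *n) { return n->id * 2; }

static void commit(const struct node *n) { committed += n->cached; }

/* Naive version: one IRQ save/restore pair per node. */
static void free_list_naive(struct node *head)
{
	for (struct node *n = head; n; n = n->next) {
		irq_save();
		n->cached = derive(n);
		commit(n);
		irq_restore();
	}
}

/* Batched version: prepare every node first, then a single IRQ toggle. */
static void free_list_batched(struct node *head)
{
	for (struct node *n = head; n; n = n->next)
		n->cached = derive(n);	/* cached now, so no second derive() later */

	irq_save();
	for (struct node *n = head; n; n = n->next)
		commit(n);
	irq_restore();
}
```

The cost of the second list walk is traded for doing the save/restore exactly once, which is the point of the patch.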
> <SNIP>
> Looks good.
>
> > Signed-off-by: Mel Gorman <mgorman@...hsingularity.net>
>
> Acked-by: Vlastimil Babka <vbabka@...e.cz>
>
Thanks.
> A nit below.
>
> > @@ -2647,11 +2663,25 @@ void free_hot_cold_page(struct page *page, bool cold)
> > void free_hot_cold_page_list(struct list_head *list, bool cold)
> > {
> > struct page *page, *next;
> > + unsigned long flags, pfn;
> > +
> > + /* Prepare pages for freeing */
> > + list_for_each_entry_safe(page, next, list, lru) {
> > + pfn = page_to_pfn(page);
> > + if (!free_hot_cold_page_prepare(page, pfn))
> > + list_del(&page->lru);
> > + page->private = pfn;
>
> We have (set_)page_private() helpers so better to use them (makes it a
> bit easier to check for all places where page->private is used to e.g.
> avoid a clash)?
>
Agreed, and it's trivial to do so:
---8<---
mm, page_alloc: Enable/disable IRQs once when freeing a list of pages -fix
Use page_private and set_page_private helpers.
Signed-off-by: Mel Gorman <mgorman@...hsingularity.net>
---
mm/page_alloc.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 167e163cf733..092973014c1e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2670,14 +2670,14 @@ void free_hot_cold_page_list(struct list_head *list, bool cold)
pfn = page_to_pfn(page);
if (!free_hot_cold_page_prepare(page, pfn))
list_del(&page->lru);
- page->private = pfn;
+ set_page_private(page, pfn);
}
local_irq_save(flags);
list_for_each_entry_safe(page, next, list, lru) {
- unsigned long pfn = page->private;
+ unsigned long pfn = page_private(page);

- page->private = 0;
+ set_page_private(page, 0);
trace_mm_page_free_batched(page, cold);
free_hot_cold_page_commit(page, pfn, cold);
}
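For context, the helpers adopted above are thin wrappers around page->private (their real definitions live in the kernel's include/linux/mm.h); the struct below is a minimal userspace stand-in that only mirrors their shape:

```c
#include <assert.h>

/* Minimal stand-in for the kernel's struct page; only the field the
 * helpers touch is modelled here. */
struct page { unsigned long private; };

/* Accessor pair in the style of the kernel helpers: reads and writes of
 * page->private go through these instead of touching the field directly,
 * which makes every use of ->private easy to grep for. */
#define page_private(page)		((page)->private)
#define set_page_private(page, v)	((page)->private = (v))
```

Funnelling all accesses through one pair of helpers is what makes it practical to audit ->private users and avoid the clashes Vlastimil mentions.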