Message-ID: <b343cf1a-ea15-b70e-ff5a-e08d3dc5354d@suse.cz>
Date: Mon, 22 Oct 2018 11:37:53 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Aaron Lu <aaron.lu@...el.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Huang Ying <ying.huang@...el.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Kemi Wang <kemi.wang@...el.com>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Andi Kleen <ak@...ux.intel.com>,
Michal Hocko <mhocko@...e.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Matthew Wilcox <willy@...radead.org>,
Daniel Jordan <daniel.m.jordan@...cle.com>,
Tariq Toukan <tariqt@...lanox.com>,
Jesper Dangaard Brouer <brouer@...hat.com>
Subject: Re: [RFC v4 PATCH 3/5] mm/rmqueue_bulk: alloc without touching
 individual page structure

On 10/17/18 8:33 AM, Aaron Lu wrote:
> Profiling on an Intel Skylake server shows that the most time-consuming
> part under zone->lock on the allocation path is accessing the
> to-be-returned pages' "struct page" on the free_list inside zone->lock.
> One explanation is that different CPUs release pages to the head of the
> free_list, so those pages' 'struct page' may well be cache cold for the
> allocating CPU when it grabs these pages from the free_list's head. The
> purpose here is to avoid touching these pages one by one inside
> zone->lock.
What about making the pages cache-hot first, without zone->lock, by
traversing via page->lru? It would obviously need some safety checks
(maybe based on page_to_pfn + pfn_valid, or something) to make sure we
only read from real struct pages in case there's a racing update. The
worst case would be not populating enough due to the race, and thus not
gaining the performance when doing the actual rmqueueing under the lock.
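
Roughly like the below (an untested sketch just to illustrate the idea;
the helper name prefetch_free_list() is made up, and the page_to_pfn()
+ pfn_valid() test is only one possible safety check):

static void prefetch_free_list(struct free_area *area, int migratetype,
			       unsigned long count)
{
	struct list_head *head = &area->free_list[migratetype];
	struct list_head *pos = READ_ONCE(head->next);

	/*
	 * No zone->lock held, so the list can change under us; any
	 * inconsistency only means we prefetch fewer pages and keep
	 * the current cache-cold behavior for the rest.
	 */
	while (count-- && pos != head) {
		struct page *page = list_entry(pos, struct page, lru);

		/* A racing update may hand us garbage; bail out if so. */
		if (!pfn_valid(page_to_pfn(page)))
			break;

		prefetch(page);
		pos = READ_ONCE(pos->next);
	}
}

The walk would be purely speculative: the caller invokes it right
before taking zone->lock in rmqueue_bulk() and never relies on its
result for correctness.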
Vlastimil