Message-ID: <9ebd85b6-61da-c868-240d-0ea99c8e147d@linux.intel.com>
Date: Thu, 11 May 2023 06:07:07 -0700
From: Arjan van de Ven <arjan@...ux.intel.com>
To: Jonathan Cameron <Jonathan.Cameron@...wei.com>,
Huang Ying <ying.huang@...el.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...hsingularity.net>,
Vlastimil Babka <vbabka@...e.cz>,
David Hildenbrand <david@...hat.com>,
Johannes Weiner <jweiner@...hat.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Michal Hocko <mhocko@...e.com>,
Pavel Tatashin <pasha.tatashin@...een.com>,
Matthew Wilcox <willy@...radead.org>
Subject: Re: [RFC 0/6] mm: improve page allocator scalability via splitting zones

On 5/11/2023 3:30 AM, Jonathan Cameron wrote:
> Hi,
>
> Interesting idea. I'm curious, though, whether this can suffer from
> imbalance problems: due to uneven allocations from particular CPUs,
> could all page faults end up happening in one zone, bringing the
> original contention problem back? Or am I missing some process that
> will correct that imbalance?
>
> Jonathan

Well, the first line of defense is the per-CPU page lists: most allocations
are served from those without taking the zone lock at all.
It can well happen that a couple of CPUs in the same zone hit some
high-frequency allocation pattern, but that by itself isn't the real issue.
Note the "a couple": it only becomes a problem when a high number of CPUs
start hitting the same zone lock at once.
And by splitting the total into smaller zones, that becomes much much less
likely, simply because the number of CPUs falling back to any one zone is
smaller.
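
To make that concrete, here is a toy user-space sketch (plain C with
pthreads, not kernel code; NR_ZONES, PCP_BATCH and the c % NR_ZONES zone
assignment are made up for illustration). Each thread models a CPU that
allocates from a lockless per-CPU list and only takes a shared zone lock
when that list needs a refill:

/*
 * Toy model of the contention argument above -- not kernel code.
 * Each CPU allocates from its own per-CPU list with no locking;
 * only on a miss does it fall back to its zone and take that
 * zone's lock. With NR_ZONES split zones, CPU c falls back to
 * zone c % NR_ZONES, so the number of CPUs that can pile up on
 * any one zone lock shrinks by the split factor.
 *
 * Build with: cc -O2 -pthread toy_zones.c
 */
#include <pthread.h>
#include <stdio.h>

#define NR_CPUS   8
#define NR_ZONES  2       /* try 1 vs. 2 to see the fallbacks spread out */
#define PCP_BATCH 4       /* pages moved to a per-CPU list per refill */
#define ALLOCS    100000

struct zone {
	pthread_mutex_t lock;
	long fallbacks;   /* lock acquisitions, i.e. contention events */
};

static struct zone zones[NR_ZONES];

struct cpu {
	int id;
	int pcp_count;    /* pages left on this CPU's free list */
};

static void *cpu_thread(void *arg)
{
	struct cpu *cpu = arg;
	struct zone *zone = &zones[cpu->id % NR_ZONES];

	for (int i = 0; i < ALLOCS; i++) {
		if (cpu->pcp_count > 0) {
			cpu->pcp_count--;          /* fast path: no lock */
			continue;
		}
		pthread_mutex_lock(&zone->lock);   /* slow path: zone lock */
		zone->fallbacks++;
		pthread_mutex_unlock(&zone->lock);
		cpu->pcp_count = PCP_BATCH - 1;    /* refill; one page used now */
	}
	return NULL;
}

int main(void)
{
	pthread_t threads[NR_CPUS];
	struct cpu cpus[NR_CPUS];

	for (int z = 0; z < NR_ZONES; z++)
		pthread_mutex_init(&zones[z].lock, NULL);

	for (int c = 0; c < NR_CPUS; c++) {
		cpus[c] = (struct cpu){ .id = c, .pcp_count = 0 };
		pthread_create(&threads[c], NULL, cpu_thread, &cpus[c]);
	}
	for (int c = 0; c < NR_CPUS; c++)
		pthread_join(threads[c], NULL);

	for (int z = 0; z < NR_ZONES; z++)
		printf("zone %d: %ld lock acquisitions\n", z, zones[z].fallbacks);
	return 0;
}

With NR_ZONES=1 all eight threads serialize on a single lock; with
NR_ZONES=2 each lock only ever sees four. The kernel's real per-CPU lists
batch-refill from the zone much like PCP_BATCH does here, which is why a
couple of CPUs sharing a zone is tolerable and only a high number is not.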