Message-ID: <e314748f-b4df-c65d-7acc-45c21abf31ce@intel.com>
Date: Thu, 11 May 2023 07:23:51 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: Huang Ying <ying.huang@...el.com>, linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org,
Arjan Van De Ven <arjan@...ux.intel.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...hsingularity.net>,
Vlastimil Babka <vbabka@...e.cz>,
David Hildenbrand <david@...hat.com>,
Johannes Weiner <jweiner@...hat.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Michal Hocko <mhocko@...e.com>,
Pavel Tatashin <pasha.tatashin@...een.com>,
Matthew Wilcox <willy@...radead.org>
Subject: Re: [RFC 0/6] mm: improve page allocator scalability via splitting
zones
On 5/10/23 23:56, Huang Ying wrote:
> To improve the scalability of page allocation, in this series we
> create one zone instance for roughly every 256 GB of memory of a
> given zone type. That is, one large zone type will be split into
> multiple zone instances.
A few anecdotes for why I think _some_ people will like this:
Some Intel hardware has a "RAM" caching mechanism. It either caches
DRAM in High-Bandwidth Memory or Persistent Memory in DRAM. This cache
is direct-mapped and can have lots of collisions. One way to prevent
collisions is to chop up the physical memory into cache-sized zones and
let users choose to allocate from one zone. That fixes the conflicts.
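
To make the conflict math concrete, here's a tiny userspace model of a
direct-mapped memory-side cache (the 16GB cache size and the simple
modulo mapping are illustrative assumptions, not real hardware
parameters):

#include <stdio.h>
#include <stdint.h>

/* Illustrative only: real cache sizes and mappings differ. */
#define CACHE_SIZE	(16ULL << 30)	/* 16GB memory-side cache */
#define BLOCK_SIZE	64ULL		/* cache line size */

/* Direct-mapped: each physical block has exactly one slot. */
static uint64_t cache_set(uint64_t paddr)
{
	return (paddr % CACHE_SIZE) / BLOCK_SIZE;
}

int main(void)
{
	/* Two addresses exactly CACHE_SIZE apart always collide... */
	uint64_t a = 0x1000;
	uint64_t b = a + CACHE_SIZE;

	printf("set(a)=%llu set(b)=%llu -> %s\n",
	       (unsigned long long)cache_set(a),
	       (unsigned long long)cache_set(b),
	       cache_set(a) == cache_set(b) ? "collision" : "no collision");

	/*
	 * ...while any two distinct blocks within one contiguous
	 * CACHE_SIZE-sized span map to distinct sets.  Hence carving
	 * physical memory into cache-sized zones and allocating from a
	 * single zone avoids the conflicts entirely.
	 */
	return 0;
}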
Some other Intel hardware has ways to chop a NUMA node representing a
single socket into slices. Usually one slice gets a memory controller
and its closest cores. Intel calls these approaches Cluster on Die or
Sub-NUMA Clustering, and users can select them from the BIOS.
In both of these cases, users have reported scalability improvements.
We've gone as far as to suggest the socket-splitting options to folks
today who are hitting zone scalability issues on that hardware.
That said, those _same_ users sometimes come back and say something
along the lines of: "So... we've got this app that allocates a big hunk
of memory. It's going slower than before." They're filling up one of
the chopped-up zones, hitting _some_ kind of undesirable reclaim
behavior, and they want their humpty-dumpty zones put back together again
... without hurting scalability. Some people will never be happy. :)
Anyway, _if_ you do this, you might also consider being able to
dynamically adjust a CPU's zonelists somehow. That would relieve
pressure on one zone for those uneven allocations. That wasn't an
option in the two cases above because users had ulterior motives for
sticking inside a single zone. But, in your case, the zones really do
have equivalent performance.
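
For reference, a zonelist is just an ordered array of struct zoneref
entries that the allocator walks on fallback, so "dynamically
adjusting" would mostly mean rebuilding that order at run time. A
rough sketch of the shape of it (rebuild_cpu_zonelist() and the
per-CPU cpu_zonelist are made-up names, not existing interfaces):

#include <linux/mmzone.h>
#include <linux/percpu.h>

/*
 * Hypothetical sketch only.  Idea: when sibling zones of one type
 * have equivalent performance, stagger each CPU's fallback order
 * across them so no single zone takes all the pressure.
 */
static DEFINE_PER_CPU(struct zoneref *, cpu_zonelist); /* allocated elsewhere */

static void rebuild_cpu_zonelist(int cpu, struct zone **zones, int nr)
{
	struct zoneref *zrefs = per_cpu(cpu_zonelist, cpu);
	int start = cpu % nr;	/* rotate the starting zone per CPU */
	int i;

	for (i = 0; i < nr; i++) {
		struct zone *z = zones[(start + i) % nr];

		zrefs[i].zone = z;
		zrefs[i].zone_idx = zone_idx(z);
	}
}

Staggering the start index is just one possible policy, but it shows
the point: the fallback order becomes a per-CPU tunable instead of a
boot-time constant.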