Message-ID: <72066bef-866a-c2a4-d536-4212c3344045@intel.com>
Date: Thu, 4 Jun 2020 13:00:55 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: Daniel Jordan <daniel.m.jordan@...cle.com>,
David Hildenbrand <david@...hat.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Michal Hocko <mhocko@...nel.org>,
Pavel Tatashin <pasha.tatashin@...een.com>,
Peter Zijlstra <peterz@...radead.org>,
Steven Sistare <steven.sistare@...cle.com>
Subject: Re: [PATCH] x86/mm: use max memory block size with unaligned memory end

On 6/4/20 11:12 AM, Daniel Jordan wrote:
>> E.g., on powerpc that's 16MB so they have *a lot* of memory blocks.
>> That's why that's not papering over the problem. Increasing the memory
>> block size isn't always the answer.
> Ok. If you don't mind, what's the purpose of hotplugging at that granularity?
> I'm simply curious.
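
(For scale: at 16MB per block, a 1TB machine ends up with 1TB / 16MB =
65,536 memory block devices in sysfs, versus 8,192 at x86's 128MB.)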
FWIW, the 128MB on x86 came from the original sparsemem/hotplug
implementation. It was the size of the smallest DIMM that my server
system at the time would take. ppc64's huge page size was and is 16MB
and that's also the granularity with which hypervisors did hot-add way
back then. I'm not actually sure what they do now.

My belief at the time was that the section size would grow over time as
DIMMs and hotplug units grew. I was young and naive. :)

I actually can't think of anything that's *keeping* it at 128MB on x86
though. We don't, for instance, require a whole section to be
pfn_valid().
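
For reference, here's a rough user-space sketch of the block-size
probe the patch is touching. It's modeled on (not copied from)
probe_memory_block_size() in arch/x86/mm/init_64.c: walk down from a
2GB cap to the 128MB section size, taking the largest power-of-two
block that evenly divides the end of boot memory.

    /*
     * Rough user-space sketch, not the kernel's actual code.
     * Pick the largest power-of-two block size, capped at 2GB,
     * that divides the end of boot memory; never go below the
     * 128MB section size.
     */
    #include <stdio.h>

    #define SECTION_SIZE_BITS   27  /* x86_64: 128MB sections */
    #define MIN_BLOCK_SIZE      (1UL << SECTION_SIZE_BITS)
    #define MAX_BLOCK_SIZE      (2UL << 30)

    static unsigned long pick_block_size(unsigned long boot_mem_end)
    {
            unsigned long bz;

            for (bz = MAX_BLOCK_SIZE; bz > MIN_BLOCK_SIZE; bz >>= 1)
                    if (boot_mem_end % bz == 0)
                            break;
            return bz;
    }

    int main(void)
    {
            /* aligned end: 64GB; unaligned: 64GB plus a 128MB sliver */
            printf("%lu MB\n", pick_block_size(64UL << 30) >> 20);
            printf("%lu MB\n",
                   pick_block_size((64UL << 30) + MIN_BLOCK_SIZE) >> 20);
            return 0;
    }

With an end of memory that isn't 2GB-aligned, the probe collapses all
the way back to 128MB blocks; that fallback is the behavior the
subject patch is changing.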