Message-ID: <b30fff52-31ba-5064-cc95-62ec49423b6b@redhat.com>
Date: Fri, 5 Jun 2020 09:44:58 +0200
From: David Hildenbrand <david@...hat.com>
To: Dave Hansen <dave.hansen@...el.com>,
Daniel Jordan <daniel.m.jordan@...cle.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Michal Hocko <mhocko@...nel.org>,
Pavel Tatashin <pasha.tatashin@...een.com>,
Peter Zijlstra <peterz@...radead.org>,
Steven Sistare <steven.sistare@...cle.com>
Subject: Re: [PATCH] x86/mm: use max memory block size with unaligned memory end
On 04.06.20 22:00, Dave Hansen wrote:
> On 6/4/20 11:12 AM, Daniel Jordan wrote:
>>> E.g., on powerpc that's 16MB so they have *a lot* of memory blocks.
>>> That's why that's not papering over the problem. Increasing the memory
>>> block size isn't always the answer.
>> Ok. If you don't mind, what's the purpose of hotplugging at that granularity?
>> I'm simply curious.
>
> FWIW, the 128MB on x86 came from the original sparsemem/hotplug
> implementation. It was the size of the smallest DIMM that my server
> system at the time would take. ppc64's huge page size was and is 16MB
> and that's also the granularity with which hypervisors did hot-add way
> back then. I'm not actually sure what they do now.
>
> My belief at the time was that the section size would grow over time as
> DIMMs and hotplug units grew. I was young and naive. :)
BTW, I recently studied your old hotplug papers and they are highly
appreciated :)
>
> I actually can't think of anything that's *keeping* it at 128MB on x86
> though. We don't, for instance, require a whole section to be
> pfn_valid().
Well, sub-section hotadd is only done for vmemmap, and we only use it
for stuff without memory block devices, a.k.a. ZONE_DEVICE. IIRC,
sub-section hotadd works at a granularity of 2M.
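
To illustrate where that 2M comes from, a quick userspace sketch; the
constants mirror what I recall from include/linux/mmzone.h and may
differ between kernel versions, so treat them as assumptions:

/*
 * Sub-section granularity sketch (userspace, not kernel code).
 * SUBSECTION_SHIFT = 21 means 2^21 bytes = 2 MiB per sub-section.
 */
#include <stdio.h>

#define PAGE_SHIFT              12  /* 4K base pages on x86-64 */
#define SUBSECTION_SHIFT        21  /* assumed, as in mmzone.h IIRC */
#define SUBSECTION_SIZE         (1UL << SUBSECTION_SHIFT)
#define PFN_SUBSECTION_SHIFT    (SUBSECTION_SHIFT - PAGE_SHIFT)
#define PAGES_PER_SUBSECTION    (1UL << PFN_SUBSECTION_SHIFT)

int main(void)
{
        printf("sub-section size:  %lu MiB\n", SUBSECTION_SIZE >> 20);
        printf("pages/sub-section: %lu\n", PAGES_PER_SUBSECTION);
        return 0;
}
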
AFAIK:
- The lower limit for the section size is MAX_ORDER - 1 /
pageblock_order (a section has to cover at least one maximum-order
chunk / pageblock)
- The smaller the section, the more bits are wasted to store the section
number in page->flags for page_to_pfn() (!vmemmap IIRC) - see the rough
numbers below
- The smaller the section, the bigger the section array(s)
- We want to make sure the section memmap always spans full pages
(IIRC, that's not always the case, e.g., arm64 with 256k page size. But
arm64 is weird either way - 512MB (transparent) huge pages with 64k
base pages ...)
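
To put rough numbers on the second and last point, another small
userspace sketch; the 64-byte struct page, the 46-bit physical address
space and the section sizes below are assumptions, not taken from the
patch:

/*
 * Back-of-the-envelope: memmap size per section and section-number
 * bits in page->flags (!vmemmap) for a given section size.
 */
#include <stdio.h>

#define PAGE_SHIFT       12   /* 4K base pages */
#define STRUCT_PAGE_SIZE 64   /* sizeof(struct page), roughly */
#define MAX_PHYSMEM_BITS 46   /* assumed, arch-dependent */

static void show(unsigned int section_size_bits)
{
        unsigned long pages = 1UL << (section_size_bits - PAGE_SHIFT);
        unsigned long memmap_kib = pages * STRUCT_PAGE_SIZE >> 10;
        unsigned int section_bits = MAX_PHYSMEM_BITS - section_size_bits;

        printf("%4lu MiB section: memmap %5lu KiB, %u section bits\n",
               1UL << (section_size_bits - 20), memmap_kib, section_bits);
}

int main(void)
{
        show(27);   /* 128 MiB, x86-64 */
        show(24);   /* 16 MiB, ppc64 */
        return 0;
}

With 128 MiB sections and 4K pages the memmap is exactly 2 MiB, i.e.,
it spans full pages nicely; shrinking the section both grows the
section array and eats more page->flags bits.
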
Changing the section size to get rid of sub-section memory hotadd does
not seem to be easily possible. I assume we don't want to create memory
block devices for something as small as the current sub-section memory
hotadd granularity (e.g., 2MB). So having significantly smaller sections
might not make too much sense, and your initial section size might have
been a very good pick after all :)
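
Just to visualize why memory block devices at sub-section granularity
would hurt: a trivial sketch of how many memory block devices various
block sizes would imply (the 1 TiB figure is just an example):

/* How many memory block devices a given block size implies. */
#include <stdio.h>

int main(void)
{
        unsigned long ram_mib = 1024 * 1024;            /* 1 TiB, example */
        unsigned long block_mib[] = { 2048, 128, 16, 2 };

        for (unsigned int i = 0; i < 4; i++)
                printf("block size %4lu MiB -> %8lu memory blocks\n",
                       block_mib[i], ram_mib / block_mib[i]);
        return 0;
}
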
--
Thanks,
David / dhildenb