Message-ID: <YFMtuKZ8Ho66D8hN@localhost.localdomain>
Date: Thu, 18 Mar 2021 11:38:48 +0100
From: Oscar Salvador <osalvador@...e.de>
To: David Hildenbrand <david@...hat.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...nel.org>,
Anshuman Khandual <anshuman.khandual@....com>,
Vlastimil Babka <vbabka@...e.cz>,
Pavel Tatashin <pasha.tatashin@...een.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 1/5] mm,memory_hotplug: Allocate memmap from the added
memory range
On Thu, Mar 18, 2021 at 09:27:48AM +0100, Oscar Salvador wrote:
> > If we check for
> >
> > IS_ALIGNED(nr_vmemmap_pages, PMD_SIZE), please add a proper TODO comment
> > that this is most probably the wrong place to take care of this.
>
> Sure, I will stuff the check in there and place a big TODO comment so we
> do not forget about addressing this issue the right way.
Ok, I realized something while working on v5.
Here is what I have right now:
bool mhp_supports_memmap_on_memory(unsigned long size)
{
	/*
	 * Note: We calculate for a single memory section. The calculation
	 * implicitly covers memory blocks that span multiple sections.
	 *
	 * Not all archs define SECTION_SIZE, but MIN_MEMORY_BLOCK_SIZE always
	 * equals SECTION_SIZE, so use that instead.
	 */
	unsigned long nr_vmemmap_pages = MIN_MEMORY_BLOCK_SIZE / PAGE_SIZE;
	unsigned long vmemmap_size = nr_vmemmap_pages * sizeof(struct page);
	unsigned long remaining_size = size - vmemmap_size;

	/*
	 * Besides having arch support and the feature enabled at runtime, we
	 * need a few more assumptions to hold true:
	 *
	 * a) We span a single memory block: memory onlining/offlining happens
	 *    in memory block granularity. We don't want the vmemmap of online
	 *    memory blocks to reside on offline memory blocks. In the future,
	 *    we might want to support variable-sized memory blocks to make the
	 *    feature more versatile.
	 *
	 * b) The vmemmap pages span complete PMDs: We don't want vmemmap code
	 *    to populate memory from the altmap for unrelated parts (i.e.,
	 *    other memory blocks).
	 *
	 * c) The vmemmap pages (and thereby the pages that will be exposed to
	 *    the buddy) have to cover full pageblocks: memory onlining/offlining
	 *    code requires applicable ranges to be page-aligned, for example,
	 *    to set the migratetypes properly.
	 *
	 * TODO: Although we have a check here to make sure that vmemmap pages
	 *       fully populate a PMD, it is not the right place to check for
	 *       this. A much better solution involves improving vmemmap code
	 *       to fall back to base pages when trying to populate vmemmap
	 *       using altmap as an alternative source of memory, and we do not
	 *       exactly populate a single PMD.
	 */
	return memmap_on_memory &&
	       IS_ENABLED(CONFIG_MHP_MEMMAP_ON_MEMORY) &&
	       size == memory_block_size_bytes() &&
	       remaining_size &&
	       IS_ALIGNED(remaining_size, (pageblock_nr_pages << PAGE_SHIFT)) &&
	       IS_ALIGNED(vmemmap_size, PMD_SIZE);
}
Assume we are on x86_64 to simplify the case.
Above, nr_vmemmap_pages would be 32768 and vmemmap_size 2MB (exactly a
PMD).
Now, although correct, this nr_vmemmap_pages does not match altmap->alloc.
static void * __meminit altmap_alloc_block_buf(unsigned long size,
					       struct vmem_altmap *altmap)
{
	...
	...
	nr_pfns = size >> PAGE_SHIFT; /* size is PMD_SIZE here */
	altmap->alloc += nr_pfns;
}
altmap->alloc will be 512, 512 * 4K pages = 2MB.
Of course, the reason they do not match is that in one case we compute
a) how many pfns we need to cover a PMD_SIZE, while in the other case we
compute b) how many pages we need to describe a SECTION_SIZE of memory;
b) is then multiplied by sizeof(struct page) to get the vmemmap size.
So, I have mixed feelings about this.
Would it be more clear to just do:
bool mhp_supports_memmap_on_memory(unsigned long size)
{
	/*
	 * Note: We calculate for a single memory section. The calculation
	 * implicitly covers memory blocks that span multiple sections.
	 */
	unsigned long nr_vmemmap_pages = PMD_SIZE / PAGE_SIZE;
	unsigned long vmemmap_size = nr_vmemmap_pages * PAGE_SIZE;
	unsigned long remaining_size = size - vmemmap_size;
	...
	...
--
Oscar Salvador
SUSE L3