Message-ID: <f600451e-48aa-184f-ae71-94e0abe9d6b1@redhat.com>
Date:   Thu, 11 Mar 2021 20:06:53 +0100
From:   David Hildenbrand <david@...hat.com>
To:     Oscar Salvador <osalvador@...e.de>,
        Andrew Morton <akpm@...ux-foundation.org>
Cc:     Michal Hocko <mhocko@...nel.org>,
        Anshuman Khandual <anshuman.khandual@....com>,
        Vlastimil Babka <vbabka@...e.cz>,
        Pavel Tatashin <pasha.tatashin@...een.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 1/5] mm,memory_hotplug: Allocate memmap from the added
 memory range

This looks essentially good to me, except for some parts in 
mhp_supports_memmap_on_memory().

>   
> +bool mhp_supports_memmap_on_memory(unsigned long size)
> +{
> +	unsigned long pageblock_size = PFN_PHYS(pageblock_nr_pages);
> +	unsigned long remaining_mem = size - PMD_SIZE;

This looks weird. I think what you want to test is that


a) "nr_vmemmap_pages * sizeof(struct page)" spans complete PMDs (IOW, we 
won't map too much via the altmap when populating a PMD in the vmemmap)

b) "remaining = size - nr_vmemmap_pages * sizeof(struct page)" spans 
complete pageblock.

right?
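
IOW, something like the following (completely untested, just to 
illustrate what I mean; reusing the pageblock_size from your patch, and 
with the vmemmap for "size" spanning 
"(size >> PAGE_SHIFT) * sizeof(struct page)" bytes):

	unsigned long vmemmap_size = (size >> PAGE_SHIFT) * sizeof(struct page);
	unsigned long remaining_size = size - vmemmap_size;

	/* a) the vmemmap spans complete PMDs */
	if (!IS_ALIGNED(vmemmap_size, PMD_SIZE))
		return false;
	/* b) what we hand to the buddy spans complete pageblocks */
	if (!remaining_size || !IS_ALIGNED(remaining_size, pageblock_size))
		return false;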

> +
> +	/*
> +	 * Besides having CONFIG_MHP_MEMMAP_ON_MEMORY, we need a few more
> +	 * assumptions to hold true:
> +	 * - we are working on a single memory block granularity
> +	 * - the size of struct page is multiple of PMD.

That's not what you are checking. (And the way it is phrased, I don't 
think it makes sense.)

> +	 * - the remaining memory after having used part of the range
> +	 *   for vmemmap pages is pageblock aligned.
> +	 *
> +	 * The reason for the struct page to be multiple of PMD is because
> +	 * otherwise two sections would intersect a PMD. This would go against
> +	 * memmap-on-memory premise, as memory would stay pinned until both
> +	 * sections were removed.
> +	 *
> +	 * And the reason for the remaining memory to be pageblock aligned is
> +	 * because mm core functions work on pageblock granularity in order to
> +	 * change page's migratetype.
> +	 */
> +	return memmap_on_memory &&
> +	       size == memory_block_size_bytes() &&
> +	       IS_ENABLED(CONFIG_MHP_MEMMAP_ON_MEMORY) &&
> +	       !(PMD_SIZE % sizeof(struct page)) &&
> +	       remaining_mem &&
> +	       IS_ALIGNED(remaining_mem, pageblock_size);
> +}
> +

I suggest a restructuring, compressing the information along these lines:

"
Besides having arch support and the feature enabled at runtime, we need 
a few more assumptions to hold true:

a) We span a single memory block: memory onlining/offlining happens in 
memory block granularity. We don't want the vmemmap of online memory 
blocks to reside on offline memory blocks. In the future, we might want 
to support variable-sized memory blocks to make the feature more versatile.

b) The vmemmap pages span complete PMDs: We don't want vmemmap code to 
populate memory from the altmap for unrelated parts (i.e., other memory 
blocks).

c) The vmemmap pages (and thereby the pages that will be exposed to the 
buddy) have to cover full pageblocks: memory onlining/offlining code 
requires applicable ranges to be pageblock-aligned, for example, to set the 
migratetypes properly.
"

Do we have to special case / protect against the vmemmap optimization 
for hugetlb pages? Or is that already blocked somehow and I missed it?

-- 
Thanks,

David / dhildenb
