Message-ID: <c41ea8ac-e99d-6d23-c7b9-5ca25ffb72bb@redhat.com>
Date:   Fri, 19 Mar 2021 11:20:19 +0100
From:   David Hildenbrand <david@...hat.com>
To:     Oscar Salvador <osalvador@...e.de>,
        Andrew Morton <akpm@...ux-foundation.org>
Cc:     Michal Hocko <mhocko@...nel.org>,
        Anshuman Khandual <anshuman.khandual@....com>,
        Vlastimil Babka <vbabka@...e.cz>,
        Pavel Tatashin <pasha.tatashin@...een.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 1/5] mm,memory_hotplug: Allocate memmap from the added
 memory range


Two nits:

> +bool mhp_supports_memmap_on_memory(unsigned long size)
> +{
> +	unsigned long nr_vmemmap_pages = size / PAGE_SIZE;
> +	unsigned long vmemmap_size = nr_vmemmap_pages * sizeof(struct page);
> +	unsigned long remaining_size = size - vmemmap_size;
> +
> +	/*
> +	 * Besides having arch support and the feature enabled at runtime, we
> +	 * need a few more assumptions to hold true:
> +	 *
> +	 * a) We span a single memory block: memory onlining/offlinin;g happens

s/offlinin;g/offlining;/

> +	 *    in memory block granularity. We don't want the vmemmap of online
> +	 *    memory blocks to reside on offline memory blocks. In the future,
> +	 *    we might want to support variable-sized memory blocks to make the
> +	 *    feature more versatile.
> +	 *
> +	 * b) The vmemmap pages span complete PMDs: We don't want vmemmap code
> +	 *    to populate memory from the altmap for unrelated parts (i.e.,
> +	 *    other memory blocks)
> +	 *
> +	 * c) The vmemmap pages (and thereby the pages that will be exposed to
> +	 *    the buddy) have to cover full pageblocks: memory onlining/offlining
> +	 *    code requires applicable ranges to be page-aligned, for example, to
> +	 *    set the migratetypes properly.
> +	 *
> +	 * TODO: Although we have a check here to make sure that vmemmap pages
> +	 *	 fully populate a PMD, it is not the right place to check for
> +	 *	 this. A much better solution involves improving vmemmap code
> +	 *	 to fallback to base pages when trying to populate vmemmap using
> +	 *	 altmap as an alternative source of memory, and we do not exactly
> +	 *	 populate a single PMD.
> +	 */
> +	return memmap_on_memory &&
> +	       IS_ENABLED(CONFIG_MHP_MEMMAP_ON_MEMORY) &&
> +	       size == memory_block_size_bytes() &&
> +	       IS_ALIGNED(vmemmap_size, PMD_SIZE) &&
> +	       IS_ALIGNED(remaining_size, (pageblock_nr_pages << PAGE_SHIFT));

IS_ALIGNED(remaining_size, pageblock_nr_pages << PAGE_SHIFT);

(the extra parentheses around the second argument are unnecessary)

LGTM, thanks!

(another pair of eyes certainly wouldn't hurt :) )

Reviewed-by: David Hildenbrand <david@...hat.com>

-- 
Thanks,

David / dhildenb
