Message-ID: <17e666ec-93ae-c0f3-47cf-67e8a6df6afc@redhat.com>
Date:   Thu, 25 Mar 2021 17:55:37 +0100
From:   David Hildenbrand <david@...hat.com>
To:     Michal Hocko <mhocko@...e.com>
Cc:     Oscar Salvador <osalvador@...e.de>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Anshuman Khandual <anshuman.khandual@....com>,
        Vlastimil Babka <vbabka@...e.cz>,
        Pavel Tatashin <pasha.tatashin@...een.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 1/5] mm,memory_hotplug: Allocate memmap from the added
 memory range

On 25.03.21 17:47, Michal Hocko wrote:
> On Thu 25-03-21 17:36:22, Michal Hocko wrote:
>> If all it takes is to make pfn_to_online_page work (and my
>> previous attempt is incorrect because it should consult the block rather
>> than the section pfn range)
> 
> This should work.
> 
> diff --git a/drivers/base/memory.c b/drivers/base/memory.c
> index 9697acfe96eb..e50d685be8ab 100644
> --- a/drivers/base/memory.c
> +++ b/drivers/base/memory.c
> @@ -510,6 +510,23 @@ static struct memory_block *find_memory_block_by_id(unsigned long block_id)
>   	return mem;
>   }
>   
> +struct page *is_vmemmap_page(unsigned long pfn)
> +{
> +	unsigned long nr = pfn_to_section_nr(pfn);
> +	struct memory_block *mem;
> +	unsigned long block_pfn;
> +
> +	mem = find_memory_block_by_id(memory_block_id(nr));
> +	if (!mem || !mem->nr_vmemmap_pages)
> +		return NULL;
> +
> +	block_pfn = section_nr_to_pfn(mem->start_section_nr);
> +	if (pfn - block_pfn >= mem->nr_vmemmap_pages)
> +		return NULL;
> +
> +	return pfn_to_page(pfn);
> +}
> +
>   /*
>    * Called under device_hotplug_lock.
>    */
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 754026a9164d..760bf3ad48d5 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -304,8 +304,16 @@ struct page *pfn_to_online_page(unsigned long pfn)
>   		return NULL;
>   
>   	ms = __nr_to_section(nr);
> -	if (!online_section(ms))
> +	if (!online_section(ms)) {
> +		/*
> +		 * vmemmap reserved space can eat up a whole section which then
> +		 * never gets onlined because it doesn't contain any memory to
> +		 * online.
> +		 */
> +		if (memmap_on_memory)
> +			return is_vmemmap_page(pfn);
>   		return NULL;
> +	}
>   
>   	/*
>   	 * Save some code text when online_section() +
> 

It should take care of the discussed zone shrinking as well, at least as
long as the granularity is not smaller than sub-sections.

-- 
Thanks,

David / dhildenb
