Date:   Wed, 1 Jul 2020 10:32:58 +0800
From:   Wei Yang <richard.weiyang@...ux.alibaba.com>
To:     David Hildenbrand <david@...hat.com>
Cc:     Wei Yang <richard.weiyang@...ux.alibaba.com>,
        dan.j.williams@...el.com, akpm@...ux-foundation.org,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/sparse: only sub-section aligned range would be
 populated

On Tue, Jun 30, 2020 at 02:52:35PM +0200, David Hildenbrand wrote:
>On 30.06.20 04:14, Wei Yang wrote:
>> There are two code paths that invoke __populate_section_memmap()
>> 
>>   * sparse_init_nid()
>>   * sparse_add_section()
>> 
>> In both cases, we are sure the memory range is sub-section aligned.
>> 
>>   * we pass PAGES_PER_SECTION to sparse_init_nid()
>>   * we check range by check_pfn_span() before calling
>>     sparse_add_section()
>> 
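For reference, check_pfn_span() is what enforces this granularity on the
hotplug path. Roughly (a sketch from memory of mm/memory_hotplug.c, so
details may differ):

	static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
			const char *reason)
	{
		unsigned long min_align;

		/*
		 * With SPARSEMEM_VMEMMAP the minimum granularity is a
		 * sub-section; otherwise it is a full section.
		 */
		if (IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP))
			min_align = PAGES_PER_SUBSECTION;
		else
			min_align = PAGES_PER_SECTION;

		if (!IS_ALIGNED(pfn, min_align) ||
		    !IS_ALIGNED(nr_pages, min_align)) {
			WARN(1, "Misaligned __%s_pages start: %#lx end: %#lx\n",
					reason, pfn, pfn + nr_pages - 1);
			return -EINVAL;
		}
		return 0;
	}
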
>> Also, in the counterpart of __populate_section_memmap(), we don't do
>> such calculation and check, since the range is already checked by
>> check_pfn_span() in __remove_pages().
>> 
>> Remove the calculation and check to keep it simple and consistent
>> with its counterpart.
>> 
>> Signed-off-by: Wei Yang <richard.weiyang@...ux.alibaba.com>
>> ---
>>  mm/sparse-vmemmap.c | 16 ++--------------
>>  1 file changed, 2 insertions(+), 14 deletions(-)
>> 
>> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
>> index 0db7738d76e9..24b01ebae111 100644
>> --- a/mm/sparse-vmemmap.c
>> +++ b/mm/sparse-vmemmap.c
>> @@ -247,20 +247,8 @@ int __meminit vmemmap_populate_basepages(unsigned long start,
>>  struct page * __meminit __populate_section_memmap(unsigned long pfn,
>>  		unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
>>  {
>> -	unsigned long start;
>> -	unsigned long end;
>> -
>> -	/*
>> -	 * The minimum granularity of memmap extensions is
>> -	 * PAGES_PER_SUBSECTION as allocations are tracked in the
>> -	 * 'subsection_map' bitmap of the section.
>> -	 */
>> -	end = ALIGN(pfn + nr_pages, PAGES_PER_SUBSECTION);
>> -	pfn &= PAGE_SUBSECTION_MASK;
>> -	nr_pages = end - pfn;
>> -
>> -	start = (unsigned long) pfn_to_page(pfn);
>> -	end = start + nr_pages * sizeof(struct page);
>> +	unsigned long start = (unsigned long) pfn_to_page(pfn);
>> +	unsigned long end = start + nr_pages * sizeof(struct page);
>>  
>>  	if (vmemmap_populate(start, end, nid, altmap))
>>  		return NULL;
>> 
>
>Can we add a WARN_ON_ONCE to catch misuse in the future?
>
>if (WARN_ON_ONCE(!IS_ALIGNED(pfn, PAGES_PER_SUBSECTION) ||
>                 !IS_ALIGNED(nr_pages, PAGES_PER_SUBSECTION)))
>	return NULL;

How about adding this to both the population and depopulation paths?
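
Something like the following untested sketch, assuming
depopulate_section_memmap() in mm/sparse.c is the depopulation
counterpart (exact function names and placement may differ):

	/* mm/sparse-vmemmap.c: population side */
	struct page * __meminit __populate_section_memmap(unsigned long pfn,
			unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
	{
		unsigned long start, end;

		/* Callers must pass a sub-section aligned range. */
		if (WARN_ON_ONCE(!IS_ALIGNED(pfn, PAGES_PER_SUBSECTION) ||
				 !IS_ALIGNED(nr_pages, PAGES_PER_SUBSECTION)))
			return NULL;

		start = (unsigned long) pfn_to_page(pfn);
		end = start + nr_pages * sizeof(struct page);

		if (vmemmap_populate(start, end, nid, altmap))
			return NULL;

		return pfn_to_page(pfn);
	}

	/* mm/sparse.c: same sanity check on the depopulation side */
	static void depopulate_section_memmap(unsigned long pfn,
			unsigned long nr_pages, struct vmem_altmap *altmap)
	{
		unsigned long start = (unsigned long) pfn_to_page(pfn);
		unsigned long end = start + nr_pages * sizeof(struct page);

		if (WARN_ON_ONCE(!IS_ALIGNED(pfn, PAGES_PER_SUBSECTION) ||
				 !IS_ALIGNED(nr_pages, PAGES_PER_SUBSECTION)))
			return;

		vmemmap_free(start, end, altmap);
	}

That way a misaligned caller trips the warning on both add and remove,
instead of silently operating on a range that the subsection_map
tracking does not expect.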

>
>-- 
>Thanks,
>
>David / dhildenb

-- 
Wei Yang
Help you, Help me
