Message-ID: <6c55ece1-6c83-b59f-eadc-53e70862192d@redhat.com>
Date:   Fri, 3 Jul 2020 09:14:58 +0200
From:   David Hildenbrand <david@...hat.com>
To:     Wei Yang <richard.weiyang@...ux.alibaba.com>,
        akpm@...ux-foundation.org
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [Patch v2] mm/sparse: only sub-section aligned range would be
 populated

On 03.07.20 05:18, Wei Yang wrote:
> There are two code paths which invoke __populate_section_memmap():
> 
>   * sparse_init_nid()
>   * sparse_add_section()
> 
> In both cases, we are sure the memory range is sub-section aligned:
> 
>   * we pass PAGES_PER_SECTION to sparse_init_nid()
>   * we check the range with check_pfn_span() before calling
>     sparse_add_section()
> 
> Also, in the counterpart of __populate_section_memmap() we don't do
> such a calculation and check, since the range is already checked by
> check_pfn_span() in __remove_pages().
> 
> Remove the calculation and check to keep it simple and consistent with
> its counterpart.
> 
> Signed-off-by: Wei Yang <richard.weiyang@...ux.alibaba.com>
> 
> ---
> v2:
>   * add a warn on once for unaligned range, suggested by David
> ---
>  mm/sparse-vmemmap.c | 20 ++++++--------------
>  1 file changed, 6 insertions(+), 14 deletions(-)
> 
> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> index 0db7738d76e9..8d3a1b6287c5 100644
> --- a/mm/sparse-vmemmap.c
> +++ b/mm/sparse-vmemmap.c
> @@ -247,20 +247,12 @@ int __meminit vmemmap_populate_basepages(unsigned long start,
>  struct page * __meminit __populate_section_memmap(unsigned long pfn,
>  		unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
>  {
> -	unsigned long start;
> -	unsigned long end;
> -
> -	/*
> -	 * The minimum granularity of memmap extensions is
> -	 * PAGES_PER_SUBSECTION as allocations are tracked in the
> -	 * 'subsection_map' bitmap of the section.
> -	 */
> -	end = ALIGN(pfn + nr_pages, PAGES_PER_SUBSECTION);
> -	pfn &= PAGE_SUBSECTION_MASK;
> -	nr_pages = end - pfn;
> -
> -	start = (unsigned long) pfn_to_page(pfn);
> -	end = start + nr_pages * sizeof(struct page);
> +	unsigned long start = (unsigned long) pfn_to_page(pfn);
> +	unsigned long end = start + nr_pages * sizeof(struct page);
> +
> +	if (WARN_ON_ONCE(!IS_ALIGNED(pfn, PAGES_PER_SUBSECTION) ||
> +		!IS_ALIGNED(nr_pages, PAGES_PER_SUBSECTION)))
> +		return NULL;

Nit: indentation of both IS_ALIGNED should match.

Acked-by: David Hildenbrand <david@...hat.com>

>  
>  	if (vmemmap_populate(start, end, nid, altmap))
>  		return NULL;
> 


-- 
Thanks,

David / dhildenb
