Message-ID: <20200312142535.GK22433@bombadil.infradead.org>
Date:   Thu, 12 Mar 2020 07:25:35 -0700
From:   Matthew Wilcox <willy@...radead.org>
To:     Wei Yang <richard.weiyang@...il.com>
Cc:     Baoquan He <bhe@...hat.com>, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, mhocko@...e.com, akpm@...ux-foundation.org,
        david@...hat.com
Subject: Re: [PATCH v2] mm/sparse.c: Use kvmalloc_node/kvfree to alloc/free
 memmap for the classic sparse

On Thu, Mar 12, 2020 at 02:18:26PM +0000, Wei Yang wrote:
> On Thu, Mar 12, 2020 at 06:34:16AM -0700, Matthew Wilcox wrote:
> >On Thu, Mar 12, 2020 at 09:08:22PM +0800, Baoquan He wrote:
> >> This change makes populate_section_memmap()/depopulate_section_memmap()
> >> much simpler.
> >> 
> >> Suggested-by: Michal Hocko <mhocko@...nel.org>
> >> Signed-off-by: Baoquan He <bhe@...hat.com>
> >> ---
> >> v1->v2:
> >>   The old version only used __get_free_pages() to replace alloc_pages()
> >>   in populate_section_memmap().
> >>   http://lkml.kernel.org/r/20200307084229.28251-8-bhe@redhat.com
> >> 
> >>  mm/sparse.c | 27 +++------------------------
> >>  1 file changed, 3 insertions(+), 24 deletions(-)
> >> 
> >> diff --git a/mm/sparse.c b/mm/sparse.c
> >> index bf6c00a28045..362018e82e22 100644
> >> --- a/mm/sparse.c
> >> +++ b/mm/sparse.c
> >> @@ -734,35 +734,14 @@ static void free_map_bootmem(struct page *memmap)
> >>  struct page * __meminit populate_section_memmap(unsigned long pfn,
> >>  		unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
> >>  {
> >> -	struct page *page, *ret;
> >> -	unsigned long memmap_size = sizeof(struct page) * PAGES_PER_SECTION;
> >> -
> >> -	page = alloc_pages(GFP_KERNEL|__GFP_NOWARN, get_order(memmap_size));
> >> -	if (page)
> >> -		goto got_map_page;
> >> -
> >> -	ret = vmalloc(memmap_size);
> >> -	if (ret)
> >> -		goto got_map_ptr;
> >> -
> >> -	return NULL;
> >> -got_map_page:
> >> -	ret = (struct page *)pfn_to_kaddr(page_to_pfn(page));
> >> -got_map_ptr:
> >> -
> >> -	return ret;
> >> +	return kvmalloc_node(sizeof(struct page) * PAGES_PER_SECTION,
> >> +			     GFP_KERNEL|__GFP_NOWARN, nid);
> >
> >Use of NOWARN here is inappropriate, because there's no fallback.
> 
> Hmm... this replacement is a little tricky.
> 
> When you look into kvmalloc_node(), it only falls back to vmalloc() when the
> size is bigger than PAGE_SIZE (see the sketch below). This means the change
> here may not be equivalent to the old code if memmap_size is less than
> PAGE_SIZE.
> 
> For example, if:
>   PAGE_SIZE = 64K 
>   SECTION_SIZE = 128M
> 
> would lead to memmap_size = 2K, which is less than PAGE_SIZE.
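
For reference, the fallback logic in question inside kvmalloc_node()
(mm/util.c in kernels of this era; lightly paraphrased here, not quoted
verbatim) looks roughly like:

	void *kvmalloc_node(size_t size, gfp_t flags, int node)
	{
		gfp_t kmalloc_flags = flags;
		void *ret;

		/* vmalloc uses GFP_KERNEL internally, so incompatible
		 * flag combinations go straight to kmalloc */
		if ((flags & GFP_KERNEL) != GFP_KERNEL)
			return kmalloc_node(size, flags, node);

		/* try the cheaper physically contiguous allocation first,
		 * but don't warn or retry hard when a fallback exists */
		if (size > PAGE_SIZE) {
			kmalloc_flags |= __GFP_NOWARN;
			if (!(kmalloc_flags & __GFP_RETRY_MAYFAIL))
				kmalloc_flags |= __GFP_NORETRY;
		}

		ret = kmalloc_node(size, kmalloc_flags, node);

		/* the vmalloc fallback is only attempted for sizes larger
		 * than a page -- the case being discussed here */
		if (ret || size <= PAGE_SIZE)
			return ret;

		return __vmalloc_node_flags_caller(size, node, flags,
				__builtin_return_address(0));
	}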

Yes, I thought about that.  I decided it wasn't a problem, as long as
each struct page remains aligned, and we now have a guarantee that
allocations of 512 bytes and above are naturally aligned.  With a 64-byte
struct page, as long as we're allocating the memmap for at least 8 pages
(8 * 64 = 512 bytes), we know it'll be naturally aligned.

Your calculation doesn't take into account the size of struct page.
128M / 64K is indeed 2K pages, but you forgot to multiply by the 64-byte
struct page, which takes us to a memmap_size of 128KB.
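
To make the numbers concrete, here is a quick userspace sketch of the
arithmetic (the 64-byte struct page and the 64K-page/128M-section
configuration are the assumed example values from above):

	#include <assert.h>
	#include <stdio.h>

	int main(void)
	{
		unsigned long page_size = 64UL << 10;       /* 64K, example */
		unsigned long section_size = 128UL << 20;   /* 128M, example */
		unsigned long sizeof_struct_page = 64;      /* assumed 64-bit layout */

		unsigned long pages_per_section = section_size / page_size;  /* 2048 */
		unsigned long memmap_size = pages_per_section * sizeof_struct_page;

		/* prints 128K, which is larger than the 64K PAGE_SIZE, so the
		 * vmalloc fallback in kvmalloc_node() still applies here */
		printf("memmap_size = %luK\n", memmap_size >> 10);

		/* a power-of-two size of at least 512 bytes, so a kmalloc'ed
		 * memmap is naturally aligned under the guarantee above */
		assert(memmap_size >= 512);
		assert((memmap_size & (memmap_size - 1)) == 0);
		return 0;
	}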
