Date:   Wed, 19 May 2021 16:39:00 +0200
From:   Uladzislau Rezki <urezki@...il.com>
To:     Christoph Hellwig <hch@...radead.org>, Mel Gorman <mgorman@...e.de>
Cc:     "Uladzislau Rezki (Sony)" <urezki@...il.com>,
        Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
        LKML <linux-kernel@...r.kernel.org>,
        Mel Gorman <mgorman@...e.de>,
        Matthew Wilcox <willy@...radead.org>,
        Nicholas Piggin <npiggin@...il.com>,
        Hillf Danton <hdanton@...a.com>,
        Michal Hocko <mhocko@...e.com>,
        Oleksiy Avramchenko <oleksiy.avramchenko@...ymobile.com>,
        Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [PATCH 2/3] mm/vmalloc: Switch to bulk allocator in
 __vmalloc_area_node()

On Wed, May 19, 2021 at 02:44:08PM +0100, Christoph Hellwig wrote:
> > +	if (!page_order) {
> > +		area->nr_pages = alloc_pages_bulk_array_node(
> > +			gfp_mask, node, nr_small_pages, area->pages);
> > +	} else {
> > +		/*
> > +		 * Careful, we allocate and map page_order pages, but tracking is done
> > +		 * per PAGE_SIZE page so as to keep the vm_struct APIs independent of
> 
> Comments over 80 columns are completely unreadable, so please avoid them.
> 
That I can fix in a separate patch.

> > +		 * the physical/mapped size.
> > +		 */
> > +		while (area->nr_pages < nr_small_pages) {
> > +			struct page *page;
> > +			int i;
> > +
> > +			/* Compound pages required for remap_vmalloc_page */
> > +			page = alloc_pages_node(node, gfp_mask | __GFP_COMP, page_order);
> > +			if (unlikely(!page))
> > +				break;
> >  
> > +			for (i = 0; i < (1U << page_order); i++)
> > +				area->pages[area->nr_pages + i] = page + i;
> >  
> > +			if (gfpflags_allow_blocking(gfp_mask))
> > +				cond_resched();
> > +
> > +			area->nr_pages += 1U << page_order;
> > +		}
> 
> In fact, splitting this whole high-order allocation logic into a little
> helper would massively benefit the function by ordering it more logically
> and reducing a level of indentation.
> 
I can put it into a separate function. Actually, I was thinking about it.
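
For example, the loop could move into a small helper along these lines
(only a sketch, the name and signature below are made up and not from
the patch):

static unsigned long
vm_area_alloc_order_pages(gfp_t gfp_mask, int node, unsigned int page_order,
                          unsigned long nr_small_pages, struct page **pages)
{
        unsigned long nr_allocated = 0;

        while (nr_allocated < nr_small_pages) {
                struct page *page;
                int i;

                /* Compound pages are required for remap_vmalloc_page(). */
                page = alloc_pages_node(node, gfp_mask | __GFP_COMP, page_order);
                if (unlikely(!page))
                        break;

                /* Track every small page backing the high-order allocation. */
                for (i = 0; i < (1U << page_order); i++)
                        pages[nr_allocated + i] = page + i;

                if (gfpflags_allow_blocking(gfp_mask))
                        cond_resched();

                nr_allocated += 1U << page_order;
        }

        return nr_allocated;
}

__vmalloc_area_node() would then only call it for the page_order > 0 case
and check the returned count.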

> > +	/*
> > +	 * If not enough pages were obtained to accomplish an
> > +	 * allocation request, free them via __vfree() if any.
> > +	 */
> > +	if (area->nr_pages != nr_small_pages) {
> > +		warn_alloc(gfp_mask, NULL,
> > +			"vmalloc size %lu allocation failure: "
> > +			"page order %u allocation failed",
> > +			area->nr_pages * PAGE_SIZE, page_order);
> > +		goto fail;
> > +	}
> 
> From reading __alloc_pages_bulk, not allocating all pages is something
> that can happen fairly easily.  Shouldn't we try to allocate the missing
> pages manually and/or retry here?
> 
It is a good point. The bulk allocator, as I see it, only tries to take
pages from the pcp-list and falls back to the single-page allocator once
that fails, so the array may not be fully populated.

In that case it probably makes sense to populate the remaining entries
manually, using the single-page allocator.
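
Roughly something like this (just an untested sketch):

        area->nr_pages = alloc_pages_bulk_array_node(gfp_mask, node,
                                        nr_small_pages, area->pages);

        /* Fill whatever the bulk allocator could not provide, one page at a time. */
        while (area->nr_pages < nr_small_pages) {
                struct page *page = alloc_pages_node(node, gfp_mask, 0);

                if (unlikely(!page))
                        break;

                area->pages[area->nr_pages++] = page;

                if (gfpflags_allow_blocking(gfp_mask))
                        cond_resched();
        }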

Mel, could you please also comment on it?

> > +
> > +	if (vmap_pages_range(addr, addr + size, prot, area->pages, page_shift) < 0) {
> 
> Another pointlessly long line.
Yep. I will fix it in a separate patch. Actually checkpatch.pl also
complains about splitting the text like below:

    warn_alloc(gfp_mask, NULL,
        "vmalloc size %lu allocation failure: "
        "page order %u allocation failed",
        area->nr_pages * PAGE_SIZE, page_order);

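One option (just a sketch, not what the patch currently does) would be to
keep the user-visible format string on a single line and let that one line
exceed 80 columns:

    warn_alloc(gfp_mask, NULL,
        "vmalloc size %lu allocation failure: page order %u allocation failed",
        area->nr_pages * PAGE_SIZE, page_order);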

Thanks for the comments!

--
Vlad Rezki
