Date:	Tue, 13 Dec 2011 11:49:57 +0100
From:	Johannes Weiner <hannes@...xchg.org>
To:	Uwe Kleine-König 
	<u.kleine-koenig@...gutronix.de>
Cc:	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH RFC] bootmem: micro optimize freeing pages in bulks

On Tue, Dec 13, 2011 at 11:06:01AM +0100, Johannes Weiner wrote:
> On Thu, Dec 01, 2011 at 11:10:55PM +0100, Uwe Kleine-König wrote:
> > The first entry of bdata->node_bootmem_map holds the data for
> > bdata->node_min_pfn up to bdata->node_min_pfn + BITS_PER_LONG - 1. So
> > the test for freeing all pages of a single map entry can be slightly
> > relaxed.
> 
> Agreed.  The optimization is tiny - we may lose one bulk order-5/6
> free per node and do it in 32/64 order-0 frees instead (we touch each
> page anyway one way or another), but the code makes more sense with
> your change.
> 
> [ Btw, what's worse is start being unaligned, because then we won't
>   do a single batch free at all.  The single-page loop should probably
>   just advance to the next BITS_PER_LONG boundary and then retry the
>   aligned batch frees.  Oh, well... ]
> 
> > Moreover use DIV_ROUND_UP in another place instead of open coding it.
> 
> Agreed.
> 
> > Cc: Johannes Weiner <hannes@...urebad.de>
> > Cc: Andrew Morton <akpm@...ux-foundation.org>
> > Signed-off-by: Uwe Kleine-König <u.kleine-koenig@...gutronix.de>
> > ---
> > Hello,
> > 
> > I'm not sure the current code is correct (and my patch doesn't fix it):
> > 
> > If
> > 
> > 	aligned && vec == ~0UL
> > 
> > evaluates to true, but
> > 
> > 	start + BITS_PER_LONG <= end
> > 
> > does not (or "< end" respectively), the else branch still frees all
> > BITS_PER_LONG pages. Is this intended? If yes, the last check had better
> > be omitted so that the pages are freed in bulk.
> > If not, the loop in the else branch should only do something like:
> > 
> > 	while (vec && off < min(BITS_PER_LONG, end - start)) {
> > 		...
> 
> I would think this is fine because node_bootmem_map, which is what
> vec is read from, is sized in multiples of pages and zeroed word-wise.
> So even if end is not aligned, we can rely on !vec.

And putting two and two together (watch out, I'm on fire!) for the
other branch, this means that the whole

	start + BITS_PER_LONG <= end

check is actually unnecessary.  Your patch is still correct, of
course, but we might as well remove the check entirely and rely on
vec not having any bits set for the address range past end.
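
Concretely, and completely untested, I imagine the branch could then
just read like this (keeping the rest of the branch exactly as in your
hunk):

	if (aligned && vec == ~0UL) {
		int order = ilog2(BITS_PER_LONG);

		/* vec has no bits set past end, so a full-word free is safe */
		__free_pages_bootmem(pfn_to_page(start), order);
	}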

> > @@ -197,7 +197,7 @@ static unsigned long __init free_all_bootmem_core(bootmem_data_t *bdata)
> >  		idx = start - bdata->node_min_pfn;
> >  		vec = ~map[idx / BITS_PER_LONG];
> >  
> > -		if (aligned && vec == ~0UL && start + BITS_PER_LONG < end) {
> > +		if (aligned && vec == ~0UL && start + BITS_PER_LONG <= end) {
> >  			int order = ilog2(BITS_PER_LONG);
> >  
> >  			__free_pages_bootmem(pfn_to_page(start), order);
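
For reference, the loop in question walks the bootmem bitmap one word
at a time and either batch-frees a whole word's worth of pages or falls
back to freeing them one by one.  Roughly (simplified sketch from
memory, not the literal mm/bootmem.c code; count/bookkeeping elided):

	while (start < end) {
		unsigned long *map, idx, vec;

		map = bdata->node_bootmem_map;
		idx = start - bdata->node_min_pfn;
		vec = ~map[idx / BITS_PER_LONG];	/* set bit == free page */

		if (aligned && vec == ~0UL && start + BITS_PER_LONG <= end) {
			/* whole word free: one bulk order-5/6 free */
			__free_pages_bootmem(pfn_to_page(start),
					     ilog2(BITS_PER_LONG));
		} else {
			/* partial word: free the pages one at a time */
			unsigned long off = 0;

			while (vec && off < BITS_PER_LONG) {
				if (vec & 1)
					__free_pages_bootmem(pfn_to_page(start + off), 0);
				vec >>= 1;
				off++;
			}
		}
		start += BITS_PER_LONG;
	}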
