Message-ID: <20091012135600.GB15605@Krystal>
Date:	Mon, 12 Oct 2009 09:56:00 -0400
From:	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
To:	Christoph Lameter <cl@...ux-foundation.org>
Cc:	Peter Zijlstra <peterz@...radead.org>, akpm@...ux-foundation.org,
	linux-kernel@...r.kernel.org,
	Pekka Enberg <penberg@...helsinki.fi>,
	Tejun Heo <tj@...nel.org>, Mel Gorman <mel@....ul.ie>,
	mingo@...e.hu
Subject: Re: [this_cpu_xx V5 19/19] SLUB: Experimental new fastpath w/o
	interrupt disable

* Christoph Lameter (cl@...ux-foundation.org) wrote:
> On Thu, 8 Oct 2009, Mathieu Desnoyers wrote:
> 
> > > Index: linux-2.6/mm/slub.c
> > > ===================================================================
> > > --- linux-2.6.orig/mm/slub.c	2009-10-08 11:35:59.000000000 -0500
> > > +++ linux-2.6/mm/slub.c	2009-10-08 14:03:22.000000000 -0500
> > > @@ -1606,7 +1606,14 @@ static void *__slab_alloc(struct kmem_ca
> > >  			  unsigned long addr)
> > >  {
> > >  	void **object;
> > > -	struct page *page = __this_cpu_read(s->cpu_slab->page);
> > > +	struct page *page;
> > > +	unsigned long flags;
> > > +	int hotpath;
> > > +
> > > +	local_irq_save(flags);
> >
> > (Recommend adding)
> >
> > 	preempt_enable_no_resched();
> >
> >
> > The preempt_enable() right in the middle of a big function adds an
> > unnecessary barrier(), which will prevent gcc from optimizing across
> > it. This might hurt performance.
> 
> In the middle of the function we have determined that we have to go to
> the page allocator to get more memory. There is not much the compiler
> can do to speed that up.

Indeed, the compiler cannot do much about it. However, the programmer
(you) can move the preempt_enable_no_resched() part of the
preempt_enable() to the beginning of the function.
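
Something like this, as a rough, untested sketch only meant to show
the ordering I have in mind (the function body is paraphrased, not
the actual slub.c code):

	static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags,
				  int node, unsigned long addr)
	{
		void **object;
		struct page *page;
		unsigned long flags;

		local_irq_save(flags);
		/*
		 * The fastpath disabled preemption. With irqs now off we
		 * cannot be rescheduled anyway, so drop the preempt count
		 * right here instead of issuing a full preempt_enable()
		 * (and its barrier()) in the middle of the function.
		 */
		preempt_enable_no_resched();
		...
	}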

> 
> > I still recommend the preempt_enable_no_resched() at the beginning of
> > __slab_alloc(), and simply putting a preempt_check_resched() here
> > (which saves us the odd compiler barrier in the middle of the
> > function).
> 
> Then preemption would be unnecessarily disabled for the page allocator
> call?

No?
preempt_enable_no_resched() does enable preemption; it merely skips
the reschedule check.
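
For reference, this is roughly what the two macros look like (quoting
include/linux/preempt.h from memory, so take it as a sketch):

	#define preempt_enable_no_resched() \
	do { \
		barrier(); \
		dec_preempt_count(); \
	} while (0)

	#define preempt_enable() \
	do { \
		preempt_enable_no_resched(); \
		barrier(); \
		preempt_check_resched(); \
	} while (0)

Both drop the preempt count; only the trailing barrier() and the
reschedule check differ.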

> 
> > >  	if (gfpflags & __GFP_WAIT)
> > >  		local_irq_enable();
> > >
> > > +	preempt_enable();
> >
> > We could replace the above by:
> >
> > if (gfpflags & __GFP_WAIT) {
> > 	local_irq_enable();
> > 	preempt_check_resched();
> > }
> 
> Which would leave preempt off for the page allocator.

Not if you do preempt_enable_no_resched() at the beginning of the
function, after disabling interrupts.
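
Putting the two pieces together, the flow I am proposing would look
like this (sketch only):

	local_irq_save(flags);
	preempt_enable_no_resched();	/* preempt count dropped early */
	...
	if (gfpflags & __GFP_WAIT) {
		local_irq_enable();
		/* first point where a pending reschedule can run */
		preempt_check_resched();
	}

So preemption is formally enabled before the page allocator is
called; it simply cannot take effect while interrupts are still
disabled.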

> 
> > > +	irqsafe_cpu_inc(s->cpu_slab->active);
> > > +	barrier();
> > >  	object = __this_cpu_read(s->cpu_slab->freelist);
> > > -	if (unlikely(!object || !node_match(s, node)))
> > > +	if (unlikely(!object || !node_match(s, node) ||
> > > +			__this_cpu_read(s->cpu_slab->active)))
> >
> > Missing a barrier() here?
> 
> The modifications of s->cpu_slab->freelist in __slab_alloc() are only
> done after interrupts have been disabled and the slab has been locked.

I was concerned about a potential race between the cpu_slab->active and
cpu_slab->freelist accesses if an interrupt came in. But I understand
that as soon as you get a hint that you must take the slow path, you no
longer care about the order in which these operations are done.
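
To spell out the reordering I had in mind (illustrative only, using
the fastpath code quoted above):

	irqsafe_cpu_inc(s->cpu_slab->active);
	barrier();
	object = __this_cpu_read(s->cpu_slab->freelist);
	/*
	 * Without a barrier() here, gcc may order the "active" read
	 * in the condition below before the freelist read above.
	 */
	if (unlikely(!object || !node_match(s, node) ||
			__this_cpu_read(s->cpu_slab->active)))

But as you point out, any interleaving an interrupt could cause here
is detected by the active check, which sends us to the slow path
anyway.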

> 
> > The idea is to let gcc know that "active" inc/dec and "freelist" reads
> > must never be reordered, even when the decrement is done in the
> > slow-path branch.
> 
> Right. How could that occur with this code?
> 

__slab_alloc() calls __this_cpu_dec(s->cpu_slab->active); without any
compiler barrier. But I get that once __slab_alloc() is executing, we
don't care whether the "active" decrement gets reordered, because we
are no longer altering fastpath data.
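
If one did want strict ordering at that point, it could be something
like:

	/* keep fastpath-visible accesses ordered against the dec */
	barrier();
	__this_cpu_dec(s->cpu_slab->active);

but since the slow path only works on the locked slab with interrupts
disabled, the placement of the decrement is harmless.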

> > > +		preempt_enable();
> > >  		stat(s, FREE_FASTPATH);
> > > -	} else
> > > +	} else {
> >
> > Perhaps missing a barrier() in the else?
> 
> Not sure why that would be necessary. __slab_free() does not even touch
> s->cpu_slab->freelist, so the same reasoning as in the alloc path
> applies.

My intent was to order __this_cpu_read(s->cpu_slab->page) against
irqsafe_cpu_dec(s->cpu_slab->active), but I get that once you take the
slow path, some spilling of slow-path work over the slab "active"
critical section does not matter.

Thanks,

Mathieu


-- 
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68
