Message-ID: <Pine.LNX.4.64.0711201240160.26745@schroedinger.engr.sgi.com>
Date:	Tue, 20 Nov 2007 12:44:56 -0800 (PST)
From:	Christoph Lameter <clameter@....com>
To:	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
cc:	ak@...e.de, akpm@...ux-foundation.org, travis@....com,
	linux-kernel@...r.kernel.org
Subject: Re: [rfc 04/45] cpu alloc: Use in SLUB

On Tue, 20 Nov 2007, Mathieu Desnoyers wrote:

> >  {
> > -#ifdef CONFIG_SMP
> >  	on_each_cpu(flush_cpu_slab, s, 1, 1);
> > -#else
> > -	unsigned long flags;
> > -
> > -	local_irq_save(flags);
> > -	flush_cpu_slab(s);
> > -	local_irq_restore(flags);
> > -#endif
> >  }
> >  
> 
> Normally:
> 
> You can't use on_each_cpu if interrupts are already disabled. Therefore,
> the implementation using "local_irq_disable/enable" in smp.h for the UP
> case is semantically correct and there is no need to use a save/restore.
> So just using on_each_cpu should be enough here.

on_each_cpu() is only used in the SMP case.
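
(For context, the !SMP fallback of on_each_cpu() in include/linux/smp.h
that is being discussed looked roughly like the sketch below; this is
from memory, not a verbatim quote from the tree:

	#define on_each_cpu(func, info, retry, wait)	\
		({					\
			local_irq_disable();		\
			func(info);			\
			local_irq_enable();		\
			0;				\
		})

On UP the local call is therefore already made with interrupts disabled,
so the caller does not need its own save/restore.)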

> I also wonder about flush_cpu_slab execution on _other_ cpus. I am not
> convinced interrupts are disabled when it executes... Am I missing
> something?

flush_cpu_slab() only flushes the local cpu slab. keventd threads are pinned 
to each processor, so it should be safe.
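
For reference, the flusher in mm/slub.c of that era was roughly the
following (a sketch, not a verbatim quote). It only acts on whichever
cpu it happens to run on, which is why invoking it on every cpu via
on_each_cpu() covers all cpu slabs:

	static void flush_cpu_slab(void *d)
	{
		struct kmem_cache *s = d;

		/* Flush only the slab of the cpu this runs on. */
		__flush_cpu_slab(s, smp_processor_id());
	}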

> > -#ifdef CONFIG_FAST_CMPXCHG_LOCAL
> > -	c = get_cpu_slab(s, get_cpu());
> > +#ifdef CONFIG_FAST_CPU_OPS
> 
> I wonder: are there architectures that would provide a fast
> cmpxchg_local but not fast cpu ops?

If you have a fast cmpxchg_local then you can use that to implement fast 
cpu ops.
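
A minimal sketch of that direction, for a statically declared per-cpu
variable (the macro name is made up for illustration and is not one of
the cpu_alloc macros):

	/*
	 * Illustration only: a per-cpu compare-and-exchange layered on
	 * cmpxchg_local().  It only has to be atomic against this cpu's
	 * own interrupts and preemption, never against other cpus,
	 * which is what keeps it cheap.
	 */
	#define MY_CPU_CMPXCHG(var, old, new) \
		cmpxchg_local(&__get_cpu_var(var), (old), (new))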

> > +	} while (CPU_CMPXCHG(c->freelist, object,
> > +			object[__CPU_READ(c->offset)]) != object);
> 
> Hrm. What happens here if we call __slab_alloc, get a valid object, then
> have a CPU_CMPXCHG that fails and restart the loop: is this case taken
> care of, or do we end up with an unreferenced object? Maybe there is
> some logic in the freelist that takes care of it?

If we fail then we have done nothing: the freelist was not modified, so we
simply retry with a fresh snapshot.
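
To illustrate why a failed local cmpxchg is harmless, here is a
simplified, hypothetical freelist pop in the same style (not the SLUB
code; my_freelist and the next-pointer-at-offset-0 layout are assumed
purely for the example):

	static DEFINE_PER_CPU(void *, my_freelist);

	static void *my_pop(void)
	{
		void *head, *next;

		do {
			head = __get_cpu_var(my_freelist);
			if (!head)
				return NULL;
			/* next pointer sits at the start of the object */
			next = *(void **)head;
			/*
			 * A failed cmpxchg_local() leaves my_freelist
			 * untouched: nothing was dequeued, so retrying
			 * with a fresh snapshot cannot leak an object.
			 */
		} while (cmpxchg_local(&__get_cpu_var(my_freelist),
					head, next) != head);

		return head;
	}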

> Also, we have to be aware that we can now change CPU between the
> __CPU_READ and the CPU_CMPXCHG. (Also: should it be a __CPU_CMPXCHG?)
> 
> But since "object" contains information specific to the local CPU, the
> cmpxchg should fail if we are migrated and everything should be ok.

Right.
 
> Hrm, actually, the 
> 
> c = s->cpu_slab;
> 
> should probably be after the object = __CPU_READ(c->freelist); ?

No. c is invariant with respect to cpus.

> The cpu_read acts as a safeguard checking that we do not change CPU
> between the read and the cmpxchg. If we are preempted between the "c"
> read and the cpu_read, we could do a !cpu_node_match(c, node) check that
> would apply to the wrong cpu.

c does not point to a specific cpu. It can only be used in CPU_xx ops to
address the current cpu.
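
Rough sketch of that addressing scheme (macro names and the offset
arithmetic are illustrative only, not taken from the patch): the same
base address is handed to every cpu, and the per-cpu offset of whichever
cpu executes the access is added at dereference time, so the pointer by
itself never identifies a cpu.

	#define MY_CPU_PTR(ptr)						\
		((__typeof__(ptr))((unsigned long)(ptr) +		\
				   per_cpu_offset(smp_processor_id())))

	/* Read a field of the current cpu's instance behind base "c". */
	#define MY_CPU_READ(field)	(*MY_CPU_PTR(&(field)))

(This glosses over preemption; it only shows why the base pointer is
cpu-invariant.)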

> > @@ -1800,19 +1792,19 @@ static void __always_inline slab_free(st
> >  		 * then any change of cpu_slab will cause the cmpxchg to fail
> >  		 * since the freelist pointers are unique per slab.
> >  		 */
> > -		if (unlikely(page != c->page || c->node < 0)) {
> > -			__slab_free(s, page, x, addr, c->offset);
> > +		if (unlikely(page != __CPU_READ(c->page) ||
> > +					__CPU_READ(c->node) < 0)) {
> > +			__slab_free(s, page, x, addr, __CPU_READ(c->offset));
> 
> And the same question as above: what happens if we fail after executing
> __slab_free? Is it valid to do it twice?

__slab_free is always successful and will never cause a repeat of the 
loop.
