Date:	Mon, 04 Dec 2006 00:08:01 +0100
From:	Eric Dumazet <dada1@...mosbay.com>
To:	Oleg Nesterov <oleg@...sign.ru>
Cc:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Dipankar Sarma <dipankar@...ibm.com>,
	Andrew Morton <akpm@...l.org>,
	Alan Stern <stern@...land.harvard.edu>,
	linux-kernel@...r.kernel.org
Subject: Re: PATCH? rcu_do_batch: fix a pure theoretical memory ordering race

Oleg Nesterov wrote:
> On 12/03, Eric Dumazet wrote:
>> Oleg Nesterov wrote:
>>
>> Yes, but how is it related to RCU?
>> I mean, rcu_do_batch() is just a loop like many others in the kernel.
>> The loop itself is not buggy, but it can call a buggy function, you are right.
> 
> 	int start_me_again;
> 
> 	struct rcu_head rcu_head;
> 
> 	void rcu_func(struct rcu_head *rcu)
> 	{
> 		start_me_again = 1;
> 	}
> 
> 	// could be called on arbitrary CPU
> 	void check_start_me_again(void)
> 	{
> 		static DEFINE_SPINLOCK(lock);
> 
> 		spin_lock(&lock);
> 		if (start_me_again) {
> 			start_me_again = 0;
> 			call_rcu(&rcu_head, rcu_func);
> 		}
> 		spin_unlock(&lock);
> 	}
> 
> I'd say this code is not buggy.

Are you sure? Can you prove it? :)

I do think your rcu_func() is missing a sync primitive *after* start_me_again = 1;
you seem to be relying on some undocumented side effect.
Adding an smp_rmb() before calling rcu_func() won't help.
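
Something like this in the callback itself would make the pairing explicit
(untested sketch; it assumes the lock is moved out to file scope so that both
functions can see it):

	static DEFINE_SPINLOCK(lock);	/* shared with check_start_me_again() */

	void rcu_func(struct rcu_head *rcu)
	{
		spin_lock(&lock);	/* same lock the reader takes, so
					 * the store cannot race with the
					 * read in check_start_me_again() */
		start_me_again = 1;
		spin_unlock(&lock);
	}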

> 
> In case it was not clear: I do not claim we need this patch (I don't know).
> And yes, I very much doubt we can hit this problem in practice (even if I am
> right).
> 
> What I don't agree with is that it is the callback which should take care of
> this problem.
> 
>> An smp_rmb() won't avoid all possible bugs...
> 
> For example?

An smp_rmb() won't prevent stores still pending on this CPU from being
committed to memory after another CPU has taken the object for itself. Those
late stores could then overwrite stores done by the other CPU.

So in theory you could still write a buggy callback function even with your
patch applied.
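
For example, something like this would stay buggy with your patch applied
(a made-up sketch; struct obj and its fields are hypothetical):

	struct obj {
		struct rcu_head	rcu_head;
		atomic_t	in_use;
		int		stats;
	};

	void buggy_rcu_func(struct rcu_head *rcu)
	{
		struct obj *p = container_of(rcu, struct obj, rcu_head);

		p->stats = 0;			/* can sit in this CPU's store
						 * buffer for a while...      */
		atomic_set(&p->in_use, 0);	/* ...so another CPU may grab p,
						 * write p->stats, and then see
						 * its write overwritten by the
						 * late store above.  Only an
						 * smp_wmb() between the two
						 * stores, in the callback,
						 * can prevent that.          */
	}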

Any function that can transfer an object from CPU A's scope to CPU B's scope
must take care of the memory barrier by itself. The caller *cannot* possibly
do the job, especially if it uses an indirect call. (In some cases, clever
algorithms do the reverse, i.e. put the memory barrier in the callers.)
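
The classic callee-side version looks like this (a sketch; publish_obj() and
global_ptr are made up):

	struct obj *global_ptr;

	void publish_obj(struct obj *p)
	{
		p->ready = 1;
		smp_wmb();		/* commit every initialisation of *p
					 * before other CPUs can find it    */
		global_ptr = p;		/* the handover: this store/barrier
					 * pair is what rcu_assign_pointer()
					 * does for RCU readers             */
	}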

The kernel is full of constructs like this:

for (ptr = head; ptr != NULL; ptr = next) {
	next = ptr->next;		/* load next before ptr can be freed */
	some_subsys_delete(ptr);	/* any barrier needed belongs in here */
}

And we don't need to add an smp_rmb() before the call to some_subsys_delete();
that would be a nightmare, and it would slow down modern CPUs.

When the callback function doesn't need a memory barrier, why should we force
a generic one in the caller?

AFAIK most kfree() calls don't need a barrier, since slab just queues the
pointer into the CPU's array_cache when there is an available slot.
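
Roughly, the kfree() fast path boils down to this (simplified from mm/slab.c):

	struct array_cache *ac = cpu_cache_get(cachep);

	if (likely(ac->avail < ac->limit)) {
		ac->entry[ac->avail++] = objp;	/* purely CPU-local store,
						 * no barrier needed      */
		return;
	}
	/* slow path: flush part of the array back to the shared lists */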

