Message-ID: <20090209153305.GA6802@linux.vnet.ibm.com>
Date: Mon, 9 Feb 2009 07:33:05 -0800
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Mathieu Desnoyers <compudj@...stal.dyndns.org>
Cc: ltt-dev@...ts.casi.polymtl.ca, linux-kernel@...r.kernel.org
Subject: Re: [ltt-dev] [RFC git tree] Userspace RCU (urcu) for Linux (repost)

On Mon, Feb 09, 2009 at 02:03:17AM -0500, Mathieu Desnoyers wrote:
[ . . . ]
> I just added modified rcutorture.h and api.h from your git tree
> specifically for an urcutorture program to the repository. Some results:
>
> 8-way x86_64
> E5405 @ 2 GHz
>
> ./urcutorture 8 perf
> n_reads: 1937650000 n_updates: 3 nreaders: 8 nupdaters: 1 duration: 1
> ns/read: 4.12871 ns/update: 3.33333e+08
>
> ./urcutorture 8 uperf
> n_reads: 0 n_updates: 4413892 nreaders: 0 nupdaters: 8 duration: 1
> ns/read: nan ns/update: 1812.46
>
> n_reads: 98844204 n_updates: 10 n_mberror: 0
> rcu_stress_count: 98844171 33 0 0 0 0 0 0 0 0 0
>
> However, I've tried removing the second switch_qparity() call, and the
> rcutorture test did not detect anything wrong. I also did a variation
> which calls the "sched_yield" version of the urcu, "urcutorture-yield".

My confusion -- I was testing my old approach where the memory barriers
are in rcu_read_lock() and rcu_read_unlock(). To force the failures in
your signal-handler-memory-barrier approach, I suspect that you are
going to need a bigger hammer. In this case, one such bigger hammer
would be:
o	Just before exit from the signal handler, do a
	pthread_cond_wait() under a pthread_mutex().

o	In force_mb_all_threads(), refrain from sending a signal to self.
	Then it should be safe in force_mb_all_threads() to do a
	pthread_cond_broadcast() under the same pthread_mutex().
This should raise the probability of seeing the failure in the case
where there is a single switch_qparity().  A rough sketch of this hammer
follows.
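
For concreteness, here is one way that hammer might look in C.  This is
only a sketch of the suggestion, not code from the urcu tree: the
signal-handler name, the reader registry, the SIGURCU definition, and
the stall variables are all made-up, the handler is assumed to be
installed by the existing initialization code, the existing
memory-barrier and acknowledgment steps are only marked by comments,
and the pthread condvar calls inside the signal handler are not
async-signal-safe, which is tolerable only because this is a
torture-test hammer rather than production code.

#include <pthread.h>
#include <signal.h>

#ifndef SIGURCU
#define SIGURCU SIGUSR1			/* assumed membarrier signal */
#endif

static struct reader_registry {
	pthread_t tid;			/* hypothetical per-reader record */
} *registry;				/* stands in for urcu's reader list */
static int num_readers;

static pthread_mutex_t urcu_stall_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t urcu_stall_cond = PTHREAD_COND_INITIALIZER;
static int urcu_stall_release;		/* nonzero: stalled readers may leave */
static int urcu_nr_stalled;		/* readers currently stalled in handler */

static void sigurcu_handler(int signo)
{
	(void)signo;

	/* ... the existing memory barrier and acknowledgment go here ... */

	/* Just before exit, stall until the updater releases us. */
	pthread_mutex_lock(&urcu_stall_mutex);
	urcu_nr_stalled++;
	pthread_cond_broadcast(&urcu_stall_cond);	/* "I am stalled" */
	while (!urcu_stall_release)
		pthread_cond_wait(&urcu_stall_cond, &urcu_stall_mutex);
	if (--urcu_nr_stalled == 0)
		pthread_cond_broadcast(&urcu_stall_cond);	/* "all drained" */
	pthread_mutex_unlock(&urcu_stall_mutex);
}

static void force_mb_all_threads(void)
{
	int i, nsignaled = 0;

	/* Signal every registered reader, refraining from signaling self. */
	for (i = 0; i < num_readers; i++) {
		if (pthread_equal(registry[i].tid, pthread_self()))
			continue;
		pthread_kill(registry[i].tid, SIGURCU);
		nsignaled++;
	}

	/* ... wait for the usual per-reader acknowledgments here ... */

	pthread_mutex_lock(&urcu_stall_mutex);
	/* Wait until every signaled reader is stalled in its handler. */
	while (urcu_nr_stalled < nsignaled)
		pthread_cond_wait(&urcu_stall_cond, &urcu_stall_mutex);
	/* Release them all at once so they resume together. */
	urcu_stall_release = 1;
	pthread_cond_broadcast(&urcu_stall_cond);
	/* Let them drain so the next invocation starts from a clean state. */
	while (urcu_nr_stalled > 0)
		pthread_cond_wait(&urcu_stall_cond, &urcu_stall_mutex);
	urcu_stall_release = 0;
	pthread_mutex_unlock(&urcu_stall_mutex);
}

The urcu_nr_stalled/nsignaled bookkeeping and the drain loop are not
part of the outline above; they are only there so that back-to-back
invocations of force_mb_all_threads() cannot strand a slow reader in
the stall.
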
Thanx, Paul