Message-ID: <20061202212517.GA1199@oleg>
Date: Sun, 3 Dec 2006 00:25:17 +0300
From: Oleg Nesterov <oleg@...sign.ru>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Dipankar Sarma <dipankar@...ibm.com>
Cc: Andrew Morton <akpm@...l.org>,
Alan Stern <stern@...land.harvard.edu>,
Eric Dumazet <dada1@...mosbay.com>,
linux-kernel@...r.kernel.org
Subject: PATCH? rcu_do_batch: fix a purely theoretical memory ordering race

On top of rcu-add-a-prefetch-in-rcu_do_batch.patch

rcu_do_batch:

	struct rcu_head *next, *list;

	while (list) {
		next = list->next;	<------ [1]
		list->func(list);
		list = next;
	}

We can't trust *list after the list->func() call, which is why we load
list->next beforehand. However, I suspect that in theory this is not enough.
Suppose that:

	- [1] is stalled
	- list->func() marks *list as unused in some way
	- another CPU re-uses this rcu_head and dirties it
	- [1] completes and gets a wrong result

This means we need a barrier in between. mb() looks more suitable, but I
think rmb() should suffice.
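
To make the "re-uses this rcu_head" step concrete, here is a hypothetical
callback (an illustration only, not code from the tree) that frees the object
embedding its rcu_head. Once kfree() returns, another CPU can reallocate that
memory and overwrite ->next while the load at [1] is still in flight:

	#include <linux/kernel.h>	/* container_of() */
	#include <linux/rcupdate.h>	/* struct rcu_head, call_rcu() */
	#include <linux/slab.h>		/* kfree() */

	struct foo {
		struct rcu_head rcu;	/* what rcu_do_batch() walks via ->next */
		int data;
	};

	/* invoked as list->func(list) from rcu_do_batch() */
	static void foo_reclaim(struct rcu_head *head)
	{
		struct foo *fp = container_of(head, struct foo, rcu);

		/* hypothetical example: after this kfree() the memory holding
		 * the rcu_head can be reallocated and dirtied by another CPU
		 * at any moment */
		kfree(fp);
	}

	/* elsewhere: call_rcu(&fp->rcu, foo_reclaim); */

With a callback like this, the window between kfree() returning and the
stalled load at [1] completing is exactly where *list can change under us.
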
Signed-off-by: Oleg Nesterov <oleg@...sign.ru>

--- 19-rc6/kernel/rcupdate.c~rdp	2006-12-02 20:46:03.000000000 +0300
+++ 19-rc6/kernel/rcupdate.c	2006-12-02 21:04:12.000000000 +0300
@@ -236,6 +236,8 @@ static void rcu_do_batch(struct rcu_data
 	list = rdp->donelist;
 	while (list) {
 		next = list->next;
+		/* complete the load above before we call ->func() */
+		smp_rmb();
 		prefetch(next);
 		list->func(list);
 		list = next;