Message-ID: <20150616111620.GC18673@twins.programming.kicks-ass.net>
Date: Tue, 16 Jun 2015 13:16:20 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Oleg Nesterov <oleg@...hat.com>
Cc: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>, tj@...nel.org,
mingo@...hat.com, linux-kernel@...r.kernel.org, der.herr@...r.at,
dave@...olabs.net, torvalds@...ux-foundation.org,
josh@...htriplett.org
Subject: Re: ring_buffer_attach && cond_synchronize_rcu (Was: percpu-rwsem:
Optimize readers and reduce global impact)
I've turned that into the patch below.

Are you OK with the Changelog edits and the added SoB?
---
Subject: perf: Fix ring_buffer_attach() RCU sync, again.
From: Oleg Nesterov <oleg@...hat.com>
Date: Sat, 30 May 2015 22:04:25 +0200
While looking for other users of get_state/cond_sync, I found
ring_buffer_attach(), and it looks obviously buggy.

Don't we need to ensure that we have a "synchronize" _between_
list_del() and list_add()?

IOW, suppose ring_buffer_attach() is preempted right after
get_state_synchronize_rcu() and the grace period completes before
spin_lock(). In that case cond_synchronize_rcu() does nothing, and we
reuse ->rb_entry without waiting for a grace period in between.
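
To make the window concrete, a rough sketch of the old ordering
(illustrative only, not the exact source; "cookie" stands in for
event->rcu_batches):

	cookie = get_state_synchronize_rcu();	/* sample current GP state */

	/* <-- preempted here; a full grace period completes */

	spin_lock_irqsave(&old_rb->event_lock, flags);
	list_del_rcu(&event->rb_entry);
	spin_unlock_irqrestore(&old_rb->event_lock, flags);

	cond_synchronize_rcu(cookie);	/* GP already over: returns immediately */

	spin_lock_irqsave(&rb->event_lock, flags);
	list_add_rcu(&event->rb_entry, &rb->event_list);	/* no GP after list_del() */
	spin_unlock_irqrestore(&rb->event_lock, flags);
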
The patch also moves the ->rcu_pending check under "if (rb)", which
makes it more readable imo.
Cc: tj@...nel.org
Cc: mingo@...hat.com
Cc: der.herr@...r.at
Cc: dave@...olabs.net
Cc: torvalds@...ux-foundation.org
Cc: josh@...htriplett.org
Cc: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: Alexander Shishkin <alexander.shishkin@...ux.intel.com>
Fixes: b69cf53640da ("perf: Fix a race between ring_buffer_detach() and ring_buffer_attach()")
Signed-off-by: Oleg Nesterov <oleg@...hat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Link: http://lkml.kernel.org/r/20150530200425.GA15748@redhat.com
---
kernel/events/core.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -4307,20 +4307,20 @@ static void ring_buffer_attach(struct pe
 		WARN_ON_ONCE(event->rcu_pending);
 
 		old_rb = event->rb;
-		event->rcu_batches = get_state_synchronize_rcu();
-		event->rcu_pending = 1;
-
 		spin_lock_irqsave(&old_rb->event_lock, flags);
 		list_del_rcu(&event->rb_entry);
 		spin_unlock_irqrestore(&old_rb->event_lock, flags);
-	}
 
-	if (event->rcu_pending && rb) {
-		cond_synchronize_rcu(event->rcu_batches);
-		event->rcu_pending = 0;
+		event->rcu_batches = get_state_synchronize_rcu();
+		event->rcu_pending = 1;
 	}
 
 	if (rb) {
+		if (event->rcu_pending) {
+			cond_synchronize_rcu(event->rcu_batches);
+			event->rcu_pending = 0;
+		}
+
 		spin_lock_irqsave(&rb->event_lock, flags);
 		list_add_rcu(&event->rb_entry, &rb->event_list);
 		spin_unlock_irqrestore(&rb->event_lock, flags);
--