Message-Id: <20181022145241.GA7488@linux.ibm.com>
Date: Mon, 22 Oct 2018 07:52:41 -0700
From: "Paul E. McKenney" <paulmck@...ux.ibm.com>
To: oleg@...hat.com, peterz@...radead.org
Cc: linux-kernel@...r.kernel.org, josh@...htriplett.org,
rostedt@...dmis.org, mathieu.desnoyers@...icios.com,
jiangshanlai@...il.com
Subject: [PATCH RFC kernel/rcu] Eliminate BUG_ON() for sync.c

The sync.c file has a number of calls to BUG_ON(), which panics the
kernel.  That is not a good strategy for devices (embedded systems,
for example) that have no way to capture console output.  This commit
therefore changes these BUG_ON() calls to WARN_ON_ONCE(), but does so
quite naively.  Improved changes welcome. ;-)
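
For anyone wondering why this matters, here is a minimal sketch of the
before/after pattern (the update_state_*() functions are made up for
illustration and are not taken from sync.c):  BUG_ON() panics the kernel
when its assertion fires, while WARN_ON_ONCE() emits a single splat and
returns the condition, so the caller can take a recovery path instead.

#include <linux/bug.h>
#include <linux/errno.h>

/* Old pattern: a violated assertion kills the machine outright. */
static int update_state_bug(int state)
{
	BUG_ON(state < 0);		/* no console capture, no clue what happened */
	return state + 1;
}

/* New pattern: complain once, then attempt a graceful recovery. */
static int update_state_warn(int state)
{
	if (WARN_ON_ONCE(state < 0))
		return -EINVAL;		/* recover instead of panicking */
	return state + 1;
}

The new conditional WARN_ON_ONCE(need_wait) in rcu_sync_enter() below
follows the same idea:  if the supposedly impossible combination ever
shows up, complain once and fall back to waiting for the grace period
rather than taking the whole system down.
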
Reported-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Paul E. McKenney <paulmck@...ux.ibm.com>
Cc: Oleg Nesterov <oleg@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>
diff --git a/kernel/rcu/sync.c b/kernel/rcu/sync.c
index 3f943efcf61c..28ac78ba0656 100644
--- a/kernel/rcu/sync.c
+++ b/kernel/rcu/sync.c
@@ -125,12 +125,12 @@ void rcu_sync_enter(struct rcu_sync *rsp)
 	rsp->gp_state = GP_PENDING;
 	spin_unlock_irq(&rsp->rss_lock);
 
-	BUG_ON(need_wait && need_sync);
-
 	if (need_sync) {
 		gp_ops[rsp->gp_type].sync();
 		rsp->gp_state = GP_PASSED;
 		wake_up_all(&rsp->gp_wait);
+		if (WARN_ON_ONCE(need_wait))
+			wait_event(rsp->gp_wait, rsp->gp_state == GP_PASSED);
 	} else if (need_wait) {
 		wait_event(rsp->gp_wait, rsp->gp_state == GP_PASSED);
 	} else {
@@ -139,7 +139,7 @@ void rcu_sync_enter(struct rcu_sync *rsp)
 		 * Nobody has yet been allowed the 'fast' path and thus we can
 		 * avoid doing any sync(). The callback will get 'dropped'.
 		 */
-		BUG_ON(rsp->gp_state != GP_PASSED);
+		WARN_ON_ONCE(rsp->gp_state != GP_PASSED);
 	}
 }
 
@@ -166,8 +166,8 @@ static void rcu_sync_func(struct rcu_head *rhp)
 	struct rcu_sync *rsp = container_of(rhp, struct rcu_sync, cb_head);
 	unsigned long flags;
 
-	BUG_ON(rsp->gp_state != GP_PASSED);
-	BUG_ON(rsp->cb_state == CB_IDLE);
+	WARN_ON_ONCE(rsp->gp_state != GP_PASSED);
+	WARN_ON_ONCE(rsp->cb_state == CB_IDLE);
 
 	spin_lock_irqsave(&rsp->rss_lock, flags);
 	if (rsp->gp_count) {
@@ -225,7 +225,7 @@ void rcu_sync_dtor(struct rcu_sync *rsp)
 {
 	int cb_state;
 
-	BUG_ON(rsp->gp_count);
+	WARN_ON_ONCE(rsp->gp_count);
 
 	spin_lock_irq(&rsp->rss_lock);
 	if (rsp->cb_state == CB_REPLAY)
@@ -235,6 +235,6 @@ void rcu_sync_dtor(struct rcu_sync *rsp)
 
 	if (cb_state != CB_IDLE) {
 		gp_ops[rsp->gp_type].wait();
-		BUG_ON(rsp->cb_state != CB_IDLE);
+		WARN_ON_ONCE(rsp->cb_state != CB_IDLE);
 	}
 }