Message-ID: <20141008165110.GA14547@worktop.programming.kicks-ass.net>
Date: Wed, 8 Oct 2014 18:51:10 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: Fengguang Wu <fengguang.wu@...el.com>,
Jet Chen <jet.chen@...el.com>, Su Tao <tao.su@...el.com>,
Yuanhan Liu <yuanhan.liu@...el.com>, LKP <lkp@...org>,
linux-kernel@...r.kernel.org
Subject: [PATCH] trace: Robustify wait loop
On Wed, Oct 08, 2014 at 06:36:57PM +0200, Peter Zijlstra wrote:
>
> Right, I'll send the patch.
---
Subject: trace: Robustify wait loop
From: Peter Zijlstra (Intel) <peterz@...radead.org>
Date: Wed Oct 8 18:44:26 CEST 2014
The pending nested sleep debugging triggered on the potentially stale
TASK_INTERRUPTIBLE state in this code.
While there, fix the loop such that we won't revert to a while(1)
yield() 'spin' loop if we ever get a spurious wakeup.
And fix the actual issue by properly terminating the 'wait' loop by
setting TASK_RUNNING.
Reported-by: Fengguang Wu <fengguang.wu@...el.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
kernel/trace/trace_events.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -2513,8 +2513,11 @@ static __init int event_test_thread(void
 	kfree(test_malloc);
 
 	set_current_state(TASK_INTERRUPTIBLE);
-	while (!kthread_should_stop())
+	while (!kthread_should_stop()) {
 		schedule();
+		set_current_state(TASK_INTERRUPTIBLE);
+	}
+	__set_current_state(TASK_RUNNING);
 
 	return 0;
 }
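
For reference, the complete wait-loop pattern the hunk above converges on
looks roughly like the sketch below. This is illustrative only and not part
of the patch; the function name example_wait_thread is made up, while
set_current_state(), __set_current_state(), schedule() and
kthread_should_stop() are the regular kernel APIs.

#include <linux/kthread.h>
#include <linux/sched.h>

/* Illustrative sketch only -- not part of the patch. */
static int example_wait_thread(void *unused)
{
	/*
	 * Arm the sleep state before testing the condition so a wakeup
	 * that arrives in between is not lost.
	 */
	set_current_state(TASK_INTERRUPTIBLE);

	while (!kthread_should_stop()) {
		schedule();
		/*
		 * A (possibly spurious) wakeup leaves us TASK_RUNNING;
		 * re-arm TASK_INTERRUPTIBLE before re-checking the
		 * condition, otherwise the loop degrades into a
		 * yield()-like spin.
		 */
		set_current_state(TASK_INTERRUPTIBLE);
	}

	/*
	 * Terminate the wait properly: never return with a stale
	 * sleeping state, which is what the nested sleep debugging
	 * complained about.
	 */
	__set_current_state(TASK_RUNNING);

	return 0;
}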