Message-ID: <alpine.DEB.2.00.0903261032070.17665@gandalf.stny.rr.com>
Date:	Thu, 26 Mar 2009 10:40:47 -0400 (EDT)
From:	Steven Rostedt <rostedt@...dmis.org>
To:	LKML <linux-kernel@...r.kernel.org>
cc:	Ingo Molnar <mingo@...e.hu>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Maneesh Soni <maneesh@...ibm.com>,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: [PATCH][GIT PULL] tracing/wakeup: move access to wakeup_cpu into
 spinlock


Ingo,

I believe this is the fix for the oops that Maneesh saw. What he
describes doing sounds like exactly what would trigger the race:
reading the trace output resets wakeup_task to NULL and wakeup_cpu to
-1, and he hit the bug just by reading the trace in a while loop.
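
To make the window concrete, the interleaving looks roughly like this
(my reconstruction from the code, not a captured trace):

  CPU 0 (probe_wakeup,              CPU 1 (sched_switch probe,
         under wakeup_lock)                buggy code, no lock held)
  ------------------------------    ------------------------------------
  wakeup_cpu  = task_cpu(p);
  wakeup_task = p;
                                    if (next != wakeup_task)
                                            /* passes: sees new task */
                                    data = wakeup_trace->data[wakeup_cpu];
                                            /* may still see -1 */

Nothing orders CPU 1's two loads against CPU 0's two stores, so CPU 1
can observe the new wakeup_task together with a stale wakeup_cpu of -1.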

Please pull the latest tip/tracing/ftrace-1 tree, which can be found at:

  git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-2.6-trace.git
tip/tracing/ftrace-1


Steven Rostedt (1):
      tracing/wakeup: move access to wakeup_cpu into spinlock

----
 kernel/trace/trace_sched_wakeup.c |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)
---------------------------
commit 8ac070ca952d14c57e3d772e203fd117161cc596
Author: Steven Rostedt <srostedt@...hat.com>
Date:   Thu Mar 26 10:25:24 2009 -0400

    tracing/wakeup: move access to wakeup_cpu into spinlock
    
    Impact: fix for race condition
    
    The code had the following outside the lock:
    
            if (next != wakeup_task)
                    return;
    
            pc = preempt_count();
    
            /* The task we are waiting for is waking up */
            data = wakeup_trace->data[wakeup_cpu];
    
    On initialization, wakeup_task is NULL and wakeup_cpu is -1. This code
    is not under a lock. If wakeup_task is set on another CPU as that
    task is waking up, we can see wakeup_task before we see wakeup_cpu
    being set. If we read wakeup_cpu while it is still -1, we end up
    with a bad data pointer.
    
    This patch moves the read of wakeup_cpu inside the spinlock that
    protects the writes to wakeup_cpu and wakeup_task.
    
    Reported-by: Maneesh Soni <maneesh@...ibm.com>
    Signed-off-by: Steven Rostedt <srostedt@...hat.com>

diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c
index 3c5ad6b..9e4ce4c 100644
--- a/kernel/trace/trace_sched_wakeup.c
+++ b/kernel/trace/trace_sched_wakeup.c
@@ -138,9 +138,6 @@ probe_wakeup_sched_switch(struct rq *rq, struct task_struct *prev,
 
 	pc = preempt_count();
 
-	/* The task we are waiting for is waking up */
-	data = wakeup_trace->data[wakeup_cpu];
-
 	/* disable local data, not wakeup_cpu data */
 	cpu = raw_smp_processor_id();
 	disabled = atomic_inc_return(&wakeup_trace->data[cpu]->disabled);
@@ -154,6 +151,9 @@ probe_wakeup_sched_switch(struct rq *rq, struct task_struct *prev,
 	if (unlikely(!tracer_enabled || next != wakeup_task))
 		goto out_unlock;
 
+	/* The task we are waiting for is waking up */
+	data = wakeup_trace->data[wakeup_cpu];
+
 	trace_function(wakeup_trace, CALLER_ADDR1, CALLER_ADDR2, flags, pc);
 	tracing_sched_switch_trace(wakeup_trace, prev, next, flags, pc);
 


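For anyone who wants to play with the locking pattern outside the
kernel, below is a minimal userspace sketch of the same bug and fix
using pthreads. It is illustration only, not kernel code: all the names
(start_wakeup, reset_wakeup, consume_buggy, consume_fixed, per_cpu_data)
are hypothetical stand-ins I chose for the wakeup probe, the trace-read
reset, and the sched_switch probe before and after this patch.

#include <pthread.h>
#include <stdio.h>

#define NR_CPUS 4

static pthread_mutex_t wakeup_lock = PTHREAD_MUTEX_INITIALIZER;
static void *wakeup_task;               /* NULL: not tracing a task */
static int wakeup_cpu = -1;             /* -1:   not tracing a task */
static int per_cpu_data[NR_CPUS];       /* stands in for wakeup_trace->data[] */

/* Writer: begin tracing a task; both fields change under the lock. */
static void start_wakeup(void *task, int cpu)
{
        pthread_mutex_lock(&wakeup_lock);
        wakeup_cpu = cpu;
        wakeup_task = task;
        pthread_mutex_unlock(&wakeup_lock);
}

/* Writer: reset, as reading the trace output does. */
static void reset_wakeup(void)
{
        pthread_mutex_lock(&wakeup_lock);
        wakeup_task = NULL;
        wakeup_cpu = -1;
        pthread_mutex_unlock(&wakeup_lock);
}

/* Before the patch: both reads happen outside the lock, so the pair
 * (wakeup_task, wakeup_cpu) can be observed half-updated and the
 * array may be indexed with -1. */
static int consume_buggy(void *next)
{
        if (next != wakeup_task)
                return 0;
        return per_cpu_data[wakeup_cpu];        /* wakeup_cpu may be -1 here */
}

/* After the patch: re-check wakeup_task and read wakeup_cpu under the
 * same lock the writers hold, so we see either the old pair or the
 * new pair, never a mix. */
static int consume_fixed(void *next)
{
        int val = 0;

        pthread_mutex_lock(&wakeup_lock);
        if (next == wakeup_task)
                val = per_cpu_data[wakeup_cpu];
        pthread_mutex_unlock(&wakeup_lock);
        return val;
}

int main(void)
{
        int task;       /* any address serves as a fake task pointer */

        start_wakeup(&task, 2);
        printf("fixed path:  %d\n", consume_fixed(&task));
        printf("buggy path:  %d\n", consume_buggy(&task)); /* safe only
                                                              single-threaded */
        reset_wakeup();
        printf("after reset: %d\n", consume_fixed(&task));
        return 0;
}

The design point is the same as in the patch: the wakeup_task/wakeup_cpu
pair is a multi-word invariant, so it is only meaningful to a reader
that takes the same lock the writers use.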
