Message-ID: <20090317061754.GD3314@in.ibm.com>
Date: Tue, 17 Mar 2009 11:47:54 +0530
From: Bharata B Rao <bharata@...ux.vnet.ibm.com>
To: linux-kernel@...r.kernel.org
Cc: Dhaval Giani <dhaval@...ux.vnet.ibm.com>,
Balbir Singh <balbir@...ux.vnet.ibm.com>,
Li Zefan <lizf@...fujitsu.com>,
Paul Menage <menage@...gle.com>, Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Subject: [PATCH -tip] cpuacct: Make cpuacct hierarchy walk in
cpuacct_charge() safe when rcupreempt is used.
cpuacct_charge() obtains the task's ca and walks the cpuacct hierarchy
upwards from it. This walk can race with the task's movement between
cgroups: if the task moves and its old group is freed while the walk is
still in progress, cpuacct_charge() ends up dereferencing a freed ca
pointer. This cannot happen with classic rcu or tree rcu, because
cpuacct_charge() is called with preemption disabled and a
preemption-disabled region is itself a read-side critical section for
those implementations. With rcupreempt, however, readers are tracked
explicitly and disabling preemption does not hold off grace periods, so
the race still exists. Thanks to Li Zefan for explaining this.
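To see why (an illustrative sketch, not code quoted from this patch):
under classic RCU, rcu_read_lock() boils down to preempt_disable(), so
a preemption-disabled region is already a read-side critical section,
while rcupreempt tracks readers with its own state:

/*
 * Classic RCU, simplified from include/linux/rcuclassic.h
 * (lockdep/sparse annotations omitted):
 */
#define __rcu_read_lock()	preempt_disable()
#define __rcu_read_unlock()	preempt_enable()

/*
 * rcupreempt (include/linux/rcupreempt.h) implements
 * __rcu_read_lock()/__rcu_read_unlock() as real functions that update
 * per-reader state. A region that merely disables preemption is
 * invisible to its grace-period machinery, so the old ca can be freed
 * while cpuacct_charge() is still walking it.
 */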
Fix this race by explicitly protecting ca and the hierarchy walk with
rcu_read_lock().
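With the patch applied, the charging path looks roughly like this
(reassembled from the two hunks below for readability; the guard at the
top and the cpuusage declaration are not visible in the diff context
and are assumed from the surrounding code):

static void cpuacct_charge(struct task_struct *tsk, u64 cputime)
{
	struct cpuacct *ca;
	int cpu;

	if (!cpuacct_subsys.active)	/* assumed guard above the hunk */
		return;

	cpu = task_cpu(tsk);

	/*
	 * preemption is already disabled here, but to be safe with
	 * rcupreempt, take rcu_read_lock(). This protects ca and
	 * hence the hierarchy walk.
	 */
	rcu_read_lock();
	ca = task_ca(tsk);

	do {
		/* per-CPU usage counter; declaration assumed, not
		 * shown in the hunk */
		u64 *cpuusage = percpu_ptr(ca->cpuusage, cpu);

		*cpuusage += cputime;
		ca = ca->parent;
	} while (ca);
	rcu_read_unlock();
}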
Signed-off-by: Bharata B Rao <bharata@...ux.vnet.ibm.com>
---
kernel/sched.c | 8 ++++++++
1 file changed, 8 insertions(+)
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -9891,6 +9891,13 @@ static void cpuacct_charge(struct task_s
 		return;
 
 	cpu = task_cpu(tsk);
+
+	/*
+	 * preemption is already disabled here, but to be safe with
+	 * rcupreempt, take rcu_read_lock(). This protects ca and
+	 * hence the hierarchy walk.
+	 */
+	rcu_read_lock();
 	ca = task_ca(tsk);
 
 	do {
@@ -9898,6 +9905,7 @@ static void cpuacct_charge(struct task_s
 		*cpuusage += cputime;
 		ca = ca->parent;
 	} while (ca);
+	rcu_read_unlock();
 }
 
 struct cgroup_subsys cpuacct_subsys = {
--