Message-ID: <4DCE17AB.4090002@linux.vnet.ibm.com>
Date: Sat, 14 May 2011 13:48:27 +0800
From: Cheng Xu <chengxu@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: Ingo Molnar <mingo@...e.hu>,
Paul Mckenney <paulmck@...ux.vnet.ibm.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] sched: rt_rq runtime leakage bug fix
On 2011-5-12 18:12, Peter Zijlstra wrote:
>
> it would be nice to know why the , operator version
> doesn't work though, since that looks to be the more conventional way to
> write it.
>
I did some investigation; here is what seems to be happening. Given the macro:
1 #define for_each_rt_rq(rt_rq, iter, rq) \
2 for (iter = list_entry_rcu(task_groups.next, typeof(*iter), list), \
3 rt_rq = iter->rt_rq[cpu_of(rq)]; &iter->list != &task_groups; \
4 iter = list_entry_rcu(iter->list.next, typeof(*iter), list), \
5 rt_rq = iter->rt_rq[cpu_of(rq)])
In the for loop, when task_groups (the sentinel node of the circular doubly linked list) is reached after the final iteration, a fake iter (of type struct task_group *) is computed at line 4 via container_of(&task_groups, struct task_group, list). By "fake" I mean it is just an address satisfying &iter->list == &task_groups; it does not point to a real struct task_group object. Accessing any other member of this fake iter, as line 5 does with iter->rt_rq, is what causes the page fault.
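To make "fake" concrete: list_entry_rcu()/container_of() here boil down to plain pointer arithmetic, roughly (ignoring the RCU dereference):

	struct task_group *iter = (struct task_group *)
		((char *)&task_groups - offsetof(struct task_group, list));
	/* &iter->list == &task_groups holds by construction, but
	 * iter->rt_rq and the other members read whatever data happens
	 * to sit in memory just before task_groups. */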
On my JS22 blade, cpu_of(rq) = 1, and the fake iter->rt_rq happens to read as 0x100000000, which is the value of another global variable located near task_groups. The kernel then takes that value plus 8 as the address of iter->rt_rq[1], and the page fault occurs at address 0x100000008.
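Just to illustrate the point (a sketch only, not the posted patch): the , operator style could be kept if the rt_rq assignment were moved into the loop condition, after the sentinel check, so that && short-circuits before the fake iter is ever dereferenced:

#define for_each_rt_rq(rt_rq, iter, rq) \
	for (iter = list_entry_rcu(task_groups.next, typeof(*iter), list); \
	     &iter->list != &task_groups && \
		(rt_rq = iter->rt_rq[cpu_of(rq)], 1); \
	     iter = list_entry_rcu(iter->list.next, typeof(*iter), list))

The increment step still computes the fake iter on the last pass, but the condition bails out on &iter->list != &task_groups before iter->rt_rq is read.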