Message-ID: <8531650.WA9Z5RdLht@vostro.rjw.lan>
Date: Thu, 06 Mar 2014 02:27:51 +0100
From: "Rafael J. Wysocki" <rjw@...ysocki.net>
To: Saravana Kannan <skannan@...eaurora.org>
Cc: Viresh Kumar <viresh.kumar@...aro.org>,
linaro-kernel@...ts.linaro.org, cpufreq@...r.kernel.org,
linux-pm@...r.kernel.org, linux-kernel@...r.kernel.org,
srivatsa.bhat@...ux.vnet.ibm.com
Subject: Re: [PATCH V2 3/3] cpufreq: initialize governor for a new policy under policy->rwsem
On Wednesday, March 05, 2014 05:10:01 PM Saravana Kannan wrote:
> On 03/05/2014 05:06 PM, Rafael J. Wysocki wrote:
> > On Thursday, March 06, 2014 02:04:39 AM Rafael J. Wysocki wrote:
> >> On Tuesday, March 04, 2014 11:44:01 AM Viresh Kumar wrote:
> >>> policy->rwsem is used to lock access to all parts of code modifying struct
> >>> cpufreq_policy but wasn't used on a new policy created from __cpufreq_add_dev().
> >>>
> >>> Because of this, if we call cpufreq_update_policy() repeatedly on one CPU while
> >>> taking another CPU offline/online, we might see crashes like the following:
> >>>
> >>> Unable to handle kernel NULL pointer dereference at virtual address 00000020
> >>> pgd = c0003000
> >>> [00000020] *pgd=80000000004003, *pmd=00000000
> >>> Internal error: Oops: 206 [#1] PREEMPT SMP ARM
> >>>
> >>> PC is at __cpufreq_governor+0x10/0x1ac
> >>> LR is at cpufreq_update_policy+0x114/0x150
> >>>
> >>> ---[ end trace f23a8defea6cd706 ]---
> >>> Kernel panic - not syncing: Fatal exception
> >>> CPU0: stopping
> >>> CPU: 0 PID: 7136 Comm: mpdecision Tainted: G D W 3.10.0-gd727407-00074-g979ede8 #396
> >>>
> >>> [<c0afe180>] (notifier_call_chain+0x40/0x68) from [<c02a23ac>] (__blocking_notifier_call_chain+0x40/0x58)
> >>> [<c02a23ac>] (__blocking_notifier_call_chain+0x40/0x58) from [<c02a23d8>] (blocking_notifier_call_chain+0x14/0x1c)
> >>> [<c02a23d8>] (blocking_notifier_call_chain+0x14/0x1c) from [<c0803c68>] (cpufreq_set_policy+0xd4/0x2b8)
> >>> [<c0803c68>] (cpufreq_set_policy+0xd4/0x2b8) from [<c0803e7c>] (cpufreq_init_policy+0x30/0x98)
> >>> [<c0803e7c>] (cpufreq_init_policy+0x30/0x98) from [<c0805a18>] (__cpufreq_add_dev.isra.17+0x4dc/0x7a4)
> >>> [<c0805a18>] (__cpufreq_add_dev.isra.17+0x4dc/0x7a4) from [<c0805d38>] (cpufreq_cpu_callback+0x58/0x84)
> >>> [<c0805d38>] (cpufreq_cpu_callback+0x58/0x84) from [<c0afe180>] (notifier_call_chain+0x40/0x68)
> >>> [<c0afe180>] (notifier_call_chain+0x40/0x68) from [<c02812dc>] (__cpu_notify+0x28/0x44)
> >>> [<c02812dc>] (__cpu_notify+0x28/0x44) from [<c0aeed90>] (_cpu_up+0xf4/0x1dc)
> >>> [<c0aeed90>] (_cpu_up+0xf4/0x1dc) from [<c0aeeed4>] (cpu_up+0x5c/0x78)
> >>> [<c0aeeed4>] (cpu_up+0x5c/0x78) from [<c0aec808>] (store_online+0x44/0x74)
> >>> [<c0aec808>] (store_online+0x44/0x74) from [<c03a40f4>] (sysfs_write_file+0x108/0x14c)
> >>> [<c03a40f4>] (sysfs_write_file+0x108/0x14c) from [<c03517d4>] (vfs_write+0xd0/0x180)
> >>> [<c03517d4>] (vfs_write+0xd0/0x180) from [<c0351ca8>] (SyS_write+0x38/0x68)
> >>> [<c0351ca8>] (SyS_write+0x38/0x68) from [<c0205de0>] (ret_fast_syscall+0x0/0x30)
> >>>
> >>> Fix these crashes by taking policy->rwsem at the appropriate places in __cpufreq_add_dev() as well.
> >>>
> >>> Reported-by: Saravana Kannan <skannan@...eaurora.org>
> >>> Suggested-by: Srivatsa S. Bhat <srivatsa.bhat@...ux.vnet.ibm.com>
> >>> Signed-off-by: Viresh Kumar <viresh.kumar@...aro.org>
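
A minimal user-space sketch of the locking pattern the quoted patch description is about. This is not the kernel code: struct policy, add_dev, updater and "ondemand" are illustrative stand-ins, and a pthread rwlock stands in for policy->rwsem. The add_dev thread mirrors __cpufreq_add_dev() publishing and then initializing a new policy (cpufreq_init_policy()); the updater mirrors cpufreq_update_policy() being called in a loop. Holding the rwsem for writing across initialization is what keeps the updater from ever seeing a policy with a NULL governor, which is the oops in the trace above.

/*
 * Simplified model of the race, not the kernel implementation.
 * Build with: gcc -std=c11 -pthread -o policy-race policy-race.c
 */
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct policy {
	pthread_rwlock_t rwsem;	/* models policy->rwsem */
	const char *governor;	/* NULL until "cpufreq_init_policy" has run */
};

static _Atomic(struct policy *) cur_policy;	/* models the per-CPU policy pointer */

/* Models __cpufreq_add_dev(): publish a new policy, then initialize it. */
static void *add_dev(void *arg)
{
	struct policy *p = calloc(1, sizeof(*p));

	(void)arg;
	pthread_rwlock_init(&p->rwsem, NULL);

	/* Take the rwsem before the policy becomes visible to other CPUs... */
	pthread_rwlock_wrlock(&p->rwsem);
	atomic_store(&cur_policy, p);	/* policy is now discoverable */

	usleep(1000);			/* widen the initialization window */
	p->governor = "ondemand";	/* models cpufreq_init_policy() */

	/* ...and drop it only once initialization is complete. */
	pthread_rwlock_unlock(&p->rwsem);
	return NULL;
}

/* Models cpufreq_update_policy(): must never see a NULL governor. */
static void *updater(void *arg)
{
	(void)arg;
	for (;;) {
		struct policy *p = atomic_load(&cur_policy);

		if (!p)
			continue;

		/* cpufreq_update_policy() takes the rwsem for writing. */
		pthread_rwlock_wrlock(&p->rwsem);
		assert(p->governor != NULL);	/* would be the oops in the kernel */
		pthread_rwlock_unlock(&p->rwsem);
		return NULL;
	}
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t2, NULL, updater, NULL);
	pthread_create(&t1, NULL, add_dev, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	puts("updater always saw a fully initialized policy");
	return 0;
}

If the wrlock/unlock pair around the governor assignment is removed, the assert in the updater can fire, which models the NULL pointer dereference reported in the trace above.
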
> >>
> >> I've rebased this one on top of 3.14-rc5 and queued it up for 3.14-rc6.
> >>
> >> Please check the bleeding-edge branch for the result.
> >
> > Actually, I think I'll queue up [2-3/3] for 3.14-rc6 instead.
> >
>
> I'm pretty close to having this tested and reported back, so if you can
> wait, that would be better. You should see an email by Friday evening PST.
OK
It won't hurt if they stay in bleeding-edge/linux-next till then, though.
Thanks!
--
I speak only for myself.
Rafael J. Wysocki, Intel Open Source Technology Center.