Date:	Mon, 20 May 2013 15:06:01 +0800
From:	Michael Wang <wangyun@...ux.vnet.ibm.com>
To:	Borislav Petkov <bp@...en8.de>
CC:	Tejun Heo <tj@...nel.org>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Jiri Kosina <jkosina@...e.cz>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Tony Luck <tony.luck@...el.com>, linux-kernel@...r.kernel.org,
	x86@...nel.org, Thomas Gleixner <tglx@...utronix.de>, rjw@...k.pl,
	Viresh Kumar <viresh.kumar@...aro.org>,
	cpufreq@...r.kernel.org, linux-pm@...r.kernel.org
Subject: Re: NOHZ: WARNING: at arch/x86/kernel/smp.c:123 native_smp_send_reschedule,
 round 2

On 05/20/2013 02:58 PM, Michael Wang wrote:
> On 05/20/2013 02:47 PM, Borislav Petkov wrote:
>> On Mon, May 20, 2013 at 02:23:37PM +0800, Michael Wang wrote:
>>> On 05/20/2013 12:50 PM, Borislav Petkov wrote:
>>>> On Mon, May 20, 2013 at 11:16:33AM +0800, Michael Wang wrote:
>>>>> I suppose the reason is that the cpu we passed to
>>>>> mod_delayed_work_on() has a chance to become offline before we
>>>>> disable irqs. What about checking it before sending the resched IPI? Like:
>>>>
>>>> I think this is only addressing the symptoms - what we should be doing
>>>> instead is asking ourselves why we are even scheduling work on a cpu if
>>>> the machine goes offline.
>>>>
>>>> I don't know, though, who should be responsible for killing all that
>>>> work - the workqueue itself or the guy who created it, i.e. the cpufreq
>>>> governor...
>>>
>>> So there are two questions here:
>>> 1. Is gov_queue_work() supposed to queue work on an offline cpu?
>>> 2. Does mod_delayed_work_on() allow an offline cpu?
>>>
>>> I guess the answer to both should be no?
>>
>> Well, if we don't allow queueing work on a cpu which goes offline, i.e.
>> #2, the problem should be solved.
> 
> I've taken a look at the usage of queue_delayed_work_on() and
> mod_delayed_work_on(): callers mostly pass this_cpu, or cpus in the online
> mask, so I think passing an offline cpu is not by design.
> 
> Besides, gov_queue_work() is using 'policy->cpus', which seems to be
> updated during the CPU UP/DOWN notifiers, so I think those cpus are
> supposed to be online.
> 
> But we need a cpufreq expert to confirm all this...

And I guess this may help reduce the chance of triggering the WARN:

diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index 443442d..0f96013 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -180,7 +180,7 @@ void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
        if (!all_cpus) {
                __gov_queue_work(smp_processor_id(), dbs_data, delay);
        } else {
-               for_each_cpu(i, policy->cpus)
+               for_each_cpu_and(i, policy->cpus, cpu_online_mask)
                        __gov_queue_work(i, dbs_data, delay);
        }
 }
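
For context, the WARN in the Subject fires when the reschedule IPI targets a
cpu that has already gone offline; the check in arch/x86/kernel/smp.c looks
roughly like this (paraphrased from memory, the exact lines may differ):

	static void native_smp_send_reschedule(int cpu)
	{
		if (unlikely(cpu_is_offline(cpu))) {
			/* target cpu already went away; complain and bail out */
			WARN_ON(1);
			return;
		}
		apic->send_IPI_mask(cpumask_of(cpu), RESCHEDULE_VECTOR);
	}

So the patch above only narrows the window: a cpu in policy->cpus could still
go offline between the cpu_online_mask test and the IPI actually being sent.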

Well, disabling irqs would be better; anyway, we still need the folks who own
that driver to make the decision, so let's CC them :)
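
Just to illustrate what I mean (a rough, untested sketch, not the patch
above): holding the hotplug lock around the loop would keep policy->cpus from
going offline while we queue the work:

	/* Hypothetical sketch: block cpu hot-unplug for the duration of
	 * the loop instead of filtering against cpu_online_mask. */
	if (!all_cpus) {
		__gov_queue_work(smp_processor_id(), dbs_data, delay);
	} else {
		get_online_cpus();	/* pairs with put_online_cpus() below */
		for_each_cpu(i, policy->cpus)
			__gov_queue_work(i, dbs_data, delay);
		put_online_cpus();
	}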

Regards,
Michael Wang


> 
> Regards,
> Michael Wang
> 
>>
>> Tejun?
>>
>> Here are the splats: http://marc.info/?l=linux-kernel&m=136879901425951
>>
> 

