Message-ID: <CAKohpo=qLikJXWydsQoTHLk=Vn_aYkROOtYnkAKWfd1mMGZ2kw@mail.gmail.com>
Date:	Wed, 26 Sep 2012 16:51:57 +0530
From:	Viresh Kumar <viresh.kumar@...aro.org>
To:	Tejun Heo <tj@...nel.org>
Cc:	linux-kernel@...r.kernel.org, pjt@...gle.com,
	paul.mckenney@...aro.org, tglx@...utronix.de,
	suresh.b.siddha@...el.com, venki@...gle.com, mingo@...hat.com,
	peterz@...radead.org, robin.randhawa@....com,
	Steve.Bannister@....com, Arvind.Chauhan@....com,
	amit.kucheria@...aro.org, vincent.guittot@...aro.org,
	linaro-dev@...ts.linaro.org, patches@...aro.org
Subject: Re: [PATCH 3/3] workqueue: Schedule work on non-idle cpu instead of
 current one

On 25 September 2012 23:26, Tejun Heo <tj@...nel.org> wrote:
> On Tue, Sep 25, 2012 at 04:06:08PM +0530, Viresh Kumar wrote:
>> +config MIGRATE_WQ
>> +     bool "(EXPERIMENTAL) Migrate Workqueues to non-idle cpu"
>> +     depends on SMP && EXPERIMENTAL
>> +     help
>> +       Workqueues queue work on the current cpu if the caller hasn't
>> +       passed a preferred cpu. This may wake up an idle CPU, which is not
>> +       actually required. Such work can be processed by any CPU, so we
>> +       should select a non-idle CPU here.  This patch adds support in the
>> +       workqueue framework to get the preferred CPU from the scheduler,
>> +       instead of using the current CPU.
>
> I don't think it's a good idea to make behavior like this a config
> option.  The behavior difference is subtle and may induce incorrect
> behavior.

Ok. Will remove it.

>> @@ -1066,8 +1076,9 @@ int queue_work(struct workqueue_struct *wq, struct work_struct *work)
>>  {
>>       int ret;
>>
>> -     ret = queue_work_on(get_cpu(), wq, work);
>> -     put_cpu();
>> +     preempt_disable();
>> +     ret = queue_work_on(wq_select_cpu(), wq, work);
>> +     preempt_enable();
>
> First of all, I'm not entirely sure this is safe.  queue_work() used
> to *guarantee* that the work item would execute on the local CPU.  I
> don't think there are many which depend on that but I'd be surprised
> if this doesn't lead to some subtle problems somewhere.  It might not
> be realistic to audit all users and we might have to just let it
> happen and watch for the fallouts.  Dunno, still wanna see some level
> of auditing.

Ok.
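
To make the concern concrete, here is a hypothetical caller (not taken from
the tree) that implicitly assumes its work item runs on the queueing CPU,
e.g. to keep per-CPU statistics on the right CPU:

#include <linux/percpu.h>
#include <linux/workqueue.h>

static DEFINE_PER_CPU(unsigned long, local_stat);

static void stat_work_fn(struct work_struct *work)
{
	/* Assumes it still runs on the CPU that queued the work. */
	this_cpu_inc(local_stat);
}

static DECLARE_WORK(stat_work, stat_work_fn);

static void kick_stat_update(void)
{
	/*
	 * If the work item is silently moved elsewhere, the increment
	 * lands on another CPU's counter, which is exactly the kind of
	 * subtle breakage an audit would have to catch.
	 */
	queue_work(system_wq, &stat_work);
}

Such users would have to switch to queue_work_on() explicitly, or be audited
to confirm they really don't care which CPU runs the work.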

> Also, I'm wondering why this is necessary at all for workqueues.  For
> schedule/queue_work(), you pretty much know the current cpu is not
> idle.  For delayed workqueue, sure, but for immediate scheduling, why?

This was done for the following scenario:
- A CPU has programmed a timer and is now IDLE.
- The timer fires, the CPU enters the interrupt handler and queues a work
item. As the CPU is otherwise IDLE, we should queue this work on some other
CPU.

I know this patch migrated work items in all cases. I will fix that in V2 by
diverting work only in this case.
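
Roughly, the V2 selection could look like this (a sketch only;
sched_select_non_idle_cpu() stands in for whatever helper the scheduler
ends up exporting and does not exist today):

/* Called with preemption disabled, as in the queue_work() hunk above. */
static int wq_select_cpu(void)
{
	int cpu = smp_processor_id();

	/* Nothing to gain if we are already running on a busy CPU. */
	if (!idle_cpu(cpu))
		return cpu;

	/*
	 * Placeholder: ask the scheduler for a non-idle CPU, falling
	 * back to 'cpu' if nothing better is found.
	 */
	return sched_select_non_idle_cpu(cpu);
}

queue_work() would then keep its current behaviour whenever the caller is
not idle, and only divert work queued from an otherwise idle CPU (e.g. from
a timer interrupt).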

--
viresh
