Message-ID: <52fa1d81-e585-37eb-55e5-0ed07ce7adc0@oracle.com>
Date: Mon, 29 Jun 2020 08:11:34 +0800
From: Bob Liu <bob.liu@...cle.com>
To: Lai Jiangshan <jiangshanlai+lkml@...il.com>
Cc: LKML <linux-kernel@...r.kernel.org>, Tejun Heo <tj@...nel.org>,
martin.petersen@...cle.com, linux-scsi@...r.kernel.org,
open-iscsi@...glegroups.com, lduncan@...e.com,
michael.christie@...cle.com
Subject: Re: [PATCH 1/2] workqueue: don't always set __WQ_ORDERED implicitly
On 6/28/20 11:54 PM, Lai Jiangshan wrote:
> On Thu, Jun 11, 2020 at 6:29 PM Bob Liu <bob.liu@...cle.com> wrote:
>>
>> Current code always sets 'unbound && max_active == 1' workqueues to ordered
>> implicitly, which may not be the expected behaviour for some use cases.
>>
>> E.g. some scsi and iscsi workqueues (unbound && max_active == 1) want to be
>> bound to different CPUs so as to get better isolation, but their cpumask
>> can't be changed because __WQ_ORDERED is set implicitly.
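[ For illustration only: a minimal sketch of the kind of call that runs into
  this; the workqueue name and surrounding driver context are assumed here,
  not taken from the patch.

	struct workqueue_struct *wq;

	/* unbound + max_active == 1: alloc_workqueue() currently adds
	 * __WQ_ORDERED behind the caller's back, so later attempts to
	 * move this workqueue to a different cpumask are refused. */
	wq = alloc_workqueue("example_scsi_wq", WQ_UNBOUND | WQ_MEM_RECLAIM, 1);
	if (!wq)
		return -ENOMEM;
]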
>
> Hello
>
> If I read the code correctly, the reason why their cpumask can't
> be changed is __WQ_ORDERED_EXPLICIT, not __WQ_ORDERED.
>
>>
>> This patch adds a new flag, __WQ_ORDERED_DISABLE, and a
>> create_singlethread_workqueue_noorder() helper to offer a new option.
>>
>> Signed-off-by: Bob Liu <bob.liu@...cle.com>
>> ---
>> include/linux/workqueue.h | 4 ++++
>> kernel/workqueue.c | 4 +++-
>> 2 files changed, 7 insertions(+), 1 deletion(-)
>>
>> diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
>> index e48554e..4c86913 100644
>> --- a/include/linux/workqueue.h
>> +++ b/include/linux/workqueue.h
>> @@ -344,6 +344,7 @@ enum {
>>          __WQ_ORDERED            = 1 << 17, /* internal: workqueue is ordered */
>>          __WQ_LEGACY             = 1 << 18, /* internal: create*_workqueue() */
>>          __WQ_ORDERED_EXPLICIT   = 1 << 19, /* internal: alloc_ordered_workqueue() */
>> +        __WQ_ORDERED_DISABLE    = 1 << 20, /* internal: don't set __WQ_ORDERED implicitly */
>>
>>          WQ_MAX_ACTIVE           = 512,    /* I like 512, better ideas? */
>>          WQ_MAX_UNBOUND_PER_CPU  = 4,      /* 4 * #cpus for unbound wq */
>> @@ -433,6 +434,9 @@ struct workqueue_struct *alloc_workqueue(const char *fmt,
>>  #define create_singlethread_workqueue(name)                             \
>>          alloc_ordered_workqueue("%s", __WQ_LEGACY | WQ_MEM_RECLAIM, name)
>>
>> +#define create_singlethread_workqueue_noorder(name)                     \
>> +        alloc_workqueue("%s", WQ_SYSFS | __WQ_LEGACY | WQ_MEM_RECLAIM | \
>> +                        WQ_UNBOUND | __WQ_ORDERED_DISABLE, 1, (name))
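[ A hypothetical caller of the proposed helper might then look like this;
  the workqueue name is made up for illustration.

	struct workqueue_struct *wq;

	/* unbound, one work item in flight, exposed via WQ_SYSFS, but
	 * ordering is not forced, so the cpumask remains changeable. */
	wq = create_singlethread_workqueue_noorder("example_iscsi_wq");
	if (!wq)
		return -ENOMEM;
]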
>
> I think using __WQ_ORDERED without __WQ_ORDERED_EXPLICIT is what you
> need, in which case cpumask is allowed to be changed.
>
I don't think so; see workqueue_apply_unbound_cpumask():

wq_unbound_cpumask_store()
  > workqueue_set_unbound_cpumask()
    > workqueue_apply_unbound_cpumask() {
          ...
5276          /* creating multiple pwqs breaks ordering guarantee */
5277          if (wq->flags & __WQ_ORDERED)
5278                  continue;
                      ^^^^^^^^
              The cpumask update is skipped here whenever __WQ_ORDERED is
              set, even if __WQ_ORDERED_EXPLICIT is not.

5280          ctx = apply_wqattrs_prepare(wq, wq->unbound_attrs);
    }
Thanks for your review.
Bob
> Just use alloc_workqueue() with __WQ_ORDERED and max_active=1. It can
> be wrapped as a new function or macro, but I don't think
> create_singlethread_workqueue_noorder() is a good name for it.
>
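[ A sketch of the alternative Lai describes, wrapped as a macro; the name
  below is invented, not proposed in the thread. With __WQ_ORDERED set but
  __WQ_ORDERED_EXPLICIT left clear, apply_workqueue_attrs()-based changes
  are not rejected (Lai's point), although the sysfs wq_unbound_cpumask
  path quoted above would still skip the workqueue (Bob's point).

	#define create_singlethread_workqueue_cpumask(name)              \
		alloc_workqueue("%s", __WQ_LEGACY | WQ_MEM_RECLAIM |     \
				WQ_UNBOUND | __WQ_ORDERED, 1, (name))
]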
>> extern void destroy_workqueue(struct workqueue_struct *wq);
>>
>> struct workqueue_attrs *alloc_workqueue_attrs(void);
>> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
>> index 4e01c44..2167013 100644
>> --- a/kernel/workqueue.c
>> +++ b/kernel/workqueue.c
>> @@ -4237,7 +4237,9 @@ struct workqueue_struct *alloc_workqueue(const char *fmt,
>>           * on NUMA.
>>           */
>>          if ((flags & WQ_UNBOUND) && max_active == 1)
>> -                flags |= __WQ_ORDERED;
>> +                /* the caller may not want __WQ_ORDERED to be set implicitly */
>> +                if (!(flags & __WQ_ORDERED_DISABLE))
>> +                        flags |= __WQ_ORDERED;
>>
>>          /* see the comment above the definition of WQ_POWER_EFFICIENT */
>>          if ((flags & WQ_POWER_EFFICIENT) && wq_power_efficient)
>> --
>> 2.9.5
>>