Message-ID: <92d8b57f-db37-e4bf-b69f-3ab5c4440ea0@linux.intel.com>
Date: Tue, 2 Oct 2018 13:49:22 -0700
From: Alexander Duyck <alexander.h.duyck@...ux.intel.com>
To: Tejun Heo <tj@...nel.org>
Cc: linux-nvdimm@...ts.01.org, gregkh@...uxfoundation.org,
linux-pm@...r.kernel.org, linux-kernel@...r.kernel.org,
akpm@...ux-foundation.org, len.brown@...el.com,
dave.jiang@...el.com, rafael@...nel.org, vishal.l.verma@...el.com,
jiangshanlai@...il.com, pavel@....cz, zwisler@...nel.org,
dan.j.williams@...el.com
Subject: Re: [RFC workqueue/driver-core PATCH 1/5] workqueue: Provide
queue_work_near to queue work near a given NUMA node
On 10/2/2018 11:41 AM, Tejun Heo wrote:
> Hello,
>
> On Tue, Oct 02, 2018 at 11:23:26AM -0700, Alexander Duyck wrote:
>>> Yeah, it's all in wq_select_unbound_cpu(). Right now, if the
>>> requested cpu isn't in wq_unbound_cpumask, it falls back to dumb
>>> round-robin. We can probably do better there and find the nearest
>>> node considering topology.
>>
>> Well if we could get wq_select_unbound_cpu doing the right thing
>> based on node topology that would be most of my work solved right
>> there. Basically I could just pass WQ_CPU_UNBOUND with the correct
>> node and it would take care of getting to the right CPU.
>
> Yeah, sth like that. It might be better to keep the function to take
> cpu for consistency as everything else passes around cpu.
>
>>>> The question I have then is what should I do about workqueues that
>>>> aren't WQ_UNBOUND if they attempt to use queue_work_near? In that
>>>
>>> Hmm... yeah, let's just use queue_work_on() for now. We can sort it
>>> out later and users could already do that anyway.
>>
>> So are you saying I should just return an error for now if somebody
>> tries to use something other than an unbound workqueue with
>> queue_work_near, and expect everyone else to just use queue_work_on
>> for the other workqueue types?
>
> Oh, I meant that let's not add a new interface for now and just use
> queue_work_on() for your use case too.
>
> Thanks.
So the only issue is that I was hoping to get away without adding extra 
preemption handling. That was the motivation behind doing 
queue_work_near: I could wrap the whole thing in the same local_irq_save 
so that I don't have to worry about the CPU I am on changing out from 
under me.
What I may look at doing is greatly reducing the 
workqueue_select_unbound_cpu_near function so that it essentially just 
performs a few sanity tests and then uses the result of a 
cpumask_any_and of cpumask_of_node and cpu_online_mask. I'll probably 
rename it while I am at it, since I will likely be getting away from the 
"unbound" checks in the logic.
- Alex