Message-ID: <e61d3c07-7db0-28d8-03a0-cae13698a634@kernel.dk>
Date: Thu, 22 Oct 2020 20:24:16 -0600
From: Jens Axboe <axboe@...nel.dk>
To: Hillf Danton <hdanton@...a.com>
Cc: Zhang Qiang <qiang.zhang@...driver.com>, viro@...iv.linux.org.uk,
io-uring@...r.kernel.org, Pavel Begunkov <asml.silence@...il.com>,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: Question on io-wq

On 10/22/20 8:05 PM, Hillf Danton wrote:
> On Thu, 22 Oct 2020 08:08:09 -0600 Jens Axboe wrote:
>> On 10/22/20 3:02 AM, Zhang,Qiang wrote:
>>>
>>> Hi Jens Axboe
>>>
>>> There is a problem with the 'io_wqe_worker' threads: when an
>>> 'io_wqe_worker' is created, its CPU affinity is set to the CPUs of its
>>> NUMA node. Due to CPU hotplug, when the last CPU in that node goes
>>> down, the 'io_wqe_worker' thread will run anywhere. When a CPU in the
>>> node goes online again, should we restore the worker's CPU binding?
>>
>> Something like the below should help in ensuring affinities are
>> always correct - trigger an affinity set for an online CPU event. We
>> should not need to do it for offlining. Can you test it?
>
> CPU affinity is left intact because nothing is done on offline, and the
> scheduler will move the stray workers back onto the correct NUMA node
> when a CPU goes online, so it's a bit hard to see what there is to test.

Test it yourself:

- Boot with > 1 NUMA node
- Start an io_uring, you now get 2 workers, each affinitized to a node
- Now offline all CPUs in one node
- Online one or more of the CPUs in that same node

The end result is that the worker on the node that was offlined now
has a mask of the other node, plus the newly added CPU.

So your last statement isn't correct, and that broken affinity is
exactly what the original reporter pointed out.
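
For context, below is a rough sketch of what that looks like: hook a CPU
hotplug "online" callback and re-apply each worker's node affinity from
it. This is an illustration of the idea only, not the actual patch; the
io_wq/io_wqe field names (cpuhp_node, wqes[]) and the
io_wq_for_each_worker() helper are assumed from io-wq.c and may differ.

#include <linux/cpuhotplug.h>
#include <linux/cpumask.h>
#include <linux/sched.h>

static bool io_wq_worker_affinity(struct io_worker *worker, void *unused)
{
	struct io_wqe *wqe = worker->wqe;

	/* re-bind the worker to the CPUs of its NUMA node */
	set_cpus_allowed_ptr(worker->task, cpumask_of_node(wqe->node));
	return false;
}

static int io_wq_cpu_online(unsigned int cpu, struct hlist_node *node)
{
	struct io_wq *wq = hlist_entry_safe(node, struct io_wq, cpuhp_node);
	int i;

	rcu_read_lock();
	for_each_node(i)
		io_wq_for_each_worker(wq->wqes[i], io_wq_worker_affinity, NULL);
	rcu_read_unlock();
	return 0;
}

static enum cpuhp_state io_wq_online_state;

static int __init io_wq_hotplug_init(void)
{
	int ret;

	/* online callback only - nothing needs to happen on offline */
	ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "io-wq/online",
				      io_wq_cpu_online, NULL);
	if (ret < 0)
		return ret;
	io_wq_online_state = ret;
	return 0;
}
subsys_initcall(io_wq_hotplug_init);

Each io_wq instance would then be registered with
cpuhp_state_add_instance_nocalls(io_wq_online_state, &wq->cpuhp_node)
at creation time and removed again on teardown, so the callback above
fires for every live io_wq whenever a CPU comes online.
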
--
Jens Axboe