Message-ID: <CANn89iK30VVUggQn6-ULOhKnLxz4Ogjw8fMZxbqWiOztURdccA@mail.gmail.com>
Date: Fri, 2 Oct 2020 09:56:31 +0200
From: Eric Dumazet <edumazet@...gle.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: Wei Wang <weiwan@...gle.com>,
"David S . Miller" <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>,
Hannes Frederic Sowa <hannes@...essinduktion.org>,
Paolo Abeni <pabeni@...hat.com>, Felix Fietkau <nbd@....name>
Subject: Re: [PATCH net-next 0/5] implement kthread based napi poll
On Thu, Oct 1, 2020 at 10:26 PM Jakub Kicinski <kuba@...nel.org> wrote:
>
> On Thu, 1 Oct 2020 09:52:45 +0200 Eric Dumazet wrote:
> > The unique work queue is a problem on server class platforms, with
> > NUMA placement.
> > We now have servers with NIC on different NUMA nodes.
>
> Are you saying that the wq code is less NUMA friendly than unpinned
> threads?
Yes, this is what I am saying.
Using a single, shared wq won't allow you to make sure:
- work for NIC0 attached to NUMA node#0 runs on CPUs belonging to node#0
- work for NIC1 attached to NUMA node#1 runs on CPUs belonging to node#1
The only way to tune things with a single wq is to tweak a single cpumask,
which we can change via /sys/devices/virtual/workqueue/{wqname}/cpumask.
The same goes for the nice value, via /sys/devices/virtual/workqueue/{wqname}/nice.
In contrast, having kthreads lets you tune things independently, if needed.
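To make the contrast concrete, a minimal kernel-C sketch (not the actual
patch in this series): one polling kthread per NIC, created on the NIC's
NUMA node and bound to that node's CPUs. napi_thread_fn, the "napi/%s-%d"
naming and the nice value are assumptions for illustration only.

#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/cpumask.h>
#include <linux/topology.h>
#include <linux/sched.h>
#include <linux/device.h>
#include <linux/netdevice.h>

/* Assumed poll loop run by the thread; not shown here. */
static int napi_thread_fn(void *data);

static struct task_struct *start_napi_kthread(struct napi_struct *napi,
					       struct device *dev, int id)
{
	int node = dev_to_node(dev);	/* NUMA node the NIC sits on */
	struct task_struct *t;

	t = kthread_create_on_node(napi_thread_fn, napi, node,
				   "napi/%s-%d", dev_name(dev), id);
	if (IS_ERR(t))
		return t;

	/* Placement and priority are per thread, so each NIC (or even
	 * each queue) can be tuned independently, unlike the single
	 * shared cpumask/nice knob of an unbound workqueue.
	 */
	kthread_bind_mask(t, cpumask_of_node(node));
	set_user_nice(t, -20);
	wake_up_process(t);
	return t;
}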
Even with a single NIC, you may still need isolation between queues.
We have queues dedicated to certain kinds of traffic/applications.
The workqueue approach would then need the ability to create/delete
independent workqueues.
But we tested the workqueue approach with a single NIC, and our results
showed a win for kthreads over the workqueue.
Really, the wq concept might be a nice abstraction when each work item
can run for an arbitrary duration on an arbitrary number of cpus, but
with the NAPI model of up to 64 packets at a time, and a fixed number
of queues, we should not add the workqueue overhead.
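For reference, the bounded-work model mentioned above is the usual
driver NAPI poll callback: at most "budget" packets (NAPI_POLL_WEIGHT,
i.e. 64, by default) are processed per invocation. The foo_* names
below are hypothetical driver helpers, shown only to illustrate the shape.

#include <linux/netdevice.h>

static int foo_napi_poll(struct napi_struct *napi, int budget)
{
	struct foo_ring *ring = container_of(napi, struct foo_ring, napi);
	int work_done;

	/* Process at most "budget" (<= 64) packets, then yield. */
	work_done = foo_clean_rx_ring(ring, budget);

	if (work_done < budget &&
	    napi_complete_done(napi, work_done))
		foo_enable_irq(ring);	/* all done, re-arm the interrupt */

	return work_done;
}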