Message-ID: <CAADnVQKdwB9ZnBnyqJG7kysBnwFr+BYBAEF=sqHj-=VRr-j34Q@mail.gmail.com>
Date: Fri, 2 Oct 2020 16:15:33 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: David Miller <davem@...emloft.net>
Cc: Wei Wang <weiwan@...gle.com>,
Network Development <netdev@...r.kernel.org>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Hannes Frederic Sowa <hannes@...essinduktion.org>,
Paolo Abeni <pabeni@...hat.com>, nbd@....name
Subject: Re: [PATCH net-next 0/5] implement kthread based napi poll
On Fri, Oct 2, 2020 at 4:02 PM David Miller <davem@...emloft.net> wrote:
>
> From: Wei Wang <weiwan@...gle.com>
> Date: Wed, 30 Sep 2020 12:21:35 -0700
>
> ...
> > And the reason we prefer 1 kthread per napi, instead of 1 workqueue
> > entity per host, is that kthread is more configurable than workqueue,
> > and we could leverage existing tuning tools for threads, like taskset,
> > chrt, etc to tune scheduling class and cpu set, etc. Another reason is
> > if we eventually want to provide busy poll feature using kernel threads
> > for napi poll, kthread seems to be more suitable than workqueue.
> ...
>
> I think we still need to discuss this some more.
>
> Jakub has some ideas and I honestly think the whole workqueue
> approach hasn't been fully considered yet.
I want to point out that it's not kthread vs wq. I think the mechanism
has to be pluggable: the kernel needs to support both kthread and wq,
or maybe even a 3rd option, whatever it might be, selectable
via sysctl or something.
I suspect for some production workloads wq will perform better.
For others it will be kthread.
Clearly kthread is more tunable, but not everyone will have the
knowledge and desire to do the tuning.
We can argue about what the default should be, but that's secondary.
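For context, the kind of per-thread tuning being discussed might look
like the sketch below. It pins a per-NAPI kthread to a CPU and gives it
a real-time scheduling class with standard tools (taskset, chrt). The
thread name "napi/eth0-0" and the helper name are assumptions for
illustration, not something the patch series guarantees:

```shell
#!/bin/sh
# Hypothetical sketch: tune a per-NAPI kthread with standard tools.
# The thread name pattern "napi/eth0-0" is an assumption; actual
# naming depends on the patch series.

pin_napi_thread() {
    # $1 = thread name pattern, $2 = target CPU, $3 = SCHED_FIFO priority
    name="$1"; cpu="$2"; prio="$3"
    pid=$(pgrep -f "$name") || return 1   # find the kthread's pid
    taskset -pc "$cpu" "$pid"             # restrict it to the given CPU
    chrt -f -p "$prio" "$pid"             # switch it to SCHED_FIFO
}

# On a kernel with this series applied, this would pin the first
# NAPI thread of eth0 to CPU 2 at FIFO priority 50.
pin_napi_thread 'napi/eth0-0' 2 50 || echo "no such kthread on this machine"
```

The same knobs are what make the kthread approach attractive over a
workqueue: cpusets, taskset, and chrt all operate on a visible task,
whereas workqueue items have no stable task to target.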
> If this wasn't urgent years ago (when it was NACK'd btw), it isn't
> urgent for 5.10 so I don't know why we are pushing so hard for
> this patch series to go in as-is right now.
>
> Please be patient and let's have a full discussion on this.
+1. This is the biggest change to the kernel networking in years.
Let's make it right.