Date:   Fri, 2 Oct 2020 16:15:33 -0700
From:   Alexei Starovoitov <>
To:     David Miller <>
Cc:     Wei Wang <>,
        Network Development <>,
        Eric Dumazet <>,
        Jakub Kicinski <>,
        Hannes Frederic Sowa <>,
        Paolo Abeni <>
Subject: Re: [PATCH net-next 0/5] implement kthread based napi poll

On Fri, Oct 2, 2020 at 4:02 PM David Miller <> wrote:
> From: Wei Wang <>
> Date: Wed, 30 Sep 2020 12:21:35 -0700
>  ...
> > And the reason we prefer 1 kthread per napi, instead of 1 workqueue
> > entity per host, is that kthread is more configurable than workqueue,
> > and we could leverage existing tuning tools for threads, like taskset,
> > chrt, etc to tune scheduling class and cpu set, etc. Another reason is
> > if we eventually want to provide busy poll feature using kernel threads
> > for napi poll, kthread seems to be more suitable than workqueue.
> ...
> I think we still need to discuss this some more.
> Jakub has some ideas and I honestly think the whole workqueue
> approach hasn't been fully considered yet.

I want to point out that it's not kthread vs wq. I think the mechanism
has to be pluggable. The kernel needs to support both kthread and wq.
Or maybe even the 3rd option. Whatever it might be.
Via sysctl or something.
I suspect for some production workloads wq will perform better.
For the others it will be kthread.
Clearly kthread is more tunable, but not everyone will have the
knowledge or the desire to do the tuning.
We can argue what should be the default, but that's secondary.
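[Editorial sketch of the "via sysctl or something" idea. The knob name below is hypothetical — nothing in this series defines it — it only illustrates what a pluggable, host-wide default selecting the poll backend could look like.]

```shell
# Hypothetical knob; not part of this patch series.
sysctl -w net.core.napi_poll_backend=kthread   # or: wq
```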

> If this wasn't urgent years ago (when it was NACK'd btw), it isn't
> urgent for 5.10 so I don't know why we are pushing so hard for
> this patch series to go in as-is right now.
> Please be patient and let's have a full discussion on this.

+1. This is the biggest change to kernel networking in years.
Let's make it right.
