Message-ID: <CANn89iLJSHhb=7mibKTBF2bbceFqSM0kNOANdFZ3rTaM3kwj7w@mail.gmail.com>
Date:   Sat, 3 Oct 2020 05:54:38 +0200
From:   Eric Dumazet <edumazet@...gle.com>
To:     Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc:     David Miller <davem@...emloft.net>, Wei Wang <weiwan@...gle.com>,
        Network Development <netdev@...r.kernel.org>,
        Jakub Kicinski <kuba@...nel.org>,
        Hannes Frederic Sowa <hannes@...essinduktion.org>,
        Paolo Abeni <pabeni@...hat.com>, Felix Fietkau <nbd@....name>
Subject: Re: [PATCH net-next 0/5] implement kthread based napi poll

On Sat, Oct 3, 2020 at 1:15 AM Alexei Starovoitov
<alexei.starovoitov@...il.com> wrote:
>
> On Fri, Oct 2, 2020 at 4:02 PM David Miller <davem@...emloft.net> wrote:
> >
> > From: Wei Wang <weiwan@...gle.com>
> > Date: Wed, 30 Sep 2020 12:21:35 -0700
> >
> >  ...
> > > And the reason we prefer 1 kthread per napi, instead of 1 workqueue
> > > entity per host, is that a kthread is more configurable than a
> > > workqueue, and we can leverage existing tuning tools for threads, like
> > > taskset, chrt, etc., to tune the scheduling class, cpu set, and so on.
> > > Another reason is that if we eventually want to provide a busy poll
> > > feature using kernel threads for napi poll, a kthread seems more
> > > suitable than a workqueue.
> > ...
> >
> > I think we still need to discuss this some more.
> >
> > Jakub has some ideas and I honestly think the whole workqueue
> > approach hasn't been fully considered yet.
>
> I want to point out that it's not kthread vs wq. I think the mechanism
> has to be pluggable. The kernel needs to support both kthread and wq.
> Or maybe even a third option, whatever it might be,
> selectable via sysctl or something.
> I suspect for some production workloads wq will perform better.
> For the others it will be kthread.
> Clearly kthread is more tunable, but not everyone would have the
> knowledge and desire to do the tuning.
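
To make the tunability point concrete: taskset and chrt are essentially
front ends for the sched_setaffinity() and sched_setscheduler() syscalls,
applied to the napi kthread's pid. A rough userspace sketch (the pid, cpu
and priority here are only placeholders, purely illustrative):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

int main(int argc, char **argv)
{
	cpu_set_t set;
	struct sched_param sp = { .sched_priority = 50 };
	pid_t pid;

	if (argc < 3) {
		fprintf(stderr, "usage: %s <napi-kthread-pid> <cpu>\n", argv[0]);
		return 1;
	}
	pid = atoi(argv[1]);

	/* pin the thread to one cpu, like "taskset -pc <cpu> <pid>" */
	CPU_ZERO(&set);
	CPU_SET(atoi(argv[2]), &set);
	if (sched_setaffinity(pid, sizeof(set), &set))
		perror("sched_setaffinity");

	/* move it to a realtime class, like "chrt -f -p 50 <pid>" (needs root) */
	if (sched_setscheduler(pid, SCHED_FIFO, &sp))
		perror("sched_setscheduler");

	return 0;
}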

The exact same arguments can be used against RPS and RFS.

RPS went first, and was later augmented with RFS (with very different goals).

They are both opt-in.

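For reference, the RPS opt-in is a single cpu-mask write into sysfs (RFS
additionally needs net.core.rps_sock_flow_entries and a per-queue
rps_flow_cnt). A minimal sketch, with "eth0" and "rx-0" as example names
only:

#include <stdio.h>

int main(void)
{
	/* equivalent of: echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus */
	FILE *f = fopen("/sys/class/net/eth0/queues/rx-0/rps_cpus", "w");

	if (!f) {
		perror("rps_cpus");
		return 1;
	}
	fprintf(f, "f\n");	/* steer this rx queue's processing to cpus 0-3 */
	fclose(f);
	return 0;
}
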
> We can argue what should be the default, but that's secondary.
>
> > If this wasn't urgent years ago (when it was NACK'd btw), it isn't
> > urgent for 5.10, so I don't know why we are pushing so hard for
> > this patch series to go in as-is right now.
> >
> > Please be patient and let's have a full discussion on this.
>
> +1. This is the biggest change to kernel networking in years.
> Let's make it right.

Sure. I do not think it is urgent.

This was revived by Felix a few weeks ago, and I think we were the
ones spending time on the proposed patch set.
Not giving feedback to Felix would have been "something not right".

We reviewed and tested Felix's patches, and came up with something more flexible.

Sure, a WQ is already giving nice results on appliances, because there
you do not need strong isolation.
Would a kthread approach also work well on appliances? Probably...
