Date:   Sat, 26 Sep 2020 16:22:23 +0200
From:   "Hannes Frederic Sowa" <hannes@...essinduktion.org>
To:     "Wei Wang" <weiwan@...gle.com>
Cc:     "David Miller" <davem@...emloft.net>,
        "Linux Kernel Network Developers" <netdev@...r.kernel.org>,
        "Jakub Kicinski" <kuba@...nel.org>,
        "Eric Dumazet" <edumazet@...gle.com>,
        "Paolo Abeni" <pabeni@...hat.com>, "Felix Fietkau" <nbd@....name>
Subject: Re: [RFC PATCH net-next 1/6] net: implement threaded-able napi poll loop support

Hello,

On Sat, Sep 26, 2020, at 01:50, Wei Wang wrote:
> On Fri, Sep 25, 2020 at 12:46 PM Hannes Frederic Sowa
> <hannes@...essinduktion.org> wrote:
> > The format string is only based on variable strings. To ease quick
> > grepping for napi threads with ps, I would propose using "napi-%s-%d" or
> > something similar to distinguish all threads created this way.
> >
> 
> Ack. Will add this in the next version.

I think the convention would be to use "napi/%s-%d".

> > Some other comments and questions:
> >
> > Back then my plan was to get this somewhat integrated with the
> > `threadirqs` kernel boot option because triggering the softirq from
> > threaded context (if this option is set) seemed wrong to me. Maybe in
> > theory the existing interrupt thread could already be used in this case.
> > This would also allow for fine tuning the corresponding threads as in
> > this patch series.
> >
> > Maybe the whole approach of threaded irqs plus the already existing
> > infrastructure could also be used for this series if it wouldn't be an
> > all or nothing opt-in based on the kernel cmd line parameter? napi would
> > then be able to just poll directly inline in the interrupt thread.
> >
> 
> I took a look at the current "threadirqs" implementation. From my
> understanding, the kthread used there handles the irq from the
> driver and needs a driver-specific thread_fn. It is not as
> generic as the napi layer, where a common napi_poll()-related
> function could be used as the thread handler. Or did I misunderstand
> your point?
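For reference, the asymmetry Wei describes looks roughly like this in a driver today (a rough kernel-style sketch, not buildable as-is; all names except request_threaded_irq() and IRQF_ONESHOT are hypothetical):

```c
/* Sketch: with threaded irqs, each driver supplies its own thread_fn,
 * whereas a napi-layer thread could run one generic napi_poll()-based
 * handler shared across drivers. */

static irqreturn_t example_hard_handler(int irq, void *data)
{
	/* quick hard-irq part: ack/mask the device, defer the rest */
	return IRQ_WAKE_THREAD;
}

static irqreturn_t example_thread_fn(int irq, void *data)
{
	/* driver-specific bottom half, run in its own kthread when
	 * booted with threadirqs (or when forced via IRQF_ONESHOT) */
	return IRQ_HANDLED;
}

static int example_setup_irq(int irq, void *priv)
{
	return request_threaded_irq(irq, example_hard_handler,
				    example_thread_fn, IRQF_ONESHOT,
				    "example-nic", priv);
}
```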

Based on my memories: we had napi_schedule & co being invoked inside
the threads, without touching any driver code, when we specified
threadirqs. But this would need a double check. The idea of the napi
threads came out of the observation that the threaded irq would merely
kick the net-rx softirq (thread). Maybe Paolo has better memories of
what we tried back then?

Thus the idea would be to add a flag, NAPI_INLINE, which could run
the napi loop from within the threaded irq handler directly and thus
build on top of the current irq management framework.
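Very roughly, such an inline path might look like the following (a non-buildable sketch only; NAPI_INLINE and the exact hooks are assumptions from this mail, not existing kernel API):

```c
/* Sketch: a NAPI_INLINE flag letting the threaded irq handler poll
 * directly instead of kicking the NET_RX softirq. */
static irqreturn_t example_irq_thread_fn(int irq, void *data)
{
	struct napi_struct *napi = data;
	int budget = 64;	/* illustrative budget */

	if (napi_inline_enabled(napi)) {
		/* poll inline in the irq thread's context */
		while (napi->poll(napi, budget) >= budget)
			cond_resched();
	} else {
		/* fall back to the existing softirq path */
		napi_schedule(napi);
	}
	return IRQ_HANDLED;
}
```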

This would require making the single-shot kernel boot parameter
configurable per device (and probably at run time). I have absolutely
no idea whether that's feasible or how complicated it would be, so it
might be a dead end.

> > The difference for those kthreads and the extra threads created here
> > would be that the fifo scheduling policy is set by default, and they
> > seem to automatically get steered to the appropriate CPUs via the
> > IRQTF_AFFINITY mechanism. Maybe this approach is useful here as well?
> >
> > I haven't had a look at the code in a while, thus my memories might
> > be wrong here.
> 
> Yes. Using a higher-priority thread policy and doing pinning could be
> beneficial in certain workloads. But I think this should be left to
> the user/admin to tune accordingly.

I agree in general, but if the common case necessarily requires setting
various scheduling options, it might still be worthwhile? Administrators
are free to change them later anyway. It might be the same argument that
led to the default scheduling parameters for the threaded irqs in the
first place, though I may be wrong here and they may have been added for
correctness.

Bye,
Hannes
