Message-ID: <e37939313fc24658a8a6b860dcea506e@AcuMS.aculab.com>
Date:   Sat, 3 Oct 2020 09:57:09 +0000
From:   David Laight <David.Laight@...LAB.COM>
To:     'Wei Wang' <weiwan@...gle.com>,
        "David S . Miller" <davem@...emloft.net>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>
CC:     Eric Dumazet <edumazet@...gle.com>,
        Jakub Kicinski <kuba@...nel.org>,
        Paolo Abeni <pabeni@...hat.com>,
        Hannes Frederic Sowa <hannes@...essinduktion.org>,
        Felix Fietkau <nbd@....name>
Subject: RE: [PATCH net-next v2 0/5] implement kthread based napi poll

From: Wei Wang
> Sent: 02 October 2020 23:25
> 
> The idea of moving the napi poll process out of softirq context to a
> kernel thread based context is not new.
> Paolo Abeni and Hannes Frederic Sowa proposed patches to move napi
> poll to kthreads back in 2016. And Felix Fietkau also proposed
> patches with a similar idea, using a workqueue to process napi poll,
> just a few weeks ago.

What default scheduler priority are you planning to use?

The current 'softint' is (effectively) slightly higher priority
than the highest RT priority.

I think you need to use a 'middle' priority RT thread so that
applications can decide whether they need to be higher or lower
priority than the network code.
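
Something like the sketch below would let an application place
itself above or below a mid-priority napi kthread.
It is only an illustration; the value 50 and the helper name are
made up, not anything from the patch set:

/* Sketch only: put the calling thread at an assumed "middle"
 * SCHED_FIFO priority (50 here is arbitrary) so it runs above
 * or below a mid-priority napi kthread as required.
 */
#include <sched.h>
#include <stdio.h>

static int set_rt_priority(int prio)
{
	struct sched_param sp = { .sched_priority = prio };

	/* SCHED_FIFO priorities range from 1 (lowest) to 99 (highest). */
	if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
		perror("sched_setscheduler");
		return -1;
	}
	return 0;
}

/* e.g. set_rt_priority(60) to run above a kthread at 50,
 * or set_rt_priority(40) to run below it. */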

But then you hit the problem that the scheduler gives RT
processes a very 'sticky' cpu affinity.
IIRC they don't ever get 'stolen' by an idle cpu, so they only
migrate when the scheduler for the cpu they last ran on
decides to run something of a higher priority.
This is problematic if a low priority process is looping
in kernel space somewhere (without a cond_resched()).
(I've been running ftrace...)
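
To be clear about the cond_resched() point, I mean something like
the hypothetical loop below: on a non-preemptible kernel the woken
RT thread only gets a chance to run at the cond_resched().
'struct item' and process_one() are made-up names, just a sketch:

/* Hypothetical example of the missing cond_resched(): on a
 * non-preemptible kernel a SCHED_OTHER task spinning in a loop
 * like this keeps a woken RT thread (e.g. a napi kthread bound
 * to this cpu) waiting until the loop finishes.
 */
#include <linux/sched.h>

static void process_many_items(struct item *items, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		process_one(&items[i]);
		cond_resched();	/* give higher priority tasks a chance */
	}
}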

Given that the napi cpu cycles have to happen sometime,
the biggest problem I found with the current softint
implementation is that a hardware interrupt can happen
while an application is holding a (user space) mutex.
This will block other application threads from acquiring
the mutex until not only the hardware interrupt
completes, but also all the associated softint (typically
napi and rcu) processing has completed.
This can take a while!
Moving the 'softint' processing to a separate thread
will allow the interrupted process to release the mutex
and the other application threads to continue.
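
The user-space pattern I mean is just the usual short critical
section; a sketch (plain pthreads, nothing napi specific):

/* Sketch of the pattern described above: the critical section is
 * tiny, but if the holder is interrupted here and the softint
 * (napi/rcu) work runs before it resumes, every thread queued on
 * the mutex stalls for that whole time.
 */
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_counter;

static void *worker(void *arg)
{
	(void)arg;
	for (;;) {
		pthread_mutex_lock(&lock);
		shared_counter++;	/* short critical section */
		pthread_mutex_unlock(&lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, worker, NULL);
	pthread_create(&t2, NULL, worker, NULL);
	pthread_join(t1, NULL);
	return 0;
}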

I guess the downside of using a thread is that the
data needed is likely to be in the wrong cache.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
