Message-ID: <33cde8903dbe09a8abda1cd2ae7a9d3fdc2bc5e8.camel@kernel.org>
Date: Mon, 02 Nov 2020 15:27:51 -0800
From: Saeed Mahameed <saeed@...nel.org>
To: Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>
Cc: netdev@...r.kernel.org, Jonathan Corbet <corbet@....net>,
"David S. Miller" <davem@...emloft.net>,
Shuah Khan <shuah@...nel.org>, linux-doc@...r.kernel.org,
linux-kselftest@...r.kernel.org,
Marcelo Tosatti <mtosatti@...hat.com>
Subject: Re: [PATCH net-next v2 0/3] net: introduce rps_default_mask
On Mon, 2020-11-02 at 14:54 -0800, Jakub Kicinski wrote:
> On Fri, 30 Oct 2020 12:16:00 +0100 Paolo Abeni wrote:
> > Real-time setups try hard to ensure proper isolation between time
> > critical applications and e.g. network processing performed by the
> > network stack in softirq and RPS is used to move the softirq
> > activity away from the isolated core.
> >
> > If the network configuration is dynamic, with netns and devices
> > routinely created at run-time, enforcing the correct RPS setting
> > on each newly created device without allowing transient bad
> > configurations becomes complex.
> >
> > This series tries to address the above by introducing a new
> > sysctl knob: rps_default_mask. The new sysctl entry allows
> > configuring a system-wide RPS mask, enforced from receive
> > queue creation time without any further per-device configuration
> > required.
> >
The whole thing can be replaced with a user-space daemon/script that
monitors all newly created devices and assigns to them whatever RPS
mask you like (call it the default).
So why do we need this special logic in the kernel?
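For reference, a minimal sketch of such a script (the mask value, the
`apply_rps_mask` helper name and the event-parsing details are my
assumptions, not anything from the patch set):

```shell
#!/bin/sh
# Sketch: apply a "default" RPS mask to every receive queue of a device,
# and optionally watch for new devices. MASK/SYSFS values are assumptions.
MASK=${MASK:-f0}            # hex CPU mask, e.g. CPUs 4-7
SYSFS=${SYSFS:-/sys/class/net}

apply_rps_mask() {
    dev=$1
    for q in "$SYSFS/$dev"/queues/rx-*/rps_cpus; do
        # Skip when the glob did not match any queue file.
        [ -e "$q" ] && echo "$MASK" > "$q"
    done
}

# Daemon part, run only when invoked as "script monitor" so the helper
# above can also be sourced on its own. "ip monitor link" prints lines
# like "5: eth1: <BROADCAST,...> ..." for link events.
if [ "$1" = "monitor" ]; then
    ip monitor link | while read -r line; do
        dev=$(printf '%s\n' "$line" | sed -n 's/^[0-9][0-9]*: \([^:@ ]*\).*/\1/p')
        [ -n "$dev" ] && apply_rps_mask "$dev"
    done
fi
```

The obvious downside, and presumably part of Paolo's point, is the window
between device creation and the daemon reacting to the event.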
I am not sure about this, but if the per-queue RPS sysfs entries are
available before the netdev is brought up, then you can also use udevd
to assign the RPS masks before the devices are even up, so you would
avoid the race conditions you described, which, to be honest, are not
really clear to me.
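Concretely, the udev variant could look roughly like this (rule path,
rule name and mask are assumptions; `%k` is udev's substitution for the
kernel device name):

```
# /etc/udev/rules.d/90-rps-default.rules (hypothetical path)
# On device add, write an assumed default mask to each rx queue.
SUBSYSTEM=="net", ACTION=="add", \
    RUN+="/bin/sh -c 'for q in /sys/class/net/%k/queues/rx-*/rps_cpus; do echo f0 > $q; done'"
```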
> > Additionally, a simple self-test is introduced to check the
> > rps_default_mask behavior.
>
> RPS is disabled by default, the processing is going to happen wherever
> the IRQ is mapped, and one would hope that the IRQ is not mapped to the
> core where the critical processing runs.
>
> Would you mind elaborating further on the use case?