Date:   Thu, 15 Sep 2022 08:45:49 +0000
From:   "Nambiar, Amritha" <amritha.nambiar@...el.com>
To:     Alexander Duyck <alexander.duyck@...il.com>
CC:     "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "kuba@...nel.org" <kuba@...nel.org>,
        "jhs@...atatu.com" <jhs@...atatu.com>,
        "jiri@...nulli.us" <jiri@...nulli.us>,
        "xiyou.wangcong@...il.com" <xiyou.wangcong@...il.com>,
        "Gomes, Vinicius" <vinicius.gomes@...el.com>,
        "Samudrala, Sridhar" <sridhar.samudrala@...el.com>
Subject: RE: [net-next PATCH v2 0/4] Extend action skbedit to RX queue mapping

> -----Original Message-----
> From: Alexander Duyck <alexander.duyck@...il.com>
> Sent: Friday, September 9, 2022 11:35 AM
> To: Nambiar, Amritha <amritha.nambiar@...el.com>
> Cc: netdev@...r.kernel.org; kuba@...nel.org; jhs@...atatu.com;
> jiri@...nulli.us; xiyou.wangcong@...il.com; Gomes, Vinicius
> <vinicius.gomes@...el.com>; Samudrala, Sridhar
> <sridhar.samudrala@...el.com>
> Subject: Re: [net-next PATCH v2 0/4] Extend action skbedit to RX queue
> mapping
> 
> On Fri, Sep 9, 2022 at 2:18 AM Nambiar, Amritha
> <amritha.nambiar@...el.com> wrote:
> >
> > > -----Original Message-----
> > > From: Alexander Duyck <alexander.duyck@...il.com>
> > > Sent: Thursday, September 8, 2022 8:28 AM
> > > To: Nambiar, Amritha <amritha.nambiar@...el.com>
> > > Cc: netdev@...r.kernel.org; kuba@...nel.org; jhs@...atatu.com;
> > > jiri@...nulli.us; xiyou.wangcong@...il.com; Gomes, Vinicius
> > > <vinicius.gomes@...el.com>; Samudrala, Sridhar
> > > <sridhar.samudrala@...el.com>
> > > Subject: Re: [net-next PATCH v2 0/4] Extend action skbedit to RX queue
> > > mapping
> > >
> > > On Wed, Sep 7, 2022 at 6:14 PM Amritha Nambiar
> > > <amritha.nambiar@...el.com> wrote:
> > > >
> > > > Based on the discussion on
> > > > https://lore.kernel.org/netdev/20220429171717.5b0b2a81@kernel.org/,
> > > > the following series extends skbedit tc action to RX queue mapping.
> > > > Currently, the skbedit action in tc allows overriding the transmit queue.
> > > > Extending this ability of the skbedit action supports selecting the
> > > > receive queue for incoming packets. Offload of this action is added for
> > > > the receive side, and the ice driver is enabled to offload this type of
> > > > filter into hardware so packets are accepted on the device's receive queue.
> > > >
> > > > v2: Added documentation in Documentation/networking
> > > >
> > > > ---
> > > >
> > > > Amritha Nambiar (4):
> > > >       act_skbedit: Add support for action skbedit RX queue mapping
> > > >       act_skbedit: Offload skbedit queue mapping for receive queue
> > > >       ice: Enable RX queue selection using skbedit action
> > > >       Documentation: networking: TC queue based filtering
> > >
> > > I don't think skbedit is the right thing to be updating for this. In
> > > the case of Tx we were using it because at the time we stored the
> > > socket's Tx queue in the skb, so it made sense to edit it there if we
> > > wanted to tweak things before it got to the qdisc layer. However, it
> > > didn't have a direct impact on the hardware and only really affected
> > > the software routing in the device, which ultimately determined which
> > > hardware queue and qdisc were selected.
> > >
> > > The problem with editing the receive queue is that the hardware
> > > offloaded case versus the software offloaded can have very different
> > > behaviors. I wonder if this wouldn't be better served by being an
> >
> > Could you please explain how the hardware offload and software cases
> > behave differently in the skbedit case? From Jakub's suggestion on
> > https://lore.kernel.org/netdev/20220503084732.363b89cc@kernel.org/,
> > it looked like the skbedit action fits better for aligning the hardware
> > and software descriptions of RX queue offload (considering the skb
> > metadata remains the same in the offload vs. no-offload case).
> 
> So specifically my concern is RPS. The problem is RPS takes place
> before your TC rule would be applied in the software case, but after
> it has been applied in the hardware case. As a result, the behavior
> will be different for one versus the other. With the redirect action,
> it will pull the packet out of the Rx pipeline and reinsert it, so
> that RPS is applied to the packet and it is received on the expected
> CPUs.
> 

Okay, so I understand that without HW offload, the SW behavior would
not align for RPS, i.e., the RPS CPU would be derived from the queue
already selected by HW (RSS etc.), which may not match the queue
selected by the SW TC rule. I see your point that the solution would be
to reinsert the packet after updating the queue. But, as I look more
into this, I still have a few concerns.

IIUC, we may be looking at a potential TC rule like the one below:
tc filter add dev ethX ingress ... \
action mirred ingress redirect dev ethX rxqueue <rx_qid>

It looks to me that this configuration could result in loops of
recursive act_mirred calls. Since the redirection is from ingress
to ingress on the same device, when the packet is reinserted into the
RX pipeline of the same device, RPS and tc classification run again,
the tc filter with the mirred action matches again, and the packet is
redirected and reinserted once more. act_mirred keeps a per-CPU counter
of recursive calls for the action and drops the packet when the limit
is reached.
If this is a valid configuration, I cannot find any code that uses,
for example, a combination of skb->redirected and skb->from_ingress to
detect and prevent recursive classification (i.e., further execution of
the TC mirred redirect action).

Also, since reinserting the packet after updating the queue would fix
the RPS inconsistency, can this be done from the skbedit action instead
of mirred redirect? So, if the skbedit action is used for Rx queue
selection, maybe this sequence helps:

RPS on RX q1 -> TC action skbedit RX q2 ->
always reinsert if action skbedit is on RX -> RPS on RX q2 ->
stop further execution of TC action RX skbedit
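
For comparison, if the skbedit extension from this series is used
directly, I imagine the rule would look roughly like the below (the
exact keywords are my assumption based on the existing TX-side skbedit
syntax, reusing queue_mapping, with skip_sw for the hardware-offload
case; the series may spell this differently):

tc filter add dev ethX ingress ... skip_sw \
action skbedit queue_mapping <rx_qid>

With the reinsert-once behavior sketched above, this would avoid the
ingress-to-ingress mirred loop entirely while still letting RPS run
against the updated RX queue.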

> > > extension of the mirred ingress redirect action which is already used
> > > for multiple hardware offloads as I recall.
> > >
> > > In this case you would want to be redirecting packets received on a
> > > port to being received on a specific queue on that port. By using the
> > > redirect action it would take the packet out of the receive path and
> > > reinsert it, being able to account for anything such as the RPS
> > > configuration on the device so the behavior would be closer to what
> > > the hardware offloaded behavior would be.
> >
> > Wouldn't this be overkill, as we only want to accept packets into a
> > predetermined queue? IIUC, the mirred redirect action typically moves
> > packets from one interface to another; the filter is added on an
> > interface different from the destination interface. In our case, with
> > the destination interface being the same, I don't understand the need
> > for a loopback. Also, WRT RPS, I am not sure I understand the impact
> > here. In hardware, once the offloaded filter executes to select the
> > queue, RSS does not run. In software, if RPS executes before
> > sch_handle_ingress(), wouldn't any tc actions (mirred redirect or
> > skbedit overriding the queue) behave in a similar way?
> 
> The problem is that RPS != RSS. You can use the two together to spread
> work out over a greater set of queues. So, for example, in a NUMA
> system with multiple sockets/nodes you might use RSS to split the work
> up into per-node queue(s), and then use RPS to spread the work across
> the CPUs within that node. If you pick a packet up from one device and
> redirect it via the mirred action, RPS is applied as though the packet
> was received on the device, so the RPS queue would be correct assuming
> you updated the queue. That is what I am looking for. What this patch
> is doing is creating a situation where the effect is very different
> between the hardware and software versions of things, which would
> likely break things for a use case such as this.
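
For concreteness, the split described above is typically set up by
steering flows into per-node RX queues with RSS/ethtool and then
fanning each queue out to that node's CPUs with per-queue rps_cpus
masks, e.g. (interface name and CPU masks are illustrative):

# node 0: rx-0 fans out to CPUs 0-3
echo f > /sys/class/net/ethX/queues/rx-0/rps_cpus
# node 1: rx-1 fans out to CPUs 4-7
echo f0 > /sys/class/net/ethX/queues/rx-1/rps_cpus

In such a setup, the CPU that ends up handling the packet depends on
which RX queue RPS sees; if the queue is only changed after RPS has
already run (the pure software skbedit case), the packet can land on a
CPU belonging to the wrong node.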
