Date:   Wed, 7 Jul 2021 14:01:40 +0100
From:   Martin Habets <habetsm.xilinx@...il.com>
To:     Íñigo Huguet <ihuguet@...hat.com>
Cc:     Edward Cree <ecree.xilinx@...il.com>,
        "David S. Miller" <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>, ivan@...udflare.com,
        ast@...nel.org, daniel@...earbox.net, hawk@...nel.org,
        john.fastabend@...il.com, netdev@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/3] sfc: revert "reduce the number of requested xdp ev
 queues"

On Wed, Jul 07, 2021 at 01:49:40PM +0200, Íñigo Huguet wrote:
> > And on line 184 probably we need to set efx->xdp_tx_per_channel to the
> >  same thing, rather than blindly to EFX_MAX_TXQ_PER_CHANNEL as at
> >  present — I suspect the issue you mention in patch #2 stemmed from
> >  that.
> > Note that if we are in fact hitting this limitation (i.e. if
> >  tx_per_ev > EFX_MAX_TXQ_PER_CHANNEL), we could readily increase
> >  EFX_MAX_TXQ_PER_CHANNEL at the cost of a little host memory, enabling
> >  us to make more efficient use of our EVQs and thus retain XDP TX
> >  support up to a higher number of CPUs.
> 
> Yes, that was a possibility I was thinking of as a long-term solution,
> or even allocating the queues dynamically. Would that be a problem?
> What's the reason for them being statically allocated? Also, what's
> the reason for the channels being limited to 32? The hardware can be
> configured to provide more than that, but the driver has this constant
> limit.

The static defines in this area are historic only. We have wanted to
remove them for a number of years. With newer hardware the reasons to
do so are ever increasing, so we are more actively working on this now.
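
For reference, a minimal stand-alone sketch of the arithmetic behind the
clamping suggested in the quote above: how many TX queues one event queue
can service (tx_per_ev), how many XDP event queues that implies for a given
CPU count, and how xdp_tx_per_channel could be clamped rather than set
blindly to EFX_MAX_TXQ_PER_CHANNEL. The constant values and helpers below
are illustrative placeholders, not the actual sfc code.

/*
 * Illustrative user-space sketch only; constants are placeholders and
 * do not match the values used by the sfc driver.
 */
#include <stdio.h>

#define EFX_MAX_TXQ_PER_CHANNEL 4      /* placeholder value */
#define MAX_EVQ_ENTRIES         16384  /* placeholder value */
#define TXQ_ENTRIES             2048   /* placeholder value */

static unsigned int div_round_up(unsigned int n, unsigned int d)
{
	return (n + d - 1) / d;
}

static unsigned int min_u(unsigned int a, unsigned int b)
{
	return a < b ? a : b;
}

int main(void)
{
	unsigned int n_cpus = 64;	/* stand-in for num_possible_cpus() */

	/* How many TX queues' worth of events fit in one event queue. */
	unsigned int tx_per_ev = MAX_EVQ_ENTRIES / TXQ_ENTRIES;

	/* One XDP TX queue per CPU, packed tx_per_ev to an event queue. */
	unsigned int n_xdp_tx = n_cpus;
	unsigned int n_xdp_ev = div_round_up(n_xdp_tx, tx_per_ev);

	/*
	 * Clamp the per-channel figure instead of assuming
	 * EFX_MAX_TXQ_PER_CHANNEL, so it stays consistent with what one
	 * event queue can actually service.
	 */
	unsigned int xdp_tx_per_channel =
		min_u(tx_per_ev, EFX_MAX_TXQ_PER_CHANNEL);

	printf("tx_per_ev=%u n_xdp_ev=%u xdp_tx_per_channel=%u\n",
	       tx_per_ev, n_xdp_ev, xdp_tx_per_channel);
	return 0;
}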

> Another question I have, thinking about the long-term solution: would
> it be a problem to use the standard TX queues for XDP_TX/REDIRECT? At
> least in the case that we're hitting the resource limits, XDP frames
> could be enqueued to those queues. I think that just taking
> netif_tx_lock, or a per-queue lock, would avoid race conditions.

We considered this but did not want normal traffic to get delayed for
XDP traffic. The perceived performance drop on a normal queue would
be tricky to diagnose, and the only way to prevent it would be to
disable XDP on the interface altogether. There is no way to do that
per queue, and we felt the "solution" of disabling XDP was not a good
way forward.
Of course, this design was all done several years ago.
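
For context, a rough sketch of the locking idea described above: an XDP
transmit path that shares a normal TX queue would take that queue's xmit
lock, the same lock the stack uses around its own transmits. The
queue-selection and enqueue helpers here are hypothetical, not functions
from the sfc driver, and this is a sketch of the idea rather than a
proposed patch.

/*
 * Rough sketch only, assuming a shared normal TX queue is used for
 * XDP_TX/REDIRECT.  efx_xdp_queue_index() and efx_enqueue_xdp_frame()
 * are hypothetical helpers, not sfc functions.  Assumes the caller runs
 * with BHs disabled, as the .ndo_xdp_xmit path does.
 */
#include <linux/netdevice.h>
#include <net/xdp.h>

static int xdp_xmit_on_shared_txq(struct net_device *dev,
				  struct xdp_frame *xdpf)
{
	unsigned int qidx = efx_xdp_queue_index(dev);	/* hypothetical */
	struct netdev_queue *txq = netdev_get_tx_queue(dev, qidx);
	int ret;

	/* Serialise against the stack's own transmits on this queue. */
	__netif_tx_lock(txq, smp_processor_id());
	ret = efx_enqueue_xdp_frame(dev, qidx, xdpf);	/* hypothetical */
	__netif_tx_unlock(txq);

	return ret;
}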

Regards,
Martin Habets
