Date:   Wed, 7 Jul 2021 13:49:40 +0200
From:   Íñigo Huguet <ihuguet@...hat.com>
To:     Edward Cree <ecree.xilinx@...il.com>
Cc:     habetsm.xilinx@...il.com, "David S. Miller" <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>, ivan@...udflare.com,
        ast@...nel.org, daniel@...earbox.net, hawk@...nel.org,
        john.fastabend@...il.com, netdev@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/3] sfc: revert "reduce the number of requested xdp ev queues"

On Wed, Jul 7, 2021 at 1:23 PM Edward Cree <ecree.xilinx@...il.com> wrote:
> Should we then be using min(tx_per_ev, EFX_MAX_TXQ_PER_CHANNEL) in the
>  DIV_ROUND_UP?

That could be another possibility, but currently it will always result
in EFX_MAX_TXQ_PER_CHANNEL, because tx_per_ev will be 4 or 8 depending
on the model. Anyway, I will add this change in v2, in case either
constant changes in the future.

> And on line 184 probably we need to set efx->xdp_tx_per_channel to the
>  same thing, rather than blindly to EFX_MAX_TXQ_PER_CHANNEL as at
>  present — I suspect the issue you mention in patch #2 stemmed from
>  that.
> Note that if we are in fact hitting this limitation (i.e. if
>  tx_per_ev > EFX_MAX_TXQ_PER_CHANNEL), we could readily increase
>  EFX_MAX_TXQ_PER_CHANNEL at the cost of a little host memory, enabling
>  us to make more efficient use of our EVQs and thus retain XDP TX
>  support up to a higher number of CPUs.

Yes, that was a possibility I was thinking of as a long-term solution,
or even allocating the queues dynamically. Would that be a problem?
What's the reason for them being statically allocated? Also, what's
the reason for the channels being limited to 32? The hardware can be
configured to provide more than that, but the driver has this constant
limit.

Another question I have, thinking about the long-term solution: would
it be a problem to use the standard TX queues for XDP_TX/REDIRECT? At
least in the case that we're hitting the resource limits, XDP frames
could be enqueued to those queues. I think that taking netif_tx_lock,
or a per-queue lock, would avoid race conditions.
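
Roughly what I have in mind (just a sketch, not tested;
efx_get_fallback_tx_queue() and efx_enqueue_xdp_frame() are made-up
helpers to illustrate the idea):

	/* Fall back to a core TX queue for XDP_TX/REDIRECT when no
	 * dedicated XDP TX queue is available.
	 */
	static int efx_xdp_tx_on_core_queue(struct efx_nic *efx,
					    struct xdp_frame *xdpf)
	{
		unsigned int cpu = smp_processor_id();
		struct efx_tx_queue *tx_queue =
			efx_get_fallback_tx_queue(efx, cpu); /* hypothetical */
		struct netdev_queue *txq = tx_queue->core_txq;
		int rc;

		/* serialize against the regular xmit path on this queue */
		__netif_tx_lock(txq, cpu);
		rc = efx_enqueue_xdp_frame(tx_queue, xdpf); /* hypothetical */
		__netif_tx_unlock(txq);
		return rc;
	}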

In any case, these are two different things: one is fixing this bug as
soon as possible, and the other is designing and implementing the
long-term solution to the resource-shortage problem.

Regards
-- 
Íñigo Huguet
