Date:   Sun, 13 Dec 2020 10:44:56 -0800
From:   Ivan Babrou <ivan@...udflare.com>
To:     Ivan Babrou <ivan@...udflare.com>,
        Linux Kernel Network Developers <netdev@...r.kernel.org>,
        kernel-team <kernel-team@...udflare.com>,
        Edward Cree <ecree.xilinx@...il.com>,
        "David S. Miller" <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>
Subject: Re: [PATCH net-next] sfc: backport XDP EV queue sharing from the
 out-of-tree driver

On Sun, Dec 13, 2020 at 4:23 AM Martin Habets <habetsm.xilinx@...il.com> wrote:
>
> On Thu, Dec 10, 2020 at 04:18:53PM -0800, Ivan Babrou wrote:
> > Queue sharing behaviour already exists in the out-of-tree sfc driver,
> > available under xdp_alloc_tx_resources module parameter.
>
> This comment is not relevant for in-tree patches. I'd also like to
> make clear that we never intend to upstream any module parameters.

Would the following commit message be acceptable?

sfc: reduce the number of requested xdp ev queues

Without this change the driver tries to allocate too many queues,
exceeding the number of available MSI-X interrupts on machines
with many logical CPUs and default adapter settings:

Insufficient resources for 12 XDP event queues (24 other channels, max 32)

This in turn triggers EINVAL on XDP processing:

sfc 0000:86:00.0 ext0: XDP TX failed (-22)
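
To make the arithmetic concrete, here is a small standalone sketch of
the before/after event queue calculation. The constants below are
illustrative stand-ins I picked for the example, not the real EFX_*
values, which depend on the driver and adapter configuration:

/* Sketch of the XDP event queue sizing before and after the patch. */
#include <stdio.h>

#define DIV_ROUND_UP(n, d)  (((n) + (d) - 1) / (d))

#define MAX_TXQ_PER_CHANNEL 4     /* assumed stand-in for EFX_MAX_TXQ_PER_CHANNEL */
#define MAX_EVQ_SIZE        8192  /* assumed stand-in for EFX_MAX_EVQ_SIZE */
#define TXQ_MAX_ENT         1024  /* assumed stand-in for EFX_TXQ_MAX_ENT(efx) */

int main(void)
{
        unsigned int n_xdp_tx = 48; /* e.g. one XDP TX queue per possible CPU */

        /* Before: one event queue per MAX_TXQ_PER_CHANNEL TX queues. */
        unsigned int old_ev = DIV_ROUND_UP(n_xdp_tx, MAX_TXQ_PER_CHANNEL);

        /* After: an event queue sized for several full TX queues can serve
         * more of them, so fewer event queues (and MSI-X vectors) are needed.
         */
        unsigned int tx_per_ev = MAX_EVQ_SIZE / TXQ_MAX_ENT;
        unsigned int new_ev = DIV_ROUND_UP(n_xdp_tx, tx_per_ev);

        printf("XDP event queues: %u before, %u after\n", old_ev, new_ev);
        return 0;
}

With these example numbers a 48-CPU machine drops from 12 XDP event
queues to 6, roughly matching the failing case above.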

> > This avoids the following issue on machines with many cpus:
> >
> > Insufficient resources for 12 XDP event queues (24 other channels, max 32)
> >
> > Which in turn triggers EINVAL on XDP processing:
> >
> > sfc 0000:86:00.0 ext0: XDP TX failed (-22)
>
> The code changes themselves are good.
> The real limit that is hit here is with the number of MSI-X interrupts.
> Reducing the number of event queues needed also reduces the number of
> interrupts required, so this is a good thing.
> Another way to get around this issue is to increase the number of
> MSI-X interrupts allowed by the NIC using the sfboot tool.

I've tried that, but on 5.10-rc7 with the in-tree driver both ethtool -l
and sfboot fail to work with the sfc adapter for some reason.

The docs for the setting itself say you need to contact support
to figure out the right values to use and to make sure it works properly.

What is your overall verdict on the patch? Should it go into the kernel,
or should users change the msix-limit configuration instead? That
configuration change also requires breaking PCIe lockdown measures,
which is why I'd prefer things to work out of the box.
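
For completeness, the sfboot route you suggest would look roughly like
the following; this is from memory of the Solarflare utilities
documentation, so the exact option syntax and a safe limit value for
our adapters would still need to be checked against the sfboot
reference:

sfboot --adapter=<adapter> msix-limit=64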

Thanks!

>
> Best regards,
> Martin
>
> > Signed-off-by: Ivan Babrou <ivan@...udflare.com>
> > ---
> >  drivers/net/ethernet/sfc/efx_channels.c | 6 ++++--
> >  1 file changed, 4 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/sfc/efx_channels.c b/drivers/net/ethernet/sfc/efx_channels.c
> > index a4a626e9cd9a..1bfeee283ea9 100644
> > --- a/drivers/net/ethernet/sfc/efx_channels.c
> > +++ b/drivers/net/ethernet/sfc/efx_channels.c
> > @@ -17,6 +17,7 @@
> >  #include "rx_common.h"
> >  #include "nic.h"
> >  #include "sriov.h"
> > +#include "workarounds.h"
> >
> >  /* This is the first interrupt mode to try out of:
> >   * 0 => MSI-X
> > @@ -137,6 +138,7 @@ static int efx_allocate_msix_channels(struct efx_nic *efx,
> >  {
> >   unsigned int n_channels = parallelism;
> >   int vec_count;
> > + int tx_per_ev;
> >   int n_xdp_tx;
> >   int n_xdp_ev;
> >
> > @@ -149,9 +151,9 @@ static int efx_allocate_msix_channels(struct efx_nic *efx,
> >   * multiple tx queues, assuming tx and ev queues are both
> >   * maximum size.
> >   */
> > -
> > + tx_per_ev = EFX_MAX_EVQ_SIZE / EFX_TXQ_MAX_ENT(efx);
> >   n_xdp_tx = num_possible_cpus();
> > - n_xdp_ev = DIV_ROUND_UP(n_xdp_tx, EFX_MAX_TXQ_PER_CHANNEL);
> > + n_xdp_ev = DIV_ROUND_UP(n_xdp_tx, tx_per_ev);
> >
> >   vec_count = pci_msix_vec_count(efx->pci_dev);
> >   if (vec_count < 0)
> > --
> > 2.29.2
>
> --
> Martin Habets <habetsm.xilinx@...il.com>
