Date:   Thu, 8 Jul 2021 14:14:13 +0200
From:   Íñigo Huguet <ihuguet@...hat.com>
To:     Íñigo Huguet <ihuguet@...hat.com>,
        Edward Cree <ecree.xilinx@...il.com>,
        "David S. Miller" <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>, ivan@...udflare.com,
        ast@...nel.org, daniel@...earbox.net, hawk@...nel.org,
        john.fastabend@...il.com, netdev@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/3] sfc: revert "reduce the number of requested xdp ev queues"

On Wed, Jul 7, 2021 at 3:01 PM Martin Habets <habetsm.xilinx@...il.com> wrote:
> > Another question I have, thinking about the long term solution: would
> > it be a problem to use the standard TX queues for XDP_TX/REDIRECT? At
> > least in the case that we're hitting the resource limits, I think
> > that they could be enqueued to these queues. I think that just taking
> > netif_tx_lock, or a per-queue lock, would avoid race conditions.
>
> We considered this but did not want normal traffic to get delayed for
> XDP traffic. The perceived performance drop on a normal queue would
> be tricky to diagnose, and the only way to prevent it would be to
> disable XDP on the interface altogether. There is no way to do the
> latter per queue, and we felt the "solution" of disabling XDP
> was not a good way forward.
> Of course our design of this was all done several years ago.

In my opinion, there is no reason to make that distinction between
normal traffic and XDP traffic. Traffic redirected with XDP_TX or
XDP_REDIRECT is traffic that the user has chosen to redirect that way,
pushing the work further down the stack. Without XDP, this traffic
would have gone up the stack to userspace, or at least to the
firewall, and then been redirected, passed back to the network stack
and added to the normal TX queues.
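
Just to be clear about the kind of traffic we are talking about: a
minimal XDP program like the one below is the user explicitly asking
for packets to be bounced straight back out. This is only an
illustration, not code from the sfc patches.

/* Minimal illustrative XDP program: every packet is retransmitted
 * out of the same interface with XDP_TX. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_tx_all(struct xdp_md *ctx)
{
	return XDP_TX;
}

char _license[] SEC("license") = "GPL";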

If the user wants to prevent XDP traffic from mixing with normal
traffic, it would be enough not to attach an XDP program to the
interface, or not to use XDP_TX/REDIRECT in it. Maybe I'm not
understanding what you mean here.

Anyway, if you think that keeping the XDP TX queues separate is the
way to go, that's OK, but my proposal is to share the normal TX queues
at least in the cases where dedicated queues cannot be allocated. As
you say, the performance drop would be tricky to measure, if there is
any, but even with separate queues they are still competing for CPU,
PCI bandwidth, network bandwidth...
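
To make the idea concrete, something along these lines is what I have
in mind. It is only a sketch: struct my_tx_queue and
my_tx_enqueue_xdp_frame() are made-up stand-ins for the driver's real
queue state and descriptor-posting routine, and it assumes it is
called from NAPI context, where preemption is already disabled.

#include <linux/netdevice.h>
#include <net/xdp.h>

/* Stand-in for the driver's real TX queue state. */
struct my_tx_queue {
	unsigned int index;	/* index of the corresponding netdev TX queue */
};

/* Stand-in for the driver's descriptor-posting routine. */
static int my_tx_enqueue_xdp_frame(struct my_tx_queue *txq,
				   struct xdp_frame *xdpf);

/* Fallback XDP_TX/REDIRECT transmit on a TX queue shared with the
 * stack: take the same per-queue lock the stack's xmit path takes. */
static int my_xdp_tx_on_shared_queue(struct net_device *dev,
				     struct my_tx_queue *txq,
				     struct xdp_frame *xdpf)
{
	struct netdev_queue *ndq = netdev_get_tx_queue(dev, txq->index);
	int rc;

	__netif_tx_lock(ndq, smp_processor_id());
	rc = my_tx_enqueue_xdp_frame(txq, xdpf);
	__netif_tx_unlock(ndq);

	return rc;
}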

The fact is that the situation right now is this:
- Many times (or almost always with modern servers' processors)
XDP_TX/REDIRECT doesn't work at all.
- The only workaround is reducing the number of normal channels to
free up resources for XDP, but that is a much bigger performance drop
for normal traffic than sharing queues with XDP, IMHO.

Increasing the maximum number of channels and queues, or even making
them virtually unlimited, would be very good, I think, because people
who know how to configure the hardware could take advantage of it, but
there will always be situations where resources run short (some rough
numbers below):
- Who knows how many cores we will be using 5 years from now?
- VFs normally have fewer resources available: 8 MSI-X vectors by default
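
To put rough numbers on that last point: only the 8-vector VF default
comes from above; the CPU count and the one-vector-per-channel,
one-XDP-TX-queue-per-CPU assumptions are mine, purely for
illustration.

/* Illustrative arithmetic only, not real driver code. Assumes one
 * MSI-X vector per channel and one dedicated XDP TX queue per CPU. */
#include <stdio.h>

int main(void)
{
	int vf_msix_vectors = 8;   /* VF default mentioned above */
	int online_cpus     = 16;  /* assumed CPU count */

	/* Normal RX/TX channels alone can already use every vector... */
	int normal_channels = vf_msix_vectors < online_cpus ?
			      vf_msix_vectors : online_cpus;

	/* ...so nothing is left for dedicated per-CPU XDP TX queues. */
	int vectors_left_for_xdp = vf_msix_vectors - normal_channels;

	printf("CPUs: %d, normal channels: %d, vectors left for XDP: %d\n",
	       online_cpus, normal_channels, vectors_left_for_xdp);
	return 0;
}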

Given some time, I can try to prepare some patches with these
changes, if you agree.

Regards
-- 
Íñigo Huguet
