Message-ID: <5a8f5032-f5fb-aa6d-9366-d2c85a761310@iogearbox.net>
Date: Fri, 10 Aug 2018 16:10:20 +0200
From: Daniel Borkmann <daniel@...earbox.net>
To: Jesper Dangaard Brouer <brouer@...hat.com>, netdev@...r.kernel.org
Cc: victor@...iniac.net, eric@...it.org,
Daniel Borkmann <borkmann@...earbox.net>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
jhsiao@...hat.com
Subject: Re: [bpf-next V2 PATCH 0/2] Implement sample code for XDP cpumap IP-pair load-balancing
On 08/10/2018 02:02 PM, Jesper Dangaard Brouer wrote:
> Background: cpumap moves the SKB allocation out of the driver code;
> it instead allocates the SKB on the remote CPU and invokes the regular
> kernel network stack with the newly allocated SKB.
>
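For reference, a minimal sketch of the cpumap redirect pattern in an XDP
program (illustrative only, not the patchset's actual code; the map sizing
and the fixed CPU index are placeholders):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* cpumap: key = CPU id, value = queue size for that CPU's kthread */
struct {
	__uint(type, BPF_MAP_TYPE_CPUMAP);
	__uint(key_size, sizeof(__u32));
	__uint(value_size, sizeof(__u32));
	__uint(max_entries, 64);
} cpu_map SEC(".maps");

SEC("xdp")
int xdp_redirect_to_cpu(struct xdp_md *ctx)
{
	__u32 cpu = 0;	/* a real program derives this from a flow hash */

	/* Enqueue the frame to the chosen CPU; the SKB is built there */
	return bpf_redirect_map(&cpu_map, cpu, 0);
}

char _license[] SEC("license") = "GPL";
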
> The idea behind the XDP CPU redirect feature is to use XDP as a
> load-balancer step in front of the regular kernel network stack. But the
> current sample code does not provide a good example of this. Part of
> the reason is that I implemented this as part of the Suricata XDP
> load-balancer.
>
> Given that this is the most frequent feature request I get, this patchset
> implements the same XDP load-balancing as Suricata does: a
> symmetric hash based on the IP-pair + L4-protocol.
>
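A sketch of what such a symmetric hash can look like (the constants and
helper names here are illustrative; the actual hash Suricata and the sample
use may differ):

#include <linux/types.h>

/* Commutative combine of the addresses makes the hash direction-
 * independent: (saddr, daddr) and (daddr, saddr) pick the same CPU.
 */
static __u32 symmetric_flow_hash(__u32 saddr, __u32 daddr, __u8 proto)
{
	__u32 h = saddr + daddr;	/* order-independent combine */

	h ^= proto;			/* separate TCP/UDP/... flows */
	h *= 0x61C88647;		/* arbitrary multiplicative mix */
	return h;
}

/* Spread the hash over the set of CPUs supplied via --cpu */
static __u32 pick_cpu(__u32 hash, __u32 num_cpus)
{
	return hash % num_cpus;
}
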
> The expected setup for the use-case is to reduce the number of NIC RX
> queues via ethtool (as XDP can handle more packets per core), and via
> smp_affinity assign these RX queues to a set of CPUs that will be
> handling RX packets. The CPUs that run the regular network stack are
> supplied to the sample xdp_redirect_cpu tool by specifying
> the --cpu option multiple times on the cmdline.
>
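To make the --cpu plumbing concrete, a hypothetical userspace fragment (not
the sample's actual code; function and variable names are made up) that
registers a CPU given via --cpu as a valid cpumap redirect target via
libbpf:

#include <stdio.h>
#include <bpf/bpf.h>

/* Add one CPU to the cpumap; the value is the per-CPU queue size.
 * Only CPUs inserted into the cpumap are valid redirect targets.
 */
static int add_cpu_to_cpumap(int cpu_map_fd, __u32 cpu)
{
	__u32 qsize = 192;	/* queue size is an arbitrary choice here */
	int err;

	err = bpf_map_update_elem(cpu_map_fd, &cpu, &qsize, 0);
	if (err)
		fprintf(stderr, "failed to add CPU %u to cpumap: %d\n",
			cpu, err);
	return err;
}
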
> I do note that cpumap SKB creation is not feature complete yet, and
> more work is coming. E.g. given that GRO is not implemented yet, expect
> TCP workloads to be slower. My measurements do indicate that UDP
> workloads are faster.
Applied to bpf-next, thanks Jesper!