Message-ID: <20191028110828.512eb99c@carbon>
Date: Mon, 28 Oct 2019 11:08:28 +0100
From: Jesper Dangaard Brouer <jbrouer@...hat.com>
To: Toke Høiland-Jørgensen <toke@...hat.com>
Cc: David Ahern <dsahern@...il.com>,
Toshiaki Makita <toshiaki.makita1@...il.com>,
John Fastabend <john.fastabend@...il.com>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Martin KaFai Lau <kafai@...com>,
Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
"David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <jakub.kicinski@...ronome.com>,
Jesper Dangaard Brouer <hawk@...nel.org>,
Jamal Hadi Salim <jhs@...atatu.com>,
Cong Wang <xiyou.wangcong@...il.com>,
Jiri Pirko <jiri@...nulli.us>,
Pablo Neira Ayuso <pablo@...filter.org>,
Jozsef Kadlecsik <kadlec@...filter.org>,
Florian Westphal <fw@...len.de>,
Pravin B Shelar <pshelar@....org>, netdev@...r.kernel.org,
bpf@...r.kernel.org, William Tu <u9012063@...il.com>,
Stanislav Fomichev <sdf@...ichev.me>
Subject: Re: [RFC PATCH v2 bpf-next 00/15] xdp_flow: Flow offload to XDP
On Mon, 28 Oct 2019 09:36:12 +0100
Toke Høiland-Jørgensen <toke@...hat.com> wrote:
> David Ahern <dsahern@...il.com> writes:
>
> > On 10/27/19 9:21 AM, Toke Høiland-Jørgensen wrote:
> >> Rather, what we should be doing is exposing the functionality through
> >> helpers so XDP can hook into the data structures already present in the
> >> kernel and make decisions based on what is contained there. We already
> >> have that for routing; L2 bridging, and some kind of connection
> >> tracking, are obvious contenders for similar additions.
> >
> > Given the way OVS is coded and expected to flow (ovs_vport_receive ->
> > ovs_dp_process_packet -> ovs_execute_actions -> do_execute_actions), I
> > do not see any way to refactor it to expose a hook to XDP. But if the
> > use case is not doing anything big with OVS (e.g., just ACLs and
> > forwarding), that is easy to replicate in XDP - but then that means
> > duplicating data and code.
>
> Yeah, I didn't mean that part for OVS; that was a general comment about
> reusing kernel functionality.
>
> > Linux bridge on the other hand seems fairly straightforward to
> > refactor. One helper is needed to convert ingress <port,mac,vlan> to
> > an L2 device (and needs to consider stacked devices) and then a second
> > one to access the fdb for that device.
>
> Why not just a single lookup like what you did for routing? Not too
> familiar with the routing code...
I'm also very interested in hearing more about how we can create an XDP
bridge lookup BPF-helper...
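To make this concrete, here is a rough sketch of how such a helper could
look from the XDP program side, modeled on the existing bpf_fib_lookup()
API. To be clear: bpf_fdb_lookup(), its parameter struct, and the
placeholder helper ID are all hypothetical; no such helper exists in the
kernel today.

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <bpf/bpf_helpers.h>

/* Hypothetical FDB lookup helper, mirroring bpf_fib_lookup().
 * The helper ID 999 below is a placeholder; this is a sketch, not ABI. */
struct bpf_fdb_lookup {
	__u32 ifindex;		/* in:  ingress netdev (bridge port) */
	__u16 vlan_id;		/* in:  VLAN from packet, 0 = untagged */
	__u8  addr[ETH_ALEN];	/* in:  destination MAC */
	__u32 egress_ifindex;	/* out: port the FDB entry points at */
};

static long (*bpf_fdb_lookup)(void *ctx, struct bpf_fdb_lookup *params,
			      int plen, __u32 flags) = (void *)999;

SEC("xdp")
int xdp_bridge(struct xdp_md *ctx)
{
	void *data_end = (void *)(long)ctx->data_end;
	void *data = (void *)(long)ctx->data;
	struct ethhdr *eth = data;
	struct bpf_fdb_lookup p = {};

	if ((void *)(eth + 1) > data_end)
		return XDP_DROP;

	p.ifindex = ctx->ingress_ifindex;
	__builtin_memcpy(p.addr, eth->h_dest, ETH_ALEN);

	if (bpf_fdb_lookup(ctx, &p, sizeof(p), 0) < 0)
		return XDP_PASS;	/* FDB miss: let the stack handle it */

	return bpf_redirect(p.egress_ifindex, 0);
}

char _license[] SEC("license") = "GPL";

A single-call design like this, taking <ifindex, vlan, dmac> and
returning the egress port directly, would match the single-lookup shape
Toke suggests above, rather than the two-step variant.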
> > Either way, bypassing the bridge has mixed results: latency improves
> > but throughput takes a hit (no GRO).
>
> Well, for some traffic mixes XDP should be able to keep up without GRO.
> And longer term, we probably want to support GRO with XDP anyway
Do you have any numbers to back up the expected throughput decrease due
to the lack of GRO? Or is it just a theory?
GRO mainly gains performance due to the bulking effect. XDP redirect
also has bulking. For bridging, I would claim that XDP redirect
bulking works better, because it bulks based on the egress
net_device (even for intermixed packets within a single NAPI budget).
You might worry that XDP will do a bridge lookup per frame, but as the
lookup code likely stays hot in the CPU I-cache, this will have very
little effect.
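For illustration, here is a minimal redirect program using a devmap; the
per-egress-device bulking happens transparently in the kernel when the
redirected frames are flushed at the end of the NAPI poll. How tx_ports
gets populated (e.g. by a control plane) and the lookup that picks the
key are omitted here:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_DEVMAP);
	__type(key, __u32);
	__type(value, __u32);	/* egress ifindex */
	__uint(max_entries, 64);
} tx_ports SEC(".maps");

SEC("xdp")
int xdp_fwd(struct xdp_md *ctx)
{
	__u32 key = 0;	/* port slot chosen by lookup logic (omitted) */

	/* Redirected frames are queued per egress net_device and
	 * flushed in bulk when the NAPI budget is done. */
	return bpf_redirect_map(&tx_ports, key, 0);
}

char _license[] SEC("license") = "GPL";

This queue-per-egress-net_device flush is what lets intermixed packets
within one NAPI budget still be bulked toward the same TX device.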
> (I believe Jesper has plans for supporting bigger XDP frames)...
Yes [1], but that work is orthogonal; it is mostly there to support HW
features like TSO, jumbo-frames, and packet header split.
[1] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer