Message-ID: <CALDO+SZEU7yfZK_JTPKQm-8HR_HMUfNjdMMik862dJDBc8SGQA@mail.gmail.com>
Date: Tue, 12 Nov 2019 09:38:55 -0800
From: William Tu <u9012063@...il.com>
To: Toke Høiland-Jørgensen <toke@...hat.com>
Cc: Toshiaki Makita <toshiaki.makita1@...il.com>,
John Fastabend <john.fastabend@...il.com>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Martin KaFai Lau <kafai@...com>,
Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
"David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <jakub.kicinski@...ronome.com>,
Jesper Dangaard Brouer <hawk@...nel.org>,
Jamal Hadi Salim <jhs@...atatu.com>,
Cong Wang <xiyou.wangcong@...il.com>,
Jiri Pirko <jiri@...nulli.us>,
Pablo Neira Ayuso <pablo@...filter.org>,
Jozsef Kadlecsik <kadlec@...filter.org>,
Florian Westphal <fw@...len.de>,
Pravin B Shelar <pshelar@....org>,
Linux Kernel Network Developers <netdev@...r.kernel.org>,
bpf <bpf@...r.kernel.org>, Stanislav Fomichev <sdf@...ichev.me>
Subject: Re: [RFC PATCH v2 bpf-next 00/15] xdp_flow: Flow offload to XDP
On Sun, Oct 27, 2019 at 8:24 AM Toke Høiland-Jørgensen <toke@...hat.com> wrote:
>
> Toshiaki Makita <toshiaki.makita1@...il.com> writes:
>
> > On 19/10/23 (Wed) 2:45:05, Toke Høiland-Jørgensen wrote:
> >> John Fastabend <john.fastabend@...il.com> writes:
> >>
> >>> I think for sysadmins in general (not OVS) use case I would work
> >>> with Jesper and Toke. They seem to be working on this specific
> >>> problem.
> >>
> >> We're definitely thinking about how we can make "XDP magically speeds up
> >> my network stack" a reality, if that's what you mean. Not that we have
> >> arrived at anything specific yet...
> >>
> >> And yeah, I'd also be happy to discuss what it would take to make a
> >> native XDP implementation of the OVS datapath; including what (if
> >> anything) is missing from the current XDP feature set to make this
> >> feasible. I must admit that I'm not quite clear on why that wasn't the
> >> approach picked for the first attempt to speed up OVS using XDP...
> >
> > Here's some history from William Tu et al.
> > https://linuxplumbersconf.org/event/2/contributions/107/
> >
> > Although his aim was not to speed up OVS but to add kernel-independent
> > datapath, his experience shows full OVS support by eBPF is very
> > difficult.
>
> Yeah, I remember seeing that presentation; it still isn't clear to me
> what exactly the issue was with implementing the OVS datapath in eBPF.
> As far as I can tell from glancing through the paper, it only lists program
> size and lack of loops as limitations, both of which have been lifted
> now.
>
Sorry, it's not very clear in the presentation and paper.
Some of the limitations have been resolved since then; let me list my
experiences, based on OVS's feature requirements:
What's missing in eBPF
- limited stack size (resolved now)
- limited program size (resolved now)
- dynamic loop support for the OVS actions applied to a packet
  (bounded loops are now supported; see the sketch after this list)
- no connection tracking/ALG support (people suggest looking at Cilium)
- no packet fragmentation/defragmentation support
- no wildcard table/map type support
I think it would be good to restart the project using the existing
eBPF features.
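
For illustration, here is a minimal sketch of the bounded-loop part (the
map layout, action types and the MAX_ACTIONS bound are all hypothetical):
a tc eBPF program walks a small, fixed-size per-flow action list, and the
verifier accepts the loop because the trip count has a compile-time
upper bound:

#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

#define MAX_ACTIONS 8                  /* hypothetical verifier-visible bound */

enum act_type { ACT_DROP, ACT_OUTPUT, ACT_SET_FIELD };

struct action {
	enum act_type type;
	__u32 arg;                     /* e.g. egress ifindex for ACT_OUTPUT */
};

struct act_list {
	__u32 n_actions;
	struct action acts[MAX_ACTIONS];
};

/* hypothetical map: flow id -> list of actions */
struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 1024);
	__type(key, __u32);
	__type(value, struct act_list);
} flow_actions SEC(".maps");

SEC("tc")
int apply_actions(struct __sk_buff *skb)
{
	__u32 flow_id = skb->mark;     /* assume a prior stage stored the flow id here */
	struct act_list *al = bpf_map_lookup_elem(&flow_actions, &flow_id);

	if (!al)
		return TC_ACT_OK;

	/* Bounded loop: the trip count is capped by the compile-time
	 * constant MAX_ACTIONS, so the verifier can prove termination. */
	for (int i = 0; i < MAX_ACTIONS && i < al->n_actions; i++) {
		switch (al->acts[i].type) {
		case ACT_DROP:
			return TC_ACT_SHOT;
		case ACT_OUTPUT:
			return bpf_redirect(al->acts[i].arg, 0);
		case ACT_SET_FIELD:
			/* header rewrite omitted in this sketch */
			break;
		}
	}
	return TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";

The same pattern should work at the XDP hook as well, as long as the
per-flow action list has a hard upper bound.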
What's missing in XDP
- cloning a packet: this is a very basic feature a switch needs for
  broadcast/multicast, and I understand it's hard to implement.
  A workaround is to return XDP_PASS and let tc do the clone (see the
  sketch below), but that is slow.
Because of the missing packet cloning support, I didn't try implementing
the OVS datapath in XDP.
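
For reference, a minimal sketch of that workaround (the egress ifindexes
are hypothetical placeholders): the XDP program just returns XDP_PASS,
and a tc clsact program emits the copies with bpf_clone_redirect():

#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

/* XDP side: no way to clone here, so hand the frame up to the tc layer. */
SEC("xdp")
int xdp_pass_for_clone(struct xdp_md *ctx)
{
	return XDP_PASS;
}

#define PORT_A 3                       /* hypothetical egress ifindex */
#define PORT_B 4                       /* hypothetical egress ifindex */

/* tc side: bpf_clone_redirect() copies the skb and sends the copy out of
 * the given ifindex, while the original stays with the program and can be
 * cloned again for the next port. */
SEC("tc")
int tc_broadcast(struct __sk_buff *skb)
{
	bpf_clone_redirect(skb, PORT_A, 0);
	bpf_clone_redirect(skb, PORT_B, 0);

	/* drop the original once all copies have been sent */
	return TC_ACT_SHOT;
}

char _license[] SEC("license") = "GPL";

Every copy pays for an skb allocation plus a clone, which is why this is
much slower than a native XDP clone would be.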
> The results in the paper also show somewhat disappointing performance
> for the eBPF implementation, but that is not too surprising given that
> it's implemented as a TC eBPF hook, not an XDP program. I seem to recall
> that this was also one of the things that puzzled me back when this was
> presented...
Right, the point of that project was not performance improvement, but
rather to see how the existing eBPF features could be used to implement
all the features needed by the OVS datapath.
Regards,
William