Message-ID: <20190628102254.28191f12@carbon>
Date: Fri, 28 Jun 2019 10:22:54 +0200
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: "Eelco Chaudron" <echaudro@...hat.com>
Cc: "Machulsky, Zorik" <zorik@...zon.com>,
"Jubran, Samih" <sameehj@...zon.com>, davem@...emloft.net,
netdev@...r.kernel.org, "Woodhouse, David" <dwmw@...zon.co.uk>,
"Matushevsky, Alexander" <matua@...zon.com>,
"Bshara, Saeed" <saeedb@...zon.com>,
"Wilson, Matt" <msw@...zon.com>,
"Liguori, Anthony" <aliguori@...zon.com>,
"Bshara, Nafea" <nafea@...zon.com>,
"Tzalik, Guy" <gtzalik@...zon.com>,
"Belgazal, Netanel" <netanel@...zon.com>,
"Saidi, Ali" <alisaidi@...zon.com>,
"Herrenschmidt, Benjamin" <benh@...zon.com>,
"Kiyanovski, Arthur" <akiyano@...zon.com>,
"Daniel Borkmann" <borkmann@...earbox.net>,
"Toke Høiland-Jørgensen"
<toke@...hat.com>,
"Ilias Apalodimas" <ilias.apalodimas@...aro.org>,
"Alexei Starovoitov" <alexei.starovoitov@...il.com>,
"Jakub Kicinski" <jakub.kicinski@...ronome.com>,
xdp-newbies@...r.kernel.org, brouer@...hat.com,
Steffen Klassert <steffen.klassert@...unet.com>
Subject: Re: XDP multi-buffer incl. jumbo-frames (Was: [RFC V1 net-next 1/1]
net: ena: implement XDP drop support)
On Fri, 28 Jun 2019 09:14:39 +0200
"Eelco Chaudron" <echaudro@...hat.com> wrote:
> On 26 Jun 2019, at 10:38, Jesper Dangaard Brouer wrote:
>
> > On Tue, 25 Jun 2019 03:19:22 +0000
> > "Machulsky, Zorik" <zorik@...zon.com> wrote:
> >
> >> On 6/23/19, 7:21 AM, "Jesper Dangaard Brouer" <brouer@...hat.com>
> >> wrote:
> >>
> >> On Sun, 23 Jun 2019 10:06:49 +0300 <sameehj@...zon.com> wrote:
> >>
> >> > This commit implements the basic functionality of drop/pass logic in the
> >> > ena driver.
> >>
> >> Usually we require a driver to implement all the XDP return codes
> >> before we accept it. But as Daniel and I discussed with Zorik during
> >> NetConf[1], we are going to make an exception and accept the driver
> >> if you also implement XDP_TX.
> >>
> >> As we trust that Zorik/Amazon will follow up and implement XDP_REDIRECT
> >> later, given he/you want AF_XDP support, which requires XDP_REDIRECT.
> >>
> >> Jesper, thanks for your comments and very helpful discussion during
> >> NetConf! That's the plan, as we agreed. From our side I would like to
> >> reiterate the importance of multi-buffer support for XDP frames.
> >> We would really prefer not to see our MTU shrinking because of XDP
> >> support.
> >
> > Okay, we really need to make a serious attempt to find a way to
> > support multi-buffer packets with XDP, with the important criterion
> > of not hurting performance of the single-buffer-per-packet design.
> >
> > I've created a design document[2], which I will update based on our
> > discussions:
> > [2] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org
> >
> > The use-case that really convinced me was Eric's packet header-split.
> >
> >
> > Let's refresh: why XDP doesn't have multi-buffer support:
> >
> > XDP is designed for maximum performance, which is why certain
> > driver-level use-cases, like multi-buffer packets (e.g.
> > jumbo-frames), were not supported, as they complicate the driver
> > RX-loop and the memory model handling.
> >
> > The single-buffer-per-packet design is also tied into eBPF
> > Direct-Access (DA) to packet data, which can only be allowed if the
> > packet memory is contiguous. This DA feature is essential for XDP
> > performance.
> >
> >
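To make the Direct-Access point concrete: the verifier only permits
packet loads between ctx->data and ctx->data_end after an explicit
bounds check, which presumes one contiguous buffer. A minimal sketch
(plain XDP, nothing here is driver specific):

  /* Minimal Direct-Access (DA) example: the bounds check against
   * data_end is mandatory, and only meaningful because the packet
   * is a single linear buffer. */
  #include <linux/bpf.h>
  #include <linux/if_ether.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_endian.h>

  SEC("xdp")
  int xdp_da_example(struct xdp_md *ctx)
  {
          void *data     = (void *)(long)ctx->data;
          void *data_end = (void *)(long)ctx->data_end;
          struct ethhdr *eth = data;

          /* Without this check the verifier rejects the load below */
          if ((void *)(eth + 1) > data_end)
                  return XDP_DROP;

          /* Direct read of packet memory, no skb, no copies */
          if (eth->h_proto != bpf_htons(ETH_P_IP))
                  return XDP_DROP;

          return XDP_PASS;
  }

  char _license[] SEC("license") = "GPL";
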
> > One way forward is to define that XDP only gets access to the first
> > packet buffer, and cannot see subsequent buffers. For XDP_TX and
> > XDP_REDIRECT to work, XDP still needs to carry pointers (plus
> > len+offset) to the other buffers, which costs 16 bytes per extra
> > buffer.
>
>
> I’ve seen various network processor HW designs, and they normally get
> the first x bytes (128 - 512) which they can manipulate
> (append/prepend/insert/modify/delete).
Good data point, thank you! It confirms that XDP only getting access
to the first packet-buffer makes sense for most use-cases.

We also have to remember that XDP is not meant to handle every
use-case. XDP is a software fast-path that can accelerate certain
use-cases. We have the existing network stack as a fall-back for
handling the corner-cases that would otherwise slow down our XDP
fast-path.
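
To make the "16 bytes per extra buffer" concrete, a rough sketch of
the layout (struct and field names are invented here purely for
illustration, not a proposal for the final ABI):

  #include <linux/types.h>

  /* Hypothetical descriptor for the extra (non-visible) buffers of
   * a multi-buffer frame: 8 byte pointer + 4 byte len + 4 byte
   * offset = 16 bytes per extra buffer, as mentioned above. */
  struct xdp_extra_buf {
          void *data;    /* buffer/page pointer        (8 bytes) */
          __u32 len;     /* used bytes in this buffer  (4 bytes) */
          __u32 offset;  /* data start offset in page  (4 bytes) */
  };
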
> There are designs where they can “page in” the additional fragments,
> but it’s expensive as it requires additional memory transfers. But
> the majority do not care about (cannot change) the remaining
> fragments. I can also not think of a reason why you might want to
> remove something at the end of the frame (thinking about
> routing/forwarding needs here).
Use-cases that need to adjust the tail of a packet (see the
bpf_xdp_adjust_tail sketch after this list):

- ICMP replies directly from XDP[1] need to shorten the packet tail,
  but this use-case doesn't use fragments.

- IPsec needs to add/extend the packet tail for the IPsec trailer[2];
  again, it is unlikely that this needs fragments(?). (This use-case
  convinced me that we need to add extend-tail support to
  bpf_xdp_adjust_tail.)

- DNS or memcached replies directly from XDP need to extend the
  packet tail to have room for the reply. (It would be interesting to
  allow larger replies, but I'm not sure we should ever support that.)
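
For reference, shrinking the tail with the existing helper looks
roughly like this (a sketch modeled on [1]; TARGET_LEN is a made-up
example value, and today bpf_xdp_adjust_tail() only accepts a shrink,
i.e. a non-positive delta):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  #define TARGET_LEN 98 /* hypothetical reply length */

  SEC("xdp")
  int xdp_trim_tail(struct xdp_md *ctx)
  {
          int cur_len = ctx->data_end - ctx->data;

          /* Negative delta shrinks the packet tail; extending it
           * (positive delta) is the missing piece discussed above. */
          if (cur_len > TARGET_LEN &&
              bpf_xdp_adjust_tail(ctx, TARGET_LEN - cur_len) < 0)
                  return XDP_DROP; /* helper failed */

          return XDP_PASS;
  }

  char _license[] SEC("license") = "GPL";
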
> If we do want XDP to access other fragments, we could do this
> through a helper which swaps the packet context?
That might be a way forward. If the XDP developer has to call a
helper, then they will realize, and "buy into", the additional
overhead/cost.
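
Purely as a strawman, what such a context-swap helper could look like
(this helper does not exist; name and semantics are invented here):

  /* Strawman only -- no such helper exists today.  The program
   * explicitly asks to "page in" fragment frag_idx, accepting the
   * extra memory transfer cost; on success data/data_end point into
   * that fragment instead of the first buffer.
   *
   *   long bpf_xdp_switch_frag(struct xdp_md *ctx, __u32 frag_idx);
   *
   * Like bpf_xdp_adjust_tail(), it would have to invalidate all
   * previously derived packet pointers. */
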
[1] https://github.com/torvalds/linux/blob/master/samples/bpf/xdp_adjust_tail_kern.c
[2] http://vger.kernel.org/netconf2019_files/xfrm_xdp.pdf
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer