Message-ID: <5f3da06d5de6c_1b0e2ab87245e5c01b@john-XPS-13-9370.notmuch>
Date: Wed, 19 Aug 2020 14:58:05 -0700
From: John Fastabend <john.fastabend@...il.com>
To: Jakub Kicinski <kuba@...nel.org>,
Lorenzo Bianconi <lorenzo.bianconi@...hat.com>
Cc: Lorenzo Bianconi <lorenzo@...nel.org>, netdev@...r.kernel.org,
bpf@...r.kernel.org, davem@...emloft.net, brouer@...hat.com,
echaudro@...hat.com, sameehj@...zon.com
Subject: Re: [PATCH net-next 6/6] net: mvneta: enable jumbo frames for XDP

Jakub Kicinski wrote:
> On Wed, 19 Aug 2020 22:22:23 +0200 Lorenzo Bianconi wrote:
> > > On Wed, 19 Aug 2020 15:13:51 +0200 Lorenzo Bianconi wrote:
> > > > Enable the capability to receive jumbo frames even if the interface is
> > > > running in XDP mode
> > > >
> > > > Signed-off-by: Lorenzo Bianconi <lorenzo@...nel.org>
> > >
> > > Hm, already? Is all the infra in place? Or does it not imply
> > > multi-buffer?
> >
> > With this series mvneta supports XDP multi-buff on both the rx and tx
> > sides (XDP_TX and ndo_xdp_xmit()), so we can remove the MTU limitation.
>
> Is there an API for programs to access the multi-buf frames?

Hi Lorenzo,

This is not enough to support multi-buffer in my opinion. I have the
same comment as Jakub. We need an API to pull in the multiple
buffers; otherwise we break the ability to parse the packets, and
that is a hard requirement for me. I don't want to lose visibility
to get jumbo frames.

At minimum we need a bpf_xdp_pull_data() to adjust the data pointers.
In the skmsg case we use this,

  bpf_msg_pull_data(struct sk_msg_md *msg, u32 start, u32 end, u64 flags)

Where start is the offset into the packet and end is the last byte we
want the adjusted start/end pointers to cover. This way we can walk
pages if we want and avoid having to linearize the data unless the
user actually asks for a block that crosses a page boundary. Smart
users then never request a start/end range that crosses a page
boundary if possible. I think the same would apply here.
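
To make it concrete, here is roughly what an skmsg program does with
it today (untested sketch, the 20-byte header size is just for
illustration):

/* Sketch: pull the first 20 bytes so they are addressable through
 * msg->data before parsing. The kernel only has to linearize if the
 * requested range spans a page boundary.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("sk_msg")
int parse_hdr(struct sk_msg_md *msg)
{
	void *data, *data_end;

	/* Make bytes [0, 20) visible via msg->data/msg->data_end. */
	if (bpf_msg_pull_data(msg, 0, 20, 0))
		return SK_DROP;

	data = msg->data;
	data_end = msg->data_end;
	if (data + 20 > data_end)
		return SK_DROP;

	/* ... parse the 20-byte header at data ... */
	return SK_PASS;
}

char _license[] SEC("license") = "GPL";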

XDP by default gives you the first page start/end to use freely. If
you need to parse deeper into the payload then you would call the new
bpf_xdp_pull_data() with the byte offsets needed.
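
Something along these lines, where bpf_xdp_pull_data() and its helper
id are hypothetical, just the semantics I have in mind, and the
offsets are made up:

/* Sketch only: bpf_xdp_pull_data() does not exist yet; it mirrors
 * bpf_msg_pull_data() semantics for XDP multi-buff frames.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Proposed helper, declared by hand since it is hypothetical. */
static long (*bpf_xdp_pull_data)(struct xdp_md *xdp, __u32 start,
				 __u32 end, __u64 flags) =
	(void *)0xffff; /* placeholder helper id */

SEC("xdp")
int parse_deep(struct xdp_md *ctx)
{
	void *data = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;

	/* First buffer is directly accessible, same as today. */
	if (data + 14 > data_end)
		return XDP_DROP;

	/* Bytes [4096, 4160) live in a later fragment; pull them in
	 * before touching them.
	 */
	if (bpf_xdp_pull_data(ctx, 4096, 4160, 0))
		return XDP_DROP;

	/* Revalidate pointers after the pull, then parse. */
	data = (void *)(long)ctx->data;
	data_end = (void *)(long)ctx->data_end;

	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";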

Also, we would want performance numbers to see how good/bad this is
compared to the base case.

Thanks,
John