Message-ID: <20210106112812.osgetx6pmuup6cd7@svensmacbookair.sven.lan>
Date: Wed, 6 Jan 2021 12:28:12 +0100
From: Sven Auhagen <sven.auhagen@...eatech.de>
To: Jesper Dangaard Brouer <brouer@...hat.com>
Cc: Marek Behún <kabel@...nel.org>,
netdev@...r.kernel.org, davem@...emloft.net,
Jakub Kicinski <kuba@...nel.org>,
Matteo Croce <mcroce@...rosoft.com>,
Lorenzo Bianconi <lorenzo@...nel.org>,
John Fastabend <john.fastabend@...il.com>
Subject: Re: [PATCH net-next] net: mvpp2: increase MTU limit when XDP enabled
On Wed, Jan 06, 2021 at 11:33:50AM +0100, Jesper Dangaard Brouer wrote:
> On Tue, 5 Jan 2021 18:43:08 +0100
> Marek Behún <kabel@...nel.org> wrote:
>
> > On Tue, 5 Jan 2021 18:24:37 +0100
> > Sven Auhagen <sven.auhagen@...eatech.de> wrote:
> >
> > > On Tue, Jan 05, 2021 at 06:19:21PM +0100, Marek Behún wrote:
> > > > Currently mvpp2_xdp_setup won't allow attaching XDP program if
> > > > mtu > ETH_DATA_LEN (1500).
> > > >
> > > > The mvpp2_change_mtu on the other hand checks whether
> > > > MVPP2_RX_PKT_SIZE(mtu) > MVPP2_BM_LONG_PKT_SIZE.
> > > >
> > > > These two checks are semantically different.
> > > >
> > > > Moreover this limit can be increased to MVPP2_MAX_RX_BUF_SIZE, since in
> > > > mvpp2_rx we have
> > > > xdp.data = data + MVPP2_MH_SIZE + MVPP2_SKB_HEADROOM;
> > > > xdp.frame_sz = PAGE_SIZE;
> > > >
> > > > Change the checks to check whether
> > > > mtu > MVPP2_MAX_RX_BUF_SIZE
> > >
> > > Hello Marek,
> > >
> > > in general, XDP is based on the model that packets are not bigger
> > > than 1500 bytes.
>
> This is WRONG.
>
> The XDP design/model (with PAGE_SIZE 4096) allows MTU to be 3506 bytes.
>
> This comes from:
> * 4096 = frame_sz = PAGE_SIZE
> * -256 = reserved XDP_PACKET_HEADROOM
> * -320 = reserved tailroom with sizeof(skb_shared_info)
> * - 14 = Ethernet header size as MTU value is L3
>
> 4096-256-320-14 = 3506 bytes
>
> Depending on driver memory layout choices this can (of course) be lower.
Got it, thanks.
>
> > > I am not sure if that has changed; I don't believe jumbo frame
> > > support has been upstreamed yet.
>
> This is unrelated to this patch, but Lorenzo and Eelco are assigned to
> work on this.
>
> > > You are correct that the mvpp2 driver can handle bigger packets
> > > without a problem, but if you do an XDP redirect, that won't work
> > > with other drivers and your packets will disappear.
> >
>
> This statement is too harsh. The XDP layer will not do (IP-level)
> fragmentation for you. Thus, if you redirect/transmit frames out
> another interface with a lower MTU than the packet size, the packet
> will of course be dropped (the drop counter is unfortunately not
> well defined). This is pretty standard behavior.
Some drivers do not have an XDP drop counter, and from my own testing it is
very difficult to find out what happened to a packet dropped like that.
>
> This is why I'm proposing the BPF helper bpf_check_mtu(), to allow the
> BPF program to check the MTU before doing the redirect.
>
>
> > At least 1508 is required when I want to use XDP with a Marvell DSA
> > switch: the DSA header is 4 or 8 bytes long there.
> >
> > The DSA driver increases the MTU on the CPU switch interface by this
> > length (on my switches to 1504).
> >
> > So without this patch I cannot use XDP on mvpp2 with a Marvell switch
> > in its default settings, which I think is not OK.
> >
> > Since with the mvneta driver it works (mvneta checks for
> > MVNETA_MAX_RX_BUF_SIZE rather than ETH_DATA_LEN), I think it should also work
> > with mvpp2.
>
> I think your patch makes perfect sense.
>
> --
> Best regards,
> Jesper Dangaard Brouer
> MSc.CS, Principal Kernel Engineer at Red Hat
> LinkedIn: http://www.linkedin.com/in/brouer
>