Message-ID: <20201013160726.367e3871@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
Date: Tue, 13 Oct 2020 16:07:26 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Jesper Dangaard Brouer <brouer@...hat.com>
Cc: John Fastabend <john.fastabend@...il.com>, bpf@...r.kernel.org,
netdev@...r.kernel.org, Daniel Borkmann <borkmann@...earbox.net>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
maze@...gle.com, lmb@...udflare.com, shaun@...era.io,
Lorenzo Bianconi <lorenzo@...nel.org>, marek@...udflare.com,
eyal.birger@...il.com
Subject: Re: [PATCH bpf-next V3 0/6] bpf: New approach for BPF MTU handling
On Tue, 13 Oct 2020 22:40:09 +0200 Jesper Dangaard Brouer wrote:
> > FWIW I took a quick swing at testing it with the HW I have and it did
> > exactly what hardware should do. The TX unit entered an error state
> > and then the driver detected that and reset it a few seconds later.
>
> The drivers (i40e, mlx5, ixgbe) I tested with didn't enter an error
> state when getting packets exceeding the MTU. I didn't go much above
> 4K, so maybe I didn't trigger those cases.
You probably need to go above 16k to get out of the acceptable jumbo
frame size. I tested ixgbe by converting TSO frames to large TCP frames
at a low probability.
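
(For reference, a minimal sketch of how oversized frames could be pushed
at a driver from a TC egress BPF program. This is not the TSO conversion
hack above; the program name, the 17000-byte target and the ~1/1024
probability are made up for illustration, and whether the grow actually
succeeds depends on the helper-side MTU cap that this series is about
relaxing.)

/* test_oversize_kern.c - hypothetical TC egress sketch */
#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

#define OVERSIZE_LEN	17000	/* beyond the ~16k jumbo frame limit */
#define PROB_MASK	0x3ff	/* hit roughly 1 in 1024 packets */

SEC("classifier")
int grow_past_mtu(struct __sk_buff *skb)
{
	/* Leave most traffic alone. */
	if (bpf_get_prandom_u32() & PROB_MASK)
		return TC_ACT_OK;

	/* Grow the packet tail far past any sane MTU; the new bytes are
	 * zero-filled. If the helper refuses (pre-series MTU check),
	 * the packet goes out unchanged.
	 */
	bpf_skb_change_tail(skb, OVERSIZE_LEN, 0);

	return TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";

Attached with something like "tc qdisc add dev ethX clsact" followed by
"tc filter add dev ethX egress bpf da obj test_oversize_kern.o sec
classifier" (object and section names are placeholders).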