Message-ID: <20171127131502.1fbfaa66@xeon-e3>
Date: Mon, 27 Nov 2017 13:15:02 -0800
From: Stephen Hemminger <stephen@...workplumber.org>
To: Solio Sarabia <solio.sarabia@...el.com>
Cc: dsahern@...il.com, davem@...emloft.net, netdev@...r.kernel.org,
sthemmin@...rosoft.com
Subject: Re: [PATCH RFC 2/2] veth: propagate bridge GSO to peer
On Mon, 27 Nov 2017 12:14:19 -0800
Solio Sarabia <solio.sarabia@...el.com> wrote:
> On Sun, Nov 26, 2017 at 11:07:25PM -0800, Stephen Hemminger wrote:
> > On Sun, 26 Nov 2017 20:13:39 -0700
> > David Ahern <dsahern@...il.com> wrote:
> >
> > > On 11/26/17 11:17 AM, Stephen Hemminger wrote:
> > > > This allows veth devices in containers to see the GSO maximum
> > > > settings of the actual device being used for output.
> > >
> > > veth devices can be added to a VRF instead of a bridge, and I do not
> > > believe the gso propagation works for L3 master devices.
> > >
> > > From a quick grep, team devices do not appear to handle gso changes either.
> >
> > This code should still work correctly, but no optimization would happen.
> > The gso_max_size of the VRF or team will still be GSO_MAX_SIZE, so there
> > would be no change. If VRF or team ever got smart enough to handle GSO
> > limits, then the algorithm would handle it.
>
> This patch propagates the GSO value from the bridge to its veth
> endpoints. However, since the bridge is never aware of the GSO limit of
> the underlying interfaces, the bridge and veth devices still end up with
> a larger GSO size.
>
> In the Docker case, the bridge is not linked directly to physical or
> synthetic interfaces; the host relies on iptables to decide which
> interface to forward packets to.

So for the Docker case, direct control of GSO values via netlink
(i.e. ip link set) seems like the better solution.
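
For example, something along these lines on the host (a sketch only:
the veth name and the size values are placeholders, and it assumes a
kernel/iproute2 combination where these attributes are writable via
ip link, which is what the direct-control approach requires):

    # Clamp the container-facing veth endpoint to the GSO limits
    # actually advertised by the underlying NIC (values here are
    # illustrative, not real device limits).
    ip link set dev veth0 gso_max_size 62768
    ip link set dev veth0 gso_max_segs 64

That keeps the clamping policy in userspace (e.g. in the container
runtime) instead of trying to infer it through the bridge.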