Message-ID: <20171201123042.4d565c6f@xeon-e3>
Date:   Fri, 1 Dec 2017 12:30:42 -0800
From:   Stephen Hemminger <stephen@...workplumber.org>
To:     David Ahern <dsahern@...il.com>
Cc:     Solio Sarabia <solio.sarabia@...el.com>, davem@...emloft.net,
        netdev@...r.kernel.org, sthemmin@...rosoft.com
Subject: Re: [PATCH RFC 2/2] veth: propagate bridge GSO to peer

On Mon, 27 Nov 2017 19:02:01 -0700
David Ahern <dsahern@...il.com> wrote:

> On 11/27/17 6:42 PM, Solio Sarabia wrote:
> > On Mon, Nov 27, 2017 at 01:15:02PM -0800, Stephen Hemminger wrote:  
> >> On Mon, 27 Nov 2017 12:14:19 -0800
> >> Solio Sarabia <solio.sarabia@...el.com> wrote:
> >>  
> >>> On Sun, Nov 26, 2017 at 11:07:25PM -0800, Stephen Hemminger wrote:  
> >>>> On Sun, 26 Nov 2017 20:13:39 -0700
> >>>> David Ahern <dsahern@...il.com> wrote:
> >>>>     
> >>>>> On 11/26/17 11:17 AM, Stephen Hemminger wrote:    
> >>>>>> This allows the veth device in containers to see the GSO maximum
> >>>>>> settings of the actual device being used for output.
> >>>>>
> >>>>> veth devices can be added to a VRF instead of a bridge, and I do not
> >>>>> believe the gso propagation works for L3 master devices.
> >>>>>
> >>>>> From a quick grep, team devices do not appear to handle gso changes either.    
> >>>>
> >>>> This code should still work correctly, but no optimization would
> >>>> happen. The gso_max_size of the VRF or team will still be
> >>>> GSO_MAX_SIZE, so there would be no change. If VRF or team devices
> >>>> ever got smart enough to handle GSO limits, then the algorithm
> >>>> would handle it.
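
As a sketch of the propagation being described (illustrative only; the
helper name veth_set_peer_gso and the exact hook point are assumptions,
not the posted patch), the veth end attached to a bridge would copy the
master's limits to its peer:

	/* Clamp a veth peer's GSO limits to those of the bridge
	 * (master) this end is attached to.  If the master has no
	 * real limit, gso_max_size stays at GSO_MAX_SIZE and
	 * nothing changes. */
	static void veth_set_peer_gso(struct net_device *dev,
				      struct net_device *peer)
	{
		struct net_device *master = netdev_master_upper_dev_get(dev);

		if (master) {
			netif_set_gso_max_size(peer, master->gso_max_size);
			peer->gso_max_segs = master->gso_max_segs;
		}
	}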
> >>>
> >>> This patch propagates the GSO value from the bridge to its veth
> >>> endpoints. However, since the bridge is never aware of the GSO limit
> >>> of the underlying interfaces, the bridge and veth still have a larger
> >>> GSO size.
> >>>
> >>> In the Docker case, the bridge is not linked directly to physical or
> >>> synthetic interfaces; the host relies on iptables to decide which
> >>> interface to forward packets to.
> >>
> >> So for the Docker case, direct control of GSO values via netlink
> >> (i.e. 'ip link set') seems like the better solution.
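
A minimal sketch of what that netlink path could look like in
do_setlink() (hedged: the IFLA_GSO_MAX_SIZE handling and the clamp
against GSO_MAX_SIZE shown here are illustrative, not merged code):

	/* Accept a GSO ceiling from userspace and reject values
	 * above the global maximum; 'tb' is the parsed netlink
	 * attribute table. */
	if (tb[IFLA_GSO_MAX_SIZE]) {
		u32 max_size = nla_get_u32(tb[IFLA_GSO_MAX_SIZE]);

		if (max_size > GSO_MAX_SIZE)
			return -EINVAL;
		netif_set_gso_max_size(dev, max_size);
	}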
> > 
> > Adding netlink support for 'ip link set' would work. I'm still
> > concerned about how to enforce an upper limit that does not exceed
> > that of the lower devices.
> > 
> > Consider a system with three NICs, each reporting values in the range
> > [60,000 - 62,780]. Users could set a virtual interface's GSO size to
> > 65,536, exceeding the limit and forcing the host to do software GSO
> > (VM settings must not affect host performance).
> > 
> > Looping through the interfaces? With the difference that now it would
> > be triggered upon user request, not every time a veth is created (as
> > one previous patch discussed).
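
A sketch of that loop (illustrative; the rtnl_link_ops check is only a
heuristic for "not a netlink-created virtual device" and is an
assumption):

	/* Find the smallest gso_max_size advertised by the host's
	 * hardware-backed devices, to use as the ceiling when
	 * userspace raises a virtual interface's GSO size. */
	static unsigned int host_gso_ceiling(struct net *net)
	{
		struct net_device *dev;
		unsigned int ceiling = GSO_MAX_SIZE;

		rcu_read_lock();
		for_each_netdev_rcu(net, dev) {
			/* Devices without rtnl_link_ops were not
			 * created via netlink, i.e. likely physical
			 * or synthetic lower devices. */
			if (!dev->rtnl_link_ops)
				ceiling = min(ceiling, dev->gso_max_size);
		}
		rcu_read_unlock();

		return ceiling;
	}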
> >   
> 
> You are concerned about the routed case, right? One option is to have
> VRF devices propagate GSO sizes to all devices (veth, vlan, etc.)
> enslaved to them. VRF devices are Layer 3 master devices, so they are
> the L3 parallel to a bridge.
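
A sketch of that option (the function name vrf_propagate_gso is
hypothetical; it assumes the VRF driver walks its slaves with
netdev_for_each_lower_dev() under RTNL):

	/* Push the VRF master's GSO limits down to every enslaved
	 * device, called whenever the master's own limits change
	 * or a new slave is added. */
	static void vrf_propagate_gso(struct net_device *vrf_dev)
	{
		struct net_device *lower;
		struct list_head *iter;

		netdev_for_each_lower_dev(vrf_dev, lower, iter) {
			netif_set_gso_max_size(lower, vrf_dev->gso_max_size);
			lower->gso_max_segs = vrf_dev->gso_max_segs;
		}
	}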

See the patch set I posted today, which punts the problem to veth setup.
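
In that model the limit is applied once, when the link is created;
roughly (illustrative, mirroring the shape of the set path sketched
above rather than quoting the posted patches):

	/* In the newlink/create path: honour a GSO ceiling supplied
	 * at device-creation time instead of adjusting it later. */
	if (tb[IFLA_GSO_MAX_SIZE])
		netif_set_gso_max_size(dev,
				       nla_get_u32(tb[IFLA_GSO_MAX_SIZE]));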
