Message-Id: <20171204.104002.1311507295275740357.davem@davemloft.net>
Date:   Mon, 04 Dec 2017 10:40:02 -0500 (EST)
From:   David Miller <davem@...emloft.net>
To:     stephen@...workplumber.org
Cc:     netdev@...r.kernel.org, sthemmin@...rosoft.com
Subject: Re: [PATCH net-next 0/2] allow setting gso_maximum values

From: Stephen Hemminger <stephen@...workplumber.org>
Date: Fri, 1 Dec 2017 15:30:01 -0800

> On Fri,  1 Dec 2017 12:11:56 -0800
> Stephen Hemminger <stephen@...workplumber.org> wrote:
> 
>> This is another way of addressing the GSO maximum performance issues for
>> containers on Azure. What happens is that the underlying infrastructure uses
>> an overlay network, such that GSO packets larger than 64K minus the vlan
>> header end up causing either the guest or the host to do expensive software
>> copy and fragmentation.
>> 
>> The netvsc driver reports GSO maximum settings correctly; the issue
>> is that containers on veth devices still have the larger settings.
>> One solution that was examined was propagating the values back
>> through the bridge device, but this does not work for cases where
>> the virtual container network is done at L3.
>> 
>> This patch set punts the problem to the orchestration layer that sets
>> up the container network. It also enables other virtual devices
>> to have configurable GSO maximum settings.
>> 
>> Stephen Hemminger (2):
>>   rtnetlink: allow GSO maximums to be passed to device
>>   veth: allow configuring GSO maximums
>> 
>>  drivers/net/veth.c   | 20 ++++++++++++++++++++
>>  net/core/rtnetlink.c |  2 ++
>>  2 files changed, 22 insertions(+)
>> 
> 
> I would like confirmation from Intel, who are doing the Docker testing,
> that this works for them before merging.
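
The cover letter quoted above leaves the actual configuration to the
orchestration layer. As a rough, hypothetical sketch of what that side could
look like, assuming IFLA_GSO_MAX_SIZE / IFLA_GSO_MAX_SEGS become writable via
RTM_SETLINK (the device name and values below are made up for illustration,
not taken from the patches):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <linux/if_link.h>

/* Append a u32 rtattr to a netlink message (standard NLMSG_TAIL pattern). */
static void add_u32(struct nlmsghdr *nlh, unsigned short type, __u32 value)
{
	struct rtattr *rta =
		(struct rtattr *)((char *)nlh + NLMSG_ALIGN(nlh->nlmsg_len));

	rta->rta_type = type;
	rta->rta_len = RTA_LENGTH(sizeof(value));
	memcpy(RTA_DATA(rta), &value, sizeof(value));
	nlh->nlmsg_len = NLMSG_ALIGN(nlh->nlmsg_len) + RTA_ALIGN(rta->rta_len);
}

int main(void)
{
	char buf[256] = { 0 };
	struct nlmsghdr *nlh = (struct nlmsghdr *)buf;
	struct sockaddr_nl sa = { .nl_family = AF_NETLINK };
	struct ifinfomsg *ifi;
	int fd;

	fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	nlh->nlmsg_len = NLMSG_LENGTH(sizeof(*ifi));
	nlh->nlmsg_type = RTM_SETLINK;
	nlh->nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK;

	ifi = NLMSG_DATA(nlh);
	ifi->ifi_family = AF_UNSPEC;
	ifi->ifi_index = if_nametoindex("veth0");	/* hypothetical name */

	/* Example values only: 64K minus room for a vlan tag, per the
	 * cover letter's "64K - vlan header" description. */
	add_u32(nlh, IFLA_GSO_MAX_SIZE, 65536 - 4);
	add_u32(nlh, IFLA_GSO_MAX_SEGS, 64);

	if (sendto(fd, nlh, nlh->nlmsg_len, 0,
		   (struct sockaddr *)&sa, sizeof(sa)) < 0)
		perror("sendto");

	close(fd);
	return 0;
}

Only the attribute names come from the patch titles; everything else above is
illustrative of how an orchestrator could drive the setting over rtnetlink.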

Like David Ahern, I think you should allow this new netlink setting
during changelink as well as newlink.

Thanks.
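
For reference, the shape being asked for is roughly the following: the
driver's rtnl_link_ops wires the same attribute handling into both newlink
and changelink, so the GSO maximums can be set when the device is created and
also adjusted on a live device. This is only a hedged sketch against the
4.15-era rtnl_link_ops signatures, not the actual veth patch; the "somedev"
names and the helper are made up.

#include <linux/netdevice.h>
#include <net/rtnetlink.h>

/* Apply IFLA_GSO_MAX_SIZE / IFLA_GSO_MAX_SEGS if userspace supplied them.
 * Real code would also validate against GSO_MAX_SIZE / GSO_MAX_SEGS. */
static void somedev_apply_gso_caps(struct net_device *dev, struct nlattr *tb[])
{
	if (tb[IFLA_GSO_MAX_SIZE])
		netif_set_gso_max_size(dev, nla_get_u32(tb[IFLA_GSO_MAX_SIZE]));
	if (tb[IFLA_GSO_MAX_SEGS])
		dev->gso_max_segs = nla_get_u32(tb[IFLA_GSO_MAX_SEGS]);
}

static int somedev_newlink(struct net *src_net, struct net_device *dev,
			   struct nlattr *tb[], struct nlattr *data[],
			   struct netlink_ext_ack *extack)
{
	somedev_apply_gso_caps(dev, tb);	/* at device creation */
	return register_netdevice(dev);
}

static int somedev_changelink(struct net_device *dev, struct nlattr *tb[],
			      struct nlattr *data[],
			      struct netlink_ext_ack *extack)
{
	somedev_apply_gso_caps(dev, tb);	/* on an existing device, too */
	return 0;
}

static struct rtnl_link_ops somedev_link_ops __read_mostly = {
	.kind		= "somedev",
	.newlink	= somedev_newlink,
	.changelink	= somedev_changelink,
	/* setup, priv_size, etc. elided */
};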
