Date:   Mon, 28 Aug 2017 20:56:17 +0000
From:   "Keller, Jacob E" <jacob.e.keller@...el.com>
To:     Jakub Kicinski <kubakici@...pl>
CC:     "netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: RE: [RFC PATCH] net: limit maximum number of packets to mark with
 xmit_more

> -----Original Message-----
> From: netdev-owner@...r.kernel.org [mailto:netdev-owner@...r.kernel.org] On
> Behalf Of Jakub Kicinski
> Sent: Friday, August 25, 2017 12:34 PM
> To: Keller, Jacob E <jacob.e.keller@...el.com>
> Cc: netdev@...r.kernel.org
> Subject: Re: [RFC PATCH] net: limit maximum number of packets to mark with
> xmit_more
> 
> On Fri, 25 Aug 2017 08:24:49 -0700, Jacob Keller wrote:
> > Under some circumstances, such as with many stacked devices, it is
> > possible that dev_hard_start_xmit will bundle many packets together, and
> > mark them all with xmit_more.
> 
> Excuse my ignorance but what are those stacked devices?  Could they
> perhaps be fixed somehow?  My intuition was that long xmit_more
> sequences can only happen if NIC and/or BQL are back pressuring, and
> therefore we shouldn't be seeing a long xmit_more "train" arriving at
> an empty device ring...

A veth device connecting a VM to the host, attached to a bridge, which is connected to a VLAN interface on top of a bond running in active-backup mode over a physical device.

Sorry if that isn't the correct way to refer to these; I just think of them as devices stacked on top of each other.

During root-cause investigation I found that we (the i40e driver) sometimes received 100 or more SKBs in a row with xmit_more set. We were also incorrectly using xmit_more as a hint not to mark packets for writeback, which caused significant throughput issues. Additionally, there was concern that that many packets in a row without a tail bump would cause latency issues, so I thought it might be best to simply guarantee that the stack never sends us too many packets marked with xmit_more at once.
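
For illustration, here is a rough sketch of the kind of driver-side cap I had in mind. The ring structure, the pkts_since_bump counter, and EXAMPLE_MAX_XMIT_MORE are all made up for this example; this is not the real i40e code, just the general shape of the tail-bump decision at the end of ndo_start_xmit:

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/io.h>

/* Hypothetical cap on how many packets we queue without a tail bump. */
#define EXAMPLE_MAX_XMIT_MORE	32

struct example_ring {
	u16 next_to_use;		/* next free descriptor index */
	u16 pkts_since_bump;		/* packets queued since last tail write */
	u8 __iomem *tail;		/* doorbell/tail register */
	struct netdev_queue *txq;	/* backing stack tx queue */
};

/* Honor skb->xmit_more, but never defer the doorbell for more than
 * EXAMPLE_MAX_XMIT_MORE packets in a row, to bound latency.
 */
static void example_maybe_bump_tail(struct example_ring *ring,
				    struct sk_buff *skb)
{
	ring->pkts_since_bump++;

	if (!skb->xmit_more ||
	    netif_xmit_stopped(ring->txq) ||
	    ring->pkts_since_bump >= EXAMPLE_MAX_XMIT_MORE) {
		writel(ring->next_to_use, ring->tail);
		ring->pkts_since_bump = 0;
	}
}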

Based on the discussion, it seems it should be up to the driver to determine exactly how to handle the xmit_more hint and to decide when it actually is or isn't helpful, so I no longer think this patch makes sense.

Thanks,
Jake 
