Message-Id: <20140827.135355.312468204295431099.davem@davemloft.net>
Date:	Wed, 27 Aug 2014 13:53:55 -0700 (PDT)
From:	David Miller <davem@...emloft.net>
To:	therbert@...gle.com
Cc:	cwang@...pensource.com, netdev@...r.kernel.org, jhs@...atatu.com,
	hannes@...essinduktion.org, edumazet@...gle.com,
	jeffrey.t.kirsher@...el.com, rusty@...tcorp.com.au,
	dborkman@...hat.com, brouer@...hat.com
Subject: Re: [PATCH 0/2] Get rid of ndo_xmit_flush

From: Tom Herbert <therbert@...gle.com>
Date: Wed, 27 Aug 2014 12:31:15 -0700

> On Wed, Aug 27, 2014 at 11:28 AM, Cong Wang <cwang@...pensource.com> wrote:
>> On Mon, Aug 25, 2014 at 4:34 PM, David Miller <davem@...emloft.net> wrote:
>>>
>>> Given Jesper's performance numbers, it's not the way to go.
>>>
>>> Instead, go with a signalling scheme via new boolean skb->xmit_more.
>>>
>>> This has several advantages:
>>>
>>> 1) Nearly trivial driver support, just protect the tail pointer
>>>    update with the skb->xmit_more check (see the sketch after this
>>>    list).
>>>
>>> 2) No extra indirect calls in the non-deferral cases.
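A minimal sketch of the driver change point 1) describes, assuming a
hypothetical "foo" driver; only the final doorbell write is shown, and
the ring and register names are illustrative rather than taken from any
real driver:

	#include <linux/io.h>
	#include <linux/netdevice.h>
	#include <linux/skbuff.h>

	/* Hypothetical per-device TX ring state; names are illustrative. */
	struct foo_tx_ring {
		void __iomem *tail_reg;
		u16 next_to_use;
	};

	static netdev_tx_t foo_start_xmit(struct sk_buff *skb,
					  struct net_device *dev)
	{
		struct foo_tx_ring *ring = netdev_priv(dev);
		struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);

		/* ... map the skb and post its descriptors to the ring ... */

		/*
		 * Only write the tail (doorbell) register when the stack has
		 * no further packets queued behind this one, or when the
		 * queue has been stopped and must be kicked anyway.
		 */
		if (!skb->xmit_more || netif_xmit_stopped(txq))
			writel(ring->next_to_use, ring->tail_reg);

		return NETDEV_TX_OK;
	}

The behavioural change is only that the MMIO tail write is skipped while
the stack indicates more packets are on the way, which is what makes the
per-driver conversion nearly trivial.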
>>>
>>
>> First of all, I missed your discussion at kernel summit.
>>
>> Second of all, I am not familiar with hardware NIC drivers.
>>
>> But to me, it looks like you are trying to hold some more packets in
>> a TX queue until the driver decides to flush them all in one shot.
>> So if that is true, doesn't this mean the latency of the first packet
>> pending in this queue will increase, and that network traffic will be
>> more bursty for the receiver?
>>
> I suspect this won't be a big issue. The dequeue is still work
> conserving, and the BQL limit already ensures that the HW queue doesn't
> drain completely while packets are pending in the qdisc. I doubt this
> will increase BQL limits, but that should be verified. We might see
> some latency increase for a batch sent on an idle link (possible with
> GSO); if this is a concern, we could arrange to flush when sending
> packets on idle links.

That's also correct.

The case that needs special handling is the initial send on a TX queue
which is empty or close to empty.

Probably we want some kind of exponential-backoff scheme: starting from
an empty TX queue, we'd trigger TX on the first packet, then the third,
then the 7th, assuming we had that many to send at once.
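
To make that trigger pattern concrete, here is a rough sketch (purely
illustrative, not from the patch set) of a predicate that fires on the
1st, 3rd, 7th, 15th, ... packet queued since the queue went empty:

	#include <stdbool.h>

	/*
	 * Illustrative only: should the n-th packet queued since the TX
	 * queue went empty also kick the hardware?  Fires for
	 * n = 1, 3, 7, 15, ..., i.e. whenever n is one less than a
	 * power of two.
	 */
	static bool should_kick_hw(unsigned int n)
	{
		/* n == 2^k - 1 exactly when n & (n + 1) == 0 */
		return (n & (n + 1)) == 0;
	}

With that rule, a burst of 10 packets hitting an idle queue would go out
in chunks of 1, 2, 4 and then 3 (the final packet flushes anyway once
xmit_more is false), so the doorbell cost is amortized while the first
packet still goes out immediately.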
