Date:	Sat, 08 Mar 2014 19:52:07 -0800
From:	John Fastabend <john.fastabend@...il.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
CC:	Ming Chen <v.mingchen@...il.com>, netdev@...r.kernel.org,
	Erez Zadok <ezk@....cs.sunysb.edu>,
	Dean Hildebrand <dhildeb@...ibm.com>,
	Geoff Kuenning <geoff@...hmc.edu>
Subject: Re: [BUG?] ixgbe: only num_online_cpus() of the tx queues are enabled

On 03/08/2014 07:37 PM, Eric Dumazet wrote:
> On Sat, 2014-03-08 at 19:53 -0500, Ming Chen wrote:
>> Hi Eric,
>>
>> We noticed many changes in the TCP stack, and a lot of them come from you :-)
>>
>> Actually, we have a question about this patch you submitted
>> (http://lwn.net/Articles/564979/) regarding an experiment we conducted
>> in the 3.12.0 kernel. The results we observed are shown in the second
>> figure of panel 6 in this poster:
>> http://www.fsl.cs.sunysb.edu/~mchen/fast14poster-hashcast-portrait.pdf
>> We repeated the same experiment 100 times, and results like this
>> appeared 4 times. In this experiment, all five flows use dedicated tx
>> queues. But what makes a big difference is the average packet size of
>> the flows: Client4 has an average packet size of around 3KB while all
>> other clients generate packet sizes over 50KB. We suspect it might be
>> caused by the TSO Packets Automatic Sizing feature. Our reasoning is
>> this: if a TCP flow starts slowly, this feature will assign it a small
>> packet size. The packet size and the sending rate can form a feedback
>> loop, which can force the TCP flow's rate to stay low. What do you
>> think about this?
>
> I think nothing at all. TCP is not fair. TCP tries to steal the whole
> bandwidth by definition. One flow can have much more than its neighbour.
>
> With FQ, you can force some fairness, but if you use multiqueue, there
> is no guarantee at all, unless you make sure:
>
> - no more than one flow per queue;
> - the NIC is able to provide fairness among all active TX queues.
>

The NIC by default will round-robin amongst the queues and should be
reasonably fair. We could increase the number of TX queues the driver
enables, and for a small number of flows the first condition is then
easier to meet, although that won't help as the flow count increases.
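
(For reference, on ixgbe-class NICs the queue count can typically be
adjusted through ethtool's channels interface, assuming the driver
implements set_channels; the interface name and count below are only
placeholders.)

  # show the current and maximum channel (queue) counts
  ethtool -l eth0
  # raise the number of combined RX/TX channels, within what the NIC reports
  ethtool -L eth0 combined 64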

Using FQ as the root qdisc, though, I think will really hurt performance
with small packet sizes. For larger packet sizes it's probably less
noticeable. Each TX queue can use FQ as noted previously.
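
Roughly, the two setups being compared look like this (device name and
handles are placeholders, assuming a multiqueue NIC):

  # single FQ instance as root: all TX queues contend on one qdisc
  tc qdisc replace dev eth0 root fq

  # MQ root with one FQ instance per TX queue, avoiding the shared lock
  tc qdisc replace dev eth0 root handle 1: mq
  tc qdisc replace dev eth0 parent 1:1 fq
  tc qdisc replace dev eth0 parent 1:2 fq
  # ...and so on, one per TX queue exposed by the driver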


> That's the ideal condition, and it is quite hard to meet.
>
> The feedback loop you mention should be solved by the patch I sent
> today: TCP Small Queues makes sure that you have no more than 2 packets
> per flow on the qdisc / TX queues. So one 'fast' flow cannot have 90%
> of the packets in the qdisc. cwnd is maintained at very small values,
> assuming the receiver is behaving normally.
>
>
>


-- 
John Fastabend         Intel Corporation
