Date:	Sat, 08 Mar 2014 20:11:17 -0800
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	John Fastabend <john.fastabend@...il.com>
Cc:	Ming Chen <v.mingchen@...il.com>, netdev@...r.kernel.org,
	Erez Zadok <ezk@....cs.sunysb.edu>,
	Dean Hildebrand <dhildeb@...ibm.com>,
	Geoff Kuenning <geoff@...hmc.edu>
Subject: Re: [BUG?] ixgbe: only num_online_cpus() of the tx queues are
 enabled

On Sat, 2014-03-08 at 19:52 -0800, John Fastabend wrote:
> On 03/08/2014 07:37 PM, Eric Dumazet wrote:
> > On Sat, 2014-03-08 at 19:53 -0500, Ming Chen wrote:
> >> Hi Eric,
> >>
> >> We noticed many changes in the TCP stack, and a lot of them come from you :-)
> >>
> >> Actually, we have a question about this patch you submitted
> >> (http://lwn.net/Articles/564979/) regarding an experiment we conducted
> >> in the 3.12.0 kernel. The results we observed are shown in the second
> >> figure of panel 6 in this poster at
> >> http://www.fsl.cs.sunysb.edu/~mchen/fast14poster-hashcast-portrait.pdf
> >> .  We have repeated the same experiment 100 times, and observed
> >> that results like that appeared 4 times. For this experiment, we
> >> observed that all five flows are using dedicated tx queues.  But what
> >> makes a big difference is the average packet sizes of the flows.
> >> Client4 has an average packet size of around 3KB while all other
> >> clients generate packet sizes over 50KB. We suspect it might be caused
> >> by this TSO Packets Automatic Sizing feature. Our reasoning is this: if
> >> a TCP flow starts slowly, this feature will assign it a small packet
> >> size. The packet size and the sending rate can somehow form a feedback
> >> loop, which can force the TCP flow's rate to stay low. What do you
> >> think about this?
> >
> > I think nothing at all. TCP is not fair. TCP tries to grab the whole
> > bandwidth by definition. One flow can have much more than its neighbour.
> >
> > With FQ, you can force some fairness, but if you use multiqueue, there
> > is no guarantee at all, unless you make sure that:
> >
> > - there is no more than one flow per queue.
> > - the NIC is able to provide fairness among all active TX queues.
> >
> 
> The NIC by default will round robin amongst the queues and should be
> reasonably fair. We could increase the number of TX queues the driver
> enables, and for a small number of flows the first condition is easier
> to meet, although it won't help as the flow count increases.
> 
> Using FQ as the root qdisc, though, I think will really hurt performance
> for small packet sizes. For larger packet sizes it's probably less
> noticeable. Each queue can use FQ, as noted previously.
> 
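
(As an aside on "each queue can use FQ": fq can also be attached to each
TX queue explicitly, without touching default_qdisc. A minimal sketch,
assuming a device named eth0 with four TX queues; the handle 100: for the
mq root is arbitrary:

tc qdisc replace dev eth0 root handle 100: mq   # mq exposes one class per TX queue
tc qdisc add dev eth0 parent 100:1 fq
tc qdisc add dev eth0 parent 100:2 fq
tc qdisc add dev eth0 parent 100:3 fq
tc qdisc add dev eth0 parent 100:4 fq

Each queue then runs its own fq instance while the device stays
multiqueue.)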

Note Ming's case was using between 1 and 10 flows.

Of course, MQ+FQ is better for performance, but then the fairness
problem is back.

It all depends on what is really wanted, that's why we can tweak
things ;)

To play with fq (instead of pfifo_fast) and mq, it's as simple as:

# make fq the default qdisc for newly attached qdiscs
echo fq >/proc/sys/net/core/default_qdisc
# temporarily install a single-queue root, replacing the current mq setup
tc qdisc replace dev eth0 root pfifo
# delete it; the kernel re-creates the default mq root with fq children
tc qdisc del dev eth0 root

And you now have MQ+FQ instead of MQ+pfifo_fast.
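
To double check what you ended up with (and how much each TX queue is
carrying), something like

tc -s qdisc show dev eth0

should show the mq root with one fq child per TX queue, plus per-child
byte and packet counters; eth0 is again just a placeholder for the real
interface.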



--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
