Date:	Fri, 23 Dec 2011 14:45:20 +0100
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	"John A. Sullivan III" <jsullivan@...nsourcedevel.com>
Cc:	netdev@...r.kernel.org
Subject: Re: SFQ on HFSC leaf does not seem to work

On Friday, 23 December 2011 at 08:13 -0500, John A. Sullivan III wrote:
> On Fri, 2011-12-23 at 09:10 +0100, Eric Dumazet wrote:
> > On Friday, 23 December 2011 at 01:00 -0500, John A. Sullivan III
> > wrote:
> > 
> > > Where might this 3000 ms delay be coming from?
> > > 
> > 
> > Certainly not from SFQ
> > 
> > You could use tcpdump to check whether the delay is on the egress or
> > ingress side.
> > 
> > 
> > 
> That's perplexing as well.  Tracing on the eth1 interface of the test
> devices, we see the packets going out with near-perfect regularity at
> one-second intervals and replies returning immediately.  They are also
> completely in sequence, so it is not as if we are matching packet 4 with
> packet 7 and seeing an immediate reply that is really offset by three
> seconds.
> 
> Thus, the delay seems to be registered only in the ICMP application
> itself.  The CPU is virtually idle - I am very impressed with Debian's
> handling of interrupts on this old system, which would otherwise have
> been spending almost 60% of its time on hardware interrupts.
> 
> Another weird characteristic is that the delay is introduced gradually
> over several seconds:
> 64 bytes from 192.168.223.84: icmp_req=4 ttl=64 time=58.0 ms
> 64 bytes from 192.168.223.84: icmp_req=5 ttl=64 time=52.4 ms
> 64 bytes from 192.168.223.84: icmp_req=6 ttl=64 time=48.7 ms
> 64 bytes from 192.168.223.84: icmp_req=7 ttl=64 time=118 ms
> 64 bytes from 192.168.223.84: icmp_req=8 ttl=64 time=834 ms
> 64 bytes from 192.168.223.84: icmp_req=9 ttl=64 time=896 ms
> 64 bytes from 192.168.223.84: icmp_req=10 ttl=64 time=897 ms
> 64 bytes from 192.168.223.84: icmp_req=11 ttl=64 time=1081 ms
> 64 bytes from 192.168.223.84: icmp_req=12 ttl=64 time=1257 ms
> 64 bytes from 192.168.223.84: icmp_req=13 ttl=64 time=1744 ms
> 64 bytes from 192.168.223.84: icmp_req=14 ttl=64 time=2107 ms
> 64 bytes from 192.168.223.84: icmp_req=15 ttl=64 time=2532 ms
> 64 bytes from 192.168.223.84: icmp_req=16 ttl=64 time=2948 ms
> 64 bytes from 192.168.223.84: icmp_req=17 ttl=64 time=3191 ms
> 64 bytes from 192.168.223.84: icmp_req=18 ttl=64 time=3163 ms
> 
> While the delay builds, we can see the replies getting noticeably
> delayed; however, once we hit 3000 ms, the display updates at one packet
> per second.  I would expect one packet every three seconds unless we
> were interleaving packets, but according to the packet trace we are not.
> 
> So I am guessing an inbound queue, but where?  Ah, netem.  I pulled out
> netem and I see vastly different results.  I get a very occasional lost
> packet but no impact on latency:
> 
> 64 bytes from 192.168.223.84: icmp_req=48 ttl=64 time=0.802 ms
> 64 bytes from 192.168.223.84: icmp_req=49 ttl=64 time=0.843 ms
> 64 bytes from 192.168.223.84: icmp_req=50 ttl=64 time=0.739 ms
> 64 bytes from 192.168.223.84: icmp_req=51 ttl=64 time=0.769 ms
> 64 bytes from 192.168.223.84: icmp_req=52 ttl=64 time=0.833 ms
> 64 bytes from 192.168.223.84: icmp_req=53 ttl=64 time=0.872 ms
> 64 bytes from 192.168.223.84: icmp_req=54 ttl=64 time=0.786 ms
> 64 bytes from 192.168.223.84: icmp_req=55 ttl=64 time=0.766 ms
> 64 bytes from 192.168.223.84: icmp_req=56 ttl=64 time=0.715 ms
> 64 bytes from 192.168.223.84: icmp_req=57 ttl=64 time=0.710 ms
> 64 bytes from 192.168.223.84: icmp_req=58 ttl=64 time=0.784 ms
> 64 bytes from 192.168.223.84: icmp_req=59 ttl=64 time=0.766 ms
> 64 bytes from 192.168.223.84: icmp_req=60 ttl=64 time=0.748 ms
> 
> 
> So netem has bufferbloat ;)  Thanks - John
> 

1) Which kernel version are you running?
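   (For instance, uname -r on the box doing the shaping will show it.)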

2) How many concurrent flows are running (i.e., how many netperf/netcat
instances)?
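   (One way to generate and count flows - a sketch, assuming netperf is
   installed and that 192.168.223.84 is the receiving test box:

     # start four concurrent bulk TCP streams toward the peer
     for i in 1 2 3 4; do netperf -H 192.168.223.84 -t TCP_STREAM -l 60 & done
     # count established TCP connections on the sender
     netstat -tn | grep -c ESTABLISHED
   )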

3) Remember that 'perturb xxx' temporarily doubles the number of flows.
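   (A sketch of what I mean - the 1:10 class id and eth1 are my
   assumptions, not your exact HFSC config:

     # SFQ on an HFSC leaf; 'perturb 10' rehashes the flows every 10
     # seconds, and each rehash briefly doubles the apparent flow count
     tc qdisc add dev eth1 parent 1:10 handle 10: sfq perturb 10

   Omitting 'perturb' avoids the periodic rehash entirely.)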

4) Have you disabled TSO on eth1?
   (If not, you may be sending 64 KB packets, and at 400 kbit/s they take
   a long time to transmit: more than one second each ...)
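   For example, to check and disable it (standard ethtool invocations;
   eth1 taken from your setup):

     # show whether TSO is currently enabled
     ethtool -k eth1 | grep segmentation
     # turn it off so the qdisc sees MTU-sized packets
     ethtool -K eth1 tso off

   The arithmetic: a 64 KB super-packet is 64 * 1024 * 8 = 524288 bits,
   and at 400 kbit/s that is 524288 / 400000 ~= 1.3 seconds of wire time
   for a single packet.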



