Date:	Wed, 13 Aug 2014 16:38:09 +0400
From:	Vasily Averin <vvs@...allels.com>
To:	netdev@...r.kernel.org, Jamal Hadi Salim <jhs@...atatu.com>,
	"David S. Miller" <davem@...emloft.net>
CC:	Alexey Kuznetsov <kuznet@....inr.ac.ru>
Subject: [PATCH 0/2] cbq: incorrectly low bandwidth blocks limited traffic

Mainstream commit f0f6ee1f70c4eaab9d52cf7d255df4bd89f8d1c2 has a side effect:
if the cbq bandwidth setting is less than the real interface throughput,
non-limited traffic can delay limited traffic for a very long time.

This happens (again) because q->now is changed incorrectly in cbq_dequeue():
in the described scenario L2T is much greater than the real time delay,
so q->now gets an extra boost for each transmitted packet.

To compensate for this boost, q->now is then left unchanged until it is
synchronized with real time again. Unfortunately that does not work in the
described scenario: the boost accumulated by non-limited traffic blocks
the processing of limited traffic.
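
To get a feel for the magnitude, here is a back-of-the-envelope sketch
(plain userspace C, not kernel code) using the numbers from the example
below; the ~1500-byte packet size is an assumption, the rates are taken
from the netperf runs:

/*
 * Rough estimate of how far q->now can run ahead of real time when the
 * cbq bandwidth (100 Mbit/s) is set below the real link throughput
 * (~818 Mbit/s in the non-limited netperf run below). Packet size and
 * run length are illustrative assumptions.
 */
#include <stdio.h>

int main(void)
{
	const double cbq_bandwidth = 100e6;	/* configured cbq bandwidth, bit/s */
	const double real_rate     = 818e6;	/* measured non-limited throughput, bit/s */
	const double pkt_bits      = 1500 * 8;	/* assumed on-wire packet size, bits */
	const double run_seconds   = 5.0;	/* length of the non-limited netperf run */

	/* L2T: time the qdisc charges per packet at the configured bandwidth */
	double l2t  = pkt_bits / cbq_bandwidth;
	/* real transmission time per packet on the faster physical link */
	double real = pkt_bits / real_rate;
	/* q->now gains roughly (l2t - real) per transmitted packet */
	double boost_per_pkt = l2t - real;

	double pkts  = real_rate * run_seconds / pkt_bits;
	double boost = pkts * boost_per_pkt;

	printf("per-packet boost:  %.1f us\n", boost_per_pkt * 1e6);
	printf("packets in %.0f s:   %.0f\n", run_seconds, pkts);
	printf("accumulated boost: %.1f s\n", boost);
	return 0;
}

With these assumptions the clock ends up roughly 36 seconds ahead after a
single 5-second non-limited run, which is about the size of the delays the
pings below actually see.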

Below you can find an example of this problem.

To fix the problem I propose the following patch set:

Vasily Averin (2):
  cbq: incorrectly low bandwidth setting blocks limited traffic
  cbq: now_rt removal

 net/sched/sch_cbq.c |   48 ++++++++++++++----------------------------------
 1 files changed, 14 insertions(+), 34 deletions(-)

The first patch prevents incorrect updates of q->now in cbq_dequeue():
now it just saves the real time on each function call.
The q->now change required by cbq_update() is compensated for inside that function.
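
As a toy illustration of the difference (ordinary userspace C, not the
actual sch_cbq.c change; the max()-style boost, the rates and the packet
size are assumptions made for the example), compare a clock that gains
L2T per packet with one that simply follows real time:

/*
 * Toy model of the two q->now update policies. This is not kernel code;
 * all names and numbers are illustrative.
 */
#include <stdio.h>

static double max_d(double a, double b) { return a > b ? a : b; }

int main(void)
{
	const double l2t        = 120e-6;	/* charge for a 1500-byte packet at 100 Mbit/s */
	const double real_delta = 15e-6;	/* real gap between dequeues at ~800 Mbit/s */
	const int    npkts      = 340000;	/* roughly 5 s of non-limited traffic */

	double real = 0.0, now_boosted = 0.0, now_realtime = 0.0;

	for (int i = 0; i < npkts; i++) {
		real += real_delta;
		/* boosted clock: gains up to L2T per transmitted packet */
		now_boosted = max_d(now_boosted + l2t, real);
		/* behaviour after patch 1: q->now just saves real time */
		now_realtime = real;
	}

	printf("boosted clock is ahead of real time by   %.1f s\n",
	       now_boosted - real);
	printf("real-time clock is ahead of real time by %.1f s\n",
	       now_realtime - real);
	return 0;
}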

The second patch removes q->now_rt because it is now identical to q->now.

My testing confirms that these patches fix the problem,
and do not affect

Thank you,
	Vasily Averin

Node with a gigabit link, 100 Mbit bandwidth set in cbq, and a 70 Mbit limit for some traffic:

# tc qdisc del dev eth0 root
# tc qdisc add dev eth0 root handle 1: cbq avpkt 64kb bandwidth 100Mbit
# tc class add dev eth0 parent 1: classid 1:1 cbq rate 70Mbit bounded allot 64kb
# tc filter add dev eth0 parent 1: protocol ip u32 match ip dst 10.30.3.116 flowid 1:1 

Initially the shaper works correctly:

# netperf -H 10.30.3.116 -l5
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.30.3.116 () port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    5.08       66.31

... then the node generates some non-limited traffic ...

# netperf -H ANOTHER_IP -l5
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to ANOTHER_IP () port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    5.02      818.12

... and it blocks the limited traffic:

# netperf -H 10.30.3.116 -l5
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.30.3.116 () port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    9.00        0.10

Ping on a neighbor's console shows that the traffic is not dropped but delayed:

64 bytes from 10.30.3.116: icmp_seq=17 ttl=64 time=0.131 ms
64 bytes from 10.30.3.116: icmp_seq=18 ttl=64 time=0.161 ms
ping: sendmsg: No buffer space available
ping: sendmsg: No buffer space available
ping: sendmsg: No buffer space available
ping: sendmsg: No buffer space available
ping: sendmsg: No buffer space available
ping: sendmsg: No buffer space available
ping: sendmsg: No buffer space available
64 bytes from 10.30.3.116: icmp_seq=19 ttl=64 time=37568 ms
64 bytes from 10.30.3.116: icmp_seq=20 ttl=64 time=36568 ms
64 bytes from 10.30.3.116: icmp_seq=21 ttl=64 time=35568 ms
64 bytes from 10.30.3.116: icmp_seq=22 ttl=64 time=34568 ms
64 bytes from 10.30.3.116: icmp_seq=23 ttl=64 time=33568 ms
64 bytes from 10.30.3.116: icmp_seq=24 ttl=64 time=32568 ms
64 bytes from 10.30.3.116: icmp_seq=25 ttl=64 time=31569 ms
64 bytes from 10.30.3.116: icmp_seq=26 ttl=64 time=30569 ms
64 bytes from 10.30.3.116: icmp_seq=27 ttl=64 time=29569 ms
64 bytes from 10.30.3.116: icmp_seq=28 ttl=64 time=28569 ms
64 bytes from 10.30.3.116: icmp_seq=29 ttl=64 time=27570 ms
64 bytes from 10.30.3.116: icmp_seq=30 ttl=64 time=26570 ms
64 bytes from 10.30.3.116: icmp_seq=38 ttl=64 time=0.187 ms

The output of "tc -s -d class ls dev eth0", taken while the limited traffic was blocked, is attached (tc-cbq-stat.txt).

