Date:	Wed, 14 Mar 2012 12:32:12 +0100
From:	Jesper Dangaard Brouer <jdb@...x.dk>
To:	Dave Taht <dave.taht@...il.com>
Cc:	Eric Dumazet <eric.dumazet@...il.com>,
	David Miller <davem@...emloft.net>,
	netdev <netdev@...r.kernel.org>,
	Jesper Dangaard Brouer <netoptimizer@...uer.com>
Subject: Re: [PATCH] sch_sfq: revert dont put new flow at the end of flows

On Wed, 14 Mar 2012 at 04:52 +0000, Dave Taht wrote:
> On Wed, Mar 14, 2012 at 4:04 AM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> > This reverts commit d47a0ac7b6 (sch_sfq: dont put new flow at the end of
> > flows)
> >
> > As Jesper found out, patch sounded great but has bad side effects.
> 
> Well under most circumstances it IS great.

Yes, I had really high hopes for this patch.  Unfortunately it can
cause starvation in some situations :-(.


> As the depth of the sfq queue increases, it gets increasingly hard to
> trigger the problem. I've been using values in the 200-300 range, and
> in combination with RED, haven't seen it happen.

I don't think you should adjust "depth"; instead adjust "limit" or
"flows".

The problem can be solved by SFQ parameter tuning.  Perhaps we could
just change the default parameters?

The problem occurs when all flows have ONE packet each; then sfq_drop()
cannot find a good flow to drop packets from...

This situation can occur because the default settings are "limit=127"
packets and "flows=127".  If we just make sure that "limit" > "flows",
then when the queue is full at least one flow must hold >=2 packets
(pigeonhole), and that flow is then chosen for the drop.
My practical experiments show that "limit" should be 10 to 20 packets
larger than "flows" (I'm not completely sure why this is needed).
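
For illustration, this is roughly the kind of tuning I mean, done with
tc.  A sketch only: the device name and the exact limit/flows values
are just examples, and it assumes an iproute2/kernel new enough to
accept the extended SFQ "flows" parameter:

  # Keep "limit" some 10-20 packets above "flows", so that when the
  # queue is full at least one flow holds >=2 packets for sfq_drop().
  tc qdisc replace dev eth0 root sfq limit 147 flows 127 perturb 10

  # Watch queue length and drop counters while generating load:
  tc -s qdisc show dev eth0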

[cut]
> > In stress situations, pushing new flows to the front of the queue can
> > prevent old flows from making any progress. Packets can stay in the
> > SFQ queue for an unlimited amount of time.

In my experiments, one "existing" flow would get all the bandwidth,
while the other flows were starved, and new flows could not be
established.


--Jesper Brouer
