Message-ID: <1331733899.2456.66.camel@edumazet-laptop>
Date:	Wed, 14 Mar 2012 07:04:59 -0700
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	jdb@...x.dk
Cc:	Dave Taht <dave.taht@...il.com>,
	David Miller <davem@...emloft.net>,
	netdev <netdev@...r.kernel.org>,
	Jesper Dangaard Brouer <netoptimizer@...uer.com>
Subject: Re: [PATCH] sch_sfq: revert dont put new flow at the end of flows

On Wednesday, 14 March 2012 at 12:32 +0100, Jesper Dangaard Brouer wrote:
> On Wed, 14 Mar 2012 at 04:52 +0000, Dave Taht wrote:
> > On Wed, Mar 14, 2012 at 4:04 AM, Eric Dumazet <eric.dumazet@...il.com> wrote:

> > As the depth of the SFQ queue increases, it gets increasingly hard to
> > trigger the problem. I've been using values in the 200-300 range, and
> > in combination with RED, haven't seen it happen.
> 
> I don't think you should adjust the "depth"; instead, adjust "limit" or
> "flows".
> 
> The problem can be solved by SFQ parameter tuning.  Perhaps we could
> just change the default parameters?
> 
> The problem occurs when all flows have ONE packet; sfq_drop() then
> cannot find a good flow to drop packets from...
> 
> This situation can occur because the default setting is "limit=127"
> packets and "flows=127".  If we just make sure that "limit" > "flows",
> then one flow with >=2 packets should exist, which is then chosen for
> drop.
> My practical experiments show that "limit" should be 10-20 packets
> larger than "flows" (I'm not completely sure why this is needed).
> 

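To make the pigeonhole argument above concrete: with at most "flows"
active flows and "limit" packets queued, some flow must hold >= 2
packets as soon as limit > flows, so sfq_drop() always has a victim.
Here is a minimal stand-alone sketch (plain user-space C, not kernel
code; FLOWS and LIMIT are made-up stand-ins for the qdisc parameters):

#include <stdio.h>

#define FLOWS 127
#define LIMIT 147	/* > FLOWS; the default limit of 127 is the bad case */

int main(void)
{
	int qlen[FLOWS] = {0};
	int i, longest = 0;

	/* an adversary spreads LIMIT packets as evenly as it can */
	for (i = 0; i < LIMIT; i++)
		qlen[i % FLOWS]++;

	/* find the flow sfq_drop() would pick as its victim */
	for (i = 0; i < FLOWS; i++)
		if (qlen[i] > qlen[longest])
			longest = i;

	printf("longest flow holds %d packet(s)\n", qlen[longest]);
	return 0;
}

It prints 2 with LIMIT=147; set LIMIT to 127 (equal to FLOWS, the
default) and it prints 1, which is exactly the degenerate state where
every flow holds a single packet and there is no good flow to drop
from. In tc terms the equivalent tuning would be raising "limit" above
"flows" on the new SFQ, assuming an iproute2 recent enough to expose
the "flows" parameter.
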
There are many ways to starve SFQ if we don't revert the patch or add
new logic in linux-3.4.

Even if we change the default settings, we can have the following situation:

SFQ is in a state with several regular flows in the queue, all behaving
correctly because they are nice.

loop (repeat as many times as you like)
 enqueue : a packet arrives for a new flow X.
           OK, let's favor this new flow against the 'old' ones.
 dequeue : take the packet for flow X.
           forget about flow X, since all its packets were dequeued.
endloop

All other flows are in a frozen state.
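
A minimal user-space simulation of that loop (an illustrative sketch
only, not the sch_sfq code; the ring/flow helpers here are invented
for the demo). One brand-new one-packet flow arrives before every
dequeue, and because new flows are inserted at the head of the
rotation, the head slot is always owned by a newcomer:

#include <stdio.h>

#define SLOTS 256

struct flow { int id; int qlen; };

static struct flow ring[SLOTS];	/* round-robin of active flows */
static int head, count;

static void push_head(struct flow f)
{
	head = (head + SLOTS - 1) % SLOTS;
	ring[head] = f;
	count++;
}

static void push_tail(struct flow f)
{
	ring[(head + count) % SLOTS] = f;
	count++;
}

static struct flow pop_head(void)
{
	struct flow f = ring[head];
	head = (head + 1) % SLOTS;
	count--;
	return f;
}

int main(void)
{
	struct flow old = { 0, 10 };	/* a nice, long-lived flow */
	int old_served = 0, next_id = 1, round;

	push_tail(old);

	for (round = 0; round < 1000; round++) {
		/* enqueue: packet comes for a new flow X;
		 * favor it against the 'old' ones (head insert) */
		struct flow x = { next_id++, 1 };
		push_head(x);

		/* dequeue: takes the packet at the head slot */
		struct flow f = pop_head();
		if (f.id == 0)
			old_served++;
		if (--f.qlen > 0)
			push_tail(f);	/* still backlogged: keep it */
		/* else: forget about flow X, all its packets are gone */
	}

	printf("old flow served %d times in %d rounds\n",
	       old_served, round);
	return 0;
}

It reports the old flow being served 0 times in 1000 rounds. Swap
push_head() for push_tail() in the loop and the old flow gets its
turn each time the rotation wraps, steadily draining its backlog.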



