Message-ID: <4C87C710.8080804@gmail.com>
Date:	Wed, 08 Sep 2010 19:25:36 +0200
From:	Jarek Poplawski <jarkao2@...il.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
CC:	Anand Raj Manickam <anandrm@...il.com>, netdev@...r.kernel.org,
	netfilter-devel@...r.kernel.org, shemminger@...tta.com
Subject: Re: Kernel Panic on OOM with 10 HTB rules

Eric Dumazet wrote, On 09/08/2010 04:45 PM:

> On Wednesday, 08 September 2010 at 19:39 +0530, Anand Raj Manickam wrote:
>>
>> imq0      Link encap:UNSPEC  HWaddr
>> 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
>>           UP RUNNING NOARP  MTU:16000  Metric:1
>>           RX packets:129112 errors:0 dropped:0 overruns:0 frame:0
>>           TX packets:129114 errors:0 dropped:0 overruns:0 carrier:0
>>           collisions:0 txqueuelen:11000
>>           RX bytes:31060964 (29.6 MiB)  TX bytes:31062207 (29.6 MiB)
>>
> If you cannot switch to a 64-bit kernel, then you are forced to use lower
> queue lengths (I see your imq devices use an insane txqueuelen of 11000).
> 
> Each frame uses 4K, maybe 16K, depending on the MTU.
> 
> Even if we don't take other needs into account:
> 11000 * 16K = ~170 MB per imqX
> 1000 * 4K = 4 MB per ethX
> 
> 170 MB * 8 -> memory overflow on a 32-bit kernel
> 
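
For reference, an untested sketch of lowering the device backlog at
runtime (imq0 is taken from the ifconfig output above; 1000 is only an
illustrative value):

  # cap the backlog so the worst case is about 1000 * 16K = 16 MB per imqX
  ip link set dev imq0 txqueuelen 1000

The same can be done with net-tools: ifconfig imq0 txqueuelen 1000.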

You should also consider that HTB creates, by default, one queue per class,
each limited to txqueuelen packets. That probably explains why your
problems start when you classify into too many classes.
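
For illustration, another untested sketch: attaching a small pfifo to each
HTB leaf class overrides the default txqueuelen-sized queue. The "1:10" and
"1:20" class ids and the 100-packet limit are placeholders, adjust them to
your actual setup:

  # replace the default per-class queue (limit = txqueuelen) with a short one
  tc qdisc add dev imq0 parent 1:10 handle 10: pfifo limit 100
  tc qdisc add dev imq0 parent 1:20 handle 20: pfifo limit 100

With, say, 10 classes capped at 100 packets each, the per-device worst case
is roughly 10 * 100 * 16K = ~16 MB rather than 10 * 11000 * 16K.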

Jarek P.
