Date:	Fri, 1 Oct 2010 22:30:22 +0200
From:	Willy Tarreau <w@....eu>
To:	Robin Holt <holt@....com>
Cc:	"David S. Miller" <davem@...emloft.net>,
	Alexey Kuznetsov <kuznet@....inr.ac.ru>,
	"Pekka Savola (ipv6)" <pekkas@...core.fi>,
	James Morris <jmorris@...ei.org>,
	Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
	Patrick McHardy <kaber@...sh.net>,
	Vlad Yasevich <vladislav.yasevich@...com>,
	Sridhar Samudrala <sri@...ibm.com>,
	linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
	linux-decnet-user@...ts.sourceforge.net, linux-sctp@...r.kernel.org
Subject: Re: sysctl_{tcp,udp,sctp}_mem overflow on 16TB system.

Hello Robin,

On Fri, Oct 01, 2010 at 02:39:58PM -0500, Robin Holt wrote:
> 
> On a 16TB system, we noticed that sysctl_tcp_mem[2] and sysctl_udp_mem[2]
> were negative.  Code review indicates that the same should occur with
> sysctl_sctp_mem[2].
> 
> There are a couple of ways we could address this.  The one which appears
> most reasonable would be to change the struct proto definition for
> sysctl_mem from an int to a long and handle all the associated fallout.
> 
> An alternative is to limit the calculation to 1/2 INT_MAX.  The downside
> is that the administrator could not tune the system to use more than
> INT_MAX pages of memory when much more is available.
> 
> Is there a compelling reason to not change the structure's definition
> over to longs instead of ints and deal with the fallout from that change?
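
For reference, a minimal userspace illustration (made-up numbers and names,
not the kernel code itself) of why the stored value goes negative once a
threshold derived from 16 TB of RAM no longer fits in an int:

#include <limits.h>
#include <stdio.h>

int main(void)
{
	/* 16 TB with 4 KiB pages is 2^32 pages, already more than INT_MAX */
	unsigned long long total_pages = (16ULL << 40) / 4096;
	unsigned long long limit = total_pages / 2;	/* some fraction used as threshold */
	int sysctl_mem_2 = limit;			/* narrowed to 32 bits, wraps negative */

	printf("limit = %llu, stored as int = %d\n", limit, sysctl_mem_2);
	/* prints: limit = 2147483648, stored as int = -2147483648 on LP64 */
	return 0;
}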

Could we not look at it the other way round: is there any reason someone
would want to assign more than 8 TB of RAM to the network buffers in the
near future? Even at 100 Gbps, that's still about 10 minutes of traffic
sitting in buffers. By the time we need buffers that large, Linux will
probably not support 32-bit systems anymore and all such limits will have
switched to 64-bit.

So limiting the value to INT_MAX/2 probably sounds reasonable?
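
If we go that way, a cap at the point where the defaults are computed should
be all that's needed. Here is a rough, untested userspace sketch of the idea
(the helper name and the [0]/[1]/[2] split are illustrative, not the exact
tcp_init() code):

#include <limits.h>
#include <stdio.h>

int sysctl_tcp_mem[3];	/* stands in for the int array hanging off struct proto */

static void set_tcp_mem(unsigned long long limit)
{
	/* cap the computed threshold so the int array stays positive */
	if (limit > INT_MAX / 2)
		limit = INT_MAX / 2;

	sysctl_tcp_mem[0] = limit / 4 * 3;
	sysctl_tcp_mem[1] = limit;
	sysctl_tcp_mem[2] = sysctl_tcp_mem[0] * 2;
}

int main(void)
{
	/* the threshold as it would come out on a 16 TB box */
	set_tcp_mem((16ULL << 40) / 4096 / 2);
	printf("%d %d %d\n", sysctl_tcp_mem[0], sysctl_tcp_mem[1], sysctl_tcp_mem[2]);
	return 0;
}

That only touches the default computation; the values can still be tuned by
hand afterwards, just not past what an int can hold.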

Regards,
Willy

