Date:	Tue, 27 Nov 2007 08:58:06 +0100
From:	Eric Dumazet <dada1@...mosbay.com>
To:	Valdis.Kletnieks@...edu
CC:	Andrew Morton <akpm@...ux-foundation.org>,
	Alan Cox <alan@...rguk.ukuu.org.uk>,
	Linux kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] get rid of NR_OPEN and introduce a sysctl_nr_open

Valdis.Kletnieks@...edu wrote:
> On Tue, 27 Nov 2007 08:09:19 +0100, Eric Dumazet said:
> 
>> Changing NR_OPEN is not considered safe because of potential vmalloc
>> space exhaustion.
> 
> Verbiage about this point...
> 
> 
>> +nr_open
>> +-------
>> +
>> +Denotes the maximum number of file-handles a process can
>> +allocate. Default value is 1024*1024 (1048576) which should be
>> +enough for most machines. Actual limit depends on RLIMIT_NOFILE
>> +resource limit.
>> +
> 
> should probably be in here - can you add something of the form "Setting this
> too high can cause vmalloc failures, especially on smaller-RAM machines",
> and/or *say* how much RAM the default takes?  Sure, it's 1M entries, but
> my tuning on a 2G-RAM machine will differ if these are byte-sized, or 128-byte
> sized - one is off in a corner, the other is 1/16th of my entire memory.

vmalloc failures can already happen on i386 kernels if you start 32 processes, 
each of them opening file handle number 600,000 (provided their 
RLIMIT_NOFILE is >= 600000):

fcntl(0, F_DUPFD, 600000);

We are not going to add vmalloc warnings to every sysctl that could let a root 
user exhaust vmalloc space. This is a vmalloc issue on 32-bit kernels, and 
quite frankly I have never hit this limit.

If you take a look at the vmalloc() implementation, the fact that it uses a 
'struct vm_struct *vmlist;' to track all active zones shows that vmalloc() is 
not used that much.

> 
> Also, would it be useful to *lower* the value drastically, if you know a priori
> that no process should get up to 1K file handles, much less 1M? Does that
> buy me anything different than setting RLIMIT_NOFILE=1024?

NR_OPEN is the max value that RLIMIT_NOFILE can reach, nothing more.

You can set it to 256*1024*1024 or to 4*1024; it won't change memory needs on 
your machine, unless you also raise RLIMIT_NOFILE and one of your programs 
leaks file handles, or really wants to open many of them simultaneously.

Most programs won't open more than 500 files, so their file table is allocated 
via kmalloc().
