Open Source and information security mailing list archives
 
Message-ID: <nkgz2gvz7pyp3qcvincc4ovkofwg6dzp5dgjyvzq7agwwqlmo7@52pgxfk5iqh4>
Date: Wed, 25 Dec 2024 16:44:32 +0100
From: Mateusz Guzik <mjguzik@...il.com>
To: Nixiaoming <nixiaoming@...wei.com>
Cc: "arnd@...db.de" <arnd@...db.de>, 
	"viro@...iv.linux.org.uk" <viro@...iv.linux.org.uk>, "brauner@...nel.org" <brauner@...nel.org>, 
	"jack@...e.cz" <jack@...e.cz>, "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, 
	"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>, "linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>, 
	"weiyongjun (A)" <weiyongjun1@...wei.com>, "Liuyang (Young,C)" <young.liuyang@...wei.com>
Subject: Re: [RFC] RLIMIT_NOFILE: the maximum number of open files or the
 maximum fd index?

On Tue, Dec 24, 2024 at 01:20:15AM +0000, Nixiaoming wrote:
> I always thought that RLIMIT_NOFILE limits the number of open files, but when
> I read the code for alloc_fd(), it looks like RLIMIT_NOFILE instead bounds the
> largest fd index. Is this a mistake in my understanding, or an error in the
> implementation?
> 
> -----
> 
> alloc_fd code:
> 
> diff --git a/fs/file.c b/fs/file.c
> index fb1011c..e47ddac 100644
> --- a/fs/file.c
> +++ b/fs/file.c
> @@ -561,6 +561,7 @@ static int alloc_fd(unsigned start, unsigned end, unsigned flags)
>  	 */
>  	error = -EMFILE;
>  	if (unlikely(fd >= end))
> +		/* There may be unclosed fds in [end, max], so the number of open files can exceed RLIMIT_NOFILE. */
>  		goto out;
>  
> 	if (unlikely(fd >= fdt->max_fds)) {
> 
> -----
> 
> Test Procedure
> 1. ulimit -n 1024.
> 2. Create 1000 FDs.
> 3. ulimit -n 100.
> 4. Close all FDs less than 100 and continue to hold FDs greater than 100.
> 5. Call open() and check whether the FD is successfully created.
> 
> If RLIMIT_NOFILE were the upper limit on the number of open files, step 5
>  should fail, but it succeeds.
> 
 
This is the expected behavior, albeit POSIX is a little vague in its
description:

https://pubs.opengroup.org/onlinepubs/009696699/functions/getrlimit.html

> RLIMIT_NOFILE
>    This is a number one greater than the maximum value that the system
>    may assign to a newly-created descriptor. If this limit is
>    exceeded, functions that allocate a file descriptor shall fail with
>    errno set to [EMFILE]. This limit constrains the number of file
>    descriptors that a process may allocate.

Since you freed up values in the range fitting the limit, allocation was
allowed to succeed.

Note that other systems act the same way: nobody explicitly counts used
fds for NOFILE enforcement, and per the above they should not.

Ultimately it *does* constrain the number of file descriptors a process
may allocate, if you look at the set of fd values that can be in use at
any point during the lifetime of the process.
