Message-ID: <YyV0AZ9+Zz4aopq4@localhost.localdomain>
Date:   Sat, 17 Sep 2022 10:15:13 +0300
From:   Alexey Dobriyan <adobriyan@...il.com>
To:     Andrew Morton <akpm@...ux-foundation.org>
Cc:     Ivan Babrou <ivan@...udflare.com>, linux-fsdevel@...r.kernel.org,
        linux-kernel@...r.kernel.org, kernel-team@...udflare.com,
        Kalesh Singh <kaleshsingh@...gle.com>,
        Al Viro <viro@...iv.linux.org.uk>
Subject: Re: [RFC] proc: report open files as size in stat() for /proc/pid/fd

On Fri, Sep 16, 2022 at 05:01:15PM -0700, Andrew Morton wrote:
> (cc's added)
> 
> On Fri, 16 Sep 2022 16:08:52 -0700 Ivan Babrou <ivan@...udflare.com> wrote:
> 
> > Many monitoring tools include open file count as a metric. Currently
> > the only way to get this number is to enumerate the files in /proc/pid/fd.
> > 
> > The problem with the current approach is that it does many things people
> > generally don't care about when they need one number for a metric.
> > In our tests for cadvisor, which reports open file counts per cgroup,
> > we observed that reading the number of open files is slow. Out of 35.23%
> > of CPU time spent in `proc_readfd_common`, we see 29.43% spent in
> > `proc_fill_cache`, which is responsible for filling dentry info.
> > Some of this extra time is spinlock contention, but it's contention
> > on a lock we don't want to take in the first place.
> > 
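For reference, the enumeration that monitoring tools run today boils down
to a readdir() loop; a minimal userspace sketch (pid 1 hardcoded for
illustration):

```
#include <dirent.h>
#include <stdio.h>

int main(void)
{
	DIR *dir = opendir("/proc/1/fd");
	struct dirent *de;
	unsigned long count = 0;

	if (!dir) {
		perror("opendir");
		return 1;
	}
	while ((de = readdir(dir))) {
		/* fd entries are all digits; this skips "." and ".." */
		if (de->d_name[0] != '.')
			count++;
	}
	closedir(dir);
	printf("%lu open fds\n", count);
	return 0;
}
```

Every entry returned here costs a proc_fill_cache() call on the kernel
side, which is where the quoted 29.43% goes.
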
> > We considered putting the number of open files in /proc/pid/stat.
> > Unfortunately, counting the number of fds involves iterating the fdtable,
> > which means that it might slow down /proc/pid/stat for processes
> > with many open files. Instead we opted to put this info in /proc/pid/fd
> > as the size member of the stat() result. Previously the reported
> > number was zero, so there's very little risk of breaking anything,
> > while still providing a somewhat logical way to count the open files.
> 
> Documentation/filesystems/proc.rst would be an appropriate place to
> document this ;)
> 
> > Previously:
> > 
> > ```
> > $ sudo stat /proc/1/fd | head -n2
> >   File: /proc/1/fd
> >   Size: 0         	Blocks: 0          IO Block: 1024   directory
> > ```
> > 
> > With this patch:
> > 
> > ```
> > $ sudo stat /proc/1/fd | head -n2
> >   File: /proc/1/fd
> >   Size: 65        	Blocks: 0          IO Block: 1024   directory

Yes, this is the natural place for it.

> > ```
> > 
> > Correctness check:
> > 
> > ```
> > $ sudo ls /proc/1/fd | wc -l
> > 65
> > ```
> > 
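With the patch the consumer side shrinks to one stat() call; a minimal
sketch (again pid 1 for illustration):

```
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
	struct stat st;

	if (stat("/proc/1/fd", &st)) {
		perror("stat");
		return 1;
	}
	/* with this patch st_size is the number of open fds */
	printf("%lld open fds\n", (long long)st.st_size);
	return 0;
}
```
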
> > There are two alternatives to this approach that I can see:
> > 
> > * Expose /proc/pid/fd_count with a count there

> > * Make fd count access O(1) and expose it in /proc/pid/status

This is doable, next to FDSize.
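
Consumers would then parse it the way they already parse FDSize. A sketch,
assuming a hypothetical `FDCount:` line (no such field exists today):

```
#include <stdio.h>

int main(void)
{
	/* "FDCount:" is hypothetical, shown next to the real "FDSize:" */
	FILE *f = fopen("/proc/1/status", "r");
	char line[256];
	unsigned long count;

	if (!f) {
		perror("fopen");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "FDCount: %lu", &count) == 1) {
			printf("%lu open fds\n", count);
			break;
		}
	}
	fclose(f);
	return 0;
}
```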

The approach below is doable too.

> > --- a/fs/proc/fd.c
> > +++ b/fs/proc/fd.c
> > @@ -279,6 +279,29 @@ static int proc_readfd_common(struct file *file, struct dir_context *ctx,
> >  	return 0;
> >  }
> >  
> > +static int proc_readfd_count(struct inode *inode)
> > +{
> > +	struct task_struct *p = get_proc_task(inode);
> > +	unsigned int fd = 0, count = 0;
> > +
> > +	if (!p)
> > +		return -ENOENT;
> > +
> > +	rcu_read_lock();
> > +	while (task_lookup_next_fd_rcu(p, &fd)) {
> > +		rcu_read_unlock();
> > +
> > +		count++;
> > +		fd++;
> > +
> > +		cond_resched();
> > +		rcu_read_lock();
> > +	}
> > +	rcu_read_unlock();
> > +	put_task_struct(p);
> > +	return count;
> > +}
> > +
> >  static int proc_readfd(struct file *file, struct dir_context *ctx)
> >  {
> >  	return proc_readfd_common(file, ctx, proc_fd_instantiate);
> > @@ -319,9 +342,33 @@ int proc_fd_permission(struct user_namespace *mnt_userns,
> >  	return rv;
> >  }
> >  
> > +int proc_fd_getattr(struct user_namespace *mnt_userns,
> > +			const struct path *path, struct kstat *stat,
> > +			u32 request_mask, unsigned int query_flags)
> > +{
> > +	struct inode *inode = d_inode(path->dentry);
> > +	struct proc_dir_entry *de = PDE(inode);
> > +
> > +	if (de) {
> > +		nlink_t nlink = READ_ONCE(de->nlink);
> > +
> > +		if (nlink > 0)
> > +			set_nlink(inode, nlink);
> > +	}
> > +
> > +	generic_fillattr(&init_user_ns, inode, stat);
			 ^^^^^^^^^^^^^

Is this correct? I'm not a userns guy at all.

> > +
> > +	/* If it's a directory, put the number of open fds there */
> > +	if (S_ISDIR(inode->i_mode))
> > +		stat->size = proc_readfd_count(inode);

-ENOENT can end up in there (stat->size will go negative). In principle
this is OK; userspace can live with it.
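If userspace does want to be strict, a negative st_size can be treated as
"task went away"; illustrative sketch:

```
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
	struct stat st;

	if (stat("/proc/1/fd", &st))
		return 1;
	/* the -ENOENT case above shows up as a negative st_size */
	if (st.st_size < 0) {
		fprintf(stderr, "task went away, count unknown\n");
		return 1;
	}
	printf("%lld open fds\n", (long long)st.st_size);
	return 0;
}
```
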

> >  const struct inode_operations proc_fd_inode_operations = {
> >  	.lookup		= proc_lookupfd,
> >  	.permission	= proc_fd_permission,
> > +	.getattr	= proc_fd_getattr,
> >  	.setattr	= proc_setattr,
