Message-ID: <20200223113024.GA4941@avx2>
Date:   Sun, 23 Feb 2020 14:30:24 +0300
From:   Alexey Dobriyan <adobriyan@...il.com>
To:     Joe Perches <joe@...ches.com>
Cc:     akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
        linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH v3] proc: faster open/read/close with "permanent" files

On Sat, Feb 22, 2020 at 12:39:39PM -0800, Joe Perches wrote:
> On Sat, 2020-02-22 at 23:15 +0300, Alexey Dobriyan wrote:
> > Now that "struct proc_ops" exists, we can start putting things in it
> > which could not fly with the VFS "struct file_operations"...
> > 
> > Most of the fs/proc/inode.c file is dedicated to making open/read/.../close
> > reliable in the event of disappearing /proc entries, which usually happens
> > when a module is removed. Files like /proc/cpuinfo, which never disappear,
> > simply do not need such protection.
> > 
> > Save 2 atomic ops, 1 allocation, 1 free per open/read/close sequence for such
> > "permanent" files.
> > 
> > Enable "permanent" flag for
> > 
> > 	/proc/cpuinfo
> > 	/proc/kmsg
> > 	/proc/modules
> > 	/proc/slabinfo
> > 	/proc/stat
> > 	/proc/sysvipc/*
> > 	/proc/swaps
> > 
> > More will come once I figure out a foolproof way to prevent module
> > authors from marking their stuff "permanent" for performance reasons
> > when it is not.
> > 
> > This should help with scalability: the benchmark is "read /proc/cpuinfo
> > R times by N threads scattered over the system".
> 
> Is this an actual expected use-case?

Yes.

> Is there some additional unnecessary memory consumption
> on unscaled systems?

No, it's the opposite: less memory usage for everyone and a noticeable
performance improvement for the contended case.
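
For reference, a minimal user-space sketch of the benchmark described
above (my reconstruction, not the exact harness; N, R and the thread
pinning policy are whatever you want to measure; build with -pthread):

	/* N threads each open/read/close /proc/cpuinfo R times.
	 * "Scattered over the system" would additionally pin each thread
	 * to a CPU (pthread_setaffinity_np or taskset); omitted here.
	 */
	#include <fcntl.h>
	#include <pthread.h>
	#include <stdlib.h>
	#include <unistd.h>

	#define N	16	/* threads */
	#define R	100000	/* reads per thread */

	static void *worker(void *arg)
	{
		char buf[4096];

		for (int i = 0; i < R; i++) {
			int fd = open("/proc/cpuinfo", O_RDONLY);

			if (fd < 0)
				exit(1);
			while (read(fd, buf, sizeof(buf)) > 0)
				;
			close(fd);
		}
		return NULL;
	}

	int main(void)
	{
		pthread_t t[N];

		for (int i = 0; i < N; i++)
			pthread_create(&t[i], NULL, worker, NULL);
		for (int i = 0; i < N; i++)
			pthread_join(t[i], NULL);
		return 0;
	}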

> >  static ssize_t proc_reg_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
> >  {
> >  	struct proc_dir_entry *pde = PDE(file_inode(file));
> >  	ssize_t rv = -EIO;
> > -	if (use_pde(pde)) {
> > -		typeof_member(struct proc_ops, proc_read) read;
> >  
> > -		read = pde->proc_ops->proc_read;
> > -		if (read)
> > -			rv = read(file, buf, count, ppos);
> > +	if (pde_is_permanent(pde)) {
> > +		return pde_read(pde, file, buf, count, ppos);
> > +	} else if (use_pde(pde)) {
> > +		rv = pde_read(pde, file, buf, count, ppos);
> >  		unuse_pde(pde);
> 
> Perhaps all the function call duplication could be minimized
> by using code without direct returns like:
> 
> 	rv = pde_read(pde, file, buf, count, pos);
> 	if (!pde_is_permanent(pde))
> 		unuse_pde(pde);
> 
> 	return rv;

Function call non-duplication is a false goal.
Surprisingly, it makes the code bigger:

	$ ./scripts/bloat-o-meter ../vmlinux-000 ../obj/vmlinux
	add/remove: 0/0 grow/shrink: 1/0 up/down: 10/0 (10)
	Function                                     old     new   delta
	proc_reg_read                                108     118     +10

and worse, too: "rv" is carried on the stack across the unuse_pde() call.
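
To illustrate with a toy (not the actual proc_reg_read() code, just the
rough shape of the two variants): when "rv" has to survive a later call,
the compiler needs a callee-saved register, and the save/restore for it
is paid even on the fast path.

	/* Toy example, not kernel code: get()/put() stand in for
	 * pde_read()/unuse_pde().
	 */
	extern long get(void);
	extern void put(void);

	/* Early-return shape (as in the patch): the permanent path is a
	 * plain tail call to get(), no frame, nothing saved.
	 */
	long f_early(int permanent)
	{
		long rv;

		if (permanent)
			return get();
		rv = get();
		put();
		return rv;
	}

	/* Merged shape (roughly the suggested version): rv may have to
	 * survive put(), so it ends up in a callee-saved register, and
	 * the prologue/epilogue save/restore is present even when
	 * "permanent" is true.
	 */
	long f_merged(int permanent)
	{
		long rv = get();

		if (!permanent)
			put();
		return rv;
	}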
