Date: Sun, 16 Jun 2024 11:47:40 +0800
From: "Ma, Yu" <yu.ma@...el.com>
To: Mateusz Guzik <mjguzik@...il.com>
Cc: viro@...iv.linux.org.uk, brauner@...nel.org, jack@...e.cz,
 linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
 tim.c.chen@...ux.intel.com, tim.c.chen@...el.com, pan.deng@...el.com,
 tianyou.li@...el.com, yu.ma@...el.com
Subject: Re: [PATCH 3/3] fs/file.c: move sanity_check from alloc_fd() to
 put_unused_fd()


On 6/15/2024 12:41 PM, Mateusz Guzik wrote:
> On Fri, Jun 14, 2024 at 12:34:16PM -0400, Yu Ma wrote:
>> alloc_fd() has a sanity check inside to make sure the FILE object mapping to the
> Total nitpick: FILE is the libc thing, I would refer to it as 'struct
> file'. See below for the actual point.

Good point, not a nitpick at all ;) I will update the wording in the commit message.

>> Combined with patch 1 and 2 in series, pts/blogbench-1.1.0 read improved by
>> 32%, write improved by 15% on Intel ICX 160 cores configuration with v6.8-rc6.
>>
>> Reviewed-by: Tim Chen <tim.c.chen@...ux.intel.com>
>> Signed-off-by: Yu Ma <yu.ma@...el.com>
>> ---
>>   fs/file.c | 14 ++++++--------
>>   1 file changed, 6 insertions(+), 8 deletions(-)
>>
>> diff --git a/fs/file.c b/fs/file.c
>> index a0e94a178c0b..59d62909e2e3 100644
>> --- a/fs/file.c
>> +++ b/fs/file.c
>> @@ -548,13 +548,6 @@ static int alloc_fd(unsigned start, unsigned end, unsigned flags)
>>   	else
>>   		__clear_close_on_exec(fd, fdt);
>>   	error = fd;
>> -#if 1
>> -	/* Sanity check */
>> -	if (rcu_access_pointer(fdt->fd[fd]) != NULL) {
>> -		printk(KERN_WARNING "alloc_fd: slot %d not NULL!\n", fd);
>> -		rcu_assign_pointer(fdt->fd[fd], NULL);
>> -	}
>> -#endif
>>   
> I was going to ask when was the last time anyone seen this fire and
> suggest getting rid of it if enough time(tm) passed. Turns out it does
> show up sometimes, latest result I found is 2017 vintage:
> https://groups.google.com/g/syzkaller-bugs/c/jfQ7upCDf9s/m/RQjhDrZ7AQAJ
>
> So you are moving this to another locked area, but one which does not
> execute in the benchmark?

The consideration here, as mentioned, is to reduce the performance impact 
while keeping the sanity check (with the same functionality) by moving it 
from the critical path to a non-critical one: put_unused_fd() is mostly 
used for error handling, when an fd has been allocated successfully but 
the struct file could not be obtained in the next step.

>
> Patch 2/3 states 28% read and 14% write increase, this commit message
> claims it goes up to 32% and 15% respectively -- pretty big. I presume
> this has to do with bouncing a line containing the fd.
>
> I would argue moving this check elsewhere is about as good as removing
> it altogether, but that's for the vfs overlords to decide
>
> All that aside, looking at disasm of alloc_fd it is pretty clear there
> is time to save, for example:
>
> 	if (unlikely(nr >= fdt->max_fds)) {
> 		if (fd >= end) {
> 			error = -EMFILE;
> 			goto out;
> 		}
> 		error = expand_files(fd, fd);
> 		if (error < 0)
> 			goto out;
> 		if (error)
> 			goto repeat;
> 	}
>
> This elides 2 branches and a func call in the common case. Completely
> untested, maybe has some brainfarts, feel free to take without credit
> and further massage the routine.
>
> Moreover my disasm shows that even looking for a bit results in
> a func call(!) to _find_next_zero_bit -- someone(tm) should probably
> massage it into another inline.
>
> After the above massaging is done and if it turns out the check has to
> stay, you can plausibly damage-control it with prefetch -- issue it
> immediately after finding the fd number, before any other work.
>
> All that said, by the above I'm confident there is still *some*
> performance left on the table despite the lock.

Thank you Guzik for the quick check and good suggestions :) Yes, there 
are extra branches and a function call that can be eliminated for better 
performance. Considering the fix for the fast path, how about a flow like 
the draft below (in addition to the massaging of find_next_fd()):

         error = -EMFILE;
         if (fd < fdt->max_fds) {
                 if (~fdt->open_fds[0]) {
                        fd = find_next_zero_bit(fdt->open_fds,
                                                BITS_PER_LONG, fd);
                         goto fastreturn;
                 }
                 fd = find_next_fd(fdt, fd);
         }

         if (unlikely(fd >= fdt->max_fds)) {
                 error = expand_files(files, fd);
                 if (error < 0)
                         goto out;
                 if (error)
                         goto repeat;
         }
fastreturn:
         if (unlikely(fd >= end))
                 goto out;
         if (start <= files->next_fd)
                 files->next_fd = fd + 1;

        ....

>>   out:
>>   	spin_unlock(&files->file_lock);
>> @@ -572,7 +565,7 @@ int get_unused_fd_flags(unsigned flags)
>>   }
>>   EXPORT_SYMBOL(get_unused_fd_flags);
>>   
>> -static void __put_unused_fd(struct files_struct *files, unsigned int fd)
>> +static inline void __put_unused_fd(struct files_struct *files, unsigned int fd)
>>   {
>>   	struct fdtable *fdt = files_fdtable(files);
>>   	__clear_open_fd(fd, fdt);
>> @@ -583,7 +576,12 @@ static void __put_unused_fd(struct files_struct *files, unsigned int fd)
>>   void put_unused_fd(unsigned int fd)
>>   {
>>   	struct files_struct *files = current->files;
>> +	struct fdtable *fdt = files_fdtable(files);
>>   	spin_lock(&files->file_lock);
>> +	if (unlikely(rcu_access_pointer(fdt->fd[fd]))) {
>> +		printk(KERN_WARNING "put_unused_fd: slot %d not NULL!\n", fd);
>> +		rcu_assign_pointer(fdt->fd[fd], NULL);
>> +	}
>>   	__put_unused_fd(files, fd);
>>   	spin_unlock(&files->file_lock);
>>   }
