Date:   Thu, 15 Apr 2021 12:24:13 +0200
From:   Jan Kara <jack@...e.cz>
To:     Chao Yu <yuchao0@...wei.com>
Cc:     viro@...iv.linux.org.uk, jack@...e.com,
        linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
        chao@...nel.org
Subject: Re: [PATCH] direct-io: use read lock for DIO_LOCKING flag

On Thu 15-04-21 17:43:32, Chao Yu wrote:
> Commit 9902af79c01a ("parallel lookups: actual switch to rwsem") changed
> the inode lock from a mutex to an rwsem, but the locking done for the
> DIO_LOCKING flag in do_blockdev_direct_IO() was never adjusted. Take the
> lock in shared mode instead, to mitigate the performance regression for
> read DIO vs read DIO, while still serving the original purpose of
> preventing buffered access from racing with direct access.
> 
> Signed-off-by: Chao Yu <yuchao0@...wei.com>

Thanks for the patch, but this is not safe. Originally we had an exclusive
lock (with i_mutex), and switching to an rwsem doesn't change that
requirement. It may be OK for some filesystems to use shared acquisition
of the rwsem for DIO reads, but it is not clear that this is fine for all
filesystems (and I suspect the filesystems that actually care either don't
use the DIO_LOCKING flag or have already been converted to
iomap_dio_rw()). So unless you audit all filesystems that use
do_blockdev_direct_IO() with the DIO_LOCKING flag and make sure they are
all fine with the inode lock held only in shared mode, this is a no-go.
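
To make the semantic change concrete: the inode_lock*() helpers are thin
wrappers around i_rwsem, roughly as sketched below (paraphrased from
include/linux/fs.h), so the patch replaces an exclusive acquisition with a
shared one, and that weakening is exactly what every DIO_LOCKING user
would have to be audited for:

	/* Sketch of the helpers in include/linux/fs.h */
	static inline void inode_lock(struct inode *inode)
	{
		/* exclusive: excludes all other lockers of i_rwsem */
		down_write(&inode->i_rwsem);
	}

	static inline void inode_lock_shared(struct inode *inode)
	{
		/* shared: other shared holders may run concurrently */
		down_read(&inode->i_rwsem);
	}

With the shared variant two DIO readers no longer exclude each other -
that is the performance win you are after, but it is also a guarantee
some DIO_LOCKING filesystems may still silently rely on.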

								Honza

> ---
>  fs/direct-io.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/fs/direct-io.c b/fs/direct-io.c
> index b2e86e739d7a..93ff912f2749 100644
> --- a/fs/direct-io.c
> +++ b/fs/direct-io.c
> @@ -1166,7 +1166,7 @@ do_blockdev_direct_IO(struct kiocb *iocb, struct inode *inode,
>  	dio->flags = flags;
>  	if (dio->flags & DIO_LOCKING && iov_iter_rw(iter) == READ) {
>  		/* will be released by direct_io_worker */
> -		inode_lock(inode);
> +		inode_lock_shared(inode);
>  	}
>  
>  	/* Once we sampled i_size check for reads beyond EOF */
> @@ -1316,7 +1316,7 @@ do_blockdev_direct_IO(struct kiocb *iocb, struct inode *inode,
>  	 * of protecting us from looking up uninitialized blocks.
>  	 */
>  	if (iov_iter_rw(iter) == READ && (dio->flags & DIO_LOCKING))
> -		inode_unlock(dio->inode);
> +		inode_unlock_shared(dio->inode);
>  
>  	/*
>  	 * The only time we want to leave bios in flight is when a successful
> @@ -1341,7 +1341,7 @@ do_blockdev_direct_IO(struct kiocb *iocb, struct inode *inode,
>  
>  fail_dio:
>  	if (dio->flags & DIO_LOCKING && iov_iter_rw(iter) == READ)
> -		inode_unlock(inode);
> +		inode_unlock_shared(inode);
>  
>  	kmem_cache_free(dio_cache, dio);
>  	return retval;
> -- 
> 2.29.2
> 
-- 
Jan Kara <jack@...e.com>
SUSE Labs, CR
