Message-ID: <20160108074804.6e820e16@tlielax.poochiereds.net>
Date:	Fri, 8 Jan 2016 07:48:04 -0500
From:	Jeff Layton <jlayton@...chiereds.net>
To:	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Cc:	Dmitry Vyukov <dvyukov@...gle.com>,
	"J. Bruce Fields" <bfields@...ldses.org>,
	Alexander Viro <viro@...iv.linux.org.uk>,
	syzkaller <syzkaller@...glegroups.com>,
	Kostya Serebryany <kcc@...gle.com>,
	Alexander Potapenko <glider@...gle.com>,
	Sasha Levin <sasha.levin@...cle.com>,
	Eric Dumazet <edumazet@...gle.com>
Subject: Re: [PATCH] locks: fix unlock when fcntl_setlk races with a close

On Thu,  7 Jan 2016 21:22:22 -0500
Jeff Layton <jlayton@...chiereds.net> wrote:

> Dmitry reported that he was able to reproduce the WARN_ON_ONCE that
> fires in locks_free_lock_context when the flc_posix list isn't empty.
> 
> The problem turns out to be that we're basically rebuilding the
> file_lock from scratch in fcntl_setlk when we discover that the setlk
> has raced with a close. If the l_whence field is SEEK_CUR or SEEK_END,
> then we may end up with fl_start and fl_end values that differ from
> when the lock was initially set, if the file position or length of the
> file has changed in the interim.
> 
> Fix this by just reusing the same lock request structure, and simply
> overriding the fl_type value with F_UNLCK as appropriate. That ensures that
> we really are unlocking the lock that was initially set.
> 
> While we're there, make sure that we do pop a WARN_ON_ONCE if the
> removal ever fails. Also return -EBADF in this event, since that's
> what we would have returned if the close had happened earlier.
> 
> Cc: <stable@...r.kernel.org>
> Reported-by: Dmitry Vyukov <dvyukov@...gle.com>
> Signed-off-by: Jeff Layton <jeff.layton@...marydata.com>
> ---
>  fs/locks.c | 19 ++++++++++---------
>  1 file changed, 10 insertions(+), 9 deletions(-)
> 
> diff --git a/fs/locks.c b/fs/locks.c
> index 593dca300b29..0db640e4ced4 100644
> --- a/fs/locks.c
> +++ b/fs/locks.c
> @@ -2181,7 +2181,6 @@ int fcntl_setlk(unsigned int fd, struct file *filp, unsigned int cmd,
>  		goto out;
>  	}
>  
> -again:
>  	error = flock_to_posix_lock(filp, file_lock, &flock);
>  	if (error)
>  		goto out;
> @@ -2231,9 +2230,11 @@ again:
>  	spin_lock(&current->files->file_lock);
>  	f = fcheck(fd);
>  	spin_unlock(&current->files->file_lock);
> -	if (!error && f != filp && flock.l_type != F_UNLCK) {
> -		flock.l_type = F_UNLCK;
> -		goto again;
> +	if (!error && f != filp && file_lock->fl_type != F_UNLCK) {
> +		file_lock->fl_type = F_UNLCK;
> +		error = do_lock_file_wait(filp, cmd, file_lock);
> +		WARN_ON_ONCE(error);
> +		error = -EBADF;
>  	}
>  
>  out:
> @@ -2321,7 +2322,6 @@ int fcntl_setlk64(unsigned int fd, struct file *filp, unsigned int cmd,
>  		goto out;
>  	}
>  
> -again:
>  	error = flock64_to_posix_lock(filp, file_lock, &flock);
>  	if (error)
>  		goto out;
> @@ -2366,11 +2366,12 @@ again:
>  	spin_lock(&current->files->file_lock);
>  	f = fcheck(fd);
>  	spin_unlock(&current->files->file_lock);
> -	if (!error && f != filp && flock.l_type != F_UNLCK) {
> -		flock.l_type = F_UNLCK;
> -		goto again;
> +	if (!error && f != filp && file_lock->fl_type != F_UNLCK) {
> +		file_lock->fl_type = F_UNLCK;
> +		error = do_lock_file_wait(filp, cmd, file_lock);
> +		WARN_ON_ONCE(error);
> +		error = -EBADF;
>  	}
> -
>  out:
>  	locks_free_lock(file_lock);
>  	return error;

While this does fix Dmitry's reproducer, I think the basic concept of
removing locks like this after they are set is racy. Consider where we
have two threads:

Thread1				Thread2
----------------------------------------------------------------------------
fd1 = memfd_create(...);
fd2 = dup(fd1);
				fcntl(fd2, F_SETLK);
				(Here we call fcntl, and lock is set, but
				 task gets scheduled out before fcheck)
close(fd2)
fcntl(fd1, F_SETLK...);

				Task scheduled back in, does fcheck for fd2
				and finds that it's gone. Removes the lock
				that Thread1 just set.

So that seems wrong: in the face of the race above, we can end up with
no lock set on the file, even though Thread1 thinks it has one. It is a
pretty unlikely race, but I don't see anything that prevents it.
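
In userspace terms the sequence would look roughly like this (a
hypothetical sketch of the ordering only, not a reliable reproducer,
since actually hitting the window depends on scheduling; it also assumes
a libc with a memfd_create() wrapper):

#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <sys/mman.h>
#include <unistd.h>

static int fd1, fd2;

static void *thread2(void *arg)
{
	struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };

	/* lock gets set here, but the thread can be preempted before
	 * the post-setlk fcheck() runs in the kernel */
	fcntl(fd2, F_SETLK, &fl);
	return NULL;
}

int main(void)
{
	pthread_t t;
	struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };

	fd1 = memfd_create("test", 0);
	fd2 = dup(fd1);

	pthread_create(&t, NULL, thread2, NULL);

	close(fd2);			/* fd2 goes away...          */
	fcntl(fd1, F_SETLK, &fl);	/* ...and we set a new lock  */

	/* if thread2 resumes here, sees that fd2 is gone, and does the
	 * unlock, the lock we just set via fd1 is silently removed */
	pthread_join(t, NULL);
	return 0;
}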

The fix for filesystems that do not define their own ->lock op would be
pretty simple. We could do a fcheck after taking the flc_lock, but
before setting the lock on the file. The flc_lock should be enough to
prevent that race (though we may need to revisit some of the lockless
checks in locks_remove_posix). That wouldn't work for filesystems that
do set ->lock though, and I think we really do need a more general
solution there.
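
Very roughly, the idea for the non-->lock case would be something like
this (hypothetical sketch only; it assumes the fd gets plumbed down into
the insertion path, which the current code does not do):

	/* in the lock insertion path, with the fd passed down */
	spin_lock(&ctx->flc_lock);
	if (request->fl_type != F_UNLCK) {
		rcu_read_lock();
		if (fcheck(fd) != filp) {
			/* fd was closed after the caller looked it up, so
			 * locks_remove_posix() has already had its chance
			 * to run; don't insert a lock it can't clean up */
			rcu_read_unlock();
			error = -EBADF;
			goto out_unlock;
		}
		rcu_read_unlock();
	}
	/* ...insert/merge the request into ctx->flc_posix as usual... */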

The good news is that OFD locks should be exempt from that fcheck
altogether. I'll spin up another patch for that, so we can at least
ensure that they aren't subject to that race.
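
For contrast, an OFD lock's lifetime follows the open file description
(the struct file) rather than any particular fd, so the equivalent
userspace sequence keeps its lock (sketch, assuming F_OFD_SETLK is
available, i.e. v3.15+ and a libc that defines it):

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
	int fd1 = open("somefile", O_RDWR | O_CREAT, 0600);
	int fd2 = dup(fd1);

	/* l_pid must be 0 for OFD locks; the lock belongs to the open
	 * file description, not to the (process, fd) pair */
	fcntl(fd2, F_OFD_SETLK, &fl);

	/* the description is still open via fd1, so the lock survives */
	close(fd2);
	return 0;
}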

Any thoughts on how to fix the above for traditional POSIX locks though?
-- 
Jeff Layton <jlayton@...chiereds.net>
