Message-ID: <20180126031346.GW13338@ZenIV.linux.org.uk>
Date: Fri, 26 Jan 2018 03:13:46 +0000
From: Al Viro <viro@...IV.linux.org.uk>
To: Joel Fernandes <joelaf@...gle.com>
Cc: linux-kernel@...r.kernel.org, Todd Kjos <tkjos@...gle.com>,
Arve Hjonnevag <arve@...roid.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Subject: Re: [PATCH] ashmem: Fix lockdep issue during llseek
On Thu, Jan 25, 2018 at 06:46:49PM -0800, Joel Fernandes wrote:
> ashmem_mutex creates a chain of dependencies like so:
>
> (1)
> mmap syscall ->
>   mmap_sem (acquired) ->
>     ashmem_mmap ->
>       ashmem_mutex (try to acquire)
>       (block)
>
> (2)
> llseek syscall ->
>   ashmem_llseek ->
>     ashmem_mutex (acquired) ->
>       inode_lock ->
>         inode->i_rwsem (try to acquire)
>         (block)
>
> (3)
> getdents ->
>   iterate_dir ->
>     inode_lock ->
>       inode->i_rwsem (acquired) ->
>         copy_to_user ->
>           mmap_sem (try to acquire)
>
> Together these form a circular lock dependency among mmap_sem,
> ashmem_mutex, and inode->i_rwsem, which lockdep reported during a
> syzkaller test. This patch breaks the cycle by releasing ashmem_mutex
> before the call to vfs_llseek() and reacquiring it afterwards.
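For reference, a minimal sketch of the shape of the change being described,
assuming the upstream ashmem_llseek() layout (the error checks and the f_pos
copy are reproduced from memory and are illustrative, not the exact patch):

	static loff_t ashmem_llseek(struct file *file, loff_t offset, int whence)
	{
		struct ashmem_area *asma = file->private_data;
		loff_t ret;

		/* validate under the mutex, as before */
		mutex_lock(&ashmem_mutex);
		if (asma->size == 0) {
			mutex_unlock(&ashmem_mutex);
			return -EINVAL;
		}
		if (!asma->file) {
			mutex_unlock(&ashmem_mutex);
			return -EBADF;
		}

		/*
		 * Drop ashmem_mutex before vfs_llseek(): default_llseek()
		 * takes inode->i_rwsem, so holding the mutex across it is
		 * what creates the ashmem_mutex -> i_rwsem link in chain (2).
		 */
		mutex_unlock(&ashmem_mutex);

		ret = vfs_llseek(asma->file, offset, whence);
		if (ret < 0)
			return ret;

		/* reacquire to publish the new position on the ashmem file */
		mutex_lock(&ashmem_mutex);
		file->f_pos = asma->file->f_pos;
		mutex_unlock(&ashmem_mutex);

		return ret;
	}

Note that once the mutex is dropped, vfs_llseek() runs with no ashmem-side
serialization at all; whether anything here still needs ashmem_mutex is
exactly what the reply below questions.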
That looks odd. If this approach works, what the hell do you need
ashmem_mutex for in ashmem_llseek() in the first place? What is
it protecting there?