Message-Id: <20171217071009.GA8631@rapoport-lnx>
Date: Sun, 17 Dec 2017 09:10:10 +0200
From: Mike Rapoport <rppt@...ux.vnet.ibm.com>
To: Christoph Hellwig <hch@....de>
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Al Viro <viro@...iv.linux.org.uk>,
Andrea Arcangeli <aarcange@...hat.com>,
Jason Baron <jbaron@...mai.com>, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org,
Matthew Wilcox <mawilcox@...rosoft.com>
Subject: Re: [PATCH 2/3] userfaultfd: use fault_wqh lock
Hi,
On Thu, Dec 14, 2017 at 04:23:43PM +0100, Christoph Hellwig wrote:
> From: Matthew Wilcox <mawilcox@...rosoft.com>
>
> The epoll code currently uses the unlocked waitqueue helpers for managing
The userfaultfd code
> fault_wqh, but instead of holding the waitqueue lock for this waitqueue
> around these calls, it takes the waitqueue lock of fault_pending_wqh, which is
> a different waitqueue instance. Given that the waitqueue is not exposed
> to the rest of the kernel this actually works ok at the moment, but
> prevents the epoll locking rules from being enforced using lockdep.
ditto
> Switch to the internally locked waitqueue helpers instead. This means
> that the lock inside fault_wqh now nests inside the fault_pending_wqh
> lock, but that's not a problem since it was entireyl unused before.
spelling: entirely
> Signed-off-by: Matthew Wilcox <mawilcox@...rosoft.com>
> [hch: slight changelog updates]
> Signed-off-by: Christoph Hellwig <hch@....de>
Reviewed-by: Mike Rapoport <rppt@...ux.vnet.ibm.com>
> ---
> fs/userfaultfd.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
> index ac9a4e65ca49..a39bc3237b68 100644
> --- a/fs/userfaultfd.c
> +++ b/fs/userfaultfd.c
> @@ -879,7 +879,7 @@ static int userfaultfd_release(struct inode *inode, struct file *file)
> */
> spin_lock(&ctx->fault_pending_wqh.lock);
> __wake_up_locked_key(&ctx->fault_pending_wqh, TASK_NORMAL, &range);
> - __wake_up_locked_key(&ctx->fault_wqh, TASK_NORMAL, &range);
> + __wake_up(&ctx->fault_wqh, TASK_NORMAL, 1, &range);
> spin_unlock(&ctx->fault_pending_wqh.lock);
>
> /* Flush pending events that may still wait on event_wqh */
> @@ -1045,7 +1045,7 @@ static ssize_t userfaultfd_ctx_read(struct userfaultfd_ctx *ctx, int no_wait,
> * anyway.
> */
> list_del(&uwq->wq.entry);
> - __add_wait_queue(&ctx->fault_wqh, &uwq->wq);
> + add_wait_queue(&ctx->fault_wqh, &uwq->wq);
>
> write_seqcount_end(&ctx->refile_seq);
>
> @@ -1194,7 +1194,7 @@ static void __wake_userfault(struct userfaultfd_ctx *ctx,
> __wake_up_locked_key(&ctx->fault_pending_wqh, TASK_NORMAL,
> range);
> if (waitqueue_active(&ctx->fault_wqh))
> - __wake_up_locked_key(&ctx->fault_wqh, TASK_NORMAL, range);
> + __wake_up(&ctx->fault_wqh, TASK_NORMAL, 1, range);
> spin_unlock(&ctx->fault_pending_wqh.lock);
> }
>
> --
> 2.14.2
>
--
Sincerely yours,
Mike.