Message-ID: <cac3a5c9-e798-47f2-81ff-3c6003c6d8bb@kernel.dk>
Date: Wed, 23 Apr 2025 16:57:58 -0600
From: Jens Axboe <axboe@...nel.dk>
To: 姜智伟 <qq282012236@...il.com>
Cc: viro@...iv.linux.org.uk, brauner@...nel.org, jack@...e.cz,
akpm@...ux-foundation.org, peterx@...hat.com, asml.silence@...il.com,
linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, io-uring@...r.kernel.org
Subject: Re: [PATCH v2 1/2] io_uring: Add new functions to handle user fault
scenarios
On 4/23/25 9:55 AM, Jens Axboe wrote:
> Something like this, perhaps - it'll ensure that io-wq workers get a
> chance to flush out pending work, which should prevent the looping. I've
> attached a basic test case. It'll issue a write that will fault, and
> then try to cancel it as a way to trigger the TIF_NOTIFY_SIGNAL-based
> looping.

Something that may actually work: use TASK_UNINTERRUPTIBLE IFF
signal_pending() is true AND the fault has already been tried once
before. If that's the case, rather than just calling schedule() with
TASK_INTERRUPTIBLE, use TASK_UNINTERRUPTIBLE and schedule_timeout() with
a timeout that prevents the annoying busy looping. I used HZ / 10.
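
To spell out the failure mode: schedule() refuses to block a task in
TASK_INTERRUPTIBLE that has a signal pending (the scheduler's
signal_pending_state() check), and TIF_NOTIFY_SIGNAL counts as pending.
Roughly, as an illustration rather than actual kernel source:

        set_current_state(TASK_INTERRUPTIBLE);
        schedule();     /* signal_pending_state() true, returns at once */
        /*
         * handle_userfault() then returns VM_FAULT_RETRY, the fault is
         * retried, TIF_NOTIFY_SIGNAL is still set, and we're straight
         * back here - a busy loop.
         */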

I don't see how to fix userfaultfd for this case, whether it's hit via
io_uring or via a normal write(2). A normal syscall can return
-ERESTARTSYS and get retried, but there's no way to do that from inside
fault handling. So I think we just have to be nicer about it.
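
For comparison, the usual pattern in an interruptible syscall loop is
something like the below (an illustrative sketch, not tied to any one
file); there's no equivalent spot to do this from the fault path:

        if (signal_pending(current)) {
                ret = -ERESTARTSYS;     /* signal code rearms the syscall */
                break;
        }
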
Andrew, as the userfaultfd maintainer, what do you think?

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index d80f94346199..1016268c7b51 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -334,15 +334,29 @@ static inline bool userfaultfd_must_wait(struct userfaultfd_ctx *ctx,
         return ret;
 }
 
-static inline unsigned int userfaultfd_get_blocking_state(unsigned int flags)
+struct userfault_wait {
+        unsigned int task_state;
+        bool timeout;
+};
+
+static struct userfault_wait userfaultfd_get_blocking_state(unsigned int flags)
 {
+        /*
+         * If the fault has already been tried AND there's a signal pending
+         * for this task, use TASK_UNINTERRUPTIBLE with a small timeout.
+         * This prevents busy looping where schedule() otherwise does nothing
+         * for TASK_INTERRUPTIBLE when the task has a signal pending.
+         */
+        if ((flags & FAULT_FLAG_TRIED) && signal_pending(current))
+                return (struct userfault_wait) { TASK_UNINTERRUPTIBLE, true };
+
         if (flags & FAULT_FLAG_INTERRUPTIBLE)
-                return TASK_INTERRUPTIBLE;
+                return (struct userfault_wait) { TASK_INTERRUPTIBLE, false };
 
         if (flags & FAULT_FLAG_KILLABLE)
-                return TASK_KILLABLE;
+                return (struct userfault_wait) { TASK_KILLABLE, false };
 
-        return TASK_UNINTERRUPTIBLE;
+        return (struct userfault_wait) { TASK_UNINTERRUPTIBLE, false };
 }
 
 /*
@@ -368,7 +382,7 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
         struct userfaultfd_wait_queue uwq;
         vm_fault_t ret = VM_FAULT_SIGBUS;
         bool must_wait;
-        unsigned int blocking_state;
+        struct userfault_wait wait_mode;
 
         /*
          * We don't do userfault handling for the final child pid update
@@ -466,7 +480,7 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
         uwq.ctx = ctx;
         uwq.waken = false;
 
-        blocking_state = userfaultfd_get_blocking_state(vmf->flags);
+        wait_mode = userfaultfd_get_blocking_state(vmf->flags);
 
         /*
          * Take the vma lock now, in order to safely call
@@ -488,7 +502,7 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
          * following the spin_unlock to happen before the list_add in
          * __add_wait_queue.
          */
-        set_current_state(blocking_state);
+        set_current_state(wait_mode.task_state);
         spin_unlock_irq(&ctx->fault_pending_wqh.lock);
 
         if (!is_vm_hugetlb_page(vma))
@@ -501,7 +515,11 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
 
         if (likely(must_wait && !READ_ONCE(ctx->released))) {
                 wake_up_poll(&ctx->fd_wqh, EPOLLIN);
-                schedule();
+                /* See comment in userfaultfd_get_blocking_state() */
+                if (!wait_mode.timeout)
+                        schedule();
+                else
+                        schedule_timeout(HZ / 10);
         }
 
         __set_current_state(TASK_RUNNING);
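
For reference, the test case mentioned above went out as an attachment
rather than inline. A minimal sketch of that kind of reproducer - a
hypothetical reconstruction assuming liburing (2.2+ for the cancel
helper), not the actual attachment, error handling mostly trimmed -
could look like this:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/userfaultfd.h>
#include <liburing.h>

int main(void)
{
        struct uffdio_api api = { .api = UFFD_API };
        struct uffdio_register reg;
        struct io_uring_sqe *sqe;
        struct io_uring_cqe *cqe;
        struct io_uring ring;
        size_t psize = sysconf(_SC_PAGESIZE);
        void *buf;
        int uffd, fd;

        /* may need vm.unprivileged_userfaultfd=1 or CAP_SYS_PTRACE */
        uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
        if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api) < 0)
                return 1;

        /* register a page whose faults nobody will ever resolve */
        buf = mmap(NULL, psize, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        memset(&reg, 0, sizeof(reg));
        reg.range.start = (unsigned long) buf;
        reg.range.len = psize;
        reg.mode = UFFDIO_REGISTER_MODE_MISSING;
        if (ioctl(uffd, UFFDIO_REGISTER, &reg) < 0)
                return 1;

        fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        io_uring_queue_init(8, &ring, 0);

        /* writing from the registered range makes the io-wq worker
         * fault and block in handle_userfault() */
        sqe = io_uring_get_sqe(&ring);
        io_uring_prep_write(sqe, fd, buf, psize, 0);
        sqe->user_data = 1;
        io_uring_submit(&ring);
        sleep(1);

        /* cancel it: this sets TIF_NOTIFY_SIGNAL on the blocked worker,
         * which is what triggers the busy looping */
        sqe = io_uring_get_sqe(&ring);
        io_uring_prep_cancel64(sqe, 1, 0);
        sqe->user_data = 2;
        io_uring_submit(&ring);

        io_uring_wait_cqe(&ring, &cqe);
        printf("cqe: user_data=%llu res=%d\n",
               (unsigned long long) cqe->user_data, cqe->res);
        return 0;
}
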
--
Jens Axboe