Open Source and information security mailing list archives
Message-ID: <CANHzP_uJft1FPJ0W++0Zp5rUjayaULEdpAQRn1VuuqDVq3DmJA@mail.gmail.com>
Date: Thu, 24 Apr 2025 22:08:50 +0800
From: 姜智伟 <qq282012236@...il.com>
To: Jens Axboe <axboe@...nel.dk>
Cc: viro@...iv.linux.org.uk, brauner@...nel.org, jack@...e.cz, 
	akpm@...ux-foundation.org, peterx@...hat.com, asml.silence@...il.com, 
	linux-fsdevel@...r.kernel.org, linux-mm@...ck.org, 
	linux-kernel@...r.kernel.org, io-uring@...r.kernel.org
Subject: Re: [PATCH v2 1/2] io_uring: Add new functions to handle user fault scenarios

Jens Axboe <axboe@...nel.dk> wrote on Thu, Apr 24, 2025 at 06:58:
>
> On 4/23/25 9:55 AM, Jens Axboe wrote:
> > Something like this, perhaps - it'll ensure that io-wq workers get a
> > chance to flush out pending work, which should prevent the looping. I've
> > attached a basic test case. It'll issue a write that will fault, and
> > then try and cancel that as a way to trigger the TIF_NOTIFY_SIGNAL based
> > looping.
>
> Something that may actually work - use TASK_UNINTERRUPTIBLE IFF
> signal_pending() is true AND the fault has already been tried once
> before. If that's the case, rather than just call schedule() with
> TASK_INTERRUPTIBLE, use TASK_UNINTERRUPTIBLE and schedule_timeout() with
> a suitable timeout length that prevents the annoying parts busy looping.
> I used HZ / 10.
>
> I don't see how to fix userfaultfd for this case, either using io_uring
> or normal write(2). Normal syscalls can pass back -ERESTARTSYS and get
> it retried, but there's no way to do that from inside fault handling. So
> I think we just have to be nicer about it.
>
> Andrew, as the userfaultfd maintainer, what do you think?
>
> diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
> index d80f94346199..1016268c7b51 100644
> --- a/fs/userfaultfd.c
> +++ b/fs/userfaultfd.c
> @@ -334,15 +334,29 @@ static inline bool userfaultfd_must_wait(struct userfaultfd_ctx *ctx,
>         return ret;
>  }
>
> -static inline unsigned int userfaultfd_get_blocking_state(unsigned int flags)
> +struct userfault_wait {
> +       unsigned int task_state;
> +       bool timeout;
> +};
> +
> +static struct userfault_wait userfaultfd_get_blocking_state(unsigned int flags)
>  {
> +       /*
> +        * If the fault has already been tried AND there's a signal pending
> +        * for this task, use TASK_UNINTERRUPTIBLE with a small timeout.
> +        * This prevents busy looping where schedule() otherwise does nothing
> +        * for TASK_INTERRUPTIBLE when the task has a signal pending.
> +        */
> +       if ((flags & FAULT_FLAG_TRIED) && signal_pending(current))
> +               return (struct userfault_wait) { TASK_UNINTERRUPTIBLE, true };
> +
>         if (flags & FAULT_FLAG_INTERRUPTIBLE)
> -               return TASK_INTERRUPTIBLE;
> +               return (struct userfault_wait) { TASK_INTERRUPTIBLE, false };
>
>         if (flags & FAULT_FLAG_KILLABLE)
> -               return TASK_KILLABLE;
> +               return (struct userfault_wait) { TASK_KILLABLE, false };
>
> -       return TASK_UNINTERRUPTIBLE;
> +       return (struct userfault_wait) { TASK_UNINTERRUPTIBLE, false };
>  }
>
>  /*
> @@ -368,7 +382,7 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
>         struct userfaultfd_wait_queue uwq;
>         vm_fault_t ret = VM_FAULT_SIGBUS;
>         bool must_wait;
> -       unsigned int blocking_state;
> +       struct userfault_wait wait_mode;
>
>         /*
>          * We don't do userfault handling for the final child pid update
> @@ -466,7 +480,7 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
>         uwq.ctx = ctx;
>         uwq.waken = false;
>
> -       blocking_state = userfaultfd_get_blocking_state(vmf->flags);
> +       wait_mode = userfaultfd_get_blocking_state(vmf->flags);
>
>          /*
>           * Take the vma lock now, in order to safely call
> @@ -488,7 +502,7 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
>          * following the spin_unlock to happen before the list_add in
>          * __add_wait_queue.
>          */
> -       set_current_state(blocking_state);
> +       set_current_state(wait_mode.task_state);
>         spin_unlock_irq(&ctx->fault_pending_wqh.lock);
>
>         if (!is_vm_hugetlb_page(vma))
> @@ -501,7 +515,11 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
>
>         if (likely(must_wait && !READ_ONCE(ctx->released))) {
>                 wake_up_poll(&ctx->fd_wqh, EPOLLIN);
> -               schedule();
> +               /* See comment in userfaultfd_get_blocking_state() */
> +               if (!wait_mode.timeout)
> +                       schedule();
> +               else
> +                       schedule_timeout(HZ / 10);
>         }
>
>         __set_current_state(TASK_RUNNING);
>
> --
> Jens Axboe
I guess the previous io_work_fault patch might have already addressed the
issue sufficiently. The later patch that adds a timeout for userfaultfd might
not be necessary: wouldn't returning after a timeout just cause the same
fault to repeat indefinitely? Regardless of whether the thread is in
TASK_UNINTERRUPTIBLE or TASK_INTERRUPTIBLE state, the expected behavior
should be to wait until either the page is filled or the uffd resource is
released before being woken up, which seems like the correct logic.
