Message-ID: <fdcb66e3-e99a-49f9-8874-00110b06bb3d@kernel.dk>
Date: Thu, 24 Apr 2025 09:21:18 -0600
From: Jens Axboe <axboe@...nel.dk>
To: 姜智伟 <qq282012236@...il.com>
Cc: viro@...iv.linux.org.uk, brauner@...nel.org, jack@...e.cz,
 akpm@...ux-foundation.org, peterx@...hat.com, asml.silence@...il.com,
 linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org, io-uring@...r.kernel.org
Subject: Re: [PATCH v2 1/2] io_uring: Add new functions to handle user fault
 scenarios

On 4/24/25 9:12 AM, 姜智伟 wrote:
> Jens Axboe <axboe@...nel.dk> wrote on Thu, Apr 24, 2025 at 22:53:
>>
>> On 4/24/25 8:45 AM, 姜智伟 wrote:
>>> Jens Axboe <axboe@...nel.dk> wrote on Thu, Apr 24, 2025 at 22:13:
>>>>
>>>> On 4/24/25 8:08 AM, 姜智伟 wrote:
>>>>> Jens Axboe <axboe@...nel.dk> wrote on Thu, Apr 24, 2025 at 06:58:
>>>>>>
>>>>>> On 4/23/25 9:55 AM, Jens Axboe wrote:
>>>>>>> Something like this, perhaps - it'll ensure that io-wq workers get a
>>>>>>> chance to flush out pending work, which should prevent the looping. I've
>>>>>>> attached a basic test case. It'll issue a write that will fault, and
>>>>>>> then try and cancel that as a way to trigger the TIF_NOTIFY_SIGNAL based
>>>>>>> looping.
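
For reference, here is a rough userspace sketch of that kind of reproducer (this is not the attached test case; the file name, sizes, and liburing calls are illustrative, and liburing 2.2+ plus a userfaultfd-capable kernel are assumed). It issues a write from a buffer that is registered with userfaultfd but never filled, so the io-wq worker blocks in handle_userfault(), then submits an async cancel, which sets TIF_NOTIFY_SIGNAL on that worker:

#define _GNU_SOURCE
#include <fcntl.h>
#include <liburing.h>
#include <linux/userfaultfd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	struct uffdio_api api = { .api = UFFD_API };
	struct uffdio_register reg = { 0 };
	struct io_uring_sqe *sqe;
	struct io_uring ring;
	size_t len = 4096;
	void *buf;
	int uffd, fd;

	uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
	if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api) < 0)
		return 1;

	/* anonymous page, registered for MISSING faults, never filled */
	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;
	reg.range.start = (unsigned long) buf;
	reg.range.len = len;
	reg.mode = UFFDIO_REGISTER_MODE_MISSING;
	if (ioctl(uffd, UFFDIO_REGISTER, &reg) < 0)
		return 1;

	fd = open("testfile", O_WRONLY | O_CREAT, 0644);
	io_uring_queue_init(8, &ring, 0);

	/* the write must copy from buf, which faults in the io-wq worker */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_write(sqe, fd, buf, len, 0);
	sqe->user_data = 1;
	io_uring_submit(&ring);

	usleep(100000);		/* let the worker block in handle_userfault() */

	/* async cancel sets TIF_NOTIFY_SIGNAL on the blocked worker */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_cancel64(sqe, 1, 0);
	sqe->user_data = 2;
	io_uring_submit(&ring);

	sleep(5);		/* observe the io-wq worker's CPU usage here */
	return 0;
}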
>>>>>>
>>>>>> Something that may actually work - use TASK_UNINTERRUPTIBLE IFF
>>>>>> signal_pending() is true AND the fault has already been tried once
>>>>>> before. If that's the case, rather than just calling schedule() with
>>>>>> TASK_INTERRUPTIBLE, use TASK_UNINTERRUPTIBLE and schedule_timeout() with
>>>>>> a suitable timeout length that keeps the annoying parts from busy looping.
>>>>>> I used HZ / 10.
>>>>>>
>>>>>> I don't see how to fix userfaultfd for this case, either using io_uring
>>>>>> or normal write(2). Normal syscalls can pass back -ERESTARTSYS and get
>>>>>> it retried, but there's no way to do that from inside fault handling. So
>>>>>> I think we just have to be nicer about it.
>>>>>>
>>>>>> Andrew, as the userfaultfd maintainer, what do you think?
>>>>>>
>>>>>> diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
>>>>>> index d80f94346199..1016268c7b51 100644
>>>>>> --- a/fs/userfaultfd.c
>>>>>> +++ b/fs/userfaultfd.c
>>>>>> @@ -334,15 +334,29 @@ static inline bool userfaultfd_must_wait(struct userfaultfd_ctx *ctx,
>>>>>>         return ret;
>>>>>>  }
>>>>>>
>>>>>> -static inline unsigned int userfaultfd_get_blocking_state(unsigned int flags)
>>>>>> +struct userfault_wait {
>>>>>> +       unsigned int task_state;
>>>>>> +       bool timeout;
>>>>>> +};
>>>>>> +
>>>>>> +static struct userfault_wait userfaultfd_get_blocking_state(unsigned int flags)
>>>>>>  {
>>>>>> +       /*
>>>>>> +        * If the fault has already been tried AND there's a signal pending
>>>>>> +        * for this task, use TASK_UNINTERRUPTIBLE with a small timeout.
>>>>>> +        * This prevents busy looping where schedule() otherwise does nothing
>>>>>> +        * for TASK_INTERRUPTIBLE when the task has a signal pending.
>>>>>> +        */
>>>>>> +       if ((flags & FAULT_FLAG_TRIED) && signal_pending(current))
>>>>>> +               return (struct userfault_wait) { TASK_UNINTERRUPTIBLE, true };
>>>>>> +
>>>>>>         if (flags & FAULT_FLAG_INTERRUPTIBLE)
>>>>>> -               return TASK_INTERRUPTIBLE;
>>>>>> +               return (struct userfault_wait) { TASK_INTERRUPTIBLE, false };
>>>>>>
>>>>>>         if (flags & FAULT_FLAG_KILLABLE)
>>>>>> -               return TASK_KILLABLE;
>>>>>> +               return (struct userfault_wait) { TASK_KILLABLE, false };
>>>>>>
>>>>>> -       return TASK_UNINTERRUPTIBLE;
>>>>>> +       return (struct userfault_wait) { TASK_UNINTERRUPTIBLE, false };
>>>>>>  }
>>>>>>
>>>>>>  /*
>>>>>> @@ -368,7 +382,7 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
>>>>>>         struct userfaultfd_wait_queue uwq;
>>>>>>         vm_fault_t ret = VM_FAULT_SIGBUS;
>>>>>>         bool must_wait;
>>>>>> -       unsigned int blocking_state;
>>>>>> +       struct userfault_wait wait_mode;
>>>>>>
>>>>>>         /*
>>>>>>          * We don't do userfault handling for the final child pid update
>>>>>> @@ -466,7 +480,7 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
>>>>>>         uwq.ctx = ctx;
>>>>>>         uwq.waken = false;
>>>>>>
>>>>>> -       blocking_state = userfaultfd_get_blocking_state(vmf->flags);
>>>>>> +       wait_mode = userfaultfd_get_blocking_state(vmf->flags);
>>>>>>
>>>>>>          /*
>>>>>>           * Take the vma lock now, in order to safely call
>>>>>> @@ -488,7 +502,7 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
>>>>>>          * following the spin_unlock to happen before the list_add in
>>>>>>          * __add_wait_queue.
>>>>>>          */
>>>>>> -       set_current_state(blocking_state);
>>>>>> +       set_current_state(wait_mode.task_state);
>>>>>>         spin_unlock_irq(&ctx->fault_pending_wqh.lock);
>>>>>>
>>>>>>         if (!is_vm_hugetlb_page(vma))
>>>>>> @@ -501,7 +515,11 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
>>>>>>
>>>>>>         if (likely(must_wait && !READ_ONCE(ctx->released))) {
>>>>>>                 wake_up_poll(&ctx->fd_wqh, EPOLLIN);
>>>>>> -               schedule();
>>>>>> +               /* See comment in userfaultfd_get_blocking_state() */
>>>>>> +               if (!wait_mode.timeout)
>>>>>> +                       schedule();
>>>>>> +               else
>>>>>> +                       schedule_timeout(HZ / 10);
>>>>>>         }
>>>>>>
>>>>>>         __set_current_state(TASK_RUNNING);
>>>>>>
>>>>>> --
>>>>>> Jens Axboe
>>>>> I guess the previous io_work_fault patch might have already addressed
>>>>> the issue sufficiently. The later patch that adds a timeout for
>>>>> userfaultfd might
>>>>
>>>> That one isn't guaranteed to be safe, as it's not necessarily a safe
>>>> context to prune the conditions that lead to a busy loop rather than the
>>>> normal "schedule until the condition is resolved". Running task_work
>>>> should only be done at the outermost point in the kernel, where the task
>>>> state is known sane in terms of what locks etc are being held. For some
>>>> conditions the patch will work just fine, but it's not guaranteed to be
>>>> the case.
>>>>
>>>>> not be necessary. Wouldn't returning after a timeout just cause the
>>>>> same fault to repeat indefinitely again? Regardless of whether the
>>>>> thread is in TASK_UNINTERRUPTIBLE or TASK_INTERRUPTIBLE state, the
>>>>> expected behavior should be to wait until the page is filled or the
>>>>> uffd resource is released before being woken up, which seems like
>>>>> the correct logic.
>>>>
>>>> Right, it'll just sleep for a bit so as not to be a 100% busy loop.
>>>> That's unfortunately the best we can do for this case... The expected
>>>> behavior is indeed to schedule until we get woken, however that just
>>>> doesn't work if there are signals pending, or other conditions that lead
>>>> to TASK_INTERRUPTIBLE + schedule() being a no-op.
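
For context, this is roughly how that no-op comes about: __schedule() consults signal_pending_state() and, for TASK_INTERRUPTIBLE with a signal pending, simply puts the task back to TASK_RUNNING instead of blocking. The helper (quoted roughly from include/linux/sched/signal.h; exact details may differ between kernel versions) looks like:

static inline int signal_pending_state(unsigned int state, struct task_struct *p)
{
	if (!(state & (TASK_INTERRUPTIBLE | TASK_WAKEKILL)))
		return 0;
	if (!signal_pending(p))
		return 0;

	return (state & TASK_INTERRUPTIBLE) || __fatal_signal_pending(p);
}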
>>>>
>>>> --
>>>> Jens Axboe
>>> In my testing, clearing the NOTIFY flag in the original io_work_fault
>>> ensures that the next schedule correctly waits. However, adding a
>>
>> That's symptom fixing again - the NOTIFY flag is the thing that triggers
>> for io_uring, but any legitimate signal (or task_work added with
>> signaling) will cause the same issue.
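
As an illustration of the task_work case (hypothetical callback names; task_work_add() and TWA_SIGNAL are the real kernel interfaces), queueing work with signaled notification sets TIF_NOTIFY_SIGNAL on the target task, so signal_pending() becomes true for it just as with io_uring's cancel:

#include <linux/printk.h>
#include <linux/sched.h>
#include <linux/task_work.h>

static struct callback_head poke_cb;

static void poke_fn(struct callback_head *head)
{
	/* runs when the target task next returns to its outer kernel loop */
}

static void poke_task(struct task_struct *task)
{
	init_task_work(&poke_cb, poke_fn);
	/* TWA_SIGNAL makes signal_pending() true for @task until this runs */
	if (task_work_add(task, &poke_cb, TWA_SIGNAL))
		pr_warn("target task is exiting\n");
}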
>>
>>> timeout brings back the issue of repeated faults.
>>> Also, after clearing the NOTIFY flag in handle_userfault,
>>> it's possible that some task_work hasn't been executed yet.
>>> But if task_work_run isn't added back, could that work get lost?
>>> There doesn't seem to be an appropriate place to add it back.
>>> So, do you suggest adjusting the fault frequency in userfaultfd,
>>> making it more evenly paced, to alleviate the issue?
>>
>> The task_work is still there, you just removed the notification
>> mechanism that tells the kernel that there's task_work there. For this
>> particular case, you could re-set TIF_NOTIFY_SIGNAL at the end after
>> schedule(), but again it'd only fix that one specific case, not the
>> generic issue.
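
If someone did want that band-aid, a minimal sketch around the schedule() call in handle_userfault() might look like the fragment below (my reading of the suggestion, not a proposed patch; it only papers over the TIF_NOTIFY_SIGNAL case):

	bool had_notify = test_thread_flag(TIF_NOTIFY_SIGNAL);

	if (had_notify)
		clear_notify_signal();		/* let schedule() actually sleep */
	schedule();
	if (had_notify)
		set_notify_signal(current);	/* don't lose the pending task_work */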
>>
>> What's the objection to the sleep approach? If the task is woken by the
>> fault being filled, it'll still wake on time, no delay. If not, it at
>> least prevents a counterproductive busy loop.
>>
>> --
>> Jens Axboe
> OK, thanks. I'm curious about what exactly is meant by a
> 'one specific case' and what qualifies as a 'generic issue' with
> re-setting TIF_NOTIFY_SIGNAL.

I already outlined that in earlier replies; see the email that lists
the various conditions that can cause schedule() with TASK_INTERRUPTIBLE
to return immediately rather than sleeping. TIF_NOTIFY_SIGNAL is _one_
such condition, not _all_ of them.
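
For reference, signal_pending() in current kernels (quoted roughly from include/linux/sched/signal.h) already covers both TIF_NOTIFY_SIGNAL and classic pending signals, and it is what the scheduler consults via signal_pending_state() when deciding whether an interruptible sleep actually blocks:

static inline int task_sigpending(struct task_struct *p)
{
	return unlikely(test_tsk_thread_flag(p, TIF_SIGPENDING));
}

static inline int signal_pending(struct task_struct *p)
{
	/*
	 * TIF_NOTIFY_SIGNAL isn't really a signal, but it requires the same
	 * behavior in terms of ensuring that we break out of wait loops
	 * so that notify-signal callbacks can be processed.
	 */
	if (unlikely(test_tsk_thread_flag(p, TIF_NOTIFY_SIGNAL)))
		return 1;
	return task_sigpending(p);
}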

> So, in your final opinion, do you think the code in io_uring is not
> suitable for modification, and that we should instead focus on making
> adjustments in userfaultfd to mitigate the issue?

The problem isn't in io_uring in the first place; you just happened to
trip over it via that path. I even sent out a test case that
demonstrates how to trigger this without io_uring as well. I'm a bit
puzzled as to why all of this isn't clear already.

-- 
Jens Axboe
