Message-ID: <b61ac651-fafe-449a-82ed-7239123844e1@kernel.dk>
Date: Tue, 22 Apr 2025 11:33:41 -0600
From: Jens Axboe <axboe@...nel.dk>
To: 姜智伟 <qq282012236@...il.com>
Cc: viro@...iv.linux.org.uk, brauner@...nel.org, jack@...e.cz,
 akpm@...ux-foundation.org, peterx@...hat.com, asml.silence@...il.com,
 linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org, io-uring@...r.kernel.org
Subject: Re: [PATCH v2 1/2] io_uring: Add new functions to handle user fault
 scenarios

On 4/22/25 11:04 AM, Zhiwei Jiang wrote:
> On Wed, Apr 23, 2025 at 12:32 AM Jens Axboe <axboe@...nel.dk> wrote:
>>
>> On 4/22/25 10:29 AM, Zhiwei Jiang wrote:
>>> diff --git a/io_uring/io-wq.h b/io_uring/io-wq.h
>>> index d4fb2940e435..8567a9c819db 100644
>>> --- a/io_uring/io-wq.h
>>> +++ b/io_uring/io-wq.h
>>> @@ -70,8 +70,10 @@ enum io_wq_cancel io_wq_cancel_cb(struct io_wq *wq, work_cancel_fn *cancel,
>>>                                       void *data, bool cancel_all);
>>>
>>>  #if defined(CONFIG_IO_WQ)
>>> -extern void io_wq_worker_sleeping(struct task_struct *);
>>> -extern void io_wq_worker_running(struct task_struct *);
>>> +extern void io_wq_worker_sleeping(struct task_struct *tsk);
>>> +extern void io_wq_worker_running(struct task_struct *tsk);
>>> +extern void set_userfault_flag_for_ioworker(void);
>>> +extern void clear_userfault_flag_for_ioworker(void);
>>>  #else
>>>  static inline void io_wq_worker_sleeping(struct task_struct *tsk)
>>>  {
>>> @@ -79,6 +81,12 @@ static inline void io_wq_worker_sleeping(struct task_struct *tsk)
>>>  static inline void io_wq_worker_running(struct task_struct *tsk)
>>>  {
>>>  }
>>> +static inline void set_userfault_flag_for_ioworker(void)
>>> +{
>>> +}
>>> +static inline void clear_userfault_flag_for_ioworker(void)
>>> +{
>>> +}
>>>  #endif
>>>
>>>  static inline bool io_wq_current_is_worker(void)
>>
>> This should go in include/linux/io_uring.h and then userfaultfd would
>> not have to include io_uring private headers.
>>
>> But that's beside the point, like I said we still need to get to the
>> bottom of what is going on here first, rather than try and paper around
>> it. So please don't post more versions of this before we have that
>> understanding.
>>
>> See previous emails on 6.8 and other kernel versions.
>>
>> --
>> Jens Axboe
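
Concretely, the relocation suggested in the quoted reply above might look roughly like the sketch below. This is only a sketch, not the actual patch: it assumes the usual CONFIG_IO_URING guard pattern in include/linux/io_uring.h and reuses the function names from the posted diff.

/* include/linux/io_uring.h (sketch, assuming the existing CONFIG_IO_URING stubs) */
#if defined(CONFIG_IO_URING)
void set_userfault_flag_for_ioworker(void);
void clear_userfault_flag_for_ioworker(void);
#else
static inline void set_userfault_flag_for_ioworker(void)
{
}
static inline void clear_userfault_flag_for_ioworker(void)
{
}
#endif

With that, the userfaultfd code would only need the public <linux/io_uring.h> header rather than io_uring/io-wq.h.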
> The issue did not involve creating new worker processes. Instead, the
> existing IOU worker kernel threads (about a dozen) associated with the VM
> process were spinning at full CPU without writing any data, caused by a
> fault while reading the user data pages in fault_in_iov_iter_readable
> when pulling user memory into kernel space.

OK that makes more sense, I can certainly reproduce a loop in this path:

iou-wrk-726     729    36.910071:       9737 cycles:P: 
        ffff800080456c44 handle_userfault+0x47c
        ffff800080381fc0 hugetlb_fault+0xb68
        ffff80008031fee4 handle_mm_fault+0x2fc
        ffff8000812ada6c do_page_fault+0x1e4
        ffff8000812ae024 do_translation_fault+0x9c
        ffff800080049a9c do_mem_abort+0x44
        ffff80008129bd78 el1_abort+0x38
        ffff80008129ceb4 el1h_64_sync_handler+0xd4
        ffff8000800112b4 el1h_64_sync+0x6c
        ffff80008030984c fault_in_readable+0x74
        ffff800080476f3c iomap_file_buffered_write+0x14c
        ffff8000809b1230 blkdev_write_iter+0x1a8
        ffff800080a1f378 io_write+0x188
        ffff800080a14f30 io_issue_sqe+0x68
        ffff800080a155d0 io_wq_submit_work+0xa8
        ffff800080a32afc io_worker_handle_work+0x1f4
        ffff800080a332b8 io_wq_worker+0x110
        ffff80008002dd38 ret_from_fork+0x10

which seems to be expected: we'd continually try and fault in the
ranges if the userfaultfd handler isn't filling them.
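
A minimal userspace sketch of that scenario is below, assuming liburing is available; it is not the reporter's reproducer. The write source buffer is registered with userfaultfd (MISSING mode) and the fault is deliberately never serviced. The original report uses hugetlb-backed guest memory and writes to a block device, so plain anonymous memory and "/dev/loop0" here are stand-ins that need adjusting (userfaultfd may also require root or vm.unprivileged_userfaultfd=1).

#include <fcntl.h>
#include <liburing.h>
#include <linux/userfaultfd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	size_t len = 2 * 1024 * 1024;
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;

	/* Source buffer for the write; anonymous memory for brevity. */
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;

	/* Register the buffer for missing-page faults, never service them. */
	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
	struct uffdio_api api = { .api = UFFD_API };
	struct uffdio_register reg = {
		.range = { .start = (unsigned long)buf, .len = len },
		.mode = UFFDIO_REGISTER_MODE_MISSING,
	};
	if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api) ||
	    ioctl(uffd, UFFDIO_REGISTER, &reg))
		return 1;

	/* Placeholder scratch block device, matching blkdev_write_iter above. */
	int fd = open("/dev/loop0", O_WRONLY);
	if (fd < 0)
		return 1;

	/* Queue a buffered write whose source pages are not resident. */
	if (io_uring_queue_init(8, &ring, 0))
		return 1;
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_write(sqe, fd, buf, len, 0);
	io_uring_submit(&ring);

	/*
	 * In the reported scenario the write runs in an io-wq worker,
	 * which keeps faulting on the unserviced range and never
	 * completes, so this wait is not expected to return.
	 */
	io_uring_wait_cqe(&ring, &cqe);
	return 0;
}

Swapping the io_uring submission for a plain write(fd, buf, len) from the same buffer makes it easy to compare against the synchronous-syscall case discussed below.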

I guess this is where I'm still confused, because I don't see how this
is different from a normal write(2) syscall doing the same thing - you'd
get the same looping.

??
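
For contrast, here is what "filling them" would look like in the normal case: a minimal sketch of the standard userfaultfd servicing loop (the uffd, src_page and page_size parameters are illustrative, not taken from the report). Once a handler thread resolves the fault with UFFDIO_COPY, the stalled fault-in retry in the write path can make progress and the loop terminates.

#include <linux/userfaultfd.h>
#include <poll.h>
#include <sys/ioctl.h>
#include <unistd.h>

/*
 * Minimal userfaultfd servicing loop (standard pattern, illustrative
 * only): copy a prepared page into each faulting address so the
 * stalled fault-in retry in the write path can make progress.
 */
static void service_faults(int uffd, void *src_page, size_t page_size)
{
	for (;;) {
		struct uffd_msg msg;
		struct pollfd pfd = { .fd = uffd, .events = POLLIN };

		if (poll(&pfd, 1, -1) < 0)
			break;
		if (read(uffd, &msg, sizeof(msg)) != (ssize_t)sizeof(msg))
			continue;
		if (msg.event != UFFD_EVENT_PAGEFAULT)
			continue;

		struct uffdio_copy copy = {
			.dst = msg.arg.pagefault.address & ~(page_size - 1),
			.src = (unsigned long)src_page,
			.len = page_size,
		};
		/* Filling the page wakes the faulting task/io-wq worker. */
		ioctl(uffd, UFFDIO_COPY, &copy);
	}
}

In the reported scenario, this servicing apparently never resolves the fault for the spinning io-wq workers, which is the point the thread is trying to pin down.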

> This issue occurs during VM snapshot loading (which uses
> userfaultfd for on-demand memory loading), while a task in the guest is
> writing data to disk.
> 
> Normally, the VM first triggers a user fault to fill the page table.
> So by the time the IOU worker thread runs, the page tables are already
> filled, and no fault happens when faulting in memory pages
> in fault_in_iov_iter_readable.
> 
> I suspect that during snapshot loading, a memory access in the
> VM triggers an async page fault handled by the kernel thread,
> while the IOU worker's async kernel thread is also running.
> The problem may happen if the IOU worker's thread is scheduled first.
> I'm going to bed now.

Ah ok, so what you're saying is that because we end up not sleeping
(because a signal is pending, it seems), the fault will never get
filled and hence no progress is made? And the signal is pending because
someone tried to create a new worker, and this work is not getting
processed.

-- 
Jens Axboe
