Message-ID: <CANHzP_u3zN2a_t2O+BLwgV=KJZaXtANwXVq6VVD26TvF2hFL8Q@mail.gmail.com>
Date: Wed, 23 Apr 2025 11:11:03 +0800
From: 姜智伟 <qq282012236@...il.com>
To: Jens Axboe <axboe@...nel.dk>
Cc: viro@...iv.linux.org.uk, brauner@...nel.org, jack@...e.cz, 
	akpm@...ux-foundation.org, peterx@...hat.com, asml.silence@...il.com, 
	linux-fsdevel@...r.kernel.org, linux-mm@...ck.org, 
	linux-kernel@...r.kernel.org, io-uring@...r.kernel.org
Subject: Re: [PATCH v2 1/2] io_uring: Add new functions to handle user fault scenarios

Sorry, I may have misunderstood; I thought your test case was
reproducing the issue correctly. io_wq_worker_running() returns early
when called in io worker context, which is different from a normal
process context. I hope the call graph in my previous mail helps
illustrate this.
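
For reference, a simplified sketch of the early-return behavior I mean,
paraphrased from the io-wq code (the flag handling differs slightly
across kernel versions, so treat the field and flag names as
illustrative rather than literal):

void io_wq_worker_running(struct task_struct *tsk)
{
	struct io_worker *worker = tsk->worker_private;

	/* Not an io-wq worker: nothing to account for, return early. */
	if (!worker)
		return;
	/* Worker not fully set up yet: return early. */
	if (!(worker->flags & IO_WORKER_F_UP))
		return;
	/* Already marked running: return early. */
	if (worker->flags & IO_WORKER_F_RUNNING)
		return;
	worker->flags |= IO_WORKER_F_RUNNING;
	io_wq_inc_running(worker);
}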

On Wed, Apr 23, 2025 at 10:49 AM 姜智伟 <qq282012236@...il.com> wrote:
>
> On Wed, Apr 23, 2025 at 1:33 AM Jens Axboe <axboe@...nel.dk> wrote:
> >
> > > On 4/22/25 11:04 AM, 姜智伟 wrote:
> > > On Wed, Apr 23, 2025 at 12:32 AM Jens Axboe <axboe@...nel.dk> wrote:
> > >>
> > >> On 4/22/25 10:29 AM, Zhiwei Jiang wrote:
> > >>> diff --git a/io_uring/io-wq.h b/io_uring/io-wq.h
> > >>> index d4fb2940e435..8567a9c819db 100644
> > >>> --- a/io_uring/io-wq.h
> > >>> +++ b/io_uring/io-wq.h
> > >>> @@ -70,8 +70,10 @@ enum io_wq_cancel io_wq_cancel_cb(struct io_wq *wq, work_cancel_fn *cancel,
> > >>>                                       void *data, bool cancel_all);
> > >>>
> > >>>  #if defined(CONFIG_IO_WQ)
> > >>> -extern void io_wq_worker_sleeping(struct task_struct *);
> > >>> -extern void io_wq_worker_running(struct task_struct *);
> > >>> +extern void io_wq_worker_sleeping(struct task_struct *tsk);
> > >>> +extern void io_wq_worker_running(struct task_struct *tsk);
> > >>> +extern void set_userfault_flag_for_ioworker(void);
> > >>> +extern void clear_userfault_flag_for_ioworker(void);
> > >>>  #else
> > >>>  static inline void io_wq_worker_sleeping(struct task_struct *tsk)
> > >>>  {
> > >>> @@ -79,6 +81,12 @@ static inline void io_wq_worker_sleeping(struct task_struct *tsk)
> > >>>  static inline void io_wq_worker_running(struct task_struct *tsk)
> > >>>  {
> > >>>  }
> > >>> +static inline void set_userfault_flag_for_ioworker(void)
> > >>> +{
> > >>> +}
> > >>> +static inline void clear_userfault_flag_for_ioworker(void)
> > >>> +{
> > >>> +}
> > >>>  #endif
> > >>>
> > >>>  static inline bool io_wq_current_is_worker(void)
> > >>
> > >> This should go in include/linux/io_uring.h and then userfaultfd would
> > >> not have to include io_uring private headers.
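> > >>
> > >> (For illustration only, not part of the posted patch: the
> > >> declarations could move over roughly like this, mirroring the
> > >> stubs in the diff above but using the CONFIG_IO_URING guard that
> > >> include/linux/io_uring.h uses.)
> > >>
> > >> #if defined(CONFIG_IO_URING)
> > >> void set_userfault_flag_for_ioworker(void);
> > >> void clear_userfault_flag_for_ioworker(void);
> > >> #else
> > >> static inline void set_userfault_flag_for_ioworker(void)
> > >> {
> > >> }
> > >> static inline void clear_userfault_flag_for_ioworker(void)
> > >> {
> > >> }
> > >> #endif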
> > >>
> > >> But that's beside the point, like I said we still need to get to the
> > >> bottom of what is going on here first, rather than try and paper around
> > >> it. So please don't post more versions of this before we have that
> > >> understanding.
> > >>
> > >> See previous emails on 6.8 and other kernel versions.
> > >>
> > >> --
> > >> Jens Axboe
> > > The issue did not involve creating new worker processes. Instead, the
> > > existing IOU worker kernel threads (about a dozen) associated with the VM
> > > process were fully utilizing the CPU without writing any data. The cause
> > > was a fault taken while reading user data pages in
> > > fault_in_iov_iter_readable() when pulling user memory into kernel space.
> >
> > OK that makes more sense, I can certainly reproduce a loop in this path:
> >
> > iou-wrk-726     729    36.910071:       9737 cycles:P:
> >         ffff800080456c44 handle_userfault+0x47c
> >         ffff800080381fc0 hugetlb_fault+0xb68
> >         ffff80008031fee4 handle_mm_fault+0x2fc
> >         ffff8000812ada6c do_page_fault+0x1e4
> >         ffff8000812ae024 do_translation_fault+0x9c
> >         ffff800080049a9c do_mem_abort+0x44
> >         ffff80008129bd78 el1_abort+0x38
> >         ffff80008129ceb4 el1h_64_sync_handler+0xd4
> >         ffff8000800112b4 el1h_64_sync+0x6c
> >         ffff80008030984c fault_in_readable+0x74
> >         ffff800080476f3c iomap_file_buffered_write+0x14c
> >         ffff8000809b1230 blkdev_write_iter+0x1a8
> >         ffff800080a1f378 io_write+0x188
> >         ffff800080a14f30 io_issue_sqe+0x68
> >         ffff800080a155d0 io_wq_submit_work+0xa8
> >         ffff800080a32afc io_worker_handle_work+0x1f4
> >         ffff800080a332b8 io_wq_worker+0x110
> >         ffff80008002dd38 ret_from_fork+0x10
> >
> > which seems to be expected: we'd continually try to fault in the
> > ranges if the userfaultfd handler isn't filling them.
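> >
> > (A condensed sketch of the shape of that retry, under the assumption
> > that the buffered-write path keeps faulting the range in and
> > re-attempting the copy; simplified, not the literal kernel code, and
> > try_to_copy_user_data() is a hypothetical stand-in for the copy step:)
> >
> >         do {
> >                 /* fault the user range in; returns bytes NOT faulted in */
> >                 if (fault_in_iov_iter_readable(iter, bytes) == bytes) {
> >                         status = -EFAULT;       /* nothing faulted in */
> >                         break;
> >                 }
> >                 /* copy from the now-resident pages; may fault again */
> >                 copied = try_to_copy_user_data(iter, bytes);
> >         } while (iov_iter_count(iter));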
> >
> > I guess this is where I'm still confused, because I don't see how this
> > is different from a normal write(2) syscall doing the same thing -
> > you'd get the same looping.
> >
> > ??
> >
> > > This issue occurs during VM snapshot loading (which uses
> > > userfaultfd for on-demand memory loading), while a task in the guest is
> > > writing data to disk.
> > >
> > > Normally, the VM triggers a user fault first to fill the page table,
> > > so by the time the IOU worker thread runs, the page tables are already
> > > populated and no fault occurs when faulting in memory pages
> > > in fault_in_iov_iter_readable().
> > >
> > > I suspect that during snapshot loading, a memory access in the
> > > VM triggers an async page fault handled by a kernel thread,
> > > while the IOU worker's async kernel thread is also running;
> > > the problem may arise if the IOU worker's thread is scheduled first.
> > > I'm going to bed now.
> >
> > Ah ok, so what you're saying is that because we end up not sleeping
> > (because a signal is pending, it seems), the fault will never get
> > filled and hence no progress is made? And the signal is pending because
> > someone tried to create a new worker, and that work is not getting
> > processed.
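> >
> > (Sketching the mechanism, simplified from handle_userfault() and not
> > the literal code: the task sets TASK_INTERRUPTIBLE and calls
> > schedule(), and with a signal pending schedule() returns immediately
> > instead of blocking, so the fault comes back unfilled and is retried.)
> >
> >         set_current_state(TASK_INTERRUPTIBLE);
> >         if (must_wait && !READ_ONCE(ctx->released))
> >                 schedule();     /* returns at once if a signal is pending */
> >         __set_current_state(TASK_RUNNING);
> >         /* caller returns VM_FAULT_RETRY; fault is retried unfilled */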
> >
> > --
> > Jens Axboe
>         handle_userfault() {
>           hugetlb_vma_lock_read();
>           _raw_spin_lock_irq() {
>             __pv_queued_spin_lock_slowpath();
>           }
>           vma_mmu_pagesize() {
>             hugetlb_vm_op_pagesize();
>           }
>           huge_pte_offset();
>           hugetlb_vma_unlock_read();
>           up_read();
>           __wake_up() {
>             _raw_spin_lock_irqsave() {
>               __pv_queued_spin_lock_slowpath();
>             }
>             __wake_up_common();
>             _raw_spin_unlock_irqrestore();
>           }
>           schedule() {
>             io_wq_worker_sleeping() {
>               io_wq_dec_running();
>             }
>             rcu_note_context_switch();
>             raw_spin_rq_lock_nested() {
>               _raw_spin_lock();
>             }
>             update_rq_clock();
>             pick_next_task() {
>               pick_next_task_fair() {
>                 update_curr() {
>                   update_curr_se();
>                   __calc_delta.constprop.0();
>                   update_min_vruntime();
>                 }
>                 check_cfs_rq_runtime();
>                 pick_next_entity() {
>                   pick_eevdf();
>                 }
>                 update_curr() {
>                   update_curr_se();
>                   __calc_delta.constprop.0();
>                   update_min_vruntime();
>                 }
>                 check_cfs_rq_runtime();
>                 pick_next_entity() {
>                   pick_eevdf();
>                 }
>                 update_curr() {
>                   update_curr_se();
>                   update_min_vruntime();
>                   cpuacct_charge();
>                   __cgroup_account_cputime() {
>                     cgroup_rstat_updated();
>                   }
>                 }
>                 check_cfs_rq_runtime();
>                 pick_next_entity() {
>                   pick_eevdf();
>                 }
>               }
>             }
>             raw_spin_rq_unlock();
>             io_wq_worker_running();
>           }
>           _raw_spin_lock_irq() {
>             __pv_queued_spin_lock_slowpath();
>           }
>           userfaultfd_ctx_put();
>         }
>       }
> The execution flow above is the one that kept faulting
> repeatedly in the IOU worker while the issue was occurring. The entire
> fault path, including the final userfault handling code you're seeing
> here, was triggered in an infinite loop. That's why I traced it and
> found that io_wq_worker_running() returns early, causing the flow to
> differ from a normal user fault, where the task should sleep.
>
> However, your call stack appears to behave normally,
> which makes me curious about what is different in the execution flow.
> Would you be able to share your test case code so I can study it
> and try to reproduce the behavior on my side?
