Message-ID: <CADUfDZovKhJvF+zaVukM75KLSUsCwUDRoMybMKLpHioPpcfJCw@mail.gmail.com>
Date: Mon, 8 Sep 2025 11:11:58 -0700
From: Caleb Sander Mateos <csander@...estorage.com>
To: Jens Axboe <axboe@...nel.dk>
Cc: io-uring@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 3/5] io_uring: clear IORING_SETUP_SINGLE_ISSUER for IORING_SETUP_SQPOLL

On Mon, Sep 8, 2025 at 7:13 AM Jens Axboe <axboe@...nel.dk> wrote:
>
> On 9/4/25 11:09 AM, Caleb Sander Mateos wrote:
> > IORING_SETUP_SINGLE_ISSUER doesn't currently enable any optimizations,
> > but it will soon be used to avoid taking io_ring_ctx's uring_lock when
> > submitting from the single issuer task. If the IORING_SETUP_SQPOLL flag
> > is set, the SQ thread is the sole task issuing SQEs. However, other
> > tasks may make io_uring_register() syscalls, which must be synchronized
> > with SQE submission. So it wouldn't be safe to skip the uring_lock
> > around the SQ thread's submission even if IORING_SETUP_SINGLE_ISSUER is
> > set. Therefore, clear IORING_SETUP_SINGLE_ISSUER from the io_ring_ctx
> > flags if IORING_SETUP_SQPOLL is set.
> >
> > Signed-off-by: Caleb Sander Mateos <csander@...estorage.com>
> > ---
> >  io_uring/io_uring.c | 9 +++++++++
> >  1 file changed, 9 insertions(+)
> >
> > diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> > index 42f6bfbb99d3..c7af9dc3d95a 100644
> > --- a/io_uring/io_uring.c
> > +++ b/io_uring/io_uring.c
> > @@ -3724,10 +3724,19 @@ static int io_uring_sanitise_params(struct io_uring_params *p)
> >        */
> >       if ((flags & (IORING_SETUP_CQE32|IORING_SETUP_CQE_MIXED)) ==
> >           (IORING_SETUP_CQE32|IORING_SETUP_CQE_MIXED))
> >               return -EINVAL;
> >
> > +     /*
> > +      * If IORING_SETUP_SQPOLL is set, only the SQ thread issues SQEs,
> > +      * but other threads may call io_uring_register() concurrently.
> > +      * We still need uring_lock to synchronize these io_ring_ctx accesses,
> > +      * so disable the single issuer optimizations.
> > +      */
> > +     if (flags & IORING_SETUP_SQPOLL)
> > +             p->flags &= ~IORING_SETUP_SINGLE_ISSUER;
> > +
>
> As mentioned, I think this is fine. Just for posterity, one solution
> here would be to require that the task doing e.g. io_uring_register() on a
> setup with SINGLE_ISSUER|SQPOLL park and unpark the SQ thread around
> whatever it needs to do. That should get us most/all of the way there
> to enabling it with SQPOLL as well.

Right, though that may make io_uring_register() significantly slower
and more disruptive to the I/O path. Another option would be to proxy all
registrations to the SQ thread via task_work. I think leaving the
current behavior as-is makes the most sense to avoid any regressions.
If someone is interested in optimizing the IORING_SETUP_SQPOLL &&
IORING_SETUP_SINGLE_ISSUER use case, they're more than welcome to!
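
For reference, roughly what that park/unpark variant might look like on
the register path -- just a sketch, not something from this series;
io_register_parked() is a hypothetical helper name, and it assumes
ctx->sq_data is populated for SQPOLL rings:

	/*
	 * Hypothetical: quiesce the SQ thread around a registration so a
	 * ring could keep IORING_SETUP_SINGLE_ISSUER together with SQPOLL.
	 */
	static int io_register_parked(struct io_ring_ctx *ctx, unsigned opcode,
				      void __user *arg, unsigned nr_args)
	{
		struct io_sq_data *sqd = ctx->sq_data;
		int ret;

		if ((ctx->flags & IORING_SETUP_SQPOLL) && sqd)
			io_sq_thread_park(sqd);	/* SQ thread stops issuing SQEs */

		mutex_lock(&ctx->uring_lock);
		ret = __io_uring_register(ctx, opcode, arg, nr_args);
		mutex_unlock(&ctx->uring_lock);

		if ((ctx->flags & IORING_SETUP_SQPOLL) && sqd)
			io_sq_thread_unpark(sqd);
		return ret;
	}

That park/unpark on every io_uring_register() call is exactly the cost
I'd rather not add for rings that register buffers or files while I/O is
in flight.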

I appreciate your feedback on the series. Do you have any other thoughts on it?

Best,
Caleb
