Message-Id: <20230712004705.316157-1-axboe@kernel.dk>
Date: Tue, 11 Jul 2023 18:46:58 -0600
From: Jens Axboe <axboe@...nel.dk>
To: io-uring@...r.kernel.org, linux-kernel@...r.kernel.org
Cc: tglx@...utronix.de, mingo@...hat.com, peterz@...radead.org
Subject: [PATCHSET 0/7] Add io_uring futex/futexv support
Hi,
This patchset adds support for first futex wake and wait, and then
futexv. Patches 1..2 are just prep patches, patch 3 adds the wait
and wake support for io_uring, and then patches 4..6 are again prep
patches to end up with futexv support in patch 7.
For wait/wake/waitv, we support the bitset variants, as the
"normal" variants can easily be implemented on top of those.
PI and requeue are not supported through io_uring, just the
above-mentioned parts. This may change in the future, but in the spirit
of keeping this small (and based on what people have been asking for),
this is what we currently have.
When I did these patches, I forgot that Pavel had previously posted a
futex variant for io_uring. The major thing that had been holding me
back, despite people asking about futexes and io_uring, is that I wanted
to do this in what I consider the right way - no usage of io-wq or thread
offload, an actually async implementation that is efficient to use
and doesn't rely on a blocking thread for futex wait/waitv. This is what
this patchset attempts to do, while being minimally invasive on the
futex side. I believe the diffstat reflects that.
As far as I can recall, the first request for futex support with
io_uring came from Andres Freund, working on postgres. His aio rework
of postgres was one of the early adopters of io_uring, and futex
support was a natural extension for that. This is relevant both from
a usability point of view and for efficiency and performance.
In Andres's words, for the former:
"Futex wait support in io_uring makes it a lot easier to avoid deadlocks
in concurrent programs that have their own buffer pool: Obviously pages in
the application buffer pool have to be locked during IO. If the initiator
of IO A needs to wait for a held lock B, the holder of lock B might wait
for the IO A to complete. The ability to wait for a lock and IO
completions at the same time provides an efficient way to avoid such
deadlocks."
and in terms of efficiency, even without unlocking the full potential yet,
Andres says:
"Futex wake support in io_uring is useful because it allows for more
efficient directed wakeups. For some "locks" postgres has queues
implemented in userspace, with wakeup logic that cannot easily be
implemented with FUTEX_WAKE_BITSET on a single "futex word" (imagine
waiting for journal flushes to have completed up to a certain point). Thus
a "lock release" sometimes needs to wake up many processes in a row. A
quick-and-dirty conversion to doing these wakeups via io_uring led to a
3% throughput increase, with 12% fewer context switches, albeit in a
fairly extreme workload."
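
For reference, the batched wakeup such a conversion boils down to looks
roughly like this (same assumed helpers and includes as above; the
per-waiter futex word array is hypothetical):

/* Wake one waiter on each of nr futex words with a single
 * io_uring_submit(), instead of nr futex(FUTEX_WAKE) syscalls. */
static void wake_waiters(struct io_uring *ring, uint32_t **waiters, int nr)
{
	struct io_uring_sqe *sqe;
	int i;

	for (i = 0; i < nr; i++) {
		sqe = io_uring_get_sqe(ring);
		io_uring_prep_futex_wake(sqe, waiters[i], 1,
					 FUTEX_BITSET_MATCH_ANY,
					 FUTEX2_SIZE_U32, 0);
		sqe->user_data = i;
	}
	io_uring_submit(ring);
	/* the caller still reaps one CQE per wake request */
}
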
Some basic io_uring futex support and test cases, exercising all of
the variants, are available in the liburing 'futex' branch:

https://git.kernel.dk/cgit/liburing/log/?h=futex

I originally wrote this code about a
month ago and Andres has been using it with postgres, and I'm not
aware of any bugs in it. That's not to say it's perfect, obviously,
and I welcome some feedback so we can move this forward and hash out
any potential issues.
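
The futexv side follows the same model. A sketch of waiting on several
futex words at once, with struct futex_waitv filled in as for the
futex_waitv(2) syscall and the prep helper as in the 'futex' branch:

/* Wait until any one of the nr futexes in fw[] is woken. On success,
 * cqe->res should be the index of the woken futex, mirroring
 * futex_waitv(2). */
static int futex_wait_any(struct io_uring *ring, struct futex_waitv *fw,
			  unsigned int nr)
{
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	int ret;

	sqe = io_uring_get_sqe(ring);
	io_uring_prep_futex_waitv(sqe, fw, nr, 0);
	io_uring_submit(ring);

	ret = io_uring_wait_cqe(ring, &cqe);
	if (ret)
		return ret;
	ret = cqe->res;
	io_uring_cqe_seen(ring, cqe);
	return ret;
}
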
include/linux/io_uring_types.h | 3 +
include/uapi/linux/io_uring.h | 4 +
io_uring/Makefile | 4 +-
io_uring/cancel.c | 5 +
io_uring/cancel.h | 4 +
io_uring/futex.c | 377 +++++++++++++++++++++++++++++++++
io_uring/futex.h | 36 ++++
io_uring/io_uring.c | 5 +
io_uring/opdef.c | 35 ++-
kernel/futex/futex.h | 30 +++
kernel/futex/requeue.c | 3 +-
kernel/futex/syscalls.c | 25 ++-
kernel/futex/waitwake.c | 19 +-
13 files changed, 525 insertions(+), 25 deletions(-)
You can also find the code here:
https://git.kernel.dk/cgit/linux/log/?h=io_uring-futex
--
Jens Axboe