Message-ID: <6649c552-5a84-4a3a-b276-fc9f4008d019@gmail.com>
Date: Thu, 12 Jun 2025 10:33:29 +0100
From: Pavel Begunkov <asml.silence@...il.com>
To: Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc: io-uring@...r.kernel.org, Martin KaFai Lau <martin.lau@...ux.dev>,
bpf <bpf@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC v2 4/5] io_uring/bpf: add handle events callback
On 6/12/25 03:28, Alexei Starovoitov wrote:
> On Fri, Jun 6, 2025 at 6:58 AM Pavel Begunkov <asml.silence@...il.com> wrote:
>>
>> +static inline int io_run_bpf(struct io_ring_ctx *ctx, struct iou_loop_state *state)
>> +{
>> +	scoped_guard(mutex, &ctx->uring_lock) {
>> +		if (!ctx->bpf_ops)
>> +			return IOU_EVENTS_STOP;
>> +		return ctx->bpf_ops->handle_events(ctx, state);
>> +	}
>> +}
>
> you're grabbing the mutex before calling the bpf prog, and doing
> it in a loop a million times a second?
> Looks like massive overhead per program invocation.
> I'm surprised it's fast.
You need the lock to submit anything with io_uring, so this is
on par with how submission already works. And the program here is
just a test, fairly silly in nature: normally you'd either get
higher batching, and the user (incl. bpf) can explicitly ask to
wait for more events, or the invocations will be intermingled
with sleeping, at which point the mutex is not a problem. I'll
write a storage IO example for the next time.
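
To illustrate the batching point, a toy handler could look
roughly like below. Apart from io_ring_ctx, iou_loop_state and
IOU_EVENTS_STOP, everything in it is made up for the sketch
(IOU_EVENTS_WAIT, the CQE helpers), so don't read it as the
actual API:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

SEC("struct_ops/handle_events")
int BPF_PROG(iou_handle_events, struct io_ring_ctx *ctx,
	     struct iou_loop_state *state)
{
	struct io_uring_cqe cqe;

	/* drain all currently posted CQEs in one locked section */
	while (iou_bpf_peek_cqe(ctx, &cqe))	/* hypothetical helper */
		process_cqe(state, &cqe);	/* hypothetical helper */

	if (all_done(state))			/* hypothetical helper */
		return IOU_EVENTS_STOP;
	/*
	 * Ask the kernel to sleep until more completions arrive, so
	 * uring_lock is taken once per wakeup / batch of CQEs rather
	 * than once per CQE.
	 */
	return IOU_EVENTS_WAIT;			/* hypothetical code */
}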

If a good use case shows up, I can try to relax it for programs
that don't issue requests, but that might make the synchronisation
more complicated, especially on the reg/unreg side.
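
FWIW, a rough sketch of what such a relaxation could look like;
the ops type name, the flags field and IOU_BPF_F_DONT_SUBMIT are
invented here just to show the idea:

static inline int io_run_bpf(struct io_ring_ctx *ctx,
			     struct iou_loop_state *state)
{
	struct iou_bpf_ops *ops;	/* hypothetical type name */
	int ret;

	/*
	 * Hypothetical fast path: a program that declared it never
	 * submits requests only observes CQ state, so it could run
	 * under RCU instead of uring_lock.
	 */
	rcu_read_lock();
	ops = rcu_dereference(ctx->bpf_ops);
	if (ops && (ops->flags & IOU_BPF_F_DONT_SUBMIT)) {
		ret = ops->handle_events(ctx, state);
		rcu_read_unlock();
		return ret;
	}
	rcu_read_unlock();

	scoped_guard(mutex, &ctx->uring_lock) {
		if (!ctx->bpf_ops)
			return IOU_EVENTS_STOP;
		return ctx->bpf_ops->handle_events(ctx, state);
	}
}

The reg/unreg side would then have to publish/clear bpf_ops with
rcu_assign_pointer() and wait for a grace period before teardown,
which is the extra sync complexity mentioned above.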
--
Pavel Begunkov