Message-ID: <5YHjvAsQKKhRWwp95PB0tGlW7nmplpjVW0b5mruoUD73qmg89ntObcPe63oCPf1mhBUh-Y3ARNMcPueF2dUttoWCyWv_KiG3VMIbguuOJHY=@negrel.dev>
Date: Tue, 30 Dec 2025 14:50:51 +0000
From: Alexandre Negrel <alexandre@...rel.dev>
To: Jens Axboe <axboe@...nel.dk>
Cc: io-uring@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] io_uring: make overflowing cqe subject to OOM
On Tuesday, December 30th, 2025 at 1:23 AM, Jens Axboe <axboe@...nel.dk> wrote:
> On 12/29/25 1:19 PM, Alexandre Negrel wrote:
>
> > diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> > index 6cb24cdf8e68..5ff1a13fed1c 100644
> > --- a/io_uring/io_uring.c
> > +++ b/io_uring/io_uring.c
> > @@ -545,31 +545,12 @@ void __io_commit_cqring_flush(struct io_ring_ctx *ctx)
> > io_eventfd_signal(ctx, true);
> > }
> >
> > -static inline void __io_cq_lock(struct io_ring_ctx *ctx)
> > -{
> > - if (!ctx->lockless_cq)
> > - spin_lock(&ctx->completion_lock);
> > -}
> > -
> > static inline void io_cq_lock(struct io_ring_ctx *ctx)
> > __acquires(ctx->completion_lock)
> > {
> > spin_lock(&ctx->completion_lock);
> > }
> >
> > -static inline void __io_cq_unlock_post(struct io_ring_ctx *ctx)
> > -{
> > - io_commit_cqring(ctx);
> > - if (!ctx->task_complete) {
> > - if (!ctx->lockless_cq)
> > - spin_unlock(&ctx->completion_lock);
> > - /* IOPOLL rings only need to wake up if it's also SQPOLL */
> > - if (!ctx->syscall_iopoll)
> > - io_cqring_wake(ctx);
> > - }
> > - io_commit_cqring_flush(ctx);
> > -}
> > -
> > static void io_cq_unlock_post(struct io_ring_ctx *ctx)
> > __releases(ctx->completion_lock)
> > {
> > @@ -1513,7 +1494,6 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
> > struct io_submit_state *state = &ctx->submit_state;
> > struct io_wq_work_node *node;
> >
> > - __io_cq_lock(ctx);
> > __wq_list_for_each(node, &state->compl_reqs) {
> > struct io_kiocb *req = container_of(node, struct io_kiocb,
> > comp_list);
> > @@ -1525,13 +1505,17 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
> > */
> > if (!(req->flags & (REQ_F_CQE_SKIP | REQ_F_REISSUE)) &&
> > unlikely(!io_fill_cqe_req(ctx, req))) {
> > - if (ctx->lockless_cq)
> > - io_cqe_overflow(ctx, &req->cqe, &req->big_cqe);
> > - else
> > - io_cqe_overflow_locked(ctx, &req->cqe, &req->big_cqe);
> > + io_cqe_overflow(ctx, &req->cqe, &req->big_cqe);
> > }
> > }
> > - __io_cq_unlock_post(ctx);
> > +
> > + io_commit_cqring(ctx);
> > + if (!ctx->task_complete) {
> > + /* IOPOLL rings only need to wake up if it's also SQPOLL */
> > + if (!ctx->syscall_iopoll)
> > + io_cqring_wake(ctx);
> > + }
> > + io_commit_cqring_flush(ctx);
> >
> > if (!wq_list_empty(&state->compl_reqs)) {
> > io_free_batch_list(ctx, state->compl_reqs.first);
>
>
> You seem to just remove the lock around posting CQEs, and hence then it
> can use GFP_KERNEL? That's very broken... I'm assuming the issue here is
> that memcg will look at __GFP_HIGH somehow and allow it to proceed?
> Surely that should not stop OOM, just defer it?
>
> In any case, the below should then do the same. Can you test?
>
> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> index 6cb24cdf8e68..709943fedaf4 100644
> --- a/io_uring/io_uring.c
> +++ b/io_uring/io_uring.c
> @@ -864,7 +864,7 @@ static __cold bool io_cqe_overflow_locked(struct io_ring_ctx *ctx,
> {
> struct io_overflow_cqe *ocqe;
>
> - ocqe = io_alloc_ocqe(ctx, cqe, big_cqe, GFP_ATOMIC);
> + ocqe = io_alloc_ocqe(ctx, cqe, big_cqe, GFP_NOWAIT);
> return io_cqring_add_overflow(ctx, ocqe);
> }
>
>
> --
> Jens Axboe

> You seem to just remove the lock around posting CQEs, and hence then it
> can use GFP_KERNEL? That's very broken...
This is my first time contributing to the Linux kernel; sorry if my patch
is broken.

> I'm assuming the issue here is that memcg will look at __GFP_HIGH somehow and
> allow it to proceed?

Exactly: the allocation succeeds even though it exceeds the cgroup limit.
After digging through try_charge_memcg(), it seems the OOM killer isn't
involved unless the __GFP_DIRECT_RECLAIM bit is set (see
gfpflags_allow_blocking()).
https://github.com/torvalds/linux/blob/8640b74557fc8b4c300030f6ccb8cd078f665ec8/mm/memcontrol.c#L2329
https://github.com/torvalds/linux/blob/8640b74557fc8b4c300030f6ccb8cd078f665ec8/include/linux/gfp.h#L38
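
For context, gfpflags_allow_blocking() is just a test of that bit
(include/linux/gfp.h):

    static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
    {
            return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
    }

So, if I read try_charge_memcg() correctly (paraphrasing, not the literal
source): GFP_KERNEL carries __GFP_DIRECT_RECLAIM, so the charge may block
and eventually invoke the memcg OOM killer; GFP_ATOMIC carries __GFP_HIGH,
so the charge is forced through even over the limit; GFP_NOWAIT carries
neither, so the charge simply fails with -ENOMEM.
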
> In any case, then below should then do the same. Can you test?

I tried it and it seems to fix the issue, but in a different way:
try_charge_memcg() now returns -ENOMEM and the allocation fails, so the
completion queue entry is "dropped on the floor" in io_cqring_add_overflow().

So I see 3 options here:
* use GFP_NOWAIT if dropping the CQE is ok
* allocate with GFP_KERNEL_ACCOUNT without holding the lock, then add the
  overflow entries while holding completion_lock (iterating twice over
  compl_reqs; see the sketch below)
* charge the memory after releasing the lock. I don't know if this is
  possible, but doing kfree(kmalloc(1, GFP_KERNEL_ACCOUNT)) after releasing
  the lock does the job (even though it's dirty)
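
To illustrate option 2, here is a rough, untested sketch of what I have in
mind for __io_submit_flush_completions(), using the helpers visible in this
thread (req->ocqe is a hypothetical field, assumed NULL-initialized, used to
carry the overflow entry between the two passes):

    /* first pass, lock not held: the allocation may block and be accounted */
    __wq_list_for_each(node, &state->compl_reqs) {
            struct io_kiocb *req = container_of(node, struct io_kiocb,
                                                comp_list);

            if (!(req->flags & (REQ_F_CQE_SKIP | REQ_F_REISSUE)) &&
                unlikely(!io_fill_cqe_req(ctx, req)))
                    req->ocqe = io_alloc_ocqe(ctx, &req->cqe, &req->big_cqe,
                                              GFP_KERNEL_ACCOUNT);
    }

    /* second pass: add the overflow entries while holding completion_lock */
    spin_lock(&ctx->completion_lock);
    __wq_list_for_each(node, &state->compl_reqs) {
            struct io_kiocb *req = container_of(node, struct io_kiocb,
                                                comp_list);

            if (req->ocqe) {
                    io_cqring_add_overflow(ctx, req->ocqe);
                    req->ocqe = NULL;
            }
    }
    spin_unlock(&ctx->completion_lock);
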
Let me know what you think.

Alexandre Negrel
https://www.negrel.dev/