Message-Id: <20260119071039.2113739-1-danisjiang@gmail.com>
Date: Mon, 19 Jan 2026 01:10:39 -0600
From: Yuhao Jiang <danisjiang@...il.com>
To: Jens Axboe <axboe@...nel.dk>,
Pavel Begunkov <asml.silence@...il.com>
Cc: io-uring@...r.kernel.org,
linux-kernel@...r.kernel.org,
stable@...r.kernel.org,
Yuhao Jiang <danisjiang@...il.com>
Subject: [PATCH v2] io_uring/rsrc: fix RLIMIT_MEMLOCK bypass by removing cross-buffer accounting
When multiple registered buffers share the same compound page, only the
first buffer accounts for the memory via io_buffer_account_pin().
Subsequent buffers skip accounting because headpage_already_acct()
returns true for the already-charged head page.
When the first buffer is unregistered, its accounting is decremented,
but the compound page remains pinned by the remaining buffers. This
leaves pinned memory that is no longer charged against RLIMIT_MEMLOCK.
On systems with HugeTLB pages pre-allocated, an unprivileged user can
exploit this to pin memory beyond RLIMIT_MEMLOCK by cycling buffer
registrations. The bypass amount is proportional to the number of
available huge pages, potentially allowing gigabytes of memory to be
pinned while the kernel's accounting shows close to zero locked memory.
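For illustration, here is a minimal liburing sketch of the sharing
pattern (not the exact reproducer used for the report; it assumes
pre-allocated 2 MiB HugeTLB pages, and the slot-clearing step assumes
per-slot updates via IORING_REGISTER_BUFFERS_UPDATE are available):

#include <liburing.h>
#include <stdlib.h>
#include <sys/mman.h>

#define HPAGE_SZ	(2UL << 20)	/* assumption: 2 MiB huge pages */

int main(void)
{
	struct io_uring ring;
	struct iovec iov[2], drop = { NULL, 0 };
	__u64 tag = 0;
	void *hp;

	if (io_uring_queue_init(8, &ring, 0))
		exit(1);

	hp = mmap(NULL, HPAGE_SZ, PROT_READ | PROT_WRITE,
		  MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (hp == MAP_FAILED)
		exit(1);

	/* Two buffers backed by the same compound (huge) page. */
	iov[0].iov_base = hp;
	iov[0].iov_len = 4096;
	iov[1].iov_base = (char *)hp + HPAGE_SZ / 2;
	iov[1].iov_len = 4096;

	/*
	 * Buffer 0 is charged the full huge page; buffer 1 is skipped
	 * because headpage_already_acct() sees the shared head page.
	 */
	if (io_uring_register_buffers(&ring, iov, 2))
		exit(1);

	/*
	 * Clearing slot 0 decrements the huge page's accounting while
	 * slot 1 keeps it pinned, so locked_vm drops below what is
	 * actually pinned. Repeating this across many huge pages
	 * scales the gap up.
	 */
	io_uring_register_buffers_update_tag(&ring, 0, &drop, &tag, 1);
	return 0;
}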
Fix this by removing the cross-buffer accounting optimization entirely.
Each buffer now independently accounts for its pinned pages, even if
the same compound pages are referenced by other buffers. This prevents
accounting underflow when buffers are unregistered in arbitrary order.
The trade-off is that memory accounting may be overestimated when
multiple buffers share compound pages (for example, two buffers inside
one 2 MiB huge page are now each charged the full 512 pages), but
overestimating is safe and closes the bypass.
Reported-by: Yuhao Jiang <danisjiang@...il.com>
Suggested-by: Pavel Begunkov <asml.silence@...il.com>
Fixes: de2939388be5 ("io_uring: improve registered buffer accounting for huge pages")
Cc: stable@...r.kernel.org
Signed-off-by: Yuhao Jiang <danisjiang@...il.com>
---
Changes in v2:
- Remove cross-buffer accounting logic entirely
- Link to v1: https://lore.kernel.org/all/20251218025947.36115-1-danisjiang@gmail.com/
io_uring/rsrc.c | 43 -------------------------------------------
1 file changed, 43 deletions(-)
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index 41c89f5c616d..f35652f36c57 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -619,47 +619,6 @@ int io_sqe_buffers_unregister(struct io_ring_ctx *ctx)
return 0;
}
-/*
- * Not super efficient, but this is just a registration time. And we do cache
- * the last compound head, so generally we'll only do a full search if we don't
- * match that one.
- *
- * We check if the given compound head page has already been accounted, to
- * avoid double accounting it. This allows us to account the full size of the
- * page, not just the constituent pages of a huge page.
- */
-static bool headpage_already_acct(struct io_ring_ctx *ctx, struct page **pages,
- int nr_pages, struct page *hpage)
-{
- int i, j;
-
- /* check current page array */
- for (i = 0; i < nr_pages; i++) {
- if (!PageCompound(pages[i]))
- continue;
- if (compound_head(pages[i]) == hpage)
- return true;
- }
-
- /* check previously registered pages */
- for (i = 0; i < ctx->buf_table.nr; i++) {
- struct io_rsrc_node *node = ctx->buf_table.nodes[i];
- struct io_mapped_ubuf *imu;
-
- if (!node)
- continue;
- imu = node->buf;
- for (j = 0; j < imu->nr_bvecs; j++) {
- if (!PageCompound(imu->bvec[j].bv_page))
- continue;
- if (compound_head(imu->bvec[j].bv_page) == hpage)
- return true;
- }
- }
-
- return false;
-}
-
static int io_buffer_account_pin(struct io_ring_ctx *ctx, struct page **pages,
int nr_pages, struct io_mapped_ubuf *imu,
struct page **last_hpage)
@@ -677,8 +636,6 @@ static int io_buffer_account_pin(struct io_ring_ctx *ctx, struct page **pages,
if (hpage == *last_hpage)
continue;
*last_hpage = hpage;
- if (headpage_already_acct(ctx, pages, i, hpage))
- continue;
imu->acct_pages += page_size(hpage) >> PAGE_SHIFT;
}
}
--
2.34.1