Message-Id: <20210323081440.81343-3-ming.lei@redhat.com>
Date: Tue, 23 Mar 2021 16:14:40 +0800
From: Ming Lei <ming.lei@...hat.com>
To: Jens Axboe <axboe@...nel.dk>
Cc: linux-block@...r.kernel.org, Christoph Hellwig <hch@....de>,
linux-kernel@...r.kernel.org, Ming Lei <ming.lei@...hat.com>
Subject: [PATCH 2/2] blktrace: limit allowed total trace buffer size
On some ARCHs, such as aarch64, the page size may be 64K, and there may
also be many CPU cores. relay_open() allocates the trace buffer pages on
every CPU, so blktrace can easily consume a large share of memory. For
example, on one ARM64 server with 224 CPU cores and 16G of RAM, blktrace
ended up allocating 7GB when run as 'blktrace -b 8192', which is how the
device-mapper test suite[1] invokes it. This can easily trigger OOM.
Fix the issue by limiting the maximum allowed number of pages to 1/8 of
totalram_pages().
[1] https://github.com/jthornber/device-mapper-test-suite.git
Signed-off-by: Ming Lei <ming.lei@...hat.com>
---
kernel/trace/blktrace.c | 32 ++++++++++++++++++++++++++++++++
1 file changed, 32 insertions(+)
diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
index c221e4c3f625..8403ff19d533 100644
--- a/kernel/trace/blktrace.c
+++ b/kernel/trace/blktrace.c
@@ -466,6 +466,35 @@ static void blk_trace_setup_lba(struct blk_trace *bt,
}
}
+/* limit the total allocated buffer size to at most 1/8 of total pages */
+static void validate_and_adjust_buf(struct blk_user_trace_setup *buts)
+{
+ unsigned buf_size = buts->buf_size;
+ unsigned buf_nr = buts->buf_nr;
+ unsigned long max_allowed_pages = totalram_pages() >> 3;
+ unsigned long req_pages = PAGE_ALIGN(buf_size * buf_nr) >> PAGE_SHIFT;
+
+ if (req_pages * num_online_cpus() <= max_allowed_pages)
+ return;
+
+ req_pages = DIV_ROUND_UP(max_allowed_pages, num_online_cpus());
+
+ if (req_pages == 0) {
+ buf_size = PAGE_SIZE;
+ buf_nr = 1;
+ } else {
+ buf_size = (req_pages << PAGE_SHIFT) / buf_nr;
+ if (buf_size < PAGE_SIZE)
+ buf_size = PAGE_SIZE;
+ buf_nr = (req_pages << PAGE_SHIFT) / buf_size;
+ if (buf_nr == 0)
+ buf_nr = 1;
+ }
+
+ buts->buf_size = min_t(unsigned, buf_size, buts->buf_size);
+ buts->buf_nr = min_t(unsigned, buf_nr, buts->buf_nr);
+}
+
/*
* Setup everything required to start tracing
*/
@@ -482,6 +511,9 @@ static int do_blk_trace_setup(struct request_queue *q, char *name, dev_t dev,
if (!buts->buf_size || !buts->buf_nr)
return -EINVAL;
+ /* make sure we do not allocate too much memory for userspace */
+ validate_and_adjust_buf(buts);
+
strncpy(buts->name, name, BLKTRACE_BDEV_SIZE);
buts->name[BLKTRACE_BDEV_SIZE - 1] = '\0';
--
2.29.2