Message-ID: <20250730044348.133387-2-admin@mail.free-proletariat.dpdns.org>
Date: Wed, 30 Jul 2025 13:43:48 +0900
From: kmpfqgdwxucqz9@...il.com
To: David Sterba <dsterba@...e.com>
Cc: linux-btrfs@...r.kernel.org,
	linux-kernel@...r.kernel.org,
	KernelKraze <admin@...l.free-proletariat.dpdns.org>
Subject: [PATCH 1/1] btrfs: add integer overflow protection to flush_dir_items_batch allocation

From: KernelKraze <admin@...l.free-proletariat.dpdns.org>

The flush_dir_items_batch() function allocates memory with
count * sizeof(u32) + count * sizeof(struct btrfs_key) without any
integer overflow checking. If count is large enough, the multiplications
or the final addition can wrap around, producing an allocation smaller
than expected and leading to buffer overflows when the arrays are
populated afterwards.

In extreme cases with very large directory item counts this could, in
theory, result in an undersized allocation, although such counts are
unlikely in normal filesystem usage.
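
To illustrate the failure mode, here is a minimal user space sketch (the
count value is hypothetical and the 17-byte packed btrfs_key layout is
assumed) of how the unchecked size calculation can wrap when size_t is
only 32 bits wide:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      /*
       * Per-item sizes assumed here: sizeof(u32) == 4 and
       * sizeof(struct btrfs_key) == 17 (packed).
       */
      const uint32_t u32_size = 4, key_size = 17;
      uint32_t count = 210000000;  /* hypothetical, absurdly large batch */
      uint64_t wanted = (uint64_t)count * (u32_size + key_size);
      /* The same expression as the kernel code, but in 32-bit arithmetic. */
      uint32_t wrapped = count * u32_size + count * key_size;

      printf("wanted %llu bytes, 32-bit calculation yields %u bytes\n",
             (unsigned long long)wanted, wrapped);
      return 0;
  }

With these hypothetical numbers the 32-bit arithmetic yields 115032704
bytes instead of the 4410000000 bytes actually needed, so the later
array writes would run past the end of the allocation.
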
Fix this by:

1. Adding a reasonable upper limit (195) to the batch size, consistent
   with the limit used in log_delayed_insertion_items()
2. Using check_mul_overflow() and check_add_overflow() to detect integer
   overflows before performing the allocation (illustrated below)
3. Returning -EOVERFLOW when overflow is detected
4. Adding appropriate warning and error messages for debugging
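
For reference, here is a user space approximation of the new size
calculation, built on the compiler builtins that back the kernel's
check_mul_overflow() and check_add_overflow() helpers
(batch_alloc_size() is a hypothetical name, not part of the patch):

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdio.h>

  /*
   * Compute count * sizeof(u32) + count * sizeof(struct btrfs_key)
   * while detecting overflow, mirroring the patched allocation path.
   */
  static bool batch_alloc_size(size_t count, size_t u32_size,
                               size_t key_size, size_t *total)
  {
      size_t sizes_size, keys_size;

      if (__builtin_mul_overflow(count, u32_size, &sizes_size) ||
          __builtin_mul_overflow(count, key_size, &keys_size) ||
          __builtin_add_overflow(sizes_size, keys_size, total))
          return false;  /* overflow, caller would return -EOVERFLOW */
      return true;
  }

  int main(void)
  {
      size_t total;

      /* 195 items, each needing a 4-byte u32 plus a 17-byte packed key. */
      if (batch_alloc_size(195, 4, 17, &total))
          printf("allocation size: %zu bytes\n", total);
      else
          printf("overflow detected\n");
      return 0;
  }

With the capped batch size of 195 the computed allocation is 4095 bytes,
which is the size the patched kmalloc() call would request for a full
batch under these assumptions.
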
This ensures that the allocation size is always computed correctly and
hardens the code against integer overflow conditions.

Signed-off-by: KernelKraze <admin@...l.free-proletariat.dpdns.org>
---
 fs/btrfs/tree-log.c | 27 ++++++++++++++++++++++++---
 1 file changed, 24 insertions(+), 3 deletions(-)

diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
index 9f05d454b9df..19b443314db0 100644
--- a/fs/btrfs/tree-log.c
+++ b/fs/btrfs/tree-log.c
@@ -3655,14 +3655,35 @@ static int flush_dir_items_batch(struct btrfs_trans_handle *trans,
 	} else {
 		struct btrfs_key *ins_keys;
 		u32 *ins_sizes;
+		size_t keys_size, sizes_size, total_size;
 
-		ins_data = kmalloc(count * sizeof(u32) +
-				   count * sizeof(struct btrfs_key), GFP_NOFS);
+		/*
+		 * Prevent integer overflow when calculating allocation size.
+		 * We use the same reasonable limit as log_delayed_insertion_items()
+		 * to prevent excessive memory allocation and potential DoS.
+		 */
+		if (count > 195) {
+			btrfs_warn(inode->root->fs_info,
+				   "dir items batch size %d exceeds safe limit, truncating",
+				   count);
+			count = 195;
+		}
+
+		/* Check for overflow in size calculations */
+		if (check_mul_overflow(count, sizeof(u32), &sizes_size) ||
+		    check_mul_overflow(count, sizeof(struct btrfs_key), &keys_size) ||
+		    check_add_overflow(sizes_size, keys_size, &total_size)) {
+			btrfs_err(inode->root->fs_info,
+				  "integer overflow in batch allocation size calculation");
+			return -EOVERFLOW;
+		}
+
+		ins_data = kmalloc(total_size, GFP_NOFS);
 		if (!ins_data)
 			return -ENOMEM;
 
 		ins_sizes = (u32 *)ins_data;
-		ins_keys = (struct btrfs_key *)(ins_data + count * sizeof(u32));
+		ins_keys = (struct btrfs_key *)(ins_data + sizes_size);
 		batch.keys = ins_keys;
 		batch.data_sizes = ins_sizes;
 		batch.total_data_size = 0;
--
2.48.1