lists.openwall.net - Open Source and information security mailing list archives
 
Message-ID: <7d3afcb3-c5b6-4510-8d11-505c1538c786@gmx.com>
Date: Wed, 30 Jul 2025 16:36:14 +0930
From: Qu Wenruo <quwenruo.btrfs@....com>
To: kmpfqgdwxucqz9@...il.com, David Sterba <dsterba@...e.com>
Cc: linux-btrfs@...r.kernel.org, linux-kernel@...r.kernel.org,
 KernelKraze <admin@...l.free-proletariat.dpdns.org>
Subject: Re: [PATCH 1/1] btrfs: add integer overflow protection to
 flush_dir_items_batch allocation



On 2025/7/30 14:13, kmpfqgdwxucqz9@...il.com wrote:
> From: KernelKraze <admin@...l.free-proletariat.dpdns.org>
> 
> The flush_dir_items_batch() function performs memory allocation using
> count * sizeof(u32) + count * sizeof(struct btrfs_key) without proper
> integer overflow checking. When count is large enough, this multiplication
> can overflow, resulting in an allocation smaller than expected, leading to
> buffer overflows during subsequent array access.
> 
> In extreme cases with very large directory item counts, this could
> theoretically lead to undersized memory allocation, though such scenarios
> are unlikely in normal filesystem usage.
> 
> Fix this by:
> 1. Adding a reasonable upper limit (195) to the batch size, consistent
>     with the limit used in log_delayed_insertion_items()
> 2. Using check_mul_overflow() and check_add_overflow() to detect integer
>     overflows before performing the allocation
> 3. Returning -EOVERFLOW when overflow is detected
> 4. Adding appropriate warning and error messages for debugging
> 
> This ensures that memory allocations are always sized correctly and
> prevents potential issues from integer overflow conditions, improving
> overall code robustness.
> 
> Signed-off-by: KernelKraze <admin@...l.free-proletariat.dpdns.org>
> ---
>   fs/btrfs/tree-log.c | 27 ++++++++++++++++++++++++---
>   1 file changed, 24 insertions(+), 3 deletions(-)
> 
> diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
> index 9f05d454b9df..19b443314db0 100644
> --- a/fs/btrfs/tree-log.c
> +++ b/fs/btrfs/tree-log.c
> @@ -3655,14 +3655,35 @@ static int flush_dir_items_batch(struct btrfs_trans_handle *trans,
>   	} else {
>   		struct btrfs_key *ins_keys;
>   		u32 *ins_sizes;
> +		size_t keys_size, sizes_size, total_size;
>   
> -		ins_data = kmalloc(count * sizeof(u32) +
> -				   count * sizeof(struct btrfs_key), GFP_NOFS);
> +		/*
> +		 * Prevent integer overflow when calculating allocation size.
> +		 * We use the same reasonable limit as log_delayed_insertion_items()
> +		 * to prevent excessive memory allocation and potential DoS.
> +		 */
> +		if (count > 195) {
> +			btrfs_warn(inode->root->fs_info,
> +				   "dir items batch size %d exceeds safe limit, truncating",
> +				   count);
> +			count = 195;
> +		}

As Johannes already said, please explain where this number comes from.
> +
> +		/* Check for overflow in size calculations */
> +		if (check_mul_overflow(count, sizeof(u32), &sizes_size) ||
> +		    check_mul_overflow(count, sizeof(struct btrfs_key), &keys_size) ||
> +		    check_add_overflow(sizes_size, keys_size, &total_size)) {

And the magic number 195 won't cause any overflow anyway:

195 * (4 + 17) = 4095

Do you really think kmalloc()ing 4096 bytes will cause any extra
problems?

Then you must not have reviewed any btrfs code.
We're going to use kmalloc()ed memory for the btrfs super block, which
is exactly 4096 bytes, matching your "integer overflow" standard.

Your "integer overflow" protection looks completely impractical.


Furthermore, this 4095 magic number doesn't make sense on systems with a
larger page size.
If the system has a page size of 64K, limiting the kmalloc size to 4K
makes no sense.


Finally, have you checked the only caller of flush_dir_items_batch()?

The only caller iterates over all the item keys inside a leaf.

And a leaf has only a very limited amount of space for storing
BTRFS_DIR_INDEX_KEY items.

At most there are nodesize / (dir_item_size + key_size + item_size)
items inside a leaf (and this already ignores some extra overhead).

With the default 16K nodesize, we can have over 200 dir items.

This means your flawed calculation is already going to cause false
alerts on completely valid cases.


So a huge NO-ACK.

> +			btrfs_err(inode->root->fs_info,
> +				  "integer overflow in batch allocation size calculation");
> +			return -EOVERFLOW;
> +		}
> +
> +		ins_data = kmalloc(total_size, GFP_NOFS);
>   		if (!ins_data)
>   			return -ENOMEM;
>   
>   		ins_sizes = (u32 *)ins_data;
> -		ins_keys = (struct btrfs_key *)(ins_data + count * sizeof(u32));
> +		ins_keys = (struct btrfs_key *)(ins_data + sizes_size);
>   		batch.keys = ins_keys;
>   		batch.data_sizes = ins_sizes;
>   		batch.total_data_size = 0;

