Message-ID: <343e3081-7d24-f783-b040-2f61aad4ea4f@huaweicloud.com>
Date: Fri, 26 Dec 2025 16:33:57 +0800
From: Li Nan <linan666@...weicloud.com>
To: Yu Kuai <yukuai@...as.com>, song@...nel.org, linux-raid@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, filippo@...ian.org, colyli@...as.com
Subject: Re: [PATCH v2 04/11] md/raid5: use mempool to allocate stripe_request_ctx



On 2025/11/24 14:31, Yu Kuai wrote:
> On the one hand, stripe_request_ctx is 72 bytes, which is a bit large
> for a stack variable.
> 
> On the other hand, the bitmap sectors_to_do has a fixed size, so
> max_hw_sectors_kb of a raid5 array is at most 256 * 4k = 1Mb, which
> makes full stripe IO impossible for arrays where chunk_size * data_disks
> is bigger. Allocating the ctx at runtime makes it possible to get rid
> of this limit.
> 
> Signed-off-by: Yu Kuai <yukuai@...as.com>
> ---
>   drivers/md/md.h       |  4 +++
>   drivers/md/raid1-10.c |  5 ----
>   drivers/md/raid5.c    | 61 +++++++++++++++++++++++++++----------------
>   drivers/md/raid5.h    |  2 ++
>   4 files changed, 45 insertions(+), 27 deletions(-)
> 

[...]

> @@ -7374,6 +7380,10 @@ static void free_conf(struct r5conf *conf)
>   	bioset_exit(&conf->bio_split);
>   	kfree(conf->stripe_hashtbl);
>   	kfree(conf->pending_data);
> +
> +	if (conf->ctx_pool)
> +		mempool_destroy(conf->ctx_pool);
> +
>   	kfree(conf);
>   }
>   
> @@ -8057,6 +8067,13 @@ static int raid5_run(struct mddev *mddev)
>   			goto abort;
>   	}
>   
> +	conf->ctx_pool = mempool_create_kmalloc_pool(NR_RAID_BIOS,
> +					sizeof(struct stripe_request_ctx));
> +	if (!conf->ctx_pool) {
> +		ret = -ENOMEM;
> +		goto abort;
> +	}
> +

What about moving the mempool creation to setup_conf()? If so,
mempool_destroy() can be called in free_conf() without the NULL check,
since mempool_destroy() is a no-op on a NULL pointer.
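
A minimal sketch of the suggested shape (untested; assumes setup_conf()
has the usual abort path that ends in free_conf()):

	/* In setup_conf(), alongside the other allocations: */
	conf->ctx_pool = mempool_create_kmalloc_pool(NR_RAID_BIOS,
					sizeof(struct stripe_request_ctx));
	if (!conf->ctx_pool)
		goto abort;

	/* In free_conf(), unconditionally: */
	mempool_destroy(conf->ctx_pool);

That keeps allocation and teardown symmetric in one pair of functions,
matching how bio_split and the other conf resources are handled.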

>   	if (log_init(conf, journal_dev, raid5_has_ppl(conf)))
>   		goto abort;
>   
> diff --git a/drivers/md/raid5.h b/drivers/md/raid5.h
> index eafc6e9ed6ee..6e3f07119fa4 100644
> --- a/drivers/md/raid5.h
> +++ b/drivers/md/raid5.h
> @@ -690,6 +690,8 @@ struct r5conf {
>   	struct list_head	pending_list;
>   	int			pending_data_cnt;
>   	struct r5pending_data	*next_pending_data;
> +
> +	mempool_t		*ctx_pool;
>   };
>   
>   #if PAGE_SIZE == DEFAULT_STRIPE_SIZE

-- 
Thanks,
Nan