Message-Id: <55938698-697e-4c2b-b5dc-ea5aff359567@fnnas.com>
Date: Thu, 25 Dec 2025 15:40:02 +0800
From: "Yu Kuai" <yukuai@...as.com>
To: "Tuo Li" <islituo@...il.com>, <song@...nel.org>
Cc: <linux-raid@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<xni@...hat.com>, <yukuai@...as.com>
Subject: Re: [PATCH] md/raid5: fix possible null-pointer dereferences in raid5_store_group_thread_cnt()
Hi,
On 2025/12/10 15:41, Tuo Li wrote:
> The variable mddev->private is first assigned to conf and then checked:
>
> conf = mddev->private;
> if (!conf) ...
>
> If conf is NULL, then mddev->private is also NULL. However, the function
> does not return at this point, and raid5_quiesce() is later called with
> mddev as the argument. Inside raid5_quiesce(), mddev->private is again
> assigned to conf, which is then dereferenced in multiple places, for
> example:
>
> conf->quiesce = 0;
> wake_up(&conf->wait_for_quiescent);
> ...
>
> This can lead to several null-pointer dereferences.
>
> To fix these issues, the function should unlock mddev and return early when
> conf is NULL, following the pattern in raid5_change_consistency_policy().
>
> Signed-off-by: Tuo Li <islituo@...il.com>
> ---
> drivers/md/raid5.c | 7 ++++---
> 1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> index e57ce3295292..be3f9a127212 100644
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
> @@ -7190,9 +7190,10 @@ raid5_store_group_thread_cnt(struct mddev *mddev, const char *page, size_t len)
> raid5_quiesce(mddev, true);
>
> conf = mddev->private;
> - if (!conf)
> - err = -ENODEV;
> - else if (new != conf->worker_cnt_per_group) {
> + if (!conf) {
> + mddev_unlock_and_resume(mddev);
> + return -ENODEV;
+CC Xiao
This is still wrong, please add the NULL check and return early before raid5_quiesce().
And also add a Fixes tag:

Fixes: fa1944bbe622 ("md/raid5: Wait sync io to finish before changing group cnt")
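Something like the following (untested sketch, just to show the ordering; the
surrounding context is abbreviated and the error path reuses the
mddev_unlock_and_resume() call your patch already adds):

	conf = mddev->private;
	if (!conf) {
		/* Array is gone, undo the lock/suspend and bail out early. */
		mddev_unlock_and_resume(mddev);
		return -ENODEV;
	}

	/* Only quiesce once we know conf is valid. */
	raid5_quiesce(mddev, true);

	if (new != conf->worker_cnt_per_group) {
		...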
> + } else if (new != conf->worker_cnt_per_group) {
> old_groups = conf->worker_groups;
> if (old_groups)
> flush_workqueue(raid5_wq);
--
Thanks,
Kuai