Message-ID: <3c2951fb-a70f-d91f-c80c-64381d5e427a@huaweicloud.com>
Date: Sun, 14 Jan 2024 11:12:34 +0800
From: Yu Kuai <yukuai1@...weicloud.com>
To: Gui-Dong Han <2045gemini@...il.com>, song@...nel.org
Cc: linux-raid@...r.kernel.org, linux-kernel@...r.kernel.org,
 baijiaju1990@...look.com, stable@...r.kernel.org,
 "yukuai (C)" <yukuai3@...wei.com>
Subject: Re: [PATCH v4] md/raid5: fix atomicity violation in raid5_cache_count

On 2024/01/12 15:10, Gui-Dong Han wrote:
> In raid5_cache_count():
>      if (conf->max_nr_stripes < conf->min_nr_stripes)
>          return 0;
>      return conf->max_nr_stripes - conf->min_nr_stripes;
> The current check is ineffective, as the values could change immediately
> after being checked.
> 
> In raid5_set_cache_size():
>      ...
>      conf->min_nr_stripes = size;
>      ...
>      while (size > conf->max_nr_stripes)
>          conf->min_nr_stripes = conf->max_nr_stripes;
>      ...
> 
> Due to intermediate value updates in raid5_set_cache_size(), concurrent
> execution of raid5_cache_count() and raid5_set_cache_size() may lead to
> inconsistent reads of conf->max_nr_stripes and conf->min_nr_stripes.
> The current checks are ineffective as values could change immediately
> after being checked, raising the risk of conf->min_nr_stripes exceeding
> conf->max_nr_stripes and potentially causing an integer overflow.
> 
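As a stand-alone sketch (made-up values, not kernel code): raid5_cache_count()
returns unsigned long, so if the two fields change between the check and the
subtraction, the negative int difference wraps to a huge shrinker count.

#include <stdio.h>

/*
 * Stand-alone sketch with made-up values, not kernel code: if the
 * shrinker samples the two counters while raid5_set_cache_size() is
 * between storing the new min_nr_stripes and growing the cache, it can
 * observe min > max.  The int subtraction then goes negative, and the
 * conversion to the shrinker's unsigned long return type wraps it to a
 * huge object count.
 */
int main(void)
{
	int max_nr_stripes = 256;	/* old cache size, not yet grown */
	int min_nr_stripes = 512;	/* new size already stored by the writer */

	unsigned long count = max_nr_stripes - min_nr_stripes;

	printf("%lu\n", count);		/* 18446744073709551360 on 64-bit, not 0 */
	return 0;
}
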
> This possible bug was found by an experimental static analysis tool
> developed by our team. This tool analyzes the locking APIs to extract
> function pairs that can be concurrently executed, and then analyzes the
> instructions in the paired functions to identify possible concurrency bugs
> including data races and atomicity violations. The above possible bug is
> reported when our tool analyzes the source code of Linux 6.2.
> 
> To resolve this issue, it is suggested to introduce local variables
> 'min_stripes' and 'max_stripes' in raid5_cache_count() so that the
> values remain stable throughout the check. Adding locks in
> raid5_cache_count() would not resolve the atomicity violation, because
> raid5_set_cache_size() may hold intermediate values of
> conf->min_nr_stripes while unlocked. With this patch applied, our tool
> no longer reports the bug (checked with the allyesconfig kernel
> configuration for x86_64). Due to the lack of associated hardware, we
> could not test the patch at runtime and only verified it against the
> code logic.
> 
> Fixes: edbe83ab4c27 ("md/raid5: allow the stripe_cache to grow and shrink.")
> Cc: stable@...r.kernel.org
> Signed-off-by: Gui-Dong Han <2045gemini@...il.com>

LGTM
Reviewed-by: Yu Kuai <yukuai3@...wei.com>

> ---
> v2:
> * In this patch v2, we've updated to use READ_ONCE() instead of plain
> reads when accessing max_nr_stripes and min_nr_stripes, since reads and
> writes can occur concurrently.
>    Thanks to Yu Kuai for the helpful advice.
> ---
> v3:
> * In this patch v3, we've updated to use WRITE_ONCE() in
> raid5_set_cache_size(), grow_one_stripe() and drop_one_stripe(), in order
> to pair READ_ONCE() with WRITE_ONCE().
>    Thanks to Yu Kuai for the helpful advice.
> ---
> v4:
> * In this patch v4, we've addressed several code style issues.
>    Thanks to Yu Kuai for the helpful advice.
> ---
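For reference, a generic sketch of the READ_ONCE()/WRITE_ONCE() pairing the
v2/v3 notes describe (hypothetical struct and helpers, not drivers/md/raid5.c
itself): once a lockless reader snapshots a field with READ_ONCE(), every
store to that field is annotated with WRITE_ONCE() so the compiler can
neither tear nor reload either access.

#include <linux/compiler.h>	/* READ_ONCE(), WRITE_ONCE() */

/* Hypothetical example, not the raid5 code itself. */
struct example_conf {
	int max;	/* both fields are written under a lock ...      */
	int min;	/* ... but read locklessly by a shrinker callback */
};

/* Writer side: still serialized by a lock elsewhere, yet annotated so
 * a concurrent lockless reader cannot observe a torn store. */
static void example_grow(struct example_conf *conf)
{
	WRITE_ONCE(conf->max, conf->max + 1);
}

/* Reader side: take one stable snapshot of each field and use only
 * those copies for both the check and the computation. */
static unsigned long example_count(struct example_conf *conf)
{
	int max = READ_ONCE(conf->max);
	int min = READ_ONCE(conf->min);

	if (max < min)		/* unlikely transient state, see above */
		return 0;
	return max - min;
}

In the patch below, the reader in raid5_cache_count() takes exactly this kind
of snapshot, while the writers keep their existing serialization.
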
>   drivers/md/raid5.c | 14 ++++++++------
>   1 file changed, 8 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> index 8497880135ee..30e118d10c0b 100644
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
> @@ -2412,7 +2412,7 @@ static int grow_one_stripe(struct r5conf *conf, gfp_t gfp)
>   	atomic_inc(&conf->active_stripes);
>   
>   	raid5_release_stripe(sh);
> -	conf->max_nr_stripes++;
> +	WRITE_ONCE(conf->max_nr_stripes, conf->max_nr_stripes + 1);
>   	return 1;
>   }
>   
> @@ -2707,7 +2707,7 @@ static int drop_one_stripe(struct r5conf *conf)
>   	shrink_buffers(sh);
>   	free_stripe(conf->slab_cache, sh);
>   	atomic_dec(&conf->active_stripes);
> -	conf->max_nr_stripes--;
> +	WRITE_ONCE(conf->max_nr_stripes, conf->max_nr_stripes - 1);
>   	return 1;
>   }
>   
> @@ -6820,7 +6820,7 @@ raid5_set_cache_size(struct mddev *mddev, int size)
>   	if (size <= 16 || size > 32768)
>   		return -EINVAL;
>   
> -	conf->min_nr_stripes = size;
> +	WRITE_ONCE(conf->min_nr_stripes, size);
>   	mutex_lock(&conf->cache_size_mutex);
>   	while (size < conf->max_nr_stripes &&
>   	       drop_one_stripe(conf))
> @@ -6832,7 +6832,7 @@ raid5_set_cache_size(struct mddev *mddev, int size)
>   	mutex_lock(&conf->cache_size_mutex);
>   	while (size > conf->max_nr_stripes)
>   		if (!grow_one_stripe(conf, GFP_KERNEL)) {
> -			conf->min_nr_stripes = conf->max_nr_stripes;
> +			WRITE_ONCE(conf->min_nr_stripes, conf->max_nr_stripes);
>   			result = -ENOMEM;
>   			break;
>   		}
> @@ -7390,11 +7390,13 @@ static unsigned long raid5_cache_count(struct shrinker *shrink,
>   				       struct shrink_control *sc)
>   {
>   	struct r5conf *conf = shrink->private_data;
> +	int max_stripes = READ_ONCE(conf->max_nr_stripes);
> +	int min_stripes = READ_ONCE(conf->min_nr_stripes);
>   
> -	if (conf->max_nr_stripes < conf->min_nr_stripes)
> +	if (max_stripes < min_stripes)
>   		/* unlikely, but not impossible */
>   		return 0;
> -	return conf->max_nr_stripes - conf->min_nr_stripes;
> +	return max_stripes - min_stripes;
>   }
>   
>   static struct r5conf *setup_conf(struct mddev *mddev)
> 

