Date:   Thu, 22 Dec 2022 12:23:19 +0100
From:   Jan Kara <jack@...e.cz>
To:     Kemeng Shi <shikemeng@...weicloud.com>
Cc:     axboe@...nel.dk, linux-block@...r.kernel.org,
        linux-kernel@...r.kernel.org, jack@...e.cz, kbusch@...nel.org
Subject: Re: [PATCH RESEND v2 2/5] sbitmap: remove redundant check in
 __sbitmap_queue_get_batch

On Thu 22-12-22 22:33:50, Kemeng Shi wrote:
> Commit fbb564a557809 ("lib/sbitmap: Fix invalid loop in
> __sbitmap_queue_get_batch()") mentioned that "Checking free bits when
> setting the target bits. Otherwise, it may reuse the busying bits."
> That commit added a check to ensure that all masked bits in the word
> are zero before the cmpxchg. With that check in place, the existing
> check after the cmpxchg, which verifies that at least one masked bit
> in the word was zero, is redundant.
> 
> Actually, the old value of the word before the cmpxchg is stored in
> val, and we filter out the busy bits in val with "(get_mask & ~val)"
> after the cmpxchg. So we will not reuse the busy bits mentioned in
> commit fbb564a557809 ("lib/sbitmap: Fix invalid loop in
> __sbitmap_queue_get_batch()"). Revert the newly-added check to remove
> the redundant check.
> 
> Signed-off-by: Kemeng Shi <shikemeng@...weicloud.com>

...

> diff --git a/lib/sbitmap.c b/lib/sbitmap.c
> index cb5e03a2d65b..11e75f4040fb 100644
> --- a/lib/sbitmap.c
> +++ b/lib/sbitmap.c
> @@ -518,11 +518,9 @@ unsigned long __sbitmap_queue_get_batch(struct sbitmap_queue *sbq, int nr_tags,
>  
>  			get_mask = ((1UL << nr_tags) - 1) << nr;
>  			val = READ_ONCE(map->word);
> -			do {
> -				if ((val & ~get_mask) != val)
> -					goto next;
> -			} while (!atomic_long_try_cmpxchg(ptr, &val,
> -							  get_mask | val));
> +			while (!atomic_long_try_cmpxchg(ptr, &val,
> +							  get_mask | val))
> +				;
>  			get_mask = (get_mask & ~val) >> nr;
>  			if (get_mask) {
>  				*offset = nr + (index << sb->shift);
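
For reference, the filtering the changelog relies on boils down to the
following mask arithmetic (a stand-alone user-space sketch with made-up
values, not the kernel code itself):

#include <stdio.h>

int main(void)
{
	/* Hypothetical request: nr_tags = 4 tags starting at bit nr = 2. */
	unsigned long nr = 2, nr_tags = 4;
	unsigned long get_mask = ((1UL << nr_tags) - 1) << nr; /* 0x3c */

	/* Suppose bit 3 was already busy; once the cmpxchg loop
	 * succeeds, val holds the word's previous value. */
	unsigned long val = 1UL << 3;                          /* 0x08 */

	/* Keep only the requested bits that were free in val, so the
	 * busy bit is never handed out a second time. */
	unsigned long acquired = (get_mask & ~val) >> nr;      /* 0x0d */

	printf("acquired tag mask: %#lx\n", acquired);
	return 0;
}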

So I agree this results in correct behavior, but it can change
performance. In the original code, we end up doing
atomic_long_try_cmpxchg() only for words where we have a chance of
getting all tags allocated; now we accept any word where we could
allocate at least one bit. Frankly, the original code looks rather
restrictive, and the fact that we look only from the first zero bit in
the word also seems unnecessarily restrictive, so maybe I'm missing
some details about what's expected from __sbitmap_queue_get_batch().
All in all, I wanted to point out that this needs more scrutiny from
someone who better understands the expectations on
__sbitmap_queue_get_batch().
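
To make that concrete: with GCC's __atomic builtins standing in for
atomic_long_try_cmpxchg() (an illustration, not the kernel code), the
new loop happily takes a word the old code would have skipped, and
returns only a partial batch:

#include <stdio.h>
#include <stdbool.h>

/* User-space stand-in for the kernel's atomic_long_try_cmpxchg(). */
static bool try_cmpxchg(unsigned long *ptr, unsigned long *old,
			unsigned long new)
{
	return __atomic_compare_exchange_n(ptr, old, new, false,
					   __ATOMIC_SEQ_CST,
					   __ATOMIC_SEQ_CST);
}

int main(void)
{
	unsigned long word = 1UL << 3;	/* bit 3 already busy */
	unsigned long nr = 2, nr_tags = 4;
	unsigned long get_mask = ((1UL << nr_tags) - 1) << nr;
	unsigned long val = word;

	/* The old code would have hit "goto next" here because bit 3
	 * overlaps get_mask; the new loop sets the bits regardless. */
	while (!try_cmpxchg(&word, &val, get_mask | val))
		;

	printf("granted %#lx of requested %#lx\n",
	       (get_mask & ~val) >> nr, get_mask >> nr);
	return 0;
}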

								Honza
-- 
Jan Kara <jack@...e.com>
SUSE Labs, CR
