Message-ID: <792b0caa-0e99-94b2-60bf-90ad719c63d7@huaweicloud.com>
Date:   Thu, 22 Dec 2022 19:49:12 +0800
From:   Kemeng Shi <shikemeng@...weicloud.com>
To:     Jan Kara <jack@...e.cz>
Cc:     axboe@...nel.dk, linux-block@...r.kernel.org,
        linux-kernel@...r.kernel.org, kbusch@...nel.org
Subject: Re: [PATCH RESEND v2 2/5] sbitmap: remove redundant check in
 __sbitmap_queue_get_batch


Hi Jan, thanks for the review.
On 12/22/2022 7:23 PM, Jan Kara wrote:
>> diff --git a/lib/sbitmap.c b/lib/sbitmap.c
>> index cb5e03a2d65b..11e75f4040fb 100644
>> --- a/lib/sbitmap.c
>> +++ b/lib/sbitmap.c
>> @@ -518,11 +518,9 @@ unsigned long __sbitmap_queue_get_batch(struct sbitmap_queue *sbq, int nr_tags,
>>  
>>  			get_mask = ((1UL << nr_tags) - 1) << nr;
>>  			val = READ_ONCE(map->word);
>> -			do {
>> -				if ((val & ~get_mask) != val)
>> -					goto next;
>> -			} while (!atomic_long_try_cmpxchg(ptr, &val,
>> -							  get_mask | val));
>> +			while (!atomic_long_try_cmpxchg(ptr, &val,
>> +							  get_mask | val))
>> +				;
>>  			get_mask = (get_mask & ~val) >> nr;
>>  			if (get_mask) {
>>  				*offset = nr + (index << sb->shift);
> 
> So I agree this will result in correct behavior but it can change
> performance. In the original code, we end up doing
> atomic_long_try_cmpxchg() only for words where we have a chance of getting
> all tags allocated. Now we just accept any word where we could allocate at
> least one bit. Frankly, the original code looks rather restrictive, and the
> fact that we look only from the first zero bit in the word also looks
> unnecessarily restrictive, so maybe I'm missing some details about what's
> expected from __sbitmap_queue_get_batch(). So all in all, I wanted to point
> out that this needs more scrutiny from someone who better understands the
> expectations of __sbitmap_queue_get_batch().
In the very beginning, __sbitmap_queue_get_batch() would return even if
only some of the requested tags were allocated. The recent commit
fbb564a557809 ("lib/sbitmap: Fix invalid loop in
__sbitmap_queue_get_batch()") assumed that the old code could reuse busy
bits, and changed the behavior of __sbitmap_queue_get_batch() to require
all requested tags. However, the old code actually never reuses busy
bits. So I am trying to revert this wrong fix and restore the behavior
that __sbitmap_queue_get_batch() was designed to have from the beginning.
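For reference, this is roughly the allocation loop as it stood before
fbb564a557809 (reconstructed from that commit's diff, so surrounding
details may differ slightly):
	get_mask = ((1UL << nr_tags) - 1) << nr;
	do {
		/* re-read the word each round; bits already set in val
		 * stay set in the new value, so busy bits are not reused */
		val = READ_ONCE(map->word);
		ret = atomic_long_cmpxchg(ptr, val, get_mask | val);
	} while (ret != val);
	/* keep only the bits we flipped from 0 to 1; this may be a
	 * partial batch if part of get_mask was already set in ret */
	get_mask = (get_mask & ~ret) >> nr;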

Besides, if we keep the get-all-tags behavior, the check below is
redundant:
	get_mask = (get_mask & ~val) >> nr;
	if (get_mask) {
		...
	}
We only reach this point if we got all of the requested tags, so the
check always passes and should be removed from the old code.
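To make this concrete, here is a small worked example with made-up
values (nr_tags = 2, nr = 4):
	get_mask = ((1UL << 2) - 1) << 4;	/* 0x30 */
	/* with get-all-tags behavior, the cmpxchg only succeeds when
	 * (val & get_mask) == 0, e.g. val = 0x0f */
	get_mask = (get_mask & ~0x0fUL) >> 4;	/* (0x30 & ~0x0f) >> 4 = 0x3 */
	/* get_mask == 0x3, never zero, so "if (get_mask)" always passes */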

-- 
Best wishes
Kemeng Shi
