Message-ID: <3ef0bee9-e0e5-a249-9dfb-3ea3c0af2160@gmail.com>
Date: Thu, 26 Nov 2020 13:44:36 +0000
From: Pavel Begunkov <asml.silence@...il.com>
To: Ming Lei <ming.lei@...hat.com>
Cc: Jens Axboe <axboe@...nel.dk>, linux-block@...r.kernel.org,
Omar Sandoval <osandov@...ndov.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/4] sbitmap: remove swap_lock
On 26/11/2020 02:46, Ming Lei wrote:
> On Sun, Nov 22, 2020 at 03:35:46PM +0000, Pavel Begunkov wrote:
>> map->swap_lock protects map->cleared from concurrent modification;
>> however, sbitmap_deferred_clear() already drains it atomically, so
>> it's guaranteed not to lose bits on a concurrent
>> sbitmap_deferred_clear().
>>
>> A single-threaded, tag-heavy test on top of null_blk showed a ~1.5%
>> throughput increase and a 3% -> 1% reduction in sbitmap_get() cycles
>> according to perf.
>>
>> Signed-off-by: Pavel Begunkov <asml.silence@...il.com>
>> ---
>>  include/linux/sbitmap.h |  5 -----
>>  lib/sbitmap.c           | 14 +++-----------
>>  2 files changed, 3 insertions(+), 16 deletions(-)
>>
>> diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h
>> index e40d019c3d9d..74cc6384715e 100644
>> --- a/include/linux/sbitmap.h
>> +++ b/include/linux/sbitmap.h
>> @@ -32,11 +32,6 @@ struct sbitmap_word {
>> * @cleared: word holding cleared bits
>> */
>> unsigned long cleared ____cacheline_aligned_in_smp;
>> -
>> - /**
>> - * @swap_lock: Held while swapping word <-> cleared
>> - */
>> - spinlock_t swap_lock;
>> } ____cacheline_aligned_in_smp;
>>
>> /**
>> diff --git a/lib/sbitmap.c b/lib/sbitmap.c
>> index c1c8a4e69325..4fd877048ba8 100644
>> --- a/lib/sbitmap.c
>> +++ b/lib/sbitmap.c
>> @@ -15,13 +15,9 @@
>> static inline bool sbitmap_deferred_clear(struct sbitmap_word *map)
>> {
>> unsigned long mask, val;
>> - bool ret = false;
>> - unsigned long flags;
>>
>> - spin_lock_irqsave(&map->swap_lock, flags);
>> -
>> - if (!map->cleared)
>> - goto out_unlock;
>> + if (!READ_ONCE(map->cleared))
>> + return false;
>
> This way might break sbitmap_find_bit_in_index()/sbitmap_get_shallow().
> Currently, if sbitmap_deferred_clear() returns false, it means nothing
> can be allocated from this word. With this patch, even though 'false'
> is returned, free bits might still be available because another
> sbitmap_deferred_clear() can run concurrently.
But that can happen anyway if someone frees a request right after we
return from sbitmap_deferred_clear(). Can you elaborate on what exactly
it breaks? Something in the sbq wakeup paths?
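
Roughly the interleaving I have in mind, which is possible even with
swap_lock held around the check (the call chain on the freeing side is
just one example):

  CPU0: sbitmap_find_bit_in_index()        CPU1: completion path
    sbitmap_deferred_clear()
      map->cleared == 0 -> return false
                                             sbitmap_deferred_clear_bit()
                                               set_bit(nr, &map->cleared)
    /* CPU0 still sees the word as full and
       moves on, though a bit is free now */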
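
For reference, with the lock gone the function as a whole should end up
roughly like below, assuming the xchg()/cmpxchg() drain at the tail
(not quoted above) is left untouched by this hunk:

  static inline bool sbitmap_deferred_clear(struct sbitmap_word *map)
  {
  	unsigned long mask, val;

  	if (!READ_ONCE(map->cleared))
  		return false;

  	/* grab the cleared bits; concurrent callers each take a disjoint
  	 * (possibly empty) mask, so no bit can be drained twice or lost */
  	mask = xchg(&map->cleared, 0);

  	/* return the cleared bits to the allocation word */
  	do {
  		val = map->word;
  	} while (cmpxchg(&map->word, val, val & ~mask) != val);

  	return true;
  }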
--
Pavel Begunkov