Message-ID: <7020d479-e8c7-7249-c6cd-c6d01b01c92a@arrikto.com>
Date: Wed, 13 Nov 2019 13:50:59 +0200
From: Nikos Tsironis <ntsironis@...ikto.com>
To: Mikulas Patocka <mpatocka@...hat.com>
Cc: tglx@...utronix.de, linux-rt-users@...r.kernel.org,
Mike Snitzer <msnitzer@...hat.com>,
Scott Wood <swood@...hat.com>,
Ilias Tsitsimpis <iliastsi@...ikto.com>, dm-devel@...hat.com,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
Daniel Wagner <dwagner@...e.de>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Subject: Re: [PATCH RT 2/2 v2] list_bl: avoid BUG when the list is not locked
On 11/13/19 1:16 PM, Mikulas Patocka wrote:
>
>
> On Wed, 13 Nov 2019, Nikos Tsironis wrote:
>
>> On 11/12/19 6:16 PM, Mikulas Patocka wrote:
>>> list_bl would crash with BUG() if we used it without locking. dm-snapshot
>>> uses its own locking on realtime kernels (it can't use list_bl because
>>> list_bl uses raw spinlock and dm-snapshot takes other non-raw spinlocks
>>> while holding bl_lock).
>>>
>>> To avoid this BUG, we must set LIST_BL_LOCKMASK = 0.
>>>
>>> This patch is intended only for the realtime kernel patchset, not for the
>>> upstream kernel.
>>>
>>> Signed-off-by: Mikulas Patocka <mpatocka@...hat.com>
>>>
>>> Index: linux-rt-devel/include/linux/list_bl.h
>>> ===================================================================
>>> --- linux-rt-devel.orig/include/linux/list_bl.h 2019-11-07 14:01:51.000000000 +0100
>>> +++ linux-rt-devel/include/linux/list_bl.h 2019-11-08 10:12:49.000000000 +0100
>>> @@ -19,7 +19,7 @@
>>> * some fast and compact auxiliary data.
>>> */
>>>
>>> -#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
>>> +#if (defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)) && !defined(CONFIG_PREEMPT_RT_BASE)
>>> #define LIST_BL_LOCKMASK 1UL
>>> #else
>>> #define LIST_BL_LOCKMASK 0UL
>>> @@ -161,9 +161,6 @@ static inline void hlist_bl_lock(struct
>>> bit_spin_lock(0, (unsigned long *)b);
>>> #else
>>> raw_spin_lock(&b->lock);
>>> -#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
>>> - __set_bit(0, (unsigned long *)b);
>>> -#endif
>>> #endif
>>> }
>>>
>>
>> Hi Mikulas,
>>
>> I think removing __set_bit()/__clear_bit() breaks hlist_bl_is_locked(),
>> which is used by the RCU variant of list_bl.
>>
>> Nikos
>
> OK. so I can remove this part of the patch.
>
I think this causes another problem. LIST_BL_LOCKMASK is used in various
functions to set and clear the lock bit, e.g. in hlist_bl_first(). So, if
we lock the list through hlist_bl_lock(), thus setting the lock bit with
__set_bit(), and then call hlist_bl_first() to get the first element, the
returned pointer will be invalid: since LIST_BL_LOCKMASK is zero, the
lock bit is not masked off, so the least significant bit of the returned
pointer will be 1.
I think for dm-snapshot to work using its own locking, and without
list_bl complaining, the following is sufficient:
--- a/include/linux/list_bl.h
+++ b/include/linux/list_bl.h
@@ -25,7 +25,7 @@
#define LIST_BL_LOCKMASK 0UL
#endif
-#ifdef CONFIG_DEBUG_LIST
+#if defined(CONFIG_DEBUG_LIST) && !defined(CONFIG_PREEMPT_RT_BASE)
#define LIST_BL_BUG_ON(x) BUG_ON(x)
#else
#define LIST_BL_BUG_ON(x)
Nikos
> Mikulas
>
>>> @@ -172,9 +169,6 @@ static inline void hlist_bl_unlock(struc
>>> #ifndef CONFIG_PREEMPT_RT_BASE
>>> __bit_spin_unlock(0, (unsigned long *)b);
>>> #else
>>> -#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
>>> - __clear_bit(0, (unsigned long *)b);
>>> -#endif
>>> raw_spin_unlock(&b->lock);
>>> #endif
>>> }
>>>
>>
>