Message-ID: <CACT4Y+Yx3wVAgxJRYKMOYM419NTSc9naSxUVfwCb21kjvAnF6Q@mail.gmail.com>
Date: Thu, 19 Jan 2017 11:05:36 +0100
From: Dmitry Vyukov <dvyukov@...gle.com>
To: David Miller <davem@...emloft.net>
Cc: Dave Jones <davej@...hat.com>, Samuel Ortiz <samuel@...tiz.org>,
Alexander Potapenko <glider@...gle.com>,
andreyknvl <andreyknvl@...gle.com>,
netdev <netdev@...r.kernel.org>
Subject: Re: [PATCH v3] net/irda: fix lockdep annotation
On Tue, Jan 17, 2017 at 9:19 PM, David Miller <davem@...emloft.net> wrote:
> From: Dmitry Vyukov <dvyukov@...gle.com>
> Date: Mon, 16 Jan 2017 22:10:52 +0100
>
>> The current annotation uses a global variable as a recursion counter.
>> The variable is neither atomic nor protected by a mutex, yet it is
>> mutated by multiple threads. This causes sporadic lockdep bug reports:
>>
>> BUG: looking up invalid subclass: 4294967295
>> ...
>> _raw_spin_lock_irqsave_nested+0x120/0x180
>> hashbin_delete+0x4fe/0x750
>> __irias_delete_object+0xab/0x170
>> irias_delete_object+0x5f/0xc0
>> ircomm_tty_detach_cable+0x1d5/0x3f0
>> ...
>>
>> Make the hashbin_lock_depth variable atomic to prevent bug reports.
>>
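
(For anyone skimming the thread: the v3 change is essentially the
following. This is a sketch of the idea only, not the exact diff, and
the field names are from memory.)

	static atomic_t hashbin_lock_depth = ATOMIC_INIT(0);
	...
	if (hashbin->hb_type & HB_LOCK)
		/* atomic inc/dec, so concurrent deletes cannot lose updates
		 * and feed lockdep a bogus subclass (-1 == 4294967295) */
		spin_lock_irqsave_nested(&hashbin->hb_spinlock, flags,
				atomic_inc_return(&hashbin_lock_depth) - 1);
	...
	if (hashbin->hb_type & HB_LOCK) {
		spin_unlock_irqrestore(&hashbin->hb_spinlock, flags);
		atomic_dec(&hashbin_lock_depth);
	}
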
>> As-is, this causes an "unused variable 'depth'" warning without LOCKDEP.
>> So also change the raw_spin_lock_irqsave_nested() macro so that it does
>> not cause the warning without LOCKDEP, similar to what
>> raw_spin_lock_nested() already does.
>>
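(And the spinlock.h side is along these lines, reusing the
"(void)(subclass)" trick that raw_spin_lock_nested() already uses for
!LOCKDEP builds. Again a sketch, not the exact hunk.)

	#else /* !CONFIG_DEBUG_LOCK_ALLOC */
	/* Evaluate and discard 'subclass' so callers don't trip an
	 * "unused variable" warning when lockdep is disabled. */
	# define raw_spin_lock_irqsave_nested(lock, flags, subclass)	\
		do {							\
			typecheck(unsigned long, flags);		\
			flags = _raw_spin_lock_irqsave(((void)(subclass), (lock))); \
		} while (0)
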
>> Signed-off-by: Dmitry Vyukov <dvyukov@...gle.com>
>> Cc: Dave Jones <davej@...hat.com>
>> Cc: Samuel Ortiz <samuel@...tiz.org>
>> Cc: David S. Miller <davem@...emloft.net>
>> Cc: netdev@...r.kernel.org
>> Fixes: c7630a4b932af ("[IrDA]: irda lockdep annotation")
>
> I took a closer look at this, and really this whole lockdep thing is
> nothing more than a hack.
>
> The basic problem is that the free_func() passed into hashbin_delete()
> can trigger hashbin operations on other hashtables.
>
> This function is destroying a hash table completely, and is doing so
> in a context where no new hash table entries can be inserted. The
> last thing we do is kfree() the hashtable so this must be true.
>
> Therefore, it is much better to just kill off all of this lockdep
> stuff, and recode the teardown loop into something like:
>
> 	if (hashbin->hb_type & HB_LOCK)
> 		spin_lock_irqsave(&hashbin->hb_lock, flags);
>
> 	for (i = 0; i < HASHBIN_SIZE; i++) {
> 		while (1) {
> 			queue = dequeue_first((irda_queue_t **) &hashbin->hb_queue[i]);
> 			if (!queue)
> 				break;
>
> 			if (free_func) {
> 				if (hashbin->hb_type & HB_LOCK)
> 					spin_unlock_irqrestore(&hashbin->hb_lock, flags);
> 				free_func(queue);
> 				if (hashbin->hb_type & HB_LOCK)
> 					spin_lock_irqsave(&hashbin->hb_lock, flags);
> 			}
> 		}
> 	}
>
> 	hashbin->hb_current = NULL;
> 	hashbin->magic = ~HB_MAGIC;
>
> 	if (hashbin->hb_type & HB_LOCK)
> 		spin_unlock_irqrestore(&hashbin->hb_lock, flags);
>
> At which point the recursive locking becomes impossible.
Thanks for looking into it! This particular issue bothers my fuzzers
considerably. I agree that removing recursion is better.
So how do we proceed? Will you mail this as a real patch?