Date:   Tue, 17 Jan 2017 15:19:01 -0500 (EST)
From:   David Miller <davem@...emloft.net>
To:     dvyukov@...gle.com
Cc:     davej@...hat.com, samuel@...tiz.org, glider@...gle.com,
        andreyknvl@...gle.com, netdev@...r.kernel.org
Subject: Re: [PATCH v3] net/irda: fix lockdep annotation

From: Dmitry Vyukov <dvyukov@...gle.com>
Date: Mon, 16 Jan 2017 22:10:52 +0100

> The current annotation uses a global variable as a recursion counter.
> The variable is neither atomic nor protected by a mutex, yet it is
> mutated by multiple threads. This sporadically triggers lockdep bug
> reports:
> 
> BUG: looking up invalid subclass: 4294967295
> ...
> _raw_spin_lock_irqsave_nested+0x120/0x180
> hashbin_delete+0x4fe/0x750
> __irias_delete_object+0xab/0x170
> irias_delete_object+0x5f/0xc0
> ircomm_tty_detach_cable+0x1d5/0x3f0
> ...
> 
> Make the hashbin_lock_depth variable atomic to prevent bug reports.
> 
> As-is, this causes an "unused variable 'depth'" warning when LOCKDEP
> is disabled. So also change the raw_spin_lock_irqsave_nested() macro
> to not produce that warning without LOCKDEP, similar to what
> raw_spin_lock_nested() already does.
> 
> Signed-off-by: Dmitry Vyukov <dvyukov@...gle.com>
> Cc: Dave Jones <davej@...hat.com>
> Cc: Samuel Ortiz <samuel@...tiz.org>
> Cc: David S. Miller <davem@...emloft.net>
> Cc: netdev@...r.kernel.org
> Fixes: c7630a4b932af ("[IrDA]: irda lockdep annotation")
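
For reference, the annotation being patched looks roughly like this -- a
sketch reconstructed from the changelog above and the old IrDA hashbin
code, not the verbatim tree contents:

	/* Global, non-atomic recursion counter used as the lockdep
	 * subclass.  Concurrent hashbin_delete() calls race on the
	 * increment/decrement, so the counter can underflow and the
	 * subclass wraps to 4294967295 as in the report above.
	 */
	static int hashbin_lock_depth = 0;

	if (hashbin->hb_type & HB_LOCK)
		spin_lock_irqsave_nested(&hashbin->hb_lock, flags,
					 hashbin_lock_depth++);
	...
	if (hashbin->hb_type & HB_LOCK) {
		spin_unlock_irqrestore(&hashbin->hb_lock, flags);
		hashbin_lock_depth--;
	}

The patch makes the counter an atomic_t, along the lines of:

	static atomic_t hashbin_lock_depth = ATOMIC_INIT(0);

	/* atomic_inc_return() gives the new value, so subtract one to
	 * keep the old post-increment semantics.  Without LOCKDEP the
	 * nested macro ignores its subclass argument, hence the
	 * "unused variable 'depth'" warning mentioned above.
	 */
	int depth = atomic_inc_return(&hashbin_lock_depth) - 1;

	spin_lock_irqsave_nested(&hashbin->hb_lock, flags, depth);

and one plausible shape of the accompanying macro fix is to have the
!LOCKDEP raw_spin_lock_irqsave_nested() evaluate its subclass argument,
the way raw_spin_lock_nested() already does:

	#define raw_spin_lock_irqsave_nested(lock, flags, subclass)	\
		raw_spin_lock_irqsave(((void)(subclass), (lock)), flags)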

I took a closer look at this, and really this whole lockdep thing is
nothing more than a hack.

The basic problem is that the free_func() passed into hashbin_delete()
can trigger hashbin operations on other hashtables.
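
Schematically, the recursion looks like this (the inner chain is taken
from the trace quoted above; the outer hashbin_delete() frame is
inferred, since the trace is elided there):

	hashbin_delete(A, free_func)	/* takes A->hb_lock */
	  free_func(obj)
	    ircomm_tty_detach_cable()
	      irias_delete_object()
	        __irias_delete_object()
	          hashbin_delete(B, ...)	/* nested hb_lock acquisition */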

This function is destroying a hash table completely, and is doing so
in a context where no new hash table entries can be inserted.  The
last thing we do is kfree() the hashtable, so this must be true.

Therefore, it is much better to just kill off all of this lockdep
stuff, and recode the teardown loop into something like:

	if (hashbin->hb_type & HB_LOCK)
		spin_lock_irqsave(&hashbin->hb_lock, flags);
	for (i = 0; i < HASHBIN_SIZE; i++) {
		while (1) {
			queue = dequeue_first((irda_queue_t **) &hashbin->hb_queue[i]);

			if (!queue)
				break;
			if (free_func) {
				/* Drop the lock across the callback, which
				 * may take other hashbin locks.
				 */
				if (hashbin->hb_type & HB_LOCK)
					spin_unlock_irqrestore(&hashbin->hb_lock, flags);
				free_func(queue);
				if (hashbin->hb_type & HB_LOCK)
					spin_lock_irqsave(&hashbin->hb_lock, flags);
			}
		}
	}
	hashbin->hb_current = NULL;
	hashbin->magic = ~HB_MAGIC;
	if (hashbin->hb_type & HB_LOCK)
		spin_unlock_irqrestore(&hashbin->hb_lock, flags);

At which point the recursive locking becomes impossible, since the
lock is never held across the free_func() callback.
