Message-ID: <Pine.LNX.4.64.0803122236030.7266@twin.jikos.cz>
Date: Wed, 12 Mar 2008 22:40:37 +0100 (CET)
From: Jiri Kosina <jkosina@...e.cz>
To: Frank Munzert <frankm@...ux.vnet.ibm.com>
cc: mingo@...e.hu, linux-kernel@...r.kernel.org
Subject: Re: BUG: lock held when returning to user space
On Wed, 12 Mar 2008, Frank Munzert wrote:
> Unit record devices are not meant for concurrent read or write by
> multiple users, that's why we need to serialize access. The driver's
> open method uses mutex_trylock or mutex_lock_interruptible to ensure
> exclusive access to the device, while its release method uses
> mutex_unlock.
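(For reference, the pattern described above boils down to roughly the following; the identifiers are illustrative and not taken from the actual vmur driver:)

#include <linux/fs.h>
#include <linux/mutex.h>
#include <linux/errno.h>

static DEFINE_MUTEX(urd_mutex);

static int urd_open(struct inode *inode, struct file *file)
{
	if (file->f_flags & O_NONBLOCK) {
		/* non-blocking open: fail if somebody already holds the device */
		if (!mutex_trylock(&urd_mutex))
			return -EBUSY;
	} else {
		/* blocking open: wait for the device, bail out on a signal */
		if (mutex_lock_interruptible(&urd_mutex))
			return -ERESTARTSYS;
	}
	return 0;	/* the mutex stays held while we are back in user space */
}

static int urd_release(struct inode *inode, struct file *file)
{
	mutex_unlock(&urd_mutex);	/* possibly run by a different task */
	return 0;
}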
That is incorrect usage of a mutex.
> As a consequence, lockdep complains about locks being held when returning to
> user space. We used a very simple char device driver (appended below) to
> produce this message:
> ------------------------------------------------
> testapp/2683 is leaving the kernel with locks still held!
[ ... ]
> For the vmur device driver it is crucial to have only one process access a
> given unit record device node at a given time. So having open hold the mutex
> and return to user space is exactly what we want.
No, it is not. You probably just want a single-bit flag somewhere that you
update and check atomically; see below.
> Is there any annotation to tell lockdep to suppress or bypass this kind
> of warning?
No, because it is a real bug to use mutexes this way. What happens if the
process that has called open() on your device (and has not closed it yet)
calls fork()? The child inherits the file descriptor, so the mutex may end
up being released by a task other than the one that acquired it, which
mutexes do not permit.
Another breakage scenario -- what if the file descriptor is sent through a
unix socket to another process? And so on.
Look at test_and_set_bit_lock() and clear_bit_unlock(); those are what you
want to use.
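A hedged sketch of that approach, again with made-up identifiers rather than
the real vmur code:

#include <linux/bitops.h>
#include <linux/fs.h>
#include <linux/errno.h>

#define URD_IN_USE	0	/* bit number inside urd_flags */

static unsigned long urd_flags;

static int urd_open(struct inode *inode, struct file *file)
{
	/* atomically claim the device; returns the old value of the bit */
	if (test_and_set_bit_lock(URD_IN_USE, &urd_flags))
		return -EBUSY;	/* somebody else already has it open */
	return 0;
}

static int urd_release(struct inode *inode, struct file *file)
{
	/* drop the claim; correct no matter which task does the final close() */
	clear_bit_unlock(URD_IN_USE, &urd_flags);
	return 0;
}

Unlike a mutex, the bit is not tied to the task that set it, so it does not
matter which process ends up doing the last close(), and lockdep has nothing
to complain about.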
--
Jiri Kosina
SUSE Labs