Message-ID: <20130412184542.GB19966@hmsreliant.think-freely.org>
Date:	Fri, 12 Apr 2013 14:45:42 -0400
From:	Neil Horman <nhorman@...driver.com>
To:	Bart Van Assche <bvanassche@....org>
Cc:	David Miller <davem@...emloft.net>, netdev@...r.kernel.org,
	Ingo Molnar <mingo@...hat.com>
Subject: Re: [PATCH RFC] spinlock: split out debugging check from
 spin_lock_mutex

On Fri, Apr 12, 2013 at 04:01:04PM +0200, Bart Van Assche wrote:
> On 04/12/13 13:32, Neil Horman wrote:
> > On Fri, Apr 12, 2013 at 08:27:31AM +0200, Bart Van Assche wrote:
> >> On 04/11/13 21:14, Neil Horman wrote:
> >>>      This resulted from my commit ca99ca14c which introduced a mutex_trylock
> >>>      operation in a path that could execute in interrupt context.  When mutex
> >>>      debugging is enabled, the above warns the user when we are in fact executing in
> >>>      interrupt context.
> >>>
> >>>      I think this is a false positive however.  The check is intended to catch users
> >>>      who might be issuing sleeping calls in irq context, but the use of mutex_trylock
> >>>      here is guaranteed not to sleep.
> >>>
> >>>      We could fix this by replacing the DEBUG_LOCK_WARN_ON check in spin_lock_mutex
> >>>      with a __might_sleep call in the appropriate parent mutex operations, but for
> >>>      the sake of efficiency (which, it seems, is why the check was put in the spin lock
> >>>      code only when debug is enabled), let's split the spin_lock_mutex call into two
> >>>      components, where the outer component does the debug checking.  Then
> >>>      mutex_trylock can just call the inner part, as it's callable from irq context
> >>>      safely.
> >>
> >> Sorry but I'm not yet convinced that it's safe to invoke
> >> mutex_trylock() from IRQ context. Please have a look at the
> >> implementation of mutex_set_owner(), which is invoked by
> >> mutex_trylock(). mutex_set_owner() stores the value of the "current"
> >> pointer into lock->owner. The value of "current" does not have a
> >> meaning in IRQ context.
> > 
> > That's irrelevant, at least as far as deadlock safety is concerned.  current will
> > be set to the process that was running when we were interrupted, but it won't
> > change during the course of the irq handler, which is all that matters.  The
> > lock->owner field is used for optimistic spinning.  The worst that will happen
> > is, if CONFIG_MUTEX_SPIN_ON_OWNER is configured, another process may wait on
> > this mutex, spinning on the wrong task to release it (see mutex_spin_on_owner).
> > That's not efficient, but it's not deadlock-prone, and it's not even that
> > inefficient, when you consider that the critical path in the netpoll code is
> > relatively short.  And using the trylock here is certainly preferable to the
> > memory corruption that was possible previously.
> 
> I think there is another issue with invoking mutex_trylock() and mutex_unlock()
> from IRQ context: as far as I can see, if CONFIG_DEBUG_MUTEXES is disabled,
> __mutex_unlock_common_slowpath() uses spin_lock() to lock mutex.wait_lock and
> hence invoking mutex_unlock() from both non-IRQ and IRQ context is not safe.
> Any thoughts about that?
> 
Yeah, it's ugly, but in this specific case it's ok.  The netpoll code (in
netpoll_send_skb) disables irqs on the local cpu before entering the netpoll
code path any further, so whenever we frob this mutex from the local cpu, we're
guaranteed not to get pre-empted by an irq.
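
Roughly, the ordering is (just a sketch to illustrate the point, not the
actual netpoll code; dev_mutex is a placeholder name):

	unsigned long flags;

	/* dev_mutex is a stand-in for the mutex taken on this path */
	local_irq_save(flags);             /* no irqs on this cpu past this point */
	if (mutex_trylock(&dev_mutex)) {   /* never sleeps, so ok with irqs off */
		/* ... short critical path: transmit the skb ... */
		mutex_unlock(&dev_mutex);
	}
	local_irq_restore(flags);

Since irqs stay off on this cpu for the whole lock/unlock window, an irq
handler on the same cpu can't come in and contend the mutex's wait_lock.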

> With v2 of your patch and CONFIG_DEBUG_MUTEXES enabled I get the warning below:
> 
Another instance of the same issue - it's because we use spin_lock_mutex in
mutex_unlock, rather than spin_lock_mutex_raw.  Can you fix it up by hand, or
shall I update the patch?
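
For reference, the intended split looks roughly like this (a sketch only, not
the literal patch; the real debug variant also keeps its lock-magic check):

	/* sketch of the proposed split, not the literal patch */

	/* inner part: just takes the lock; callable from irq context */
	#define spin_lock_mutex_raw(lock, flags)			\
		do {							\
			local_irq_save(flags);				\
			arch_spin_lock(&(lock)->rlock.raw_lock);	\
		} while (0)

	/* outer part: debug check plus the inner part, for paths that	\
	 * must not run in irq context */
	#define spin_lock_mutex(lock, flags)				\
		do {							\
			DEBUG_LOCKS_WARN_ON(in_interrupt());		\
			spin_lock_mutex_raw(lock, flags);		\
		} while (0)

mutex_trylock (and, per the above, mutex_unlock) would call the raw variant,
so the in_interrupt() warning only fires in the paths that can actually sleep.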

Neil

