Message-ID: <749e4759-3511-92d4-a19a-0f72c31b1ee6@lechnology.com>
Date: Wed, 20 Dec 2017 15:50:41 -0600
From: David Lechner <david@...hnology.com>
To: Michael Turquette <mturquette@...libre.com>,
linux-clk@...r.kernel.org
Cc: Stephen Boyd <sboyd@...eaurora.org>, linux-kernel@...r.kernel.org,
Jerome Brunet <jbrunet@...libre.com>
Subject: Re: [PATCH] clk: fix spin_lock/unlock imbalance on bad clk_enable() reentrancy
On 12/20/2017 02:33 PM, David Lechner wrote:
>
> So, as you can see, we get 4 warnings here. There is no problem with any
> clock provider or consumer (as far as I can tell). The bug here is that
> spin_trylock_irqsave() always returns true on non-SMP systems, which
> messes up the reference counting.
>
> usb20_phy_clk_enable() currently works because mach-davinci does not use
> the common clock framework. However, I am trying to move it to the
> common clock framework, which is how I discovered this bug.
One more thing that I mentioned previously, but that is worth spelling out in
detail: enabling CONFIG_DEBUG_SPINLOCK changes the behavior of
spin_trylock_irqsave() on non-SMP systems. It no longer always returns true,
so everything works as expected in the call chain that I described previously.
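For context, the reference counting that gets confused is the enable-lock
reentrancy bookkeeping in drivers/clk/clk.c. Here is a simplified sketch of
clk_enable_lock() (paraphrased, not a verbatim copy of the current source):

static DEFINE_SPINLOCK(enable_lock);
static struct task_struct *enable_owner;
static int enable_refcnt;

static unsigned long clk_enable_lock(void)
{
        unsigned long flags;

        /*
         * A failed trylock normally means either another CPU holds the
         * lock, or we are re-entering from the context that already
         * holds it; the reentrant case just bumps the refcount.
         */
        if (!spin_trylock_irqsave(&enable_lock, flags)) {
                if (enable_owner == current) {
                        enable_refcnt++;
                        return flags;
                }
                spin_lock_irqsave(&enable_lock, flags);
        }
        WARN_ON_ONCE(enable_owner != NULL);
        WARN_ON_ONCE(enable_refcnt != 0);
        enable_owner = current;
        enable_refcnt = 1;
        return flags;
}

If spin_trylock_irqsave() can never fail, a reentrant caller never reaches
the enable_owner == current branch. Instead it trips the WARN_ON_ONCE()
checks and resets enable_refcnt to 1, and the matching clk_enable_unlock()
calls then release the spinlock one time too many, which is the imbalance in
the subject line.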
The difference is that with CONFIG_DEBUG_SPINLOCK=n, we have
#define arch_spin_trylock(lock) ({ barrier(); (void)(lock); 1; })
But if CONFIG_DEBUG_SPINLOCK=y, then we have
static inline int arch_spin_trylock(arch_spinlock_t *lock)
{
        char oldval = lock->slock;

        lock->slock = 0;
        barrier();

        return oldval > 0;
}
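To make the difference concrete, here is a small standalone userspace program
(purely illustrative, mimicking the two UP implementations above rather than
using the kernel headers) that shows what a nested trylock attempt sees in
each configuration:

#include <stdio.h>

/* Mimic the UP arch_spinlock_t used by the debug variant: slock == 1 means unlocked. */
struct up_lock { char slock; };

/* CONFIG_DEBUG_SPINLOCK=n stub: unconditionally reports success. */
static int trylock_nodebug(struct up_lock *lock)
{
        (void)lock;
        return 1;
}

/* CONFIG_DEBUG_SPINLOCK=y variant: fails if the lock is already held. */
static int trylock_debug(struct up_lock *lock)
{
        char oldval = lock->slock;

        lock->slock = 0;
        return oldval > 0;
}

int main(void)
{
        struct up_lock a = { .slock = 1 };
        struct up_lock b = { .slock = 1 };
        int first, nested;

        first = trylock_nodebug(&a);
        nested = trylock_nodebug(&a);
        printf("nodebug: first=%d nested=%d\n", first, nested);  /* 1 1 */

        first = trylock_debug(&b);
        nested = trylock_debug(&b);
        printf("debug:   first=%d nested=%d\n", first, nested);  /* 1 0 */

        return 0;
}

With the debug variant the nested attempt fails, so a caller can detect that
it already holds the lock; with the stub, both attempts "succeed" and there
is no way to tell.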
Both versions come from include/linux/spinlock_up.h, which is included from
include/linux/spinlock.h:
#ifdef CONFIG_SMP
# include <asm/spinlock.h>
#else
# include <linux/spinlock_up.h>
#endif
So, the question I have is: what is the actual "correct" behavior of
spin_trylock_irqsave()? Is it really supposed to always return true when
CONFIG_DEBUG_SPINLOCK=n and CONFIG_SMP=n, or is this a bug?