Message-ID: <151372258747.45969.10121622799666696251@resonance>
Date: Tue, 19 Dec 2017 14:29:47 -0800
From: Michael Turquette <mturquette@...libre.com>
To: David Lechner <david@...hnology.com>, linux-clk@...r.kernel.org
Cc: "Stephen Boyd" <sboyd@...eaurora.org>, linux-kernel@...r.kernel.org,
	"Jerome Brunet" <jbrunet@...libre.com>
Subject: Re: [PATCH] clk: fix spin_lock/unlock imbalance on bad clk_enable() reentrancy

Hi David,

Quoting David Lechner (2017-12-15 08:29:56)
> On 12/12/2017 10:14 PM, David Lechner wrote:
> > On 12/12/2017 05:43 PM, David Lechner wrote:
> >> If clk_enable() is called in a reentrant way and
> >> spin_trylock_irqsave() is not working as expected, it is possible to
> >> get a negative enable_refcnt, which results in a missed call to
> >> spin_unlock_irqrestore().
> >>
> >> It works like this:
> >>
> >> 1. clk_enable() is called.
> >> 2. clk_enable_lock() calls spin_trylock_irqsave() and sets
> >>    enable_refcnt = 1.
> >> 3. Another clk_enable() is called before the first has returned
> >>    (reentrant), but somehow spin_trylock_irqsave() is returning true.
> >>    (I'm not sure how/why this is happening yet, but it is happening
> >>    to me with the arch/arm/mach-davinci clocks that I am working on.)
> >
> > I think I have figured out that since CONFIG_SMP=n and
> > CONFIG_DEBUG_SPINLOCK=n on my kernel,
> >
> > #define arch_spin_trylock(lock)	({ barrier(); (void)(lock); 1; })
> >
> > in include/linux/spinlock_up.h is causing the problem.
> >
> > So, basically, reentrancy of clk_enable() is broken for non-SMP
> > systems, but I'm not sure I know how to fix it.
>
> Here is what I came up with for a fix. If it looks reasonable, I will
> resend as a proper patch.
>
> diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
> index bb1b1f9..53fad5f 100644
> --- a/drivers/clk/clk.c
> +++ b/drivers/clk/clk.c
> @@ -136,12 +136,23 @@ static void clk_prepare_unlock(void)
>  	mutex_unlock(&prepare_lock);
>  }
>
> +#ifdef CONFIG_SMP
> +#define NO_SMP	0
> +#else
> +#define NO_SMP	1
> +#endif
> +
>  static unsigned long clk_enable_lock(void)
>  	__acquires(enable_lock)
>  {
> -	unsigned long flags;
> +	unsigned long flags = 0;
>
> -	if (!spin_trylock_irqsave(&enable_lock, flags)) {
> +	/*
> +	 * spin_trylock_irqsave() always returns true on non-SMP systems
> +	 * (unless spin lock debugging is enabled), so don't call
> +	 * spin_trylock_irqsave() unless we are on an SMP system.
> +	 */

Ugh, wrapped lines in patch make me sad.

> +	if (NO_SMP || !spin_trylock_irqsave(&enable_lock, flags)) {

I'm not sure that this looks reasonable. The inverse logic (NO_SMP = 0
being equivalent to SMP = 1) just makes things harder to read for no
reason.

More to the point, did you fix your enable/disable call imbalance? If
so, did you test that fix without this patch? I'd like to know if
fixing the enable/disable imbalance is Good Enough.

I'd prefer to take only that change and not this patch.

Best regards,
Mike

>  		if (enable_owner == current) {
>  			enable_refcnt++;
>  			__acquire(enable_lock);
>
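[Editor's note: to see concretely why the UP trylock stub breaks the
refcounting described above, here is a minimal, self-contained userspace
model. All names (trylock, enable_lock, enable_unlock, me) are hypothetical
stand-ins for the clk.c internals, not kernel APIs. Because trylock() is
hard-coded to succeed, exactly as arch_spin_trylock() is on CONFIG_SMP=n,
CONFIG_DEBUG_SPINLOCK=n builds, the reentrant call takes the
fresh-acquisition path, resets the refcount to 1 instead of bumping it to 2,
and the final unlock drives it negative, skipping the real release.]

	#include <stdio.h>

	static void *enable_owner;	/* models the task that "holds" the lock */
	static int enable_refcnt;	/* models the reentrancy refcount */

	/* UP stub: arch_spin_trylock() is ({ barrier(); (void)(lock); 1; }) */
	static int trylock(void)
	{
		return 1;	/* always "succeeds", even while already held */
	}

	static void enable_lock(void *task)
	{
		if (!trylock()) {
			if (enable_owner == task) {
				enable_refcnt++;	/* correct reentrant path */
				return;
			}
			/* real code would spin_lock_irqsave() here */
		}
		/* fresh-acquisition path: resets the refcount to 1 */
		enable_owner = task;
		enable_refcnt = 1;
	}

	static void enable_unlock(void)
	{
		if (--enable_refcnt)
			return;		/* refcnt != 0: skip the real unlock */
		enable_owner = NULL;
		/* real code would spin_unlock_irqrestore() here */
	}

	int main(void)
	{
		int me;			/* stand-in for "current" */

		enable_lock(&me);	/* outer clk_enable():  refcnt = 1       */
		enable_lock(&me);	/* reentrant call: trylock() wrongly     */
					/* succeeds, refcnt reset to 1, not 2    */
		enable_unlock();	/* inner unlock: refcnt = 0, lock        */
					/* released while the outer caller      */
					/* still "owns" it                      */
		enable_unlock();	/* outer unlock: refcnt = -1, real       */
					/* spin_unlock_irqrestore() is skipped  */

		printf("enable_refcnt = %d (expected 0)\n", enable_refcnt);
		return 0;
	}

Compiling and running this prints enable_refcnt = -1, matching the negative
enable_refcnt and the missed spin_unlock_irqrestore() in the report above.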
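[Editor's note: on the readability objection to the inverted NO_SMP define,
one possible alternative is the kernel's existing IS_ENABLED() helper from
include/linux/kconfig.h. This is a sketch only, assuming the same clk.c
context as the patch above; nobody in the thread proposed it.]

	static unsigned long clk_enable_lock(void)
		__acquires(enable_lock)
	{
		unsigned long flags = 0;

		/*
		 * Sketch: spin_trylock_irqsave() cannot meaningfully fail on
		 * UP builds, so only consult it when CONFIG_SMP is set.
		 * IS_ENABLED() avoids introducing an inverted NO_SMP define.
		 */
		if (!IS_ENABLED(CONFIG_SMP) ||
		    !spin_trylock_irqsave(&enable_lock, flags)) {
			if (enable_owner == current) {
				enable_refcnt++;
				__acquire(enable_lock);
				return flags;
			}
			spin_lock_irqsave(&enable_lock, flags);
		}
		WARN_ON_ONCE(enable_owner != NULL);
		WARN_ON_ONCE(enable_refcnt != 0);
		enable_owner = current;
		enable_refcnt = 1;
		return flags;
	}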