Message-ID: <20170711173924.GA16509@atomide.com>
Date: Tue, 11 Jul 2017 10:39:24 -0700
From: Tony Lindgren <tony@...mide.com>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Sebastian Reichel <sebastian.reichel@...labora.co.uk>,
LKML <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Ingo Molnar <mingo@...nel.org>,
"H. Peter Anvin" <hpa@...or.com>, Pavel Machek <pavel@....cz>,
Linus Walleij <linus.walleij@...aro.org>,
Grygorii Strashko <grygorii.strashko@...com>
Subject: Re: [GIT pull] irq updates for 4.13

* Thomas Gleixner <tglx@...utronix.de> [170711 10:17]:
> On Tue, 11 Jul 2017, Tony Lindgren wrote:
> > * Linus Torvalds <torvalds@...ux-foundation.org> [170711 08:40]:
> > > On Tue, Jul 11, 2017 at 7:41 AM, Thomas Gleixner <tglx@...utronix.de> wrote:
> > > >
> > > > Ah. Now that makes sense.
> > > >
> > > > Unpatched the ordering is:
> > > >
> > > > chip_bus_lock(desc);
> > > > irq_request_resources(desc);
> > >
> > > I *looked* at that ordering and then went "Naah, that makes no sense".
> > >
> > > But if that's the only issue, how about we just re-order those things
> > > - we still don't need to move the irq_request_resources() into the
> > > spinlock, we just move it to below the chip_bus_lock().
> > >
> > > IOW, something like the (COMPLETELY UNTESTED!) attached patch.
> >
> > Yeah that fixes the issue:
> >
> > Tested-by: Tony Lindgren <tony@...mide.com>
> >
> > > This assumes that the chip_bus_lock() thing is still ok for the RT
> > > case, but it looks like it might be: the only other one I looked at
> > > (apart from the gpio-omap one) used a mutex.
> >
> > Yeah and the ordering below makes more sense to me at least. That is
> > assuming we want to call chip_bus_lock() before we start calling the
> > chip functions :)
>
> We can do that, just the free path is ugly and does not really work that
> way.
OK
> __free_irq()
> 	....
> 	chip_bus_sync_unlock(desc);
> 	...
> 	synchronize_irq(irq);
> 	...
> 	if (!desc->action) {
> 		irq_release_resources();
> 		irq_remove_timings();
> 	}
> 	mutex_unlock(&desc->request_mutex);
>
> We can't release request_mutex early, otherwise we run into the issue of a
> concurrent request_irq() trying to reuse stuff which we just released, but
> we can't reacquire bus_lock under request_mutex either when we change the
> lock ordering to bus_lock -> desc->request_mutex -> desc->lock.
>
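Right, and just to spell the inversion out for myself, here is a tiny
userspace model of the two orderings (illustrative only, made-up names,
desc->lock left out, obviously not the kernel code):

/* lockorder.c - build with: gcc -pthread -o lockorder lockorder.c */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t bus_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t request_mutex = PTHREAD_MUTEX_INITIALIZER;

/* request side with the changed ordering: bus_lock -> request_mutex */
static void request_side(void)
{
	pthread_mutex_lock(&bus_lock);
	pthread_mutex_lock(&request_mutex);
	puts("request side: bus_lock -> request_mutex");
	pthread_mutex_unlock(&request_mutex);
	pthread_mutex_unlock(&bus_lock);
}

/*
 * Free side: bus_lock was dropped before synchronize_irq(), but the
 * resources still have to be released before request_mutex can be
 * dropped.  Re-taking bus_lock here means request_mutex -> bus_lock,
 * the opposite order, so racing against request_side() this is a
 * classic ABBA deadlock.
 */
static void free_side(void)
{
	pthread_mutex_lock(&request_mutex);
	pthread_mutex_lock(&bus_lock);
	puts("free side: request_mutex -> bus_lock");
	pthread_mutex_unlock(&bus_lock);
	pthread_mutex_unlock(&request_mutex);
}

int main(void)
{
	/* Run the two sides sequentially so the demo always terminates;
	 * run them from two threads and it can hang as described above.
	 */
	request_side();
	free_side();
	return 0;
}
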
> We really want to have both the release_resources() and the
> remove_timings() calls outside of the spinlocked region. That's not only a
> RT issue, there have been requests for making the resource call 'sleepable'
> for mainline as well.
>
> Below is a slightly different fix, which keeps the lock order
>
> desc->request_mutex -> bus_lock -> desc->lock
>
> intact and conditionally reacquires the bus lock for the release call.
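
So if I follow, the request side keeps desc->request_mutex -> bus_lock ->
desc->lock and the free path only retakes the bus lock around the chip
callback. Condensed from your patch below, roughly:

__setup_irq()
	mutex_lock(&desc->request_mutex);
	chip_bus_lock(desc);
	irq_request_resources(desc);	/* only if !desc->action */
	raw_spin_lock_irqsave(&desc->lock, flags);
	...
	raw_spin_unlock_irqrestore(&desc->lock, flags);
	chip_bus_sync_unlock(desc);
	mutex_unlock(&desc->request_mutex);

__free_irq()
	...
	chip_bus_sync_unlock(desc);
	...
	synchronize_irq(irq);
	...
	if (!desc->action) {
		irq_release_resources(desc, true);	/* retakes bus lock */
		irq_remove_timings(desc);
	}
	mutex_unlock(&desc->request_mutex);
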
Yeah that fixes the issue too:

Tested-by: Tony Lindgren <tony@...mide.com>

Regards,

Tony
> 8<------------------------
> --- a/kernel/irq/manage.c
> +++ b/kernel/irq/manage.c
> @@ -1036,13 +1036,20 @@ static int irq_request_resources(struct
>  	return c->irq_request_resources ? c->irq_request_resources(d) : 0;
>  }
>  
> -static void irq_release_resources(struct irq_desc *desc)
> +static void irq_release_resources(struct irq_desc *desc, bool buslock)
>  {
>  	struct irq_data *d = &desc->irq_data;
>  	struct irq_chip *c = d->chip;
>  
> -	if (c->irq_release_resources)
> -		c->irq_release_resources(d);
> +	if (!c->irq_release_resources)
> +		return;
> +	if (buslock)
> +		chip_bus_lock(desc);
> +
> +	c->irq_release_resources(d);
> +
> +	if (buslock)
> +		chip_bus_sync_unlock(desc);
>  }
>  
>  static int
> @@ -1168,17 +1175,16 @@ static int
>  	new->flags &= ~IRQF_ONESHOT;
>  
>  	mutex_lock(&desc->request_mutex);
> +	chip_bus_lock(desc);
>  	if (!desc->action) {
>  		ret = irq_request_resources(desc);
>  		if (ret) {
>  			pr_err("Failed to request resources for %s (irq %d) on irqchip %s\n",
>  			       new->name, irq, desc->irq_data.chip->name);
> -			goto out_mutex;
> +			goto out_bus;
>  		}
>  	}
>  
> -	chip_bus_lock(desc);
> -
>  	/*
>  	 * The following block of code has to be executed atomically
>  	 */
> @@ -1286,10 +1292,8 @@ static int
>  			ret = __irq_set_trigger(desc,
>  						new->flags & IRQF_TRIGGER_MASK);
>  
> -			if (ret) {
> -				irq_release_resources(desc);
> +			if (ret)
>  				goto out_unlock;
> -			}
>  		}
>  
>  		desc->istate &= ~(IRQS_AUTODETECT | IRQS_SPURIOUS_DISABLED | \
> @@ -1385,12 +1389,10 @@ static int
>  out_unlock:
>  	raw_spin_unlock_irqrestore(&desc->lock, flags);
>  
> -	chip_bus_sync_unlock(desc);
> -
>  	if (!desc->action)
> -		irq_release_resources(desc);
> -
> -out_mutex:
> +		irq_release_resources(desc, false);
> +out_bus:
> +	chip_bus_sync_unlock(desc);
>  	mutex_unlock(&desc->request_mutex);
>  
>  out_thread:
> @@ -1472,6 +1474,7 @@ static struct irqaction *__free_irq(unsi
>  		WARN(1, "Trying to free already-free IRQ %d\n", irq);
>  		raw_spin_unlock_irqrestore(&desc->lock, flags);
>  		chip_bus_sync_unlock(desc);
> +		mutex_unlock(&desc->request_mutex);
>  		return NULL;
>  	}
>  
> @@ -1531,7 +1534,7 @@ static struct irqaction *__free_irq(unsi
>  	}
>  
>  	if (!desc->action) {
> -		irq_release_resources(desc);
> +		irq_release_resources(desc, true);
>  		irq_remove_timings(desc);
>  	}
>  