Message-ID: <87levn8tnx.wl-maz@kernel.org>
Date: Fri, 29 Apr 2022 22:29:54 +0100
From: Marc Zyngier <maz@...nel.org>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Thomas Pfaff <tpfaff@....com>, linux-kernel@...r.kernel.org,
linux-rt-users@...r.kernel.org, Hillf Danton <hdanton@...a.com>
Subject: Re: [PATCH v2] irq/core: synchronize irq_thread startup
On Fri, 29 Apr 2022 20:40:32 +0100,
Thomas Gleixner <tglx@...utronix.de> wrote:
>
> On Fri, Apr 29 2022 at 17:08, Marc Zyngier wrote:
> > On Fri, 29 Apr 2022 12:52:48 +0100,
> > Thomas Pfaff <tpfaff@....com> wrote:
>
> > +static void wait_for_irq_thread_startup(struct irq_desc *desc,
> > + struct irqaction *action)
>
> and this would be wait_for_irq_thread_ready().
>
> which is still a misnomer, as this actually wakes and waits.
Hey, I didn't say I picked the right color for that shed! ;-)
>
> >> @@ -1522,6 +1548,8 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
> >> }
> >> }
> >>
> >> + init_waitqueue_head(&desc->wait_for_threads);
> >> +
> >
> > I'm trying to convince myself that this one is safe.
> >
> > It was so far only done when registering the first handler of a
> > threaded interrupt, while it is now done on every call to
> > __setup_irq(). However, this is now done outside of the protection of
> > any of the locks, meaning that a concurrent __setup_irq() for a shared
> > interrupt can now barge in and corrupt the wait queue.
> >
> > So I don't think this is right. You may be able to hoist the
> > request_lock up, but I haven't checked what could break, if anything.
>
> It can't be moved here, but I can see why Thomas wants to move it. With
> a spurious wakeup of the irq thread (which should not happen), the thread
> would try to invoke wake_up() on a non-initialized wait queue head.
>
> Something like this should do the trick.
>
> diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
> index 939d21cd55c3..0099b87dd853 100644
> --- a/kernel/irq/irqdesc.c
> +++ b/kernel/irq/irqdesc.c
> @@ -407,6 +407,7 @@ static struct irq_desc *alloc_desc(int irq, int node, unsigned int flags,
> lockdep_set_class(&desc->lock, &irq_desc_lock_class);
> mutex_init(&desc->request_mutex);
> init_rcu_head(&desc->rcu);
> + init_waitqueue_head(&desc->wait_for_threads);
>
> desc_set_defaults(irq, desc, node, affinity, owner);
> irqd_set(&desc->irq_data, flags);
> @@ -575,6 +576,7 @@ int __init early_irq_init(void)
> raw_spin_lock_init(&desc[i].lock);
> lockdep_set_class(&desc[i].lock, &irq_desc_lock_class);
> mutex_init(&desc[i].request_mutex);
> + init_waitqueue_head(&desc[i].wait_for_threads);
> desc_set_defaults(i, &desc[i], node, NULL, NULL);
> }
> return arch_early_irq_init();
> diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
> index c03f71d5ec10..6a0942f4d068 100644
> --- a/kernel/irq/manage.c
> +++ b/kernel/irq/manage.c
> @@ -1683,8 +1683,6 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
> }
>
> if (!shared) {
> - init_waitqueue_head(&desc->wait_for_threads);
> -
> /* Setup the type (level, edge polarity) if configured: */
> if (new->flags & IRQF_TRIGGER_MASK) {
> ret = __irq_set_trigger(desc,
>
Indeed, it makes a lot of sense to fully initialise the irqdesc
structure at the point of allocation, rather than later.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.