Message-ID: <alpine.LFD.2.00.1012151357030.12146@localhost6.localdomain6>
Date: Wed, 15 Dec 2010 14:04:13 +0100 (CET)
From: Thomas Gleixner <tglx@...utronix.de>
To: Jan Kiszka <jan.kiszka@....de>
cc: Avi Kivity <avi@...hat.com>, Marcelo Tosatti <mtosatti@...hat.com>,
linux-kernel@...r.kernel.org, kvm <kvm@...r.kernel.org>,
Tom Lyon <pugs@...co.com>,
Alex Williamson <alex.williamson@...hat.com>,
"Michael S. Tsirkin" <mst@...hat.com>,
Jan Kiszka <jan.kiszka@...mens.com>
Subject: Re: [PATCH v3 2/4] genirq: Inform handler about line sharing state
On Wed, 15 Dec 2010, Jan Kiszka wrote:
> On 14.12.2010 21:54, Thomas Gleixner wrote:
> > On Mon, 13 Dec 2010, Jan Kiszka wrote:
> >> @@ -943,6 +950,9 @@ static struct irqaction *__free_irq(unsigned int irq, void *dev_id)
> >> /* Make sure it's not being used on another CPU: */
> >> synchronize_irq(irq);
> >>
> >> + if (single_handler)
> >> + desc->irq_data.drv_status &= ~IRQS_SHARED;
> >> +
> >
> > What's the reason to clear this flag outside of the desc->lock held
> > region?
>
> We need to synchronize the irq before clearing the flag.
>
> The problematic scenario behind this: An IRQ started in shared mode,
> thus the line was unmasked after the hardirq. Now we clear IRQS_SHARED
> before calling into the threaded handler. And that handler may now think
> that the line is still masked as IRQS_SHARED is set.
That should read "not set" I guess. Hmm, needs more thought :(
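
To illustrate what the thread side might do with the flag (hypothetical
handler; whether drivers would peek at drv_status like this at all is
exactly what this series is still sorting out):

	static irqreturn_t demo_thread_fn(int irq, void *dev_id)
	{
		/*
		 * Illustration only - the point of drv_status is that
		 * drivers should not have to dig into irqdesc.
		 */
		struct irq_desc *desc = irq_to_desc(irq);

		if (!(desc->irq_data.drv_status & IRQS_SHARED)) {
			/*
			 * Exclusive mode: the thread assumes the line is
			 * still masked here.  If IRQS_SHARED was cleared
			 * while this interrupt was handled in shared mode,
			 * the line is in fact already unmasked and the
			 * assumption is wrong.
			 */
		}
		return IRQ_HANDLED;
	}

So clearing the flag must not race with a thread that still runs for a
shared-mode interrupt, which is what clearing it only after
synchronize_irq() is supposed to ensure.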
> > I need this status for other purposes as well, where I
> > definitely need serialization.
>
> Well, two options: wrap all bit manipulations with desc->lock
> acquisition/release or turn drv_status into an atomic. I don't know what
> your plans with drv_status are, so...
Some bits for irq migration and other stuff, which allow us to avoid
fiddling with irqdesc in the drivers.
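
For the serialization, something along these lines should do (untested
sketch, the helper name is made up):

	static void irq_drv_status_update(struct irq_desc *desc,
					  unsigned int set,
					  unsigned int clr)
	{
		unsigned long flags;

		raw_spin_lock_irqsave(&desc->lock, flags);
		desc->irq_data.drv_status &= ~clr;
		desc->irq_data.drv_status |= set;
		raw_spin_unlock_irqrestore(&desc->lock, flags);
	}

An atomic would work for the plain bit flips as well, but taking
desc->lock keeps the status consistent with the other irqdesc state
which is already protected by it.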
Thanks,
tglx