Message-ID: <alpine.LFD.2.02.1205242113400.3231@ionos>
Date: Thu, 24 May 2012 21:16:17 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Suresh Siddha <suresh.b.siddha@...el.com>
cc: Dimitri Sivanich <sivanich@....com>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
Yinghai Lu <yinghai@...nel.org>,
Naga Chumbalkar <nagananda.chumbalkar@...com>,
Jacob Pan <jacob.jun.pan@...ux.intel.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] x86: check for valid irq_cfg pointer in
smp_irq_move_cleanup_interrupt
On Thu, 24 May 2012, Suresh Siddha wrote:
> On Thu, 2012-05-24 at 09:37 -0500, Dimitri Sivanich wrote:
> > And speaking of possible holes in destroy_irq()..
> >
> > What happens if we're running __assign_irq_vector() (say we're changing
> > irq affinity), and on another CPU we have just run through
> > __clear_irq_vector() via destroy_irq()? Now destroy_irq() is going to
> > call free_irq_at()->free_irq_cfg(), which frees the irq_cfg. Then
> > __assign_irq_vector() goes to access that irq_cfg (cfg->vector or
> > cfg->move_in_progress, for instance), which was already freed.
> >
> > I'm not sure if this can happen, but just eyeballing it, it does look
> > that way.
> >
>
> I wanted to say that the irq desc is locked when we change the irq
> affinity, which calls assign_irq_vector() and friends, so this should be fine.
>
> BUT NO. I don't see any reference counts being maintained when we do
> irq_to_desc(). So locking/unlocking that desc pointer is bogus when
> destroy_irq() can go ahead and free the desc in parallel.
>
> So, SPARSE_IRQ looks terribly broken! Yinghai, Thomas?
Yes, we need refcounts for that. We talked about that before, but the
argument against it was that all that code is serialized already, so
there was no need. How wrong :)