Message-ID: <20101020142734.GD19090@redhat.com>
Date: Wed, 20 Oct 2010 10:27:34 -0400
From: Don Zickus <dzickus@...hat.com>
To: Huang Ying <ying.huang@...el.com>
Cc: Robert Richter <robert.richter@....com>,
"mingo@...e.hu" <mingo@...e.hu>,
"andi@...stfloor.org" <andi@...stfloor.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"peterz@...radead.org" <peterz@...radead.org>
Subject: Re: [PATCH 4/5] x86, NMI: Allow NMI reason io port (0x61) to be
processed on any CPU
On Wed, Oct 20, 2010 at 08:23:12AM +0800, Huang Ying wrote:
> > > > What about using raw_spin_trylock() instead? We don't have to wait
> > > > here since we are already processing it by another cpu.
> > >
> > > This would avoid a global lock and also deadlocking in case of a
> > > potential #gp in the nmi handler.
> >
> > I would feel more comfortable with it too. I can't find a reason where
> > trylock would do harm.
>
> One possible issue can be as follow:
>
> - PCI SERR NMI raised on CPU 0
> - IOCHK NMI raised on CPU 1
>
> If we use trylock, we may get an unknown NMI on one CPU. Do you guys
> think so?
I thought both PCI SERR and IOCHK NMIs were external and routed through
the IOAPIC, which means only one cpu could receive them (unless the
IOAPIC was updated to route them elsewhere). That would make the issue
moot. Unless I am misunderstanding where those NMIs come from?
Also, as Robert said, we used to handle them only on the bsp cpu without
any issues. I believe that was because everything in the IOAPIC was
routed that way.
I thought the point of this patch was to remove that restriction in the
nmi handler, which would allow future patches to re-route these NMIs to
another cpu, thus finally allowing people to hot-remove the bsp cpu, no?
Cheers,
Don