Message-ID: <alpine.DEB.2.21.1907162100190.1767@nanos.tec.linutronix.de>
Date: Tue, 16 Jul 2019 21:05:30 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Neil Horman <nhorman@...driver.com>
cc: linux-kernel@...r.kernel.org, djuran@...hat.com,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org
Subject: Re: [PATCH] x86: Add irq spillover warning
Neil,
On Tue, 16 Jul 2019, Neil Horman wrote:
> On Tue, Jul 16, 2019 at 05:57:31PM +0200, Thomas Gleixner wrote:
> > On Tue, 16 Jul 2019, Neil Horman wrote:
> > > If a cpu has more than this number of interrupts affined to it, they
> > > will spill over to other cpus, which potentially may be outside of their
> > > affinity mask.
> >
> > Spill over?
> >
> > The kernel decides to pick a vector on a CPU outside of the affinity when
> > it runs out of vectors on the CPUs in the affinity mask.
> >
> Yes.
>
> > Please explain issues technically correct.
> >
> I don't know what you mean by this. I explained it above, and you clearly
> understood it.
It took me a while to grok it, simply because I first thought it was some
hardware issue. Of course, once the confusion settled, I knew what it was,
but only because I know that code like the back of my hand.
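
To spell out what "spill over" actually means here, a rough sketch in
simplified pseudo-C (illustrative only, not the real allocator in
arch/x86/kernel/apic/vector.c; find_free_vector_on() is a made-up helper):

#include <linux/cpumask.h>
#include <linux/errno.h>

/* Made-up helper for illustration: returns a free vector number on @cpu,
 * or a negative value when that CPU's vector space is exhausted. */
static int find_free_vector_on(unsigned int cpu);

/*
 * Illustrative sketch only. A vector is first sought on the CPUs inside
 * the requested affinity mask; only when every CPU in that mask has run
 * out of vectors does the search fall back to any online CPU, which is
 * the "outside the affinity mask" case above.
 */
static int example_assign_vector(const struct cpumask *affinity_mask)
{
        unsigned int cpu;
        int vec;

        for_each_cpu(cpu, affinity_mask) {
                vec = find_free_vector_on(cpu);
                if (vec >= 0)
                        return vec;             /* honours the affinity mask */
        }

        for_each_online_cpu(cpu) {
                vec = find_free_vector_on(cpu);
                if (vec >= 0)
                        return vec;             /* CPU may be outside the mask */
        }

        return -ENOSPC;                         /* genuinely out of vectors */
}
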
> > > Given that this might cause unexpected behavior on
> > > performance-sensitive systems, warn the user should this condition occur
> > > so that corrective action can be taken.
> >
> > > @@ -244,6 +244,14 @@ __visible unsigned int __irq_entry do_IRQ(struct pt_regs *regs)
> >
> > Why on earth warn in the interrupt delivery hotpath? Just because it's the
> > place which really needs extra instructions and extra cache lines on
> > performance sensitive systems, right?
> >
> Because there's already a check of the same variety in do_IRQ. But if the
> information is available outside the hotpath, I was unaware, and I am happy
> to update this patch to reflect that.
Which check are you referring to?
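
The only one of that kind I can think of in do_IRQ() is the handling of
vectors with no mapped handler, roughly like this (a simplified sketch from
memory, not a verbatim copy of arch/x86/kernel/irq.c):

/*
 * Simplified sketch from memory, not verbatim arch/x86/kernel/irq.c.
 * The only warning in this path fires when an interrupt arrives on a
 * vector that has no irq descriptor mapped on this CPU.
 */
__visible unsigned int __irq_entry do_IRQ(struct pt_regs *regs)
{
        struct pt_regs *old_regs = set_irq_regs(regs);
        unsigned int vector = ~regs->orig_ax;
        struct irq_desc *desc;

        entering_irq();

        desc = __this_cpu_read(vector_irq[vector]);
        if (likely(!IS_ERR_OR_NULL(desc))) {
                generic_handle_irq_desc(desc);  /* normal delivery */
        } else {
                ack_APIC_irq();
                if (desc == VECTOR_UNUSED)
                        /* stale/spurious vector, nothing mapped here */
                        pr_emerg_ratelimited("%s: %d.%u No irq handler for vector\n",
                                             __func__, smp_processor_id(),
                                             vector);
                else
                        __this_cpu_write(vector_irq[vector], VECTOR_UNUSED);
        }

        exiting_irq();
        set_irq_regs(old_regs);
        return 1;
}

That catches spurious/stale vectors, not affinity spillover, and whatever
spillover check you end up with should live at vector allocation/activation
time, outside this delivery hotpath.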
Thanks,
tglx