Message-ID: <20090603170617.GB7566@us.ibm.com>
Date: Wed, 3 Jun 2009 10:06:17 -0700
From: Gary Hade <garyhade@...ibm.com>
To: "Eric W. Biederman" <ebiederm@...ssion.com>
Cc: Gary Hade <garyhade@...ibm.com>, mingo@...e.hu, mingo@...hat.com,
linux-kernel@...r.kernel.org, tglx@...utronix.de, hpa@...or.com,
x86@...nel.org, yinghai@...nel.org, lcm@...ibm.com
Subject: Re: [RESEND] [PATCH v2] [BUGFIX] x86/x86_64: fix CPU offlining
triggered "active" device IRQ interruption

On Wed, Jun 03, 2009 at 05:03:24AM -0700, Eric W. Biederman wrote:
> Gary Hade <garyhade@...ibm.com> writes:
>
> > Impact: Eliminates an issue that can leave the system in an
> > unusable state.
> >
> > This patch addresses an issue where device-generated IRQs
> > are no longer seen by the kernel following IRQ affinity
> > migration while the device is generating IRQs at a high rate.
> >
> > I have been able to consistently reproduce the problem on
> > some of our systems by running the following script (VICTIM_IRQ
> > specifies the IRQ for the aic94xx device) while a single instance
> > of the command
> > # while true; do find / -exec file {} \;; done
> > is keeping the filesystem activity and IRQ rate reasonably high.
>
> Nacked-by: "Eric W. Biederman" <ebiederm@...ssion.com>
>
> Again you are attempting to work around the fact that fixup_irqs
> is broken.
>
> fixup_irqs is what needs to be fixed to call these functions properly.
>
> We have had several intense debug sessions by various people, including
> myself, that show that your delayed_irq_move function will simply not
> work reliably.
>
> Frankly simply looking at it gives me the screaming heebie jeebies.
>
> The fact that you can't reproduce the old failure cases, which
> demonstrated themselves as lockups in the ioapic state machines, gives
> me no confidence in your testing of this code.

Correct, after the fix was applied my testing did _not_ show
the lockups that you are referring to.  I wonder if there is a
chance that the root cause of those old failures and the root
cause of the issue that my fix addresses are the same?

Can you provide the test case that demonstrated the old failure
cases so I can try it on our systems? Also, do you recall what
mainline version demonstrated the old failure cases?
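
For reference, below is a minimal sketch of the kind of reproduction
setup described in the patch posting.  It is not the exact script from
that posting; it assumes the affinity change is forced by offlining the
CPU that the interrupt is currently bound to, and VICTIM_IRQ and
VICTIM_CPU are placeholders that have to be filled in by hand for the
system under test.

  #!/bin/bash
  # Reproduction sketch (assumed values, adjust for the system under test)
  VICTIM_IRQ=28          # IRQ of the aic94xx device
  VICTIM_CPU=2           # CPU the IRQ is bound to and then offlined

  while true; do
      # Bind the IRQ to the victim CPU (smp_affinity takes a hex CPU mask).
      printf '%x\n' $((1 << VICTIM_CPU)) > /proc/irq/$VICTIM_IRQ/smp_affinity
      sleep 1

      # Offline the CPU while the device is interrupting, forcing the
      # kernel to migrate the IRQ away, then bring the CPU back online.
      echo 0 > /sys/devices/system/cpu/cpu$VICTIM_CPU/online
      sleep 1
      echo 1 > /sys/devices/system/cpu/cpu$VICTIM_CPU/online

      # If the count for VICTIM_IRQ stops increasing across iterations,
      # the interrupt has been lost.
      grep "^ *$VICTIM_IRQ:" /proc/interrupts
  done

The "while true; do find / -exec file {} \;; done" loop from the patch
description runs in another shell the whole time so the device keeps
generating interrupts at a reasonably high rate.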
Thanks,
Gary
--
Gary Hade
System x Enablement
IBM Linux Technology Center
503-578-4503 IBM T/L: 775-4503
garyhade@...ibm.com
http://www.ibm.com/linux/ltc