Message-ID: <20090624194839.GG7239@us.ibm.com>
Date: Wed, 24 Jun 2009 12:48:39 -0700
From: Gary Hade <garyhade@...ibm.com>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Jesse Barnes <jbarnes@...tuousgeek.org>,
Gary Hade <garyhade@...ibm.com>,
Jaswinder Singh Rajput <jaswinder@...nel.org>,
Larry Finger <Larry.Finger@...inger.net>,
LKML <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
x86 maintainers <x86@...nel.org>, Len Brown <lenb@...nel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: Regression with commit f9cde5f in 2.6.30-gitX
On Wed, Jun 24, 2009 at 08:45:31PM +0200, Thomas Gleixner wrote:
> On Wed, 24 Jun 2009, Jesse Barnes wrote:
> > > I was thinking 32 but 64 would be better if there aren't any
> > > downsides elsewhere of making the array that big.
> >
> > Just chatting with Len about this; apparently the PNPACPI layer ran
> > into something similar a while back, and they had to go to a variable
> > sized list of resources, due to weird machines with huge numbers of
> > resources. Matthew says he's got an idea about how to fix this up; if
> > that doesn't work out I'll see about making the bus resource array into
> > a list instead.
>
> Can we just bring the limit check back and increase the number for now
> until folks come up with a better solution?
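(For concreteness, the limit check being discussed is just a bounds test
before storing another host bridge window into the fixed-size per-bus
resource array.  A rough standalone sketch of the idea follows -- the
names are made up for illustration, not the actual arch/x86/pci
identifiers:)

#include <stdio.h>

#define MAX_BUS_RESOURCES 16	/* bumped from the old, smaller value */

struct window {
	unsigned long start;
	unsigned long end;
};

struct bus_info {
	struct window res[MAX_BUS_RESOURCES];	/* fixed-size array */
	int res_num;
};

/* Refuse to store a window once the array is full instead of overrunning it. */
static int add_window(struct bus_info *info, unsigned long start,
		      unsigned long end)
{
	if (info->res_num >= MAX_BUS_RESOURCES) {
		fprintf(stderr, "too many host bridge windows, dropping %lx-%lx\n",
			start, end);
		return -1;
	}
	info->res[info->res_num].start = start;
	info->res[info->res_num].end = end;
	info->res_num++;
	return 0;
}

int main(void)
{
	struct bus_info info = { .res_num = 0 };
	unsigned long i;

	/* Simulate a machine reporting more windows than the array can hold. */
	for (i = 0; i < 20; i++)
		add_window(&info, i * 0x1000, i * 0x1000 + 0xfff);

	printf("kept %d of 20 reported windows\n", info.res_num);
	return 0;
}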
Another possible option is leaving the limit check in (still valid
IMO for correct behavior of the previous 'pci=use_crs') and reverting
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=9e9f46c44e487af0a82eb61b624553e2f7118f5b
until a better solution for the fixed-size array issue is
available.
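The list-based alternative Jesse mentions above would remove the cap
entirely.  Very roughly, and again with purely illustrative names (a
simple malloc'ed singly linked list here, not whatever interface the
kernel would actually end up with):

#include <stdio.h>
#include <stdlib.h>

struct window {
	unsigned long start;
	unsigned long end;
	struct window *next;
};

struct bus_info {
	struct window *windows;		/* head of the list, no fixed cap */
	int res_num;
};

/* Grow the list as needed; no arbitrary per-bus limit. */
static int add_window(struct bus_info *info, unsigned long start,
		      unsigned long end)
{
	struct window *w = malloc(sizeof(*w));

	if (!w)
		return -1;
	w->start = start;
	w->end = end;
	w->next = info->windows;
	info->windows = w;
	info->res_num++;
	return 0;
}

int main(void)
{
	struct bus_info info = { .windows = NULL, .res_num = 0 };
	unsigned long i;

	/* Even an unusually large number of windows is accepted. */
	for (i = 0; i < 100; i++)
		add_window(&info, i * 0x1000, i * 0x1000 + 0xfff);

	printf("stored %d windows\n", info.res_num);
	return 0;
}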
Gary
--
Gary Hade
System x Enablement
IBM Linux Technology Center
503-578-4503 IBM T/L: 775-4503
garyhade@...ibm.com
http://www.ibm.com/linux/ltc