Message-ID: <20140314090617.GA4697@pd.tnic>
Date: Fri, 14 Mar 2014 10:06:17 +0100
From: Borislav Petkov <bp@...en8.de>
To: Daniel J Blueman <daniel@...ascale.com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
Borislav Petkov <bp@...e.de>, linux-kernel@...r.kernel.org,
Steffen Persvold <sp@...ascale.com>
Subject: Re: [PATCH] Fix northbridge quirk to assign correct NUMA node
On Thu, Mar 13, 2014 at 07:43:01PM +0800, Daniel J Blueman wrote:
> For systems with multiple servers and routed fabric, all northbridges get
> assigned to the first server. Fix this by also using the node reported from
> the PCI bus. For single-fabric systems, the northbridges are on PCI bus 0
> by definition, which are on NUMA node 0 by definition, so this is invariant
> on most systems.
Yeah, I think this is of very low risk for !Numascale setups. :-) So
Acked-by: Borislav Petkov <bp@...e.de>
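
For reference, a minimal sketch of the kind of change being described, based
on the existing quirk_amd_nb_node() quirk in arch/x86/kernel/quirks.c which
reads the node ID from the northbridge's config space; the register offset,
comments and surrounding code here are illustrative, not quoted from the patch
itself:

	#include <linux/pci.h>
	#include <linux/topology.h>

	static void quirk_amd_nb_node(struct pci_dev *dev)
	{
		struct pci_dev *nb_ht;
		unsigned int devfn;
		u32 node;
		u32 val;

		devfn = PCI_DEVFN(PCI_SLOT(dev->devfn), 0);
		nb_ht = pci_get_slot(dev->bus, devfn);
		if (!nb_ht)
			return;

		pci_read_config_dword(nb_ht, 0x60, &val);
		/*
		 * On a routed multi-fabric system the low bits of the Node ID
		 * register only identify the node within the local fabric, so
		 * also factor in the NUMA node reported for the PCI bus. On
		 * single-fabric systems pcibus_to_node() returns 0 for bus 0,
		 * so the result is unchanged there.
		 */
		node = pcibus_to_node(dev->bus) | (val & 7);
		if (node_online(node))
			set_dev_node(&dev->dev, node);
		pci_dev_put(nb_ht);
	}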
> Tested on fam10h and fam15h single and multi-fabric systems and candidate
> for stable.
I'm not sure about it - this is only reporting the wrong node, right?
Does anything depend on that node setting being correct and break due
to this?
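
As context for that question: the per-device node is mostly a locality hint,
consumed via dev_to_node() when drivers allocate node-local memory. A
hypothetical driver snippet (names not taken from this thread) would be:

	/*
	 * Allocate driver state near the device; a wrong node here costs
	 * remote-node memory accesses rather than a functional failure.
	 */
	struct foo_priv *priv = kzalloc_node(sizeof(*priv), GFP_KERNEL,
					     dev_to_node(&pdev->dev));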
Thanks.
--
Regards/Gruss,
Boris.
Sent from a fat crate under my desk. Formatting is fine.