Message-ID: <5322D295.5080905@numascale.com>
Date: Fri, 14 Mar 2014 17:57:41 +0800
From: Daniel J Blueman <daniel@...ascale.com>
To: Borislav Petkov <bp@...en8.de>
CC: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
Borislav Petkov <bp@...e.de>, linux-kernel@...r.kernel.org,
Steffen Persvold <sp@...ascale.com>
Subject: Re: [PATCH] Fix northbridge quirk to assign correct NUMA node
Hi Boris,
On 14/03/2014 17:06, Borislav Petkov wrote:
> On Thu, Mar 13, 2014 at 07:43:01PM +0800, Daniel J Blueman wrote:
>> For systems with multiple servers and routed fabric, all northbridges get
>> assigned to the first server. Fix this by also using the node reported from
>> the PCI bus. For single-fabric systems, the northbridges are on PCI bus 0
>> by definition, which is on NUMA node 0 by definition, so this is invariant
>> on most systems.
>
> Yeah, I think this is of very low risk for !Numascale setups. :-) So
>
> Acked-by: Borislav Petkov <bp@...e.de>
>
>> Tested on fam10h and fam15h single and multi-fabric systems and candidate
>> for stable.
>
> I'm not sure about it - this is only reporting the wrong node, right?
> Does anything depend on that node setting being correct and breaks due
> to this?
It's only reporting the wrong node, yes. The irqbalance daemon uses
/sys/devices/.../numa_node, and we found we had to disable it to
prevent hangs on certain systems after a while. I haven't established a
direct link yet, but while investigating I did find this node reporting
to be incorrect.
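For reference, the effect of the quirk change can be sketched roughly as
follows (a simplified model, not the kernel code; the dictionary, function
name, and bus numbers are made up for illustration):

```python
# Hypothetical sketch: instead of hardcoding node 0 for every
# northbridge, derive the node from the PCI bus the device sits on,
# falling back to node 0 when the bus has no node recorded (as on
# single-fabric systems, where the northbridges are on bus 0 anyway).

def nb_numa_node(bus_to_node, bus):
    """Return the NUMA node for a northbridge found on `bus`.

    Old behaviour: always node 0.
    Fixed behaviour: use the node already associated with the bus.
    """
    node = bus_to_node.get(bus, -1)
    return node if node >= 0 else 0

# Single-fabric system: everything on bus 0x00 -> node 0 either way.
# Multi-fabric system: a northbridge on bus 0x40 belongs to the
# second server and now reports that server's node instead of 0.
fabric = {0x00: 0, 0x40: 1}
print(nb_numa_node(fabric, 0x00))  # 0
print(nb_numa_node(fabric, 0x40))  # 1
```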
Thanks,
Daniel
--
Daniel J Blueman
Principal Software Engineer, Numascale
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/