Message-ID: <4EA93699.9000101@numascale.com>
Date: Thu, 27 Oct 2011 12:46:49 +0200
From: Steffen Persvold <sp@...ascale.com>
To: Ingo Molnar <mingo@...e.hu>
CC: Daniel J Blueman <daniel@...ascale-asia.com>,
Jesse Barnes <jbarnes@...tuousgeek.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, H Peter Anvin <hpa@...or.com>,
linux-kernel@...r.kernel.org, x86@...nel.org
Subject: Re: [PATCH 2/3] Add multi-node boot support
On 10/27/2011 09:30, Ingo Molnar wrote:
[...]
>> + c->phys_proc_id = node;
>> + per_cpu(cpu_llc_id, cpu) = node;
>> + }
>
> But more importantly, please first explain why the quirk is needed
> (the patch only explains what it does but does not explain why it
> needs these changes - other NUMA systems are able to boot without
> this quirk).
The issue is that every AMD CPU gets its initial APIC ID assigned
during power-up on its own individual system, so when the systems are
connected through NumaChip we end up with multiple AMD CPUs sharing
the same "phys_proc_id". Linux then assumes they also share an llc_id,
and we get an oops later in the scheduler code (when the scheduler is
configured to be NUMA aware).
>
> If it's absolutely needed then add a proper quirk handler instead of
> polluting the generic code.
>
We wanted to reuse as much of the generic AMD code as possible, but
it's tricky because most of that code assumes a single HT fabric,
whereas a NumaChip-based system consists of several HT fabrics
connected together, so the same NorthBridge IDs (0-7) etc. are
repeated on every system.
How would you suggest we add a quirk handler for it?
Cheers,
--
Steffen Persvold, Chief Architect NumaChip
Numascale AS - www.numascale.com
Tel: +47 92 49 25 54 Skype: spersvold