Message-ID: <AANLkTimAA-NF43DtrZTHZWLyYMVdDkw89QCLcfsC0Rbh@mail.gmail.com>
Date: Fri, 11 Mar 2011 10:25:23 -0800
From: Yinghai Lu <yinghai@...nel.org>
To: Tejun Heo <tj@...nel.org>
Cc: David Rientjes <rientjes@...gle.com>, Ingo Molnar <mingo@...e.hu>,
tglx@...utronix.de, "H. Peter Anvin" <hpa@...or.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH x86/mm UPDATED] x86-64, NUMA: Fix distance table handling
On Fri, Mar 11, 2011 at 10:19 AM, Tejun Heo <tj@...nel.org> wrote:
> Hello,
>
> On Fri, Mar 11, 2011 at 10:02:41AM -0800, Yinghai Lu wrote:
>> > No, the NUMA implementation can skip numa_set_distance() entirely if
>> > the distance is LOCAL_DISTANCE when the nids are equal and
>> > REMOTE_DISTANCE otherwise.  In fact, any amdtopology configuration
>> > would behave this way, so it's incorrect to fill the table with
>> > LOCAL_DISTANCE.  You have to check the physnid mapping and build a
>> > new table whether the physical table exists or not.  Lack of a
>> > physical distance table doesn't mean all nodes are LOCAL_DISTANCE.
>>
>> Too bad.  We should call numa_alloc_distance() in amdtopology to set
>> the default values in that array.
>
> I'm not following.  If there's no distance table, the distance is
> assumed to be LOCAL between the same node and REMOTE if the nodes are
> different, which is exactly the way it should be for those machines.
> Why is this bad, and why would you allocate a distance table for such
> configurations?
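
For reference, the fallback described above amounts to logic along these
lines.  This is a minimal sketch only; the names numa_distance,
numa_distance_cnt and __node_distance() follow the x86-64 NUMA code of
that period and may differ in detail from the actual tree.

#include <linux/topology.h>     /* LOCAL_DISTANCE, REMOTE_DISTANCE */

static int *numa_distance;      /* flat cnt x cnt table, may stay NULL */
static int numa_distance_cnt;   /* 0 until the table is allocated */

/*
 * With no distance table set up, degrade to LOCAL_DISTANCE for a node
 * against itself and REMOTE_DISTANCE for any pair of different nodes.
 */
int __node_distance(int from, int to)
{
        if (from >= numa_distance_cnt || to >= numa_distance_cnt)
                return from == to ? LOCAL_DISTANCE : REMOTE_DISTANCE;
        return numa_distance[from * numa_distance_cnt + to];
}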
Now even emulation has that distance array.  Why not keep it simple and
make all paths have that array?  (A sketch of the default fill is below.)

Yinghai
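
For context, the default fill being proposed would look roughly like
this.  The helper below is hypothetical and for illustration only; the
real numa_alloc_distance() of that period also sized and allocated the
table from the set of parsed physical nodes before seeding it.

/*
 * Hypothetical helper: seed a freshly allocated cnt x cnt distance
 * table with the same defaults the fallback path uses, LOCAL_DISTANCE
 * on the diagonal and REMOTE_DISTANCE everywhere else.
 */
static void numa_fill_default_distance(int *table, int cnt)
{
        int i, j;

        for (i = 0; i < cnt; i++)
                for (j = 0; j < cnt; j++)
                        table[i * cnt + j] = (i == j) ?
                                LOCAL_DISTANCE : REMOTE_DISTANCE;
}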