Message-ID: <4D6E91EC.6040906@kernel.org>
Date: Wed, 02 Mar 2011 10:52:28 -0800
From: Yinghai Lu <yinghai@...nel.org>
To: Tejun Heo <tj@...nel.org>
CC: David Rientjes <rientjes@...gle.com>, Ingo Molnar <mingo@...e.hu>,
tglx@...utronix.de, "H. Peter Anvin" <hpa@...or.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH x86/mm UPDATED] x86-64, NUMA: Fix distance table handling
On 03/02/2011 08:55 AM, Tejun Heo wrote:
> On Wed, Mar 02, 2011 at 08:46:17AM -0800, Yinghai Lu wrote:
>>> * I don't think it's gonna matter all that much. It's one time and
>>> only used if emulation is enabled, but then again yeap MAX_NUMNODES
>>> * MAX_NUMNODES can get quite high, but it looks way too complicated
>>> for what it achieves. Just looping over enabled nodes should
>>> achieve about the same thing in much simpler way, right?
>>
>> What kind of excuse is that for putting inefficient code there!
>
> Complexity of a solution should match the benefit of the complexity.
> Code complexity is one of the most important metrics that we need to
> keep an eye on. If you don't do that, the code base becomes very ugly
> and difficult to maintain very quickly. So, yes, some amount of
> execution inefficiency is acceptable depending on circumstances.
> Efficiency too is something which should be traded off against other
> benefits.
No. It is not acceptable in your case.
We can accept something like that during the init stage, e.g. some extra probing and call paths to keep things simple, like subarch handling.
Also, why did you omit my first question?
>>>>> diff --git a/arch/x86/mm/numa_64.c b/arch/x86/mm/numa_64.c
>>>>> index 7757d22..541746f 100644
>>>>> --- a/arch/x86/mm/numa_64.c
>>>>> +++ b/arch/x86/mm/numa_64.c
>>>>> @@ -390,14 +390,12 @@ static void __init numa_nodemask_from_meminfo(nodemask_t *nodemask,
>>>>> */
>>>>> void __init numa_reset_distance(void)
>>>>> {
>>>>> - size_t size;
>>>>> + size_t size = numa_distance_cnt * numa_distance_cnt * sizeof(numa_distance[0]);
>>>>>
>>>>> - if (numa_distance_cnt) {
>>>>> - size = numa_distance_cnt * sizeof(numa_distance[0]);
>>>>> + if (numa_distance_cnt)
>>>>> memblock_x86_free_range(__pa(numa_distance),
>>>>> __pa(numa_distance) + size);
>>>>> - numa_distance_cnt = 0;
>>>>> - }
>>>>> + numa_distance_cnt = 0;
>>>>> numa_distance = NULL;
>>>>> }
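For reference, with your updated hunk applied the whole function would read roughly like this (a sketch assembled from the quoted diff, assuming nothing else in numa_reset_distance() changes):

	void __init numa_reset_distance(void)
	{
		/* cnt x cnt distance table; size is 0 when no table was set up */
		size_t size = numa_distance_cnt * numa_distance_cnt *
			      sizeof(numa_distance[0]);

		/* only free if a table was actually allocated */
		if (numa_distance_cnt)
			memblock_x86_free_range(__pa(numa_distance),
						__pa(numa_distance) + size);

		/* reset state unconditionally; harmless when already 0/NULL */
		numa_distance_cnt = 0;
		numa_distance = NULL;
	}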
>> my original part:
>> >>
>> >> @@ -393,7 +393,7 @@ void __init numa_reset_distance(void)
>> >> size_t size;
>> >>
>> >> if (numa_distance_cnt) {
>> >> - size = numa_distance_cnt * sizeof(numa_distance[0]);
>> >> + size = numa_distance_cnt * numa_distance_cnt * sizeof(numa_distance[0]);
>> >> memblock_x86_free_range(__pa(numa_distance),
>> >> __pa(numa_distance) + size);
>> >> numa_distance_cnt = 0;
>> >>
>> >> So can you tell me why you needed to make those changes?
>> >> Why move the assignment of numa_distance_cnt and size out of the if block?
> >
> > Please read the patch description. I actually wrote that down. :-)
Well, you said:
> > while at it, take numa_distance_cnt resetting in
> > numa_reset_distance() out of the if block to simplify the code a bit.
What are you talking about? What do you mean by "simplify the code a bit"?
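For the record, the sizing fix itself is the same in both versions: numa_distance is a flat cnt x cnt table, so the freed size has to be cnt * cnt * sizeof(numa_distance[0]). A minimal sketch of the assumed layout and lookup, for illustration only (not quoted from the tree):

	static int numa_distance_cnt;
	static u8 *numa_distance;	/* cnt * cnt entries, row-major */

	int __node_distance(int from, int to)
	{
		/* out-of-range nodes fall back to the default distances */
		if (from >= numa_distance_cnt || to >= numa_distance_cnt)
			return from == to ? LOCAL_DISTANCE : REMOTE_DISTANCE;
		return numa_distance[from * numa_distance_cnt + to];
	}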