Message-ID: <48B2FD8F.4000808@sgi.com>
Date: Mon, 25 Aug 2008 11:44:31 -0700
From: Mike Travis <travis@....com>
To: Peter Zijlstra <peterz@...radead.org>
CC: Andrew Morton <akpm@...ux-foundation.org>,
David Miller <davem@...emloft.net>,
kosaki.motohiro@...fujitsu.com, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, cl@...ux-foundation.org,
tokunaga.keiich@...fujitsu.com
Subject: Re: [RFC][PATCH 2/2] quicklist shouldn't be proportional to # of CPUs

Peter Zijlstra wrote:
> On Thu, 2008-08-21 at 00:27 -0700, Andrew Morton wrote:
>> On Thu, 21 Aug 2008 00:13:22 -0700 (PDT) David Miller <davem@...emloft.net> wrote:
>>
>>> From: Andrew Morton <akpm@...ux-foundation.org>
>>> Date: Wed, 20 Aug 2008 23:46:15 -0700
>>>
>>>> On Wed, 20 Aug 2008 20:08:13 +0900 KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com> wrote:
>>>>
>>>>> + num_cpus_per_node = cpus_weight_nr(node_to_cpumask(node));
>>>> sparc64 allmodconfig:
>>>>
>>>> mm/quicklist.c: In function `max_pages':
>>>> mm/quicklist.c:44: error: invalid lvalue in unary `&'
>>>>
>>>> we seem to have made a spectacular mess of cpumasks lately.
>>> It should explode similarly on x86, since it also defines node_to_cpumask()
>>> as an inline function.
>>>
>>> IA64 seems to be one of the few platforms to define this as a macro
>>> evaluating to the node-to-cpumask array entry, so it's clear what
>>> platform Motohiro-san did build testing on :-)
>> Seems to compile OK on x86_32, x86_64, ia64 and powerpc for some reason.
>>
>> This seems to fix things on sparc64:
>>
>> --- a/mm/quicklist.c~mm-quicklist-shouldnt-be-proportional-to-number-of-cpus-fix
>> +++ a/mm/quicklist.c
>> @@ -28,7 +28,7 @@ static unsigned long max_pages(unsigned
>> unsigned long node_free_pages, max;
>> int node = numa_node_id();
>> struct zone *zones = NODE_DATA(node)->node_zones;
>> - int num_cpus_per_node;
>> + cpumask_t node_cpumask;
>>
>> node_free_pages =
>> #ifdef CONFIG_ZONE_DMA
>> @@ -41,8 +41,8 @@ static unsigned long max_pages(unsigned
>>
>> max = node_free_pages / FRACTION_OF_NODE_MEM;
>>
>> - num_cpus_per_node = cpus_weight_nr(node_to_cpumask(node));
>> - max /= num_cpus_per_node;
>> + node_cpumask = node_to_cpumask(node);
>> + max /= cpus_weight_nr(node_cpumask);
>>
>> return max(max, min_pages);
>> }
>
> humm, I thought we wanted to keep cpumask_t stuff away from our stack -
> since on insanely large SGI boxen (/me looks at mike) the thing becomes
> 512 bytes.
Yes, thanks for pointing that out! I did send out an alternate version
that should keep the cpumask_t off the stack for those arches that need
to worry about it (using the node_to_cpumask_ptr function). I should
probably devote some time to documenting some of these gotchas in one
of the Doc.../ files.
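
A rough sketch (not the actual patch from that thread) of how a
node_to_cpumask_ptr-based max_pages() might look; the variable name
cpumask_on_node is illustrative, and node_to_cpumask_ptr is assumed to
behave as the 2008-era macro did, declaring a const cpumask_t pointer
for the given node rather than putting a full copy of the mask on the
stack:

static unsigned long max_pages(unsigned long min_pages)
{
	unsigned long node_free_pages, max;
	int node = numa_node_id();
	struct zone *zones = NODE_DATA(node)->node_zones;
	/* Declares "const cpumask_t *cpumask_on_node" pointing at the
	 * node's mask, so no NR_CPUS-sized copy lands on the stack. */
	node_to_cpumask_ptr(cpumask_on_node, node);

	node_free_pages =
#ifdef CONFIG_ZONE_DMA
		zone_page_state(&zones[ZONE_DMA], NR_FREE_PAGES) +
#endif
#ifdef CONFIG_ZONE_DMA32
		zone_page_state(&zones[ZONE_DMA32], NR_FREE_PAGES) +
#endif
		zone_page_state(&zones[ZONE_NORMAL], NR_FREE_PAGES);

	max = node_free_pages / FRACTION_OF_NODE_MEM;

	/* Dereferencing the pointer hands cpus_weight_nr() an lvalue,
	 * avoiding the "invalid lvalue in unary `&'" error that the
	 * return value of an inline node_to_cpumask() triggered. */
	max /= cpus_weight_nr(*cpumask_on_node);

	return max(max, min_pages);
}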
Mike
--