Message-ID: <4A3B49BA.40100@kernel.org>
Date: Fri, 19 Jun 2009 01:18:02 -0700
From: Yinghai Lu <yinghai@...nel.org>
To: Nathan Lynch <ntl@...ox.com>
CC: Christoph Lameter <cl@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>, mingo@...e.hu,
mel@....ul.ie, tglx@...utronix.de, hpa@...or.com,
suresh.b.siddha@...el.com, linux-kernel@...r.kernel.org,
viro@...iv.linux.org.uk, rusty@...tcorp.com.au, steiner@....com,
rientjes@...gle.com, containers@...ts.linux-foundation.org
Subject: Re: [PATCH] mm: clear N_HIGH_MEMORY map before we set it again -v4

Nathan Lynch wrote:
> Yinghai Lu <yinghai@...nel.org> writes:
>> SRAT tables may contain nodes of very small size. The arch code may
>> decide not to activate such a node. However, the early boot code
>> currently sets N_HIGH_MEMORY for such nodes, so they appear to be
>> active even though they have no present pages.
>>
>> On 64-bit, N_HIGH_MEMORY == N_NORMAL_MEMORY, so this works there too.
>>
>> v4: update description according to Christoph
>>
>> Signed-off-by: Yinghai Lu <Yinghai@...nel.org>
>> Tested-by: Jack Steiner <steiner@....com>
>> Acked-by: Christoph Lameter <cl@...ux-foundation.org>
>>
>> ---
>> mm/page_alloc.c | 5 +++++
>> 1 file changed, 5 insertions(+)
>>
>> Index: linux-2.6/mm/page_alloc.c
>> ===================================================================
>> --- linux-2.6.orig/mm/page_alloc.c
>> +++ linux-2.6/mm/page_alloc.c
>> @@ -4041,6 +4041,11 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
>>  					early_node_map[i].start_pfn,
>>  					early_node_map[i].end_pfn);
>>  
>> +	/*
>> +	 * find_zone_movable_pfns_for_nodes()/early_calculate_totalpages()
>> +	 * will initialize this nodemask, so clear it first.
>> +	 */
>> +	nodes_clear(node_states[N_HIGH_MEMORY]);
>>  	/* Initialise every node */
>>  	mminit_verify_pageflags_layout();
>>  	setup_nr_node_ids();
>
> This patch breaks the cpuset.mems cgroup attribute on an i386 KVM guest.
>
> With v2.6.30:
>
> # uname -r
> 2.6.30
> # cat /cgroup/cpuset.mems
> 0
> # mkdir /cgroup/test
> # for i in cpus mems ; do cat /cgroup/cpuset.$i > /cgroup/test/cpuset.$i ; done
> # echo $$ > /cgroup/test/tasks
> # echo $?
> 0
>
> With Linus' tree as pulled today:
>
> # uname -r
> 2.6.30-06725-g1d89b30
> # cat /cgroup/cpuset.mems
>
> # mkdir /cgroup/test
> # for i in cpus mems ; do cat /cgroup/cpuset.$i > /cgroup/test/cpuset.$i ; done
> # echo $$ > /cgroup/test/tasks
> -bash: echo: write error: No space left on device
>
> (Note that in addition to the ENOSPC error, /cgroup/cpuset.mems is empty
> rather than '0' in the second test.)
>
> I bisected to the commit containing this change. Reverting fixes the
> problem.
>
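That failure mode is consistent with node_states[N_HIGH_MEMORY] ending up
empty after boot: cpuset copies that mask into the root cpuset during init,
and attaching a task to a cpuset whose mems_allowed is empty is rejected
with -ENOSPC. Roughly, the two spots involved look like this (paraphrased
from kernel/cpuset.c from memory, not a verbatim quote; signatures are
approximate for 2.6.30):

	/* boot: the root cpuset inherits the mask the patch clears/rebuilds */
	void __init cpuset_init_smp(void)
	{
		top_cpuset.mems_allowed = node_states[N_HIGH_MEMORY];
	}

	/* attach: an empty mems_allowed makes the write to 'tasks' fail */
	static int cpuset_can_attach(struct cgroup_subsys *ss,
				     struct cgroup *cont, struct task_struct *tsk)
	{
		struct cpuset *cs = cgroup_cs(cont);

		if (cpumask_empty(cs->cpus_allowed) ||
		    nodes_empty(cs->mems_allowed))
			return -ENOSPC;	/* "No space left on device" */

		return 0;
	}

So if node 0's bit is never set again after the nodes_clear() added above,
both symptoms follow: cpuset.mems reads back empty, and the attach fails.
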
Can you use the following patch to see what happens to that nodemask?
YH
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a5f3c27..eb89e8b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4189,6 +4189,7 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
 {
 	unsigned long nid;
 	int i;
+	char buf[512];
 
 	/* Sort early_node_map as initialisation assumes it is sorted */
 	sort_node_map();
@@ -4244,6 +4245,9 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
 	 * find_zone_movable_pfns_for_nodes()/early_calculate_totalpages()
 	 * will initialize this nodemask, so clear it first.
 	 */
+	memset(buf, 0, sizeof(buf));
+	nodemask_scnprintf(buf, sizeof(buf), node_states[N_HIGH_MEMORY]);
+	printk(KERN_DEBUG "before clear: node_states[%d]: %s\n", N_HIGH_MEMORY, buf);
 	nodes_clear(node_states[N_HIGH_MEMORY]);
 	/* Initialise every node */
 	mminit_verify_pageflags_layout();
@@ -4258,6 +4262,9 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
 			node_set_state(nid, N_HIGH_MEMORY);
 		check_for_regular_memory(pgdat);
 	}
+	memset(buf, 0, sizeof(buf));
+	nodemask_scnprintf(buf, sizeof(buf), node_states[N_HIGH_MEMORY]);
+	printk(KERN_DEBUG "after online check: node_states[%d]: %s\n", N_HIGH_MEMORY, buf);
 }
 
 static int __init cmdline_parse_core(char *p, unsigned long *core)
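
If the "after online check" line also comes up empty, the bug is on the
page_alloc side and cpuset is only the messenger. Independent of this
debug patch, the same mask is visible from userspace via the sysfs node
state attribute (assuming CONFIG_HIGHMEM, as in the i386 configs here);
on a good boot it should list node 0:

	# cat /sys/devices/system/node/has_high_memory
	0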