Message-ID: <4A1A32AB.4090905@kernel.org>
Date: Sun, 24 May 2009 22:54:51 -0700
From: Yinghai Lu <yinghai@...nel.org>
To: Ingo Molnar <mingo@...e.hu>
CC: Pekka J Enberg <penberg@...helsinki.fi>,
Rusty Russell <rusty@...tcorp.com.au>,
Linus Torvalds <torvalds@...ux-foundation.org>,
"H. Peter Anvin" <hpa@...or.com>, Jeff Garzik <jgarzik@...ox.com>,
Alexander Viro <viro@....linux.org.uk>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: [GIT PULL] scheduler fixes
Ingo Molnar wrote:
>
> Ok, i think this all looks pretty realistic - but there's quite a
> bit of layering on top of pending changes in the x86 and irq trees.
> We could do this on top of those topic branches in -tip, and rebase
> in the merge window. Or delay it to .32.
We would have to move setup_per_cpu_areas() after mem_init(), and
limit the bootmem-related calls in setup_arch() somehow.
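
As a rough illustration of the ordering this implies (hypothetical stub
functions only, standing in for the real kernel code; the exact placement
of slab init follows Pekka's patch and is illustrative):

/* sketch: stubs showing the proposed init ordering, not real code */
static void setup_arch_mem(void)      { /* e820, early reservations, page tables */ }
static void mem_init(void)            { /* page allocator comes up here */ }
static void kmem_cache_init(void)     { /* slab becomes usable */ }
static void setup_per_cpu_areas(void) { /* can now use the page allocator */ }
static void setup_arch_rest(void)     { /* the rest of today's setup_arch() */ }

static void start_kernel_sketch(void)
{
	setup_arch_mem();
	mem_init();
	kmem_cache_init();
	setup_per_cpu_areas();	/* moved to after mem_init() */
	setup_arch_rest();
}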
>
> ... plus i think we are _very_ close to being able to remove all of
> bootmem on x86 (with some compatibility/migration mechanism in
> place). Which bootmem calls do we have before kmalloc init with
> Pekka's patch applied? I think it's mostly the page table init code.
We need to decide what should go into setup_arch_mem() and what into setup_arch_rest():
everything before initmem_init() ==> setup_arch_mem();
after initmem_init(), the reserve_bootmem()-related calls should stay in setup_arch_mem(),
and we try to move the other calls in setup_arch() into setup_arch_rest(), which will
be called after mem_init().
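
A minimal sketch of that split (placeholder bodies only; the real calls
are elided, this is just to show which side of the boundary things land on):

void setup_arch_mem(void)
{
	/* everything that today runs up to and including initmem_init():
	 * boot params, the e820 map, early reservations */

	/* reserve_bootmem()-related reservations stay here */
}

void setup_arch_rest(void)
{
	/* the calls that today follow initmem_init() in setup_arch(),
	 * moved here so they run after mem_init() and can use the
	 * page allocator / slab */
}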
>
> ( beyond the page allocator internal use - where we could use
> straight e820 based APIs that clip memory off from the beginning
> of existing e820 RAM ranges - enriched with NUMA/SRAT locality
> info. )
Yes, it is there. We need to make the early_res array dynamic.
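
A minimal userland-style sketch of what a dynamic early_res array could
look like (names and the growth policy are illustrative, not the actual
arch/x86 code; in the kernel the bigger table would have to be carved out
of e820 RAM early on, not obtained from realloc()):

#include <stdlib.h>
#include <string.h>

struct early_res {
	unsigned long long start, end;
	char name[16];
};

static struct early_res *early_res;
static int max_early_res;	/* current capacity */
static int nr_early_res;	/* entries in use */

static int reserve_early_sketch(unsigned long long start,
				unsigned long long end, const char *name)
{
	if (nr_early_res >= max_early_res) {
		int new_max = max_early_res ? max_early_res * 2 : 32;
		struct early_res *new_res;

		/* grow the table on demand instead of a fixed-size array */
		new_res = realloc(early_res, new_max * sizeof(*new_res));
		if (!new_res)
			return -1;
		early_res = new_res;
		max_early_res = new_max;
	}
	early_res[nr_early_res].start = start;
	early_res[nr_early_res].end = end;
	strncpy(early_res[nr_early_res].name, name,
		sizeof(early_res[nr_early_res].name) - 1);
	early_res[nr_early_res].name[sizeof(early_res[nr_early_res].name) - 1] = '\0';
	nr_early_res++;
	return 0;
}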
YH