Message-Id: <20090818092939.2efbe158.kamezawa.hiroyu@jp.fujitsu.com>
Date: Tue, 18 Aug 2009 09:29:39 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: Amerigo Wang <amwang@...hat.com>
Cc: "Eric W. Biederman" <ebiederm@...ssion.com>,
linux-kernel@...r.kernel.org, tony.luck@...el.com,
linux-ia64@...r.kernel.org, linux-mm@...ck.org,
Neil Horman <nhorman@...hat.com>,
Andi Kleen <andi@...stfloor.org>, akpm@...ux-foundation.org,
bernhard.walle@....de, Fenghua Yu <fenghua.yu@...el.com>,
Ingo Molnar <mingo@...e.hu>,
Anton Vorontsov <avorontsov@...mvista.com>
Subject: Re: [Patch 8/8] kexec: allow to shrink reserved memory
On Mon, 17 Aug 2009 17:50:21 +0800
Amerigo Wang <amwang@...hat.com> wrote:
> Eric W. Biederman wrote:
> > Amerigo Wang <amwang@...hat.com> writes:
> >
> >
> >> Not that simple; marking it as "__init" means it uses some "__init"
> >> data, which will be dropped after initialization.
> >>
> >
> > If we start with the assumption that we will be reserving too much and
> > will free the memory once we know how much we really need, I see a very
> > simple way to go about this. We ensure that the reservation of crash
> > kernel memory is done through a normal allocation so that we have
> > struct page entries for every page. On 32-bit x86 that is an extra 1MB
> > for a 128MB allocation.
> >
> > Then when it comes time to release that memory, we clear whatever magic
> > flags we have on the page (like PG_reserved) and call free_page.
> >
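
A minimal sketch of the release step Eric describes, assuming the range
has struct page entries for every page and was marked PG_reserved at
boot (the helper name is illustrative, not actual kexec code):

	/*
	 * Sketch: hand a boot-reserved physical range [start, end) back
	 * to the page allocator, mirroring the classic free_init_pages()
	 * pattern.
	 */
	static void free_reserved_range(unsigned long start, unsigned long end)
	{
		unsigned long addr;

		for (addr = start; addr < end; addr += PAGE_SIZE) {
			struct page *page = pfn_to_page(addr >> PAGE_SHIFT);

			ClearPageReserved(page);	/* drop the "magic flag" */
			init_page_count(page);		/* reset refcount to 1 */
			free_page((unsigned long)__va(addr));
			totalram_pages++;
		}
	}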
>
> Hmm, my MM knowledge is not good enough to judge whether this works...
> I need to read more of the MM source code.
>
> Can any MM people help?
>
Hm, a memory-hotplug guy is here.
May I ask a question?
 - How is the crash kernel's memory reserved at boot?
   Is it hidden from the system before mem_init()?
Thanks,
-Kame
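
For background on the reservation question: on x86 at the time of this
thread the crash kernel region is carved out very early in boot, well
before mem_init(), so the buddy allocator never sees those pages.
Roughly (simplified from arch/x86/kernel/setup.c:reserve_crashkernel();
not the verbatim code):

	/* crash_base/crash_size come from parse_crashkernel() */
	reserve_bootmem_generic(crash_base, crash_size, BOOTMEM_DEFAULT);

	crashk_res.start = crash_base;
	crashk_res.end   = crash_base + crash_size - 1;
	insert_resource(&iomem_resource, &crashk_res);

Since the range is claimed from the bootmem allocator up front, it is
indeed hidden from the rest of the system before mem_init() runs.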