Message-ID: <m1eii7j8ne.fsf@fess.ebiederm.org>
Date: Thu, 22 Apr 2010 17:48:53 -0700
From: ebiederm@...ssion.com (Eric W. Biederman)
To: Vivek Goyal <vgoyal@...hat.com>
Cc: Vitaly Mayatskikh <v.mayatskih@...il.com>,
linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, Haren Myneni <hbabu@...ibm.com>,
Neil Horman <nhorman@...driver.com>,
Cong Wang <amwang@...hat.com>, kexec@...ts.infradead.org
Subject: Re: [PATCH 0/5] Add second memory region for crash kernel

Vivek Goyal <vgoyal@...hat.com> writes:
> On Thu, Apr 22, 2010 at 03:07:11PM -0700, Eric W. Biederman wrote:
>> Vitaly Mayatskikh <v.mayatskih@...il.com> writes:
>> >
>> > This series of patches implements this approach. It also requires
>> > changes in the kexec utility to make this feature work, but it is
>> > backward-compatible: old versions of kexec will work with the new
>> > kernel. I will post the kexec-tools patch upstream separately.
>>
>> Have you tried loading a 64bit vmlinux directly into a higher address
>> range? There may be a bit or two missing but you should be able to
>> load a Linux kernel above 4GB. I tested the basics of that mechanism
>> when I made the 64bit relocatable kernel.
>
> I guess even if it works, for distributions it will become an additional
> liability to carry vmlinux (instead of a relocatable bzImage). So we
> shall have to find a way to make bzImage work.

As Peter pointed out, we actually have everything we need except a
bit of documentation and the flag that says this is a 64bit kernel.

From a testing perspective a 64bit vmlinux should work today without
changes. Once it is confirmed there is a solution with the 64bit
kernel, we just need a small patch to boot.txt and a few tweaks to
/sbin/kexec to handle a 64bit bzImage.
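
To make that concrete, here is a rough sketch (not actual /sbin/kexec
code; the xloadflags field, its offset, and the 64bit bit are
assumptions for illustration, pending the boot.txt patch) of how kexec
could probe a bzImage setup header for a 64bit-capable flag:

	#include <stdint.h>
	#include <string.h>

	#define SETUP_SIG_OFFSET   0x202    /* "HdrS" signature in the setup header */
	#define XLOADFLAGS_OFFSET  0x236    /* assumed location of a new flags field */
	#define XLF_KERNEL_64      (1 << 0) /* assumed "64bit entry point" bit */

	static int bzimage_is_64bit_capable(const uint8_t *image, size_t len)
	{
		uint16_t xloadflags;

		if (len < XLOADFLAGS_OFFSET + 2)
			return 0;
		if (memcmp(image + SETUP_SIG_OFFSET, "HdrS", 4) != 0)
			return 0;	/* not a bzImage setup header */

		/* x86 setup header fields are little-endian */
		memcpy(&xloadflags, image + XLOADFLAGS_OFFSET, 2);
		return xloadflags & XLF_KERNEL_64;
	}
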
>> I don't buy the argument that there is a direct connection between
>> the amount of memory you have and how much memory it takes to dump it.
>> Even an indirect connection seems suspicious.
>
> Memory requirements of user space might be of interest, though, e.g.
> for dump filtering tools. I vaguely remember that the filtering tool
> used to first traverse all the memory pages, create some internal data
> structures and then start dumping.
>
> So the memory required by the filtering tool might be directly
> proportional to the amount of memory present in the system.

Assuming your dump filtering tool creates a bitmap of the pages to be
dumped, you get a ratio of 32K to 1, or 3MB for 100GB and 32MB for
1TB. That is noticeable in the worst case but definitely not enough to
push us past 2GB.
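
Spelling that arithmetic out (a minimal sketch; the 4KB page size and
the one-bitmap-bit-per-page scheme are the assumptions behind the
32K-to-1 figure):

	#include <stdio.h>

	int main(void)
	{
		/* one bit per 4KB page: 4096 * 8 = 32768 bytes of RAM per bitmap byte */
		const unsigned long long ratio = 4096ULL * 8;
		const unsigned long long gib = 1ULL << 30;
		const unsigned long long ram[] = { 100 * gib, 1024 * gib }; /* 100GB, 1TB */

		for (int i = 0; i < 2; i++)
			printf("%llu GB of RAM -> %llu MB of bitmap\n",
			       ram[i] / gib, ram[i] / ratio / (1 << 20));
		return 0;
	}

That prints 3MB for 100GB and 32MB for 1TB, matching the figures above.
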
> Vitaly, have you really run into cases where the 2G upper limit is a
> concern? What is the configuration you have, how much memory does it
> have, and how much memory are you planning to reserve for the kdump
> kernel?

A good question.

Eric