Message-ID: <20131118152255.GA32168@redhat.com>
Date: Mon, 18 Nov 2013 10:22:55 -0500
From: Vivek Goyal <vgoyal@...hat.com>
To: "H. Peter Anvin" <hpa@...or.com>
Cc: Yinghai Lu <yinghai@...nel.org>, Ingo Molnar <mingo@...nel.org>,
jerry.hoemann@...com, Pekka Enberg <penberg@...nel.org>,
Rob Landley <rob@...dley.net>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
x86 maintainers <x86@...nel.org>,
Matt Fleming <matt.fleming@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"list@...ederm.org:DOCUMENTATION" <linux-doc@...r.kernel.org>,
"list@...ederm.org:MEMORY MANAGEMENT" <linux-mm@...ck.org>,
linux-efi@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/3] Early use of boot service memory
On Fri, Nov 15, 2013 at 11:16:25AM -0800, H. Peter Anvin wrote:
> On 11/15/2013 10:46 AM, H. Peter Anvin wrote:
> > On 11/15/2013 10:30 AM, Vivek Goyal wrote:
> >>
> >> I agree that taking assistance of the hypervisor should be useful.
> >>
> >> One reason we use kdump for VMs too is that it makes life simple. There
> >> is no difference in how we configure, start and manage crash dumps
> >> on bare metal or inside a VM. And in practice we have not heard of many
> >> failures of kdump in VM environments.
> >>
> >> So while reliability remains a theoretical concern, in practice it
> >> has not been a real one, and that's one reason I think we have
> >> not seen a major push for an alternative method in VM environments.
> >>
> >
> > Another reason, again, is that it doesn't sit on all that memory.
> >
>
> This led me to a potentially interesting idea. If we can tell the
> hypervisor about which memory blocks belong to kdump, we can still use
> kdump in its current form with only a few hypervisor calls thrown in.
>
> One set of calls would mark memory ranges as belonging to kdump. This
> would (a) make them protected,
This sounds good. We already have arch hooks to map/unmap crash kernel
ranges, crash_map_reserved_pages() and crash_unmap_reserved_pages(). Now x86
should be able to use these hooks to tell the hypervisor to remove mappings
for certain physical ranges and to remap them when needed. s390
already does some magic there.
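To illustrate, a minimal user-space sketch of the idea: the existing arch
hooks simply forward to a hypervisor notification. hv_map_range() and
hv_unmap_range() below are hypothetical stand-ins for whatever paravirt or
hypercall interface the hypervisor would actually expose, and the base/size
values are arbitrary placeholders, not real crashkernel addresses.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical hypervisor interface: stand-ins for whatever paravirt
 * or hypercall mechanism the hypervisor actually provides. */
static bool hv_range_mapped = true;

static void hv_unmap_range(unsigned long start, unsigned long len)
{
	/* Tell the hypervisor this range will not be accessed, so it
	 * can protect it and drop its physical backing. */
	hv_range_mapped = false;
	printf("hv: unmap %#lx + %#lx\n", start, len);
}

static void hv_map_range(unsigned long start, unsigned long len)
{
	/* Make the range accessible again, e.g. just before loading
	 * the crash kernel or jumping into it. */
	hv_range_mapped = true;
	printf("hv: map %#lx + %#lx\n", start, len);
}

/* Sketch of the existing arch hooks forwarding to the hypervisor.
 * In the kernel these are called around crash-kernel load/unload;
 * the base and size here are made-up placeholders. */
#define CRASHK_BASE 0x1000000UL
#define CRASHK_SIZE (72UL << 20)

void crash_unmap_reserved_pages(void)
{
	hv_unmap_range(CRASHK_BASE, CRASHK_SIZE);
}

void crash_map_reserved_pages(void)
{
	hv_map_range(CRASHK_BASE, CRASHK_SIZE);
}
```

The point of reusing these hooks is that the kdump flow itself does not
change; the hypervisor calls are just added at the existing map/unmap
boundaries.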
> and (b) tell the hypervisor that these
> memory ranges will not be accessed and don't need to occupy physical RAM.
I am not sure if we need to do anything here. I am assuming that most of
the crashkernel memory has not been touched and does not occupy physical
memory until a crash actually happens. We will probably touch only 20-30MB
of crashkernel memory during kernel load, and that should ultimately make
its way to swap at some point.
And if that's true, then reserving an extra 72MB due to crashkernel=X,high
should not be a big issue in KVM guests. It will still be an issue on
physical servers, though.
Thanks
Vivek