Message-ID: <20131115183002.GE6637@redhat.com>
Date: Fri, 15 Nov 2013 13:30:02 -0500
From: Vivek Goyal <vgoyal@...hat.com>
To: "H. Peter Anvin" <hpa@...or.com>
Cc: Yinghai Lu <yinghai@...nel.org>, Ingo Molnar <mingo@...nel.org>,
jerry.hoemann@...com, Pekka Enberg <penberg@...nel.org>,
Rob Landley <rob@...dley.net>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
x86 maintainers <x86@...nel.org>,
Matt Fleming <matt.fleming@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"list@...ederm.org:DOCUMENTATION" <linux-doc@...r.kernel.org>,
"list@...ederm.org:MEMORY MANAGEMENT" <linux-mm@...ck.org>,
linux-efi@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/3] Early use of boot service memory
On Fri, Nov 15, 2013 at 09:40:49AM -0800, H. Peter Anvin wrote:
> On 11/15/2013 09:33 AM, Yinghai Lu wrote:
> >
> > If the system supports the Intel IOMMU, we only need that 72M for
> > SWIOTLB or the AMD workaround.
> > If the user really cares, on an Intel IOMMU enabled system they could
> > use "crashkernel=0,low" to get that 72M back.
> >
> > And that 72M is under 4G instead of under 896M.
> >
> > So reserving 72M is not better than reserving 128M?
> >
> >
>
> Those 72M are in addition to 128M, which does add up quite a bit.
> However, the presence of a working IOMMU in the system is something that
> should be possible to know at setup time.
>
And IOMMU support is very flaky with kdump. And IOMMUs can be turned
off on the command line, which would then force one to remove
"crashkernel=0,low" as well. So a change to one command line option
forces a change to another. It is complicated.
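
For illustration (crashkernel=X,high, crashkernel=0,low and
intel_iommu= are existing kernel parameters; the sizes here are just
examples):

  # IOMMU on: the user can give the low reservation back
  crashkernel=128M,high crashkernel=0,low intel_iommu=on

  # IOMMU turned off later: swiotlb needs low memory again, so
  # "crashkernel=0,low" has to be dropped at the same time
  crashkernel=128M,high intel_iommu=off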
Also, there are very few systems which work with the IOMMU on, and a
lot more which work without it. We have all these DMAR issues, and
still nobody has been able to address the IOMMU problems properly.
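
Just as a rough sketch of what "know at setup time" could look like
(hypothetical code, not something in the tree; it treats the presence
of an ACPI DMAR table as "an Intel IOMMU could be usable", which, as
the command line example above shows, is not a sufficient signal by
itself):

  #include <linux/acpi.h>
  #include <linux/init.h>

  /* Hypothetical: we only care whether the DMAR table exists. */
  static int __init dmar_found(struct acpi_table_header *table)
  {
          return 0;
  }

  static bool __init iommu_expected(void)
  {
          /* acpi_table_parse() returns 0 if the table was found
           * and the handler ran; intel_iommu=off could still
           * disable the IOMMU later, so this is only a hint. */
          return acpi_table_parse(ACPI_SIG_DMAR, dmar_found) == 0;
  }

The default low reservation could then depend on iommu_expected(), but
the command line interactions above remain.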
> Now, this was discussed partly in the context of VMs. I want to say, as
> I have said again and again: the right way to dump a VM is with
> hypervisor assistance rather than with an in-image dumper, which is both
> expensive and may be corrupted by the failure.
I agree that taking assistance from the hypervisor should be useful.

One reason we use kdump for VMs too is that it makes life simple. There
is no difference in how we configure, start and manage crash dumps on
bare metal or inside a VM. And in practice I have not heard of many
failures of kdump in VM environments.

So while reliability remains a theoretical concern, in practice it has
not been a real one, and that is one reason I think we have not seen a
major push for an alternative method in VM environments.
>
> It would be good if the various VMs with interest in Linux would agree
> on a mechanism for launching a dumper. This can be done either inband
> (on the execution of a specific hypercall, the hypervisor terminates I/O
> to the guest, inserts a dumper into the address space and launches it)
> or out-of-band (the hypervisor itself, or an assistant program, writes a
> dump file) or as a hybrid (a new dump guest is launched with the
> hypervisor-written or hypervisor-preserved crashed guest image somehow
> passed to it.)
virsh can already take dumps of a KVM guest, so the hypervisor calling
out to an assistant program might help here.
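
For example (guest name and output path made up):

  # memory-only dump of a running guest, in an ELF format that the
  # crash utility can read
  virsh dump --memory-only myguest /var/crash/myguest.core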
Anyway, we will gladly use any new dump mechanism for VMs once things
start working seamlessly. Until all of this materializes, forcing users
to reserve that extra 72M concerns me (in both bare-metal and
virtualized environments).
Thanks
Vivek