Message-ID: <CAE9FiQU96BjThKbOGBhc6RHBey2+bvM6J6ZYaLb=xGuwahTuLQ@mail.gmail.com>
Date: Wed, 3 Apr 2013 13:38:56 -0700
From: Yinghai Lu <yinghai@...nel.org>
To: Vivek Goyal <vgoyal@...hat.com>
Cc: Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...e.hu>,
"H. Peter Anvin" <hpa@...or.com>, WANG Chao <chaowang@...hat.com>,
"Eric W. Biederman" <ebiederm@...ssion.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 3/4] x86, kdump: Restore crashkernel= to allocate low
On Wed, Apr 3, 2013 at 10:47 AM, Vivek Goyal <vgoyal@...hat.com> wrote:
> So what I am saying is that all our code is written assuming there is one
> single reserved range. Now, if we need to reserve two ranges, then let
> us make it generic to support multiple ranges instead of hardcoding
> things and assuming there can be 2 ranges. That would be a more generic
> solution.
I don't think we have a case where we need to support more than two ranges.
We only need one big range above 4G to hold the second kernel and its
initrd, and one additional low range that is only for swiotlb or other
things the second kernel will use.
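
For illustration only (this is not the actual patch), a minimal sketch of how
the two reservations could be done with memblock; the sizes, the CRASH_ALIGN
value and the function name reserve_crash_ranges() are hypothetical
placeholders:

#include <linux/errno.h>
#include <linux/memblock.h>
#include <linux/types.h>

#define CRASH_ALIGN	(16ULL << 20)	/* hypothetical 16M alignment */

static int __init reserve_crash_ranges(unsigned long long high_size,
				       unsigned long long low_size)
{
	phys_addr_t high_base, low_base;

	/* One big range above 4G for the second kernel and its initrd. */
	high_base = memblock_find_in_range(1ULL << 32, (phys_addr_t)-1,
					   high_size, CRASH_ALIGN);
	if (!high_base)
		return -ENOMEM;
	memblock_reserve(high_base, high_size);

	/* One small range below 4G, only for swiotlb/DMA buffers that the
	 * second kernel will need. */
	low_base = memblock_find_in_range(CRASH_ALIGN, 1ULL << 32,
					  low_size, CRASH_ALIGN);
	if (!low_base)
		return -ENOMEM;
	memblock_reserve(low_base, low_size);

	return 0;
}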
>
> So how about this.
>
> - In 3.9, just implement crashkernel=X;high. Don't auto-reserve any low
>   memory. Support reservation of a single range only. It could be either
>   high or low.
>
> - Those who are using an IOMMU can use crashkernel=X;high. Old code
>   can continue to use crashkernel=X and get memory reserved in low
>   memory areas.
That will not handle the case of a big system that uses crashkernel=X;high
but whose kdump kernel does not work with the IOMMU, so it still needs some
low memory for swiotlb.
>
> - In 3.10, add a feature to support multiple crash-reserved ranges.
Again, we only need one high range and one low range.
We don't need to support more than two ranges for the crash kernel.
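
To make the two-range idea concrete, a hypothetical usage sketch (the option
spelling follows the proposal in this thread, and the sizes are just
placeholders):

    crashkernel=512M;high   reserve 512M above 4G for the second kernel + initrd
    crashkernel=512M        old behavior: reserve 512M in low memory

plus, on systems whose kdump kernel cannot use the IOMMU, a small separate
low range below 4G for swiotlb.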
Thanks
Yinghai