Message-ID: <m1tz0jmfqs.fsf@fess.ebiederm.org>
Date: Fri, 07 Aug 2009 15:16:43 -0700
From: ebiederm@...ssion.com (Eric W. Biederman)
To: Bernhard Walle <bernhard.walle@....de>
Cc: Amerigo Wang <amwang@...hat.com>, Neil Horman <nhorman@...hat.com>,
linux-kernel@...r.kernel.org, tony.luck@...el.com,
linux-ia64@...r.kernel.org, akpm@...ux-foundation.org,
Ingo Molnar <mingo@...e.hu>,
Anton Vorontsov <avorontsov@...mvista.com>,
Andi Kleen <andi@...stfloor.org>,
Kexec Mailing List <kexec@...ts.infradead.org>
Subject: Re: [Patch 0/7] Implement crashkernel=auto
Bernhard Walle <bernhard.walle@....de> writes:
> Eric W. Biederman schrieb:
>>
>> With the current set of crashkernel= options we are asking the
>> distribution installer to perform magic. Moving as much of this logic
>> into a normal init script for better maintenance is desirable.
>
> Not (necessarily) the installer but the program that configures kdump.
> system-config-kdump on Red Hat, YaST on SUSE.
Right. Somehow I thought YaST was the installer; my mistake.
>> Bernhard does that sound useful to you?
>
> I don't see any problems. I don't know how much effort it is to free
> already-reserved crashkernel memory, but I guess it's not really
> complicated.
Right.
> Maybe that "1/32" should be specified on the command line like
>
>
> crashkernel=>>5
>
> (for 1/32*system_memory == system_memory>>5), OTOH I have no real strong
> opinion.
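As an aside, the proposed ">>N" form could be parsed along these lines. This is only an illustrative sketch; neither the `parse_crashkernel_shift` name nor the syntax itself exists upstream, and the function is an assumption based on the proposal above:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical parser for a "crashkernel=>>N" form: reserve
 * system_memory >> N bytes, i.e. 1/(2^N) of memory.  Returns 0 if
 * the argument is not in the shift form or the shift is nonsense. */
static unsigned long long
parse_crashkernel_shift(const char *arg, unsigned long long system_memory)
{
	unsigned long shift;

	if (strncmp(arg, ">>", 2) != 0)
		return 0;		/* not the shift form */
	shift = strtoul(arg + 2, NULL, 10);
	if (shift >= 64)
		return 0;		/* reject out-of-range shifts */
	return system_memory >> shift;
}
```

With 4 GB of memory, ">>5" yields 128 MB, matching the 1/32 example above.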
The idea is for the system to give us as much as it can stand, and
userspace gives the rest back. The maximum memory any particular
kernel can stand to give up is a tractable kernel-level problem, and
we can make it autotune like any other kernel tunable. What a crash
kernel actually needs depends entirely on the implementation.
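The "reserve as much as the kernel can stand" heuristic might look something like the sketch below: take 1/32 of memory and clamp it to a floor and ceiling. The 64 MB / 512 MB bounds and the `crash_autotune` name are illustrative assumptions, not values from the patch series:

```c
/* Sketch of an autotuned crashkernel reservation: 1/32 of system
 * memory, clamped to assumed bounds.  Userspace would later shrink
 * the reservation to what its crash kernel actually needs. */
#define CRASH_MIN	(64ULL << 20)	/* assumed floor: 64 MB */
#define CRASH_MAX	(512ULL << 20)	/* assumed ceiling: 512 MB */

static unsigned long long crash_autotune(unsigned long long system_memory)
{
	unsigned long long size = system_memory >> 5;	/* 1/32 */

	if (size < CRASH_MIN)
		size = CRASH_MIN;
	if (size > CRASH_MAX)
		size = CRASH_MAX;
	return size;
}
```

A 1 GB box would hit the floor (64 MB), a 4 GB box would get the plain 1/32 (128 MB), and a 64 GB box would be capped at the ceiling.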
Eric
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/