Message-ID: <20080627134212.GC5801@redhat.com>
Date:	Fri, 27 Jun 2008 09:42:12 -0400
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Bernhard Walle <bwalle@...e.de>
Cc:	x86@...nel.org, kexec@...ts.infradead.org,
	linux-kernel@...r.kernel.org,
	"Eric W. Biederman" <ebiederm@...ssion.com>
Subject: Re: [PATCH] x86: Find offset for crashkernel reservation
	automatically

On Fri, Jun 27, 2008 at 09:32:56AM -0400, Vivek Goyal wrote:
> On Thu, Jun 26, 2008 at 09:54:08PM +0200, Bernhard Walle wrote:
> > This patch removes the need for the crashkernel=...@offset parameter to define
> > a fixed offset for the crashkernel reservation. The feature can be used together
> > with a relocatable kernel, where kexec-tools relocates the kernel and reads
> > the actual offset from /proc/iomem.
> > 
> > The use case is a kernel whose .text+.data+.bss ends above 16M of physical
> > memory (a debug kernel with lockdep on x86_64 can cause that), which was a
> > major pain for autoconfiguration in our distribution.
> > 
> > Also, this patch unifies the crashdump architectures a bit, since IA64 has had
> > these semantics from the very beginning of its kdump port.
> > 
> > Please provide feedback!
> > 
> 
> Hi Bernhard,
> 
> This looks like a good idea. That means distributions don't have to
> hardcode the crash kernel base at 16MB, and the decision to find free
> memory can be left to the kernel. Users will also find it easier that way.
> 
> > 
> > Signed-off-by: Bernhard Walle <bwalle@...e.de>
> > ---
> >  arch/x86/kernel/setup.c |   70 +++++++++++++++++++++++++++++++++++------------
> >  1 files changed, 52 insertions(+), 18 deletions(-)
> > 
> > diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
> > index a81d82c..c30bb7b 100644
> > --- a/arch/x86/kernel/setup.c
> > +++ b/arch/x86/kernel/setup.c
> > @@ -435,6 +435,34 @@ static inline unsigned long long get_total_mem(void)
> >  }
> >  
> >  #ifdef CONFIG_KEXEC
> > +
> > +/**
> > + * Reserve @size bytes of crashkernel memory at any suitable offset.
> > + *
> > + * @size: Size of the crashkernel memory to reserve.
> > + * Returns the base address on success, and -1ULL on failure.
> > + */
> > +unsigned long long find_and_reserve_crashkernel(unsigned long long size)
> > +{
> > +	const unsigned long long alignment = 16<<20; 	/* 16M */
> > +	unsigned long long start = 0LL;
> > +
> > +	while (1) {
> > +		int ret;
> > +
> > +		start = find_e820_area(start, ULONG_MAX, size, alignment);
> > +		if (start == -1ULL)
> > +			return start;
> > +
> > +		/* try to reserve it */
> > +		ret = reserve_bootmem_generic(start, size, BOOTMEM_EXCLUSIVE);
> > +		if (ret >= 0)
> > +			return start;
> > +
> > +		start += alignment;
> > +	}
> 
> I think both i386 and x86_64 relocatable kernels had some upper limits on
> where they could be loaded (Eric had mentioned those in the patch; I
> don't remember the exact values). It might be a good idea to capture that
> here, making sure "start" does not cross those limits, and otherwise not
> reserve the memory.
> 
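
For illustration only, the kind of cap suggested above might look roughly
like this (CRASHKERNEL_ADDR_MAX is a made-up placeholder for whatever the
real per-arch load limit would be, not an existing symbol):

	while (1) {
		int ret;

		start = find_e820_area(start, ULONG_MAX, size, alignment);
		if (start == -1ULL)
			return start;

		/* hypothetical cap: give up instead of reserving memory
		 * the loaded kernel could not run from */
		if (start + size > CRASHKERNEL_ADDR_MAX)
			return -1ULL;

		/* try to reserve it */
		ret = reserve_bootmem_generic(start, size, BOOTMEM_EXCLUSIVE);
		if (ret >= 0)
			return start;

		start += alignment;
	}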

Thinking about it more, let me step back. I think it is not a good idea to
have this kernel make decisions about the capabilities of the kernel being
loaded. There is no way we can know now whether a kernel is capable of
running from a given memory location or not; that is highly variable. So
please ignore the above comment.

This patch looks good to me.
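
For anyone following along: the end result is that one can boot with just
"crashkernel=128M" (example size) instead of "crashkernel=128M@16M", the
kernel picks a suitable base, and the reservation shows up as the
"Crash kernel" line in /proc/iomem, which kexec-tools then reads back.
A simplified sketch of that read-back (illustration only, not the actual
kexec-tools code):

	#include <stdio.h>
	#include <string.h>

	/* Scan /proc/iomem for the "Crash kernel" resource and return its
	 * physical range; error handling trimmed for brevity. */
	static int get_crash_range(unsigned long long *base,
				   unsigned long long *end)
	{
		char line[128];
		FILE *fp = fopen("/proc/iomem", "r");

		if (!fp)
			return -1;
		while (fgets(line, sizeof(line), fp)) {
			if (strstr(line, "Crash kernel") &&
			    sscanf(line, "%llx-%llx", base, end) == 2) {
				fclose(fp);
				return 0;
			}
		}
		fclose(fp);
		return -1;
	}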

Acked-by: Vivek Goyal <vgoyal@...hat.com>

Thanks
Vivek
--
