Message-ID: <20121220155147.GA2048@sgi.com>
Date: Thu, 20 Dec 2012 09:51:47 -0600
From: Cliff Wickman <cpw@....com>
To: HATAYAMA Daisuke <d.hatayama@...fujitsu.com>
Cc: kexec@...ts.infradead.org, ptesarik@...e.cz,
linux-kernel@...r.kernel.org, kumagai-atsushi@....nes.nec.co.jp,
vgoyal@...hat.com
Subject: Re: [PATCH] makedumpfile: request the kernel do page scans
On Thu, Dec 20, 2012 at 12:22:14PM +0900, HATAYAMA Daisuke wrote:
> From: Cliff Wickman <cpw@....com>
> Subject: Re: [PATCH] makedumpfile: request the kernel do page scans
> Date: Mon, 10 Dec 2012 09:36:14 -0600
> > On Mon, Dec 10, 2012 at 09:59:29AM +0900, HATAYAMA Daisuke wrote:
> >> From: Cliff Wickman <cpw@....com>
> >> Subject: Re: [PATCH] makedumpfile: request the kernel do page scans
> >> Date: Mon, 19 Nov 2012 12:07:10 -0600
> >>
> >> > On Fri, Nov 16, 2012 at 03:39:44PM -0500, Vivek Goyal wrote:
> >> >> On Thu, Nov 15, 2012 at 04:52:40PM -0600, Cliff Wickman wrote:
> >
> > Hi Hatayama,
> >
> > If ioremap/iounmap is the bottleneck, then perhaps you could do what
> > my patch does: it consolidates all the ranges of physical addresses
> > where the boot kernel's page structures reside (see make_kernel_mmap())
> > and passes them to the kernel, which then does a handful of ioremaps to
> > cover all of them. Then /proc/vmcore can look up the already-mapped
> > virtual address.
> > (Also note a kludge in get_mm_sparsemem() that verifies that each section
> > of the mem_map spans contiguous ranges of page structures; I had
> > trouble with some sections when I made that assumption.)
> >
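A minimal user-space sketch of the range-consolidation idea described
above; the names phys_range and consolidate_ranges are hypothetical
illustrations, not the actual make_kernel_mmap() code:

#include <stdlib.h>

struct phys_range {
	unsigned long long start;	/* physical start address */
	unsigned long long end;		/* physical end address, exclusive */
};

static int cmp_range(const void *a, const void *b)
{
	const struct phys_range *x = a, *y = b;

	if (x->start < y->start)
		return -1;
	return x->start > y->start;
}

/*
 * Sort the ranges and merge any that overlap or touch, so the kernel
 * only has to set up a handful of large ioremaps instead of one per
 * mem_map chunk.  Returns the consolidated count.
 */
static int consolidate_ranges(struct phys_range *r, int n)
{
	int i, out = 0;

	if (n == 0)
		return 0;
	qsort(r, n, sizeof(*r), cmp_range);
	for (i = 1; i < n; i++) {
		if (r[i].start <= r[out].end) {
			if (r[i].end > r[out].end)
				r[out].end = r[i].end;	/* merge */
		} else {
			r[++out] = r[i];	/* start a new range */
		}
	}
	return out + 1;
}
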
> > I'm attaching 3 patches that might be useful in your testing:
> > - 121210.proc_vmcore2 my current patch that applies to the released
> > makedumpfile 1.5.1
> > - 121207.vmcore_pagescans.sles applies to a 3.0.13 kernel
> > - 121207.vmcore_pagescans.rhel applies to a 2.6.32 kernel
> >
>
> I used the same patch set on the benchmark.
>
> BTW, I continue to have a machine reservation issue, so I think I
> cannot use a terabyte-memory machine at least this year.
>
> Also, your patch set does an ioremap per chunk of memory map, i.e. a
> number of consecutive pages at a time. On your terabyte machines, how
> large are those chunks? We have a memory consumption issue in the 2nd
> kernel, so we must decrease the amount of memory used. But looking
> quickly into the ioremap code, it does not appear to use 2MB or 1GB
> pages for the remapping. This means that remapping more than a terabyte
> of memory generates a huge amount of page tables. Or have you perhaps
> already investigated this?
>
> BTW, I have two ideas to solve this issue:
>
> 1) make a linear direct mapping for the old memory, and access the old
> memory via the linear direct mapping, not via ioremap.
>
> - adding remap code in vmcore, or passing the regions that need to
> be remapped using the memmap= kernel option to tell the 2nd kernel to
> map them in addition.
Good point. It would take over 30G of memory to map 16TB with 4k pages.
I recently tried to dump such a machine and ran out of kernel memory --
no wonder!
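The arithmetic behind that figure, assuming 8-byte PTEs and counting
only the bottom-level tables:

  16TB / 4KB per page  = 4 * 10^9 PTEs
  4 * 10^9 PTEs * 8B   = 32GB of page tables
  (with 2MB pages instead: 16TB / 2MB * 8B = 64MB)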
Do you have a patch for doing a linear direct mapping? Or can you name
existing kernel infrastructure to do such a mapping? I'm just looking
for a jumpstart to enhance the patch.
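For illustration, a hypothetical, untested sketch of idea 1 in the 2nd
kernel, assuming x86_64 and this era's init_memory_mapping(start, end)
interface (map_oldmem_linear is an invented name, not an existing
/proc/vmcore function):

#include <linux/mm.h>
#include <asm/page.h>
#include <asm/pgtable.h>

/*
 * Extend the kernel's linear direct mapping over a region of old
 * memory once, then read it through __va() instead of doing an
 * ioremap/iounmap per chunk.  On x86_64, init_memory_mapping()
 * builds the direct mapping with 2MB/1GB pages where the hardware
 * allows, so the page-table overhead stays small even for terabyte
 * regions.  Whether it is safe to call from /proc/vmcore setup is
 * exactly the open question here.
 */
static void *map_oldmem_linear(unsigned long paddr, unsigned long size)
{
	unsigned long start = paddr & PAGE_MASK;
	unsigned long end = PAGE_ALIGN(paddr + size);

	init_memory_mapping(start, end);

	return __va(paddr);
}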
-Cliff
>
> Or,
>
> 2) Support 2MB or 1GB pages in ioremap.
>
> Thanks.
> HATAYAMA, Daisuke
--
Cliff Wickman
SGI
cpw@....com
(651) 683-3824