Message-Id: <20121220.122214.503837449.d.hatayama@jp.fujitsu.com>
Date:	Thu, 20 Dec 2012 12:22:14 +0900 (JST)
From:	HATAYAMA Daisuke <d.hatayama@...fujitsu.com>
To:	cpw@....com
Cc:	kexec@...ts.infradead.org, ptesarik@...e.cz,
	linux-kernel@...r.kernel.org, kumagai-atsushi@....nes.nec.co.jp,
	vgoyal@...hat.com
Subject: Re: [PATCH] makedumpfile: request the kernel do page scans

From: Cliff Wickman <cpw@....com>
Subject: Re: [PATCH] makedumpfile: request the kernel do page scans
Date: Mon, 10 Dec 2012 09:36:14 -0600
> On Mon, Dec 10, 2012 at 09:59:29AM +0900, HATAYAMA Daisuke wrote:
>> From: Cliff Wickman <cpw@....com>
>> Subject: Re: [PATCH] makedumpfile: request the kernel do page scans
>> Date: Mon, 19 Nov 2012 12:07:10 -0600
>> 
>> > On Fri, Nov 16, 2012 at 03:39:44PM -0500, Vivek Goyal wrote:
>> >> On Thu, Nov 15, 2012 at 04:52:40PM -0600, Cliff Wickman wrote:
> 
> Hi Hatayama,
> 
> If ioremap/iounmap is the bottleneck then perhaps you could do what
> my patch does: it consolidates all the ranges of physical addresses
> where the boot kernel's page structures reside (see make_kernel_mmap())
> and passes them to the kernel, which then does a handfull of ioremaps's to
> cover all of them.  Then /proc/vmcore could look up the already-mapped
> virtual address.
> (also note a kludge in get_mm_sparsemem() that verifies that each section
> of the mem_map spans contiguous ranges of page structures.  I had
> trouble with some sections when I made that assumption)
> 
> I'm attaching 3 patches that might be useful in your testing:
> - 121210.proc_vmcore2  my current patch that applies to the released
>   makedumpfile 1.5.1
> - 121207.vmcore_pagescans.sles applies to a 3.0.13 kernel
> - 121207.vmcore_pagescans.rhel applies to a 2.6.32 kernel
> 
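(For reference, the range-consolidation step Cliff describes above can be sketched roughly as follows. This is an illustrative Python sketch of the idea only, not the actual make_kernel_mmap() code; the address values are made up.)

```python
def merge_ranges(ranges):
    """Merge overlapping/adjacent [start, end) physical ranges so that
    only a handful of ioremap() calls are needed to cover all of them."""
    merged = []
    for start, end in sorted(ranges):
        if merged and start <= merged[-1][1]:   # overlaps or touches the last range
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return [tuple(r) for r in merged]

# Example: three mem_map chunks, the first two physically contiguous.
chunks = [(0x100000, 0x200000), (0x200000, 0x280000), (0x400000, 0x500000)]
print(merge_ranges(chunks))
# -> [(1048576, 2621440), (4194304, 5242880)]  i.e. two ranges instead of three
```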

I used the same patch set on the benchmark.

BTW, I still have a machine reservation issue, so I don't think I can
get access to a terabyte-memory machine at least this year.

Also, your patch set does ioremap per chunk of memory map, i.e. a
number of consecutive pages at a time. On your terabyte machines, how
large are these chunks? We have a memory consumption issue in the 2nd
kernel, so we must reduce the amount of memory used. But looking into
the ioremap code quickly, it does not appear to use 2MB or 1GB pages
for the remapping. This means the page tables generated to remap
terabytes of memory become huge. Or have you perhaps already
investigated this?
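As a rough back-of-the-envelope check of that page-table concern (assuming x86_64 with 8-byte page-table entries; an illustrative calculation, not measured numbers):

```python
ENTRY = 8  # bytes per x86_64 page-table entry

def leaf_table_bytes(span, page_size):
    """Bytes of leaf page-table entries needed to map `span` bytes with
    pages of `page_size` (upper-level tables ignored; they are tiny in
    comparison)."""
    return (span // page_size) * ENTRY

TiB = 1 << 40
for name, size in [("4KB", 1 << 12), ("2MB", 1 << 21), ("1GB", 1 << 30)]:
    mib = leaf_table_bytes(TiB, size) / (1 << 20)
    print(f"{name} pages: {mib:.3f} MiB of leaf entries per TiB mapped")
# 4KB pages: 2048.000 MiB (2 GiB) per TiB -- versus 4 MiB with 2MB pages
# and 8 KiB with 1GB pages.
```

So with 4KB mappings alone, every terabyte remapped costs about 2 GiB of page tables in the 2nd kernel, which is exactly the memory we cannot afford there.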

BTW, I have two ideas to solve this issue:

1) Make a linear direct mapping for the old memory, and access the old
memory via that linear direct mapping rather than through ioremap.

  - adding remap code in vmcore, or passing the regions that need to
    be remapped via the memmap= kernel option to tell the 2nd kernel to
    map them in addition.

Or,

2) Support 2MB or 1GB pages in ioremap.
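For the memmap= variant of idea (1), the boot-loader side might amount to formatting the old-memory regions as kernel command-line parameters; the memmap=size$start "reserve" syntax below is documented in the kernel's kernel-parameters.txt, though whether reserving is the right semantics for this use is an open question. A hypothetical sketch, with made-up region values:

```python
def memmap_args(ranges):
    """Format (start, size) byte ranges as memmap=size$start kernel
    parameters (the '$' form marks the region as reserved; hexadecimal
    values with an 0x prefix are accepted)."""
    return ["memmap=0x%x$0x%x" % (size, start) for start, size in ranges]

# Hypothetical 1GB old-memory region at physical address 4GB:
print(" ".join(memmap_args([(0x100000000, 0x40000000)])))
# -> memmap=0x40000000$0x100000000
```

(Note that '$' must be escaped when the parameter passes through a shell or some boot-loader config files.)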

Thanks.
HATAYAMA, Daisuke

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
