Date:	Fri, 2 May 2008 10:58:20 +0100
From:	Mel Gorman <mel@....ul.ie>
To:	Ross Biro <rossb@...gle.com>
Cc:	Christoph Lameter <clameter@....com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org, apm@...doween.org
Subject: Re: [PATCH 1/2] MM: Make page tables relocatable -- conditional flush (rc9)

On (29/04/08 09:27), Ross Biro didst pronounce:
> On Wed, Apr 16, 2008 at 3:22 PM, Christoph Lameter <clameter@....com> wrote:
> >  The patch is interesting because it would allow the moving of page table
> >  pages into MOVABLE sections and reduce the size of the UNMOVABLE
> >  allocations significantly (Ross: We need some numbers here). This in turn
> 
> Is there a standard test used to evaluate kernel memory fragmentation?

Not exactly, but the set of tests I run most frequently for
fragmentation-related problems is:

1. Kernbench building 2.6.14 - 5 iterations
2. aim9
3. bench-hugepagecapability.sh
4. bench-stresshighalloc.sh

The last two are from vmregress
(www.csn.ul.ie/~mel/projects/vmregress/vmregress-0.88-rc7.tar.gz), which is
a mess of undocumented tools. bench-stresshighalloc.sh needs to be built
against the currently running kernel:

./configure --with-linux=PATH_TO_KERNEL_SOURCE
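
In full, the build is roughly the following (assuming the tarball unpacks
into a directory named after the version and builds with a plain make;
both details are from memory, so check the tarball itself):

tar -xzf vmregress-0.88-rc7.tar.gz
cd vmregress-0.88-rc7
./configure --with-linux=PATH_TO_KERNEL_SOURCE
make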

The parameters passed to the last two tests depend on the machine, but the
most important ones are generally -k $((PHYS_MEMORY_IN_MB/250)) for the
number of kernels to build and --mb-per-sec 16.
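
On an 8GB machine, for example, the invocations would look something like
this (the -k and --mb-per-sec values follow the rule of thumb above; all
other switches are left at their defaults):

PHYS_MEMORY_IN_MB=8192
./bench-hugepagecapability.sh -k $((PHYS_MEMORY_IN_MB/250)) --mb-per-sec 16
./bench-stresshighalloc.sh -k $((PHYS_MEMORY_IN_MB/250)) --mb-per-sec 16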

These tests are not suitable for machines with very large amounts of memory
because too many kernels would be built at the same time and the machine
just grinds. Originally the tests reflected the most hostile load in terms
of fragmentation, but they are showing their age as a suitable test now.

Particularly from your perspective, the test is not very pagetable-page
intensive. You could run the tests above, then start a long-lived test
like sysbench tuned to consume most of memory, and then trigger a
relocation to see how effective it would be. Tuned to consume most of
memory should mean there are a lot of pagetable allocations as well.
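
A rough sketch of that sequence (the sysbench flags here are from the
0.4-era memory test and are illustrative only; size the block for your
machine, and trigger the relocation however your patch exposes it):

./bench-stresshighalloc.sh -k $((PHYS_MEMORY_IN_MB/250)) --mb-per-sec 16
sysbench --test=memory --memory-block-size=6G --memory-total-size=64G run &
# trigger the page table relocation here and compare the fragmentation
# state before and after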

If you wanted to see what large page allocation was like at any given time,
you could use bench-plainhighalloc.sh from vmregress to artificially
allocate huge pages, or attempt to grow the hugepage pool via proc, as
either would give an indication of the fragmentation state of the
system.
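
For a quick read without vmregress at all, growing the pool and looking at
the buddy allocator state tells you a lot (the figure of 64 here is
arbitrary):

echo 64 > /proc/sys/vm/nr_hugepages
grep HugePages /proc/meminfo
cat /proc/buddyinfo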

>  I'm sure I can rig up a test to create huge amounts of fragmentation
> with about 1/2 the pages being page tables.  However, I doubt that it
> would reflect any real loads.  Similarly, if I check the memory
> fragmentation on my test system right after it's been booted, I won't
> see much fragmentation and page tables won't be causing any trouble.
> 

-- 
Mel Gorman
Part-time PhD Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab