Date:	Mon, 16 Jul 2007 11:37:04 +0100
From:	mel@...net.ie (Mel Gorman)
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Andy Whitcroft <apw@...dowen.org>, linux-kernel@...r.kernel.org,
	Christoph Lameter <clameter@....com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: -mm merge plans -- lumpy reclaim

On (11/07/07 09:46), Andrew Morton didst pronounce:
> On Wed, 11 Jul 2007 10:34:31 +0100 Andy Whitcroft <apw@...dowen.org> wrote:
> 
> > [Seems a PEBKAC occurred on the subject line; resending lest it become a
> > victim of "oh, that's spam".]
> > 
> > Andy Whitcroft wrote:
> > > Andrew Morton wrote:
> > > 
> > > [...]
> > >> lumpy-reclaim-v4.patch
> > >> have-kswapd-keep-a-minimum-order-free-other-than-order-0.patch
> > >> only-check-absolute-watermarks-for-alloc_high-and-alloc_harder-allocations.patch
> > >>
> > >>  Lumpy reclaim.  In a similar situation to Mel's patches.  Stuck due to
> > >>  general lack of interest and effort.
> > > 
> > > The lumpy reclaim patches originally came out of work to support Mel's
> > > anti-fragmentation work.  As such I think they have become somewhat
> > > attached to those patches.  Whilst lumpy is most effective where
> > > placement controls such as those offered by Mel's work are in place,
> > > we still see a benefit from the reduction in the "blunderbuss" effect
> > > when we reclaim at higher orders.  While placement control is pretty
> > > much required for the very highest orders, such as huge page size,
> > > lower order allocations also benefit from the reduced collateral
> > > damage.
> > > 
> > > There are now a few areas other than huge page allocations which can
> > > benefit.  Stacks are still order 1.  Jumbo frames want higher order
> > > contiguous pages for their incoming hardware buffers.  SLUB is showing
> > > performance benefits from moving to a higher allocation order.  All of
> > > these should benefit from more aggressive targeted reclaim; indeed, I
> > > have been surprised just how often my test workloads trigger lumpy
> > > reclaim at order 1 to get new stacks.
> > > 
> > > Truly representative workloads are hard to generate for some of these,
> > > though we have heard some encouraging noises from those who can
> > > reproduce these problems.
> 
> I'd expect that the main application for lumpy-reclaim is in keeping a pool
> of order-2 (say) pages in reserve for GFP_ATOMIC allocators.  ie: jumbo
> frames.
> 
> At present this relies upon the wakeup_kswapd(..., order) mechanism.
> 
> How effective is this at solving the jumbo frame problem?
> 
> (And do we still have a jumbo frame problem?  Reports seem to have subsided)
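
To illustrate the mechanism you describe: the allocation in question looks
roughly like the following in a driver receive path.  This is a sketch only,
the function name is made up, but the point is that GFP_ATOMIC cannot sleep
and so depends on kswapd having been woken at the right order via
wakeup_kswapd(zone, order) and having kept enough contiguous pages free.

/*
 * Illustrative only: a hypothetical driver refilling its receive
 * ring with order-2 (16K with 4K pages) buffers for jumbo frames.
 */
static struct page *rx_alloc_jumbo_buffer(void)
{
	struct page *page;

	/* Atomic context: no sleeping, no direct reclaim of our own */
	page = alloc_pages(GFP_ATOMIC, 2);	/* 4 contiguous pages */
	if (!page)
		return NULL;		/* allocation failed, frame dropped */

	return page;
}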

The patches also have an application in hugepage pool resizing.

When lumpy-reclaim is used with ZONE_MOVABLE, the hugepage pool can be
resized with greater reliability. Testing on a desktop machine with 2GB
of RAM showed that growing the hugepage pool with ZONE_MOVABLE on its own
was very slow because the success rate was quite low. Without lumpy-reclaim,
each attempt to grow the pool by 100 pages would yield only 1 or 2 hugepages.
With lumpy-reclaim, getting 40 to 70 hugepages on each attempt was typical.
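
For reference, the test amounted to little more than the following
(userspace, needs root): request 100 additional hugepages through
/proc/sys/vm/nr_hugepages and see how many the kernel actually manages to
allocate.  This is not the script I used, just the shape of it, with error
handling trimmed for brevity.

/* Minimal sketch of one pool-growing attempt */
#include <stdio.h>

static long read_nr_hugepages(void)
{
	long nr = -1;
	FILE *f = fopen("/proc/sys/vm/nr_hugepages", "r");

	if (f) {
		fscanf(f, "%ld", &nr);
		fclose(f);
	}
	return nr;
}

int main(void)
{
	long before = read_nr_hugepages();
	FILE *f = fopen("/proc/sys/vm/nr_hugepages", "w");

	if (before < 0 || !f) {
		perror("nr_hugepages");
		return 1;
	}
	fprintf(f, "%ld\n", before + 100);	/* request 100 more */
	fclose(f);

	printf("requested 100, got %ld hugepages\n",
	       read_nr_hugepages() - before);
	return 0;
}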

-- 
Mel Gorman
Part-time PhD Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab
