Date:	Mon, 16 Feb 2009 14:53:49 +0000
From:	Mel Gorman <mel@....ul.ie>
To:	Johannes Weiner <hannes@...xchg.org>
Cc:	Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [patch] vmscan: initialize sc.order in indirect shrink_list()
	users

On Wed, Feb 11, 2009 at 02:52:27AM +0100, Johannes Weiner wrote:
> [added Mel to CC]
> 
> On Tue, Feb 10, 2009 at 04:29:48PM -0800, Andrew Morton wrote:
> > On Tue, 10 Feb 2009 17:51:35 +0100
> > Johannes Weiner <hannes@...xchg.org> wrote:
> > 
> > > shrink_all_memory() and __zone_reclaim() currently don't initialize
> > > the .order field of their scan control.
> > > 
> > > Both of them call into functions which use that field and make certain
> > > decisions based on a random value.
> > > 
> > > The functions depending on the .order field are marked with a star,
> > > the faulty entry points are marked with a percentage sign:
> > > 
> > > * shrink_page_list()
> > >   * shrink_inactive_list()
> > >   * shrink_active_list()
> > >     shrink_list()
> > >       shrink_all_zones()
> > >         % shrink_all_memory()
> > >       shrink_zone()
> > >         % __zone_reclaim()
> > > 
> > > Initialize .order to zero in shrink_all_memory().  Initialize .order
> > > to the order parameter in __zone_reclaim().
> > > 
> > > Signed-off-by: Johannes Weiner <hannes@...xchg.org>
> > > ---
> > >  mm/vmscan.c |    2 ++
> > >  1 files changed, 2 insertions(+), 0 deletions(-)
> > > 
> > > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > > index 4422301..9ce85ea 100644
> > > --- a/mm/vmscan.c
> > > +++ b/mm/vmscan.c
> > > @@ -2112,6 +2112,7 @@ unsigned long shrink_all_memory(unsigned long nr_pages)
> > >  		.may_unmap = 0,
> > >  		.swap_cluster_max = nr_pages,
> > >  		.may_writepage = 1,
> > > +		.order = 0,
> > >  		.isolate_pages = isolate_pages_global,
> > >  	};
> > >  
> > > @@ -2294,6 +2295,7 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
> > >  					SWAP_CLUSTER_MAX),
> > >  		.gfp_mask = gfp_mask,
> > >  		.swappiness = vm_swappiness,
> > > +		.order = order,
> > >  		.isolate_pages = isolate_pages_global,
> > >  	};
> > >  	unsigned long slab_reclaimable;
> > 
> > The second hunk might fix something, but it would need a more
> > accurate changelog, and some thought about what its runtime effects
> > are likely to be, please.
> 
> zone_reclaim() is used by the watermark rebalancing of the buddy
> allocator right before trying to do an allocation.  Even though it
> tries to reclaim at least 1 << order pages, it doesn't raise sc.order
> to increase clustering efforts.
> 

This affects lumpy reclaim. Direct reclaim via try_to_free_pages() and
kswapd() still works, but the earlier reclaim attempt via zone_reclaim()
on unmapped file and slab pages ignores the order. While it would be tricky
to measure any difference, it does make sense for __zone_reclaim() to
initialise the order with what the caller requested.

> I think this happens with the assumption that the upcoming allocation
> can still succeed and in that case we don't want to lump too
> aggressively to refill the zone. 

I don't get what you mean here. The caller asked for the higher order,
so that is the work that has been requested.

> The allocation might succeed on
> another zone and now we have evicted precious pages due to clustering
> while we are still not sure it's even needed.
> 

Also not sure what you are getting at here. zone_reclaim() is called for
the preferred zones in order. Attempts are made to free within the
preferred zone and then allocate from it. Granted, it might evict pages
only for the clustering to prove ineffective, but that is the cost of
high-order reclaim.

> If the allocation fails from all zones, we still will use lumpy
> reclaim for higher orders in kswapd and try_to_free_pages().
> 
> So I think that __zone_reclaim() leaves sc.order = 0 intentionally.
> 
> Mel?
> 

I don't believe it's intentional. zone_reclaim() was called with a given
order, and that order should be used as part of the reclaim decision.

-- 
Mel Gorman
Part-time Phd Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab
