Date:	Sat, 29 Nov 2008 09:45:37 -0800
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	Rik van Riel <riel@...hat.com>
Cc:	linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
Subject: Re: [PATCH] vmscan: skip freeing memory from zones with lots free

On Sat, 29 Nov 2008 11:47:25 -0500 Rik van Riel <riel@...hat.com> wrote:

> Andrew Morton wrote:
> 
> >> Index: linux-2.6.28-rc5/mm/vmscan.c
> >> ===================================================================
> >> --- linux-2.6.28-rc5.orig/mm/vmscan.c	2008-11-28 05:53:56.000000000 -0500
> >> +++ linux-2.6.28-rc5/mm/vmscan.c	2008-11-28 06:05:29.000000000 -0500
> >> @@ -1510,6 +1510,9 @@ static unsigned long shrink_zones(int pr
> >>  			if (zone_is_all_unreclaimable(zone) &&
> >>  						priority != DEF_PRIORITY)
> >>  				continue;	/* Let kswapd poll it */
> >> +			if (zone_watermark_ok(zone, sc->order,
> >> +					4*zone->pages_high, high_zoneidx, 0))
> >> +				continue;	/* Lots free already */
> >>  			sc->all_unreclaimable = 0;
> >>  		} else {
> >>  			/*
> > 
> > We already tried this, or something very similar in effect, I think...
> 
> Yes, we have a check just like this in balance_pgdat().
> 
> It's been there forever with no ill effect.

This patch affects direct reclaim as well as kswapd.
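To make the comparison concrete: roughly speaking, kswapd's
balance_pgdat() keeps working on a zone until free memory is above
pages_high, while the hunk above makes the direct reclaim path skip a
zone outright once it has four times that much free.  A userspace toy
of the two predicates (watermark_ok(), kswapd_needs_work(),
direct_reclaim_skips() and the numbers are made up, and the watermark
test ignores lowmem reserves and per-order checks; this is not the
real mm/vmscan.c):

/* toy_watermarks.c -- illustrative only, not kernel code */
#include <stdbool.h>
#include <stdio.h>

struct zone {
	const char *name;
	unsigned long free_pages;
	unsigned long pages_high;	/* high watermark */
};

/* crude stand-in for zone_watermark_ok(): is free memory above 'mark'? */
static bool watermark_ok(const struct zone *z, unsigned long mark)
{
	return z->free_pages > mark;
}

/* kswapd path (balance_pgdat): zone still needs work below pages_high */
static bool kswapd_needs_work(const struct zone *z)
{
	return !watermark_ok(z, z->pages_high);
}

/* direct reclaim path with the patch: skip once free > 4 * pages_high */
static bool direct_reclaim_skips(const struct zone *z)
{
	return watermark_ok(z, 4 * z->pages_high);
}

int main(void)
{
	struct zone zones[] = {
		{ "DMA",     100, 128 },	/* below high watermark */
		{ "Normal",  300, 128 },	/* above high, below 4 * high */
		{ "HighMem", 900, 128 },	/* well above 4 * high */
	};

	for (unsigned i = 0; i < sizeof(zones) / sizeof(zones[0]); i++)
		printf("%-8s kswapd works on it: %d  direct reclaim skips it: %d\n",
		       zones[i].name, kswapd_needs_work(&zones[i]),
		       direct_reclaim_skips(&zones[i]));
	return 0;
}

The middle case (above pages_high but below 4 * pages_high) is the one
where kswapd would already be satisfied but direct reclaim would still
scan the zone.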

> > commit 26e4931632352e3c95a61edac22d12ebb72038fe
> > Author: akpm <akpm>
> > Date:   Sun Sep 8 19:21:55 2002 +0000
> > 
> >     [PATCH] refill the inactive list more quickly
> >     
> >     Fix a problem noticed by Ed Tomlinson: under shifting workloads the
> >     shrink_zone() logic will refill the inactive list too slowly.
> >     
> >     Bale out of the zone scan when we've reclaimed enough pages.  Fixes a
> >     rarely-occurring problem wherein refill_inactive_zone() ends up
> >     shuffling 100,000 pages and generally goes silly.
> 
> This is not a bale out, this is a "skip zones that have way
> too many free pages already".

It is similar in effect.

Will this new patch reintroduce the problem which
26e4931632352e3c95a61edac22d12ebb72038fe fixed?
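
For comparison, a userspace sketch of the two behaviours being
discussed: baling out of the whole scan once enough has been
reclaimed, versus skipping individual over-provisioned zones.
RECLAIM_TARGET, reclaim_from() and the zone numbers are invented for
illustration, and both checks are paraphrased rather than copied from
mm/vmscan.c:

#include <stdio.h>

#define RECLAIM_TARGET	32	/* pages wanted back this pass */

struct zone {
	const char *name;
	unsigned long free_pages;
	unsigned long pages_high;
	unsigned long reclaimable;
};

/* pretend to reclaim up to 'want' pages from one zone */
static unsigned long reclaim_from(struct zone *z, unsigned long want)
{
	unsigned long got = want < z->reclaimable ? want : z->reclaimable;
	z->reclaimable -= got;
	z->free_pages += got;
	return got;
}

int main(void)
{
	struct zone zones[] = {
		{ "Normal",  900, 128, 100 },	/* already > 4 * pages_high: skipped */
		{ "DMA",      50, 128, 100 },	/* gets scanned */
		{ "HighMem",  80, 128, 100 },	/* never reached: the scan bales out first */
	};
	unsigned long nr_reclaimed = 0;

	for (unsigned i = 0; i < sizeof(zones) / sizeof(zones[0]); i++) {
		struct zone *z = &zones[i];

		/* 2002 fix, paraphrased: stop the whole scan once we have
		 * reclaimed enough, regardless of which zone we are in. */
		if (nr_reclaimed >= RECLAIM_TARGET)
			break;

		/* 2008 patch, paraphrased: pass over just this zone when it
		 * already has far more free than its high watermark. */
		if (z->free_pages > 4 * z->pages_high) {
			printf("%-8s skipped, lots free already\n", z->name);
			continue;
		}

		nr_reclaimed += reclaim_from(z, RECLAIM_TARGET - nr_reclaimed);
		printf("%-8s scanned, total reclaimed %lu\n",
		       z->name, nr_reclaimed);
	}
	return 0;
}

In both cases reclaim work stops, or is avoided, once free memory
looks plentiful, which is the sense in which they are "similar in
effect".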
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
