Message-ID: <20081129153902.GA1944@cmpxchg.org>
Date:	Sat, 29 Nov 2008 16:39:04 +0100
From:	Johannes Weiner <hannes@...xchg.org>
To:	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
Cc:	Rik van Riel <riel@...hat.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, mel@....ul.ie,
	akpm@...ux-foundation.org
Subject: Re: [rfc] vmscan: serialize aggressive reclaimers

On Sat, Nov 29, 2008 at 04:46:24PM +0900, KOSAKI Motohiro wrote:
> > Since we have to pull through a reclaim cycle once we have committed to it,
> > what do you think about serializing the lower priority levels
> > completely?
> > 
> > The idea is that when one reclaimer has done a low priority level
> > iteration with a huge reclaim target, chances are that succeeding
> > reclaimers don't even need to drop to lower levels at all because
> > enough memory has already been freed.
> > 
> > My test program maps and faults in a file that is about as large as my
> > physical memory.  Then it spawns off n processes that try to allocate
> > 1/(2n) of total memory in anon pages, i.e. half of it in sum.  After it
> > ran, I check how much memory has been reclaimed.  But my zone sizes
> > are too small to induce enormous reclaim targets so I don't see vast
> > over-reclaims.
> > 
> > I have measured the time of other tests on an SMP machine with 4 cores
> > and the following patch applied.  I couldn't see any performance
> > degradation.  But since the bug is not triggerable here, I can not
> > prove it helps the original problem, either.
> 
> I wonder why none of the vmscan folks put actual performance improvement
> numbers in the patch description.

That's why I made it RFC.  I haven't seriously tested it; I just
wanted to know what people who understand this better than I do think
of the idea.

> I think this patch points in the right direction.
> But unfortunately, as I measured it, this implementation isn't fast.

Fair enough.

> > The level where it starts serializing is chosen pretty arbitrarily.
> > Suggestions welcome :)
> > 
> > 	Hannes
> > 
> > ---
> > 
> > Prevent over-reclaiming by serializing direct reclaimers below a
> > certain priority level.
> > 
> > Over-reclaiming happens when the sum of the reclaim targets of all
> > reclaiming processes is larger than the sum of the needed free pages,
> > thus evicting more cache and anonymous pages than required.
> > 
> > A scan iteration over all zones cannot be aborted midway when enough
> > pages are reclaimed, because that would mess up the scan balance
> > between the zones.  Instead, prevent too many processes from
> > simultaneously committing themselves to lower priority level scans in
> > the first place.
> > 
> > Chances are that after the exclusive reclaimer has finished, enough
> > memory has been freed that succeeding scanners don't need to drop to
> > lower priority levels at all anymore.
> > 
> > Signed-off-by: Johannes Weiner <hannes@...urebad.de>
> > ---
> >  mm/vmscan.c |   20 ++++++++++++++++++++
> >  1 file changed, 20 insertions(+)
> > 
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -35,6 +35,7 @@
> >  #include <linux/notifier.h>
> >  #include <linux/rwsem.h>
> >  #include <linux/delay.h>
> > +#include <linux/wait.h>
> >  #include <linux/kthread.h>
> >  #include <linux/freezer.h>
> >  #include <linux/memcontrol.h>
> > @@ -42,6 +43,7 @@
> >  #include <linux/sysctl.h>
> >  
> >  #include <asm/tlbflush.h>
> > +#include <asm/atomic.h>
> >  #include <asm/div64.h>
> >  
> >  #include <linux/swapops.h>
> > @@ -1546,10 +1548,15 @@ static unsigned long shrink_zones(int pr
> >   * returns:	0, if no pages reclaimed
> >   * 		else, the number of pages reclaimed
> >   */
> > +
> > +static DECLARE_WAIT_QUEUE_HEAD(reclaim_wait);
> > +static atomic_t reclaim_exclusive = ATOMIC_INIT(0);
> > +
> >  static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
> >  					struct scan_control *sc)
> >  {
> >  	int priority;
> > +	int exclusive = 0;
> >  	unsigned long ret = 0;
> >  	unsigned long total_scanned = 0;
> >  	unsigned long nr_reclaimed = 0;
> > @@ -1580,6 +1587,14 @@ static unsigned long do_try_to_free_page
> >  		sc->nr_scanned = 0;
> >  		if (!priority)
> >  			disable_swap_token();
> > +		/*
> > +		 * Serialize aggressive reclaimers
> > +		 */
> > +		if (priority <= DEF_PRIORITY / 2 && !exclusive) {
> 
> On a large machine, DEF_PRIORITY / 2 is already a catastrophic situation:
> 2^6 = 64, so if a zone has 64GB of memory, it means a 1GB reclaim.
> I think an earlier restriction is better.

I am just afraid that it kills parallelism.

> > +			wait_event(reclaim_wait,
> > +				!atomic_cmpxchg(&reclaim_exclusive, 0, 1));
> > +			exclusive = 1;
> > +		}
> 
> If you want to restrict it to one task, you can use a mutex.
> And this wait_queue shouldn't be a global variable; it should be a per-zone one.

Hm, global or per-zone?  Rik suggested doing it per-node and I like
that idea.

> In addition, you don't consider recursive reclaim, and several tasks can't sleep there.
> 
> 
> Please believe me. I have the richest experience with reclaim throttling on the planet.

Hehe, okay.  Then I am glad you don't hate the idea completely.  Do
you have any patches flying around that do something similar?

	Hannes