Date:	Wed, 27 Apr 2011 08:46:22 +0900
From:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"nishimura@....nes.nec.co.jp" <nishimura@....nes.nec.co.jp>,
	"kosaki.motohiro@...fujitsu.com" <kosaki.motohiro@...fujitsu.com>,
	"minchan.kim@...il.com" <minchan.kim@...il.com>,
	"mgorman@...e.de" <mgorman@...e.de>, Ying Han <yinghan@...gle.com>
Subject: Re: [PATCH] fix get_scan_count for working well with small targets

On Tue, 26 Apr 2011 13:59:34 -0700
Andrew Morton <akpm@...ux-foundation.org> wrote:

> On Tue, 26 Apr 2011 18:17:24 +0900
> KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> wrote:
> 
> > At memory reclaim, we determine the number of pages to be scanned
> > per zone as
> > 	(anon + file) >> priority.
> > Assume 
> > 	scan = (anon + file) >> priority.
> > 
> > If scan < SWAP_CLUSTER_MAX, shrink_list() will be skipped at this
> > priority level, resulting in no scan.  This has some problems.
> > 
> >   1. This increases priority by 1 without any scan.
> >      To always do a scan at DEF_PRIORITY, the amount of pages should be
> >      larger than 512MB. If pages>>priority < SWAP_CLUSTER_MAX, the count
> >      is recorded and the scan will be batched later. (But we lose 1
> >      priority level.) And if the amount of pages is smaller than 16MB,
> >      there is no scan at priority==0 forever.
> > 
> >   2. If zone->all_unreclaimable==true, it's scanned only when priority==0.
> >      So, x86's ZONE_DMA will never be recovered until the user of its
> >      pages frees memory by itself.
> > 
> >   3. With memcg, the memory limit can be small. When using a small memcg,
> >      it reaches priority < DEF_PRIORITY-2 very easily and needs to call
> >      wait_iff_congested().
> >      To do a scan before priority=9, 64MB of memory would have to be used.
> > 
> > This patch forces a scan of SWAP_CLUSTER_MAX pages when
> > 
> >   1. the target is small enough, and
> >   2. it's kswapd or memcg reclaim.
> > 
> > Then we can avoid a rapid priority drop and may be able to recover
> > all_unreclaimable in small zones.
> 
> What about simply removing the nr_saved_scan logic and permitting small
> scans?  That simplifies the code and I bet it makes no measurable
> performance difference.
> 

When I considered memcg, I thought of that. But I noticed that ZONE_DMA will
not be scanned even if we do so (and zone->all_unreclaimable will not be
recovered until someone frees its pages explicitly).

> (A good thing to do here would be to instrument the code and determine
> the frequency with which we perform short scans, as well as their
> shortness.  ie: a histogram).
> 

With memcg, I hope we can always scan at least SWAP_CLUSTER_MAX pages.
Considering a bad case such as
  - the memory cgroup is small, the system is swapless, and file cache is small,
always doing a SWAP_CLUSTER_MAX file cache scan seems to make sense to me.

Thanks,
-Kame
