Date:	Wed, 31 Aug 2011 10:33:32 +0200
From:	Johannes Weiner <jweiner@...hat.com>
To:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Daisuke Nishimura <nishimura@....nes.nec.co.jp>,
	Balbir Singh <bsingharora@...il.com>,
	Andrew Brestic <abrestic@...gle.com>,
	Ying Han <yinghan@...gle.com>, Michal Hocko <mhocko@...e.cz>,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [patch] Revert "memcg: add memory.vmscan_stat"

On Wed, Aug 31, 2011 at 03:30:25PM +0900, KAMEZAWA Hiroyuki wrote:
> On Wed, 31 Aug 2011 08:23:54 +0200
> Johannes Weiner <jweiner@...hat.com> wrote:
> 
> > On Wed, Aug 31, 2011 at 08:29:24AM +0900, KAMEZAWA Hiroyuki wrote:
> > > On Tue, 30 Aug 2011 13:32:21 +0200
> > > Johannes Weiner <jweiner@...hat.com> wrote:
> > > 
> > > > On Tue, Aug 30, 2011 at 07:38:39PM +0900, KAMEZAWA Hiroyuki wrote:
> > > > > On Tue, 30 Aug 2011 12:17:26 +0200
> > > > > Johannes Weiner <jweiner@...hat.com> wrote:
> > > > > 
> > > > > > On Tue, Aug 30, 2011 at 05:56:09PM +0900, KAMEZAWA Hiroyuki wrote:
> > > > > > > On Tue, 30 Aug 2011 10:42:45 +0200
> > > > > > > Johannes Weiner <jweiner@...hat.com> wrote:
> >
> > > I'm confused. 
> > > 
> > > If vmscan is scanning in C's LRU,
> > > 	(memcg == root) : C_scan_internal ++
> > > 	(memcg != root) : C_scan_external ++
> > 
> > Yes.
> > 
> > > Why does A_scan_external exist?  Is it always 0?
> > > 
> > > I think we can never get any numbers into it.
> > 
> > Kswapd/direct reclaim should probably be accounted as A_external:
> > A has no limit, so reclaim pressure on it cannot be internal.
> > 
> 
> hmm, ok.  All memory pressure on a memcg that comes from other memcgs
> or from the system, rather than from the memcg itself, is external.
>
> > On the other hand, one could see the amount of physical memory in the
> > machine as A's limit and account global reclaim as A_internal.
> > 
> > I think the former may be more natural.
> > 
> > That aside, all memcgs should obviously expose the same set of
> > statistics items.  Scripts can easily deal with counters being zero,
> > but items differing between cgroups would suck a lot.
> 
> So, when I improve the direct-reclaim path, I need to see the score in scan_internal.

Direct reclaim because of the limit or because of global pressure?  I
am going to assume the limit, because global reclaim is not yet
accounted to memcgs even though their pages are scanned.  Please
correct me if I'm wrong.

        A
       /
      B
     /
    C

If A hits the limit and does direct reclaim in A, B, and C, then the
scans in A get accounted as internal while the scans in B and C get
accounted as external.
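
To make that concrete, here is a minimal userspace C sketch of the
subtree walk and the internal/external split.  The struct layout and
function names are made up for illustration; they are not the
identifiers from the reverted patch:

	#include <stdio.h>

	/* Hypothetical model of the A -> B -> C chain above. */
	struct memcg {
		const char *name;
		struct memcg *child;	/* single-child chain is enough here */
	};

	/*
	 * When 'culprit' hits its limit, hierarchical reclaim scans
	 * every memcg in its subtree; the culprit's own scans count
	 * as internal, everything below it as external.
	 */
	static void reclaim_subtree(struct memcg *culprit)
	{
		struct memcg *victim;

		for (victim = culprit; victim; victim = victim->child)
			printf("scan %s: %s\n", victim->name,
			       victim == culprit ? "internal" : "external");
	}

	int main(void)
	{
		struct memcg C = { "C", NULL };
		struct memcg B = { "B", &C };
		struct memcg A = { "A", &B };

		reclaim_subtree(&A);	/* A internal; B and C external */
		return 0;
	}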

> What do you think about per-memcg background reclaim?
> Should it be counted as scan_internal?

Background reclaim is still triggered by the limit; the condition is
just 'close to limit' instead of 'reached limit'.

So when per-memcg background reclaim goes off because A is close to
its limit, it will scan A (internal) and B + C (external).
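
A tiny C sketch of that trigger condition; the function name and the
90% watermark are assumptions for illustration, since the thread does
not specify the actual threshold:

	#include <stdio.h>

	/*
	 * Hypothetical 'close to limit' check for per-memcg background
	 * reclaim.  The ~90% watermark is invented for this example.
	 */
	static int close_to_limit(unsigned long usage, unsigned long limit)
	{
		return usage >= limit / 10 * 9;
	}

	int main(void)
	{
		unsigned long limit = 1024;

		/* Background reclaim would go off before the limit is hit. */
		printf("usage=1000: trigger=%d\n", close_to_limit(1000, limit));
		printf("usage=512:  trigger=%d\n", close_to_limit(512, limit));
		return 0;
	}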

It's always the same code:

	record_reclaim_stat(culprit, victim, item, delta)

In direct limit reclaim, the culprit is the one hitting its limit.  In
background reclaim, the culprit is the one getting close to its limit.

And then again the accounting is

	culprit == victim -> victim_internal++ (own fault)
	culprit != victim -> victim_external++ (parent's fault)
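
As a minimal sketch of that entry point: only the culprit/victim split
and the record_reclaim_stat(culprit, victim, item, delta) shape come
from this mail; the item names and the struct layout are invented for
illustration:

	#include <stdio.h>

	/* Hypothetical stat items; the real patch tracked more. */
	enum reclaim_stat_item {
		SCANNED,
		FREED,
		NR_ITEMS,
	};

	struct memcg {
		unsigned long internal[NR_ITEMS];
		unsigned long external[NR_ITEMS];
	};

	static void record_reclaim_stat(struct memcg *culprit,
					struct memcg *victim,
					enum reclaim_stat_item item,
					unsigned long delta)
	{
		if (culprit == victim)
			victim->internal[item] += delta;	/* own fault */
		else
			victim->external[item] += delta;	/* parent's fault */
	}

	int main(void)
	{
		struct memcg A = { { 0 } }, C = { { 0 } };

		record_reclaim_stat(&A, &A, SCANNED, 32);	/* A's own limit */
		record_reclaim_stat(&A, &C, SCANNED, 32);	/* pressure from A */

		printf("C scanned external: %lu\n", C.external[SCANNED]);
		return 0;
	}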
