Date:	Tue, 23 Feb 2010 09:07:04 +0900
From:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To:	Vivek Goyal <vgoyal@...hat.com>
Cc:	Balbir Singh <balbir@...ux.vnet.ibm.com>,
	Andrea Righi <arighi@...eler.com>,
	Suleiman Souhlal <suleiman@...gle.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	containers@...ts.linux-foundation.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC] [PATCH 0/2] memcg: per cgroup dirty limit

On Mon, 22 Feb 2010 12:58:33 -0500
Vivek Goyal <vgoyal@...hat.com> wrote:

> On Mon, Feb 22, 2010 at 11:06:40PM +0530, Balbir Singh wrote:
> > * Vivek Goyal <vgoyal@...hat.com> [2010-02-22 09:27:45]:
> > 
> > 
> > > 
> > >   Maybe we can modify writeback_inodes_wbc() to check the first dirty
> > >   page of the inode, and if it does not belong to the same memcg as the
> > >   task that is performing balance_dirty_pages(), then skip that inode.
> > 
> > Do you expect all pages of an inode to be paged in by the same cgroup?
> 
> I guess at least in simple cases. Not sure whether it will cover the
> majority of usage or not, and to what extent that matters.
> 
> If we start doing background writeout on a per-page basis (like memory
> reclaim), then it probably will be slower, and hence flushing out pages
> sequentially from the inode makes sense.
> 
> At one point I was thinking: like pages, can we have an inode list per
> memory cgroup, so that the writeback logic can traverse that inode list to
> determine which inodes need to be cleaned? But associating inodes with a
> memory cgroup is not very intuitive; at the same time, we again have the
> issue of shared file pages from two different cgroups.
> 
> But I guess a simpler scheme would be to just check the first dirty page of
> the inode, and if it does not belong to the memory cgroup of the task being
> throttled, skip it.
> 
> It will not cover the case of shared file pages across memory cgroups, but
> it is at least something relatively simple to begin with. Do you have more
> ideas on how it can be handled better?
> 
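For reference, that "check the first dirty page" scheme could look roughly
like the sketch below. This is only an illustration, not a patch:
inode_owned_by_memcg() and memcg_of_first_dirty_page() are hypothetical
helpers, and the loop is a simplified stand-in for writeback_inodes_wbc().

/*
 * Hedged sketch of the scheme above, not real kernel code.
 * memcg_of_first_dirty_page() is hypothetical; a real implementation
 * would look up the first PAGECACHE_TAG_DIRTY page of the mapping
 * (e.g. via find_get_pages_tag()) and map it to its mem_cgroup.
 */
static int inode_owned_by_memcg(struct inode *inode, struct mem_cgroup *memcg)
{
	struct mem_cgroup *owner;

	owner = memcg_of_first_dirty_page(inode->i_mapping); /* hypothetical */
	return owner == NULL || owner == memcg;	/* clean inodes pass */
}

static void writeback_inodes_for_memcg(struct bdi_writeback *wb,
				       struct mem_cgroup *memcg)
{
	struct inode *inode;

	list_for_each_entry(inode, &wb->b_io, i_list) {
		if (!inode_owned_by_memcg(inode, memcg))
			continue; /* first dirty page owned by another memcg */
		/* ...existing per-inode writeback... */
	}
}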

If pages are "shared", it's hard to find the _current_ owner. So what I'm
thinking of as a memcg update is a memcg-for-page-cache, plus page cache
migration between memcgs.

The idea is:
  - At first, treat page cache as we do now.
  - When a process touches page cache, check the process's memcg and the page
    cache's memcg. If process-memcg != pagecache-memcg, we migrate the page to
    a special container, the memcg-for-page-cache.

Then,
  - read-once page cache is handled by the local memcg.
  - shared page cache is handled in the special memcg for "shared" pages
    (a rough sketch follows below).
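All helper names in this sketch are placeholders, not the existing memcg
API; it only illustrates the migrate-on-touch control flow:

/*
 * Hedged sketch only. page_memcg_of() and migrate_page_cache_to() are
 * placeholders: the former for the page_cgroup -> mem_cgroup lookup,
 * the latter for page cache migration, which does not exist yet.
 */
static struct mem_cgroup *memcg_for_page_cache;	/* special "shared" container */

static void note_pagecache_touch(struct page *page, struct task_struct *task)
{
	struct mem_cgroup *pmemcg = page_memcg_of(page);		/* placeholder */
	struct mem_cgroup *tmemcg = mem_cgroup_from_task(task);	/* assumed */

	/*
	 * The first toucher keeps the page in its own memcg; a touch
	 * from any other memcg moves the page to the shared container.
	 */
	if (pmemcg && pmemcg != tmemcg && pmemcg != memcg_for_page_cache)
		migrate_page_cache_to(page, memcg_for_page_cache);
}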

But this will add significant overhead in a naive implementation.
(We may have to use page flags rather than page_cgroup's....)

I'm now wondering about (sketched below):
  - setting a "shared" flag on a page_cgroup when its cached page is accessed
    from a different memcg.
  - sweeping the flagged pages to the special memcg in a separate (kernel)
    daemon when we hit some threshold.
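The deferred variant might look like the following; PCG_SHARED, the counter,
the threshold and memcg_sweeper_task are all assumptions for illustration:

/*
 * Hedged sketch of the mark-now/sweep-later variant. PCG_SHARED,
 * SHARED_SWEEP_THRESH and memcg_sweeper_task are hypothetical.
 */
#define SHARED_SWEEP_THRESH	1024	/* assumed tunable */
static atomic_t nr_shared_marked;

static void mark_page_cgroup_shared(struct page_cgroup *pc)
{
	/* keep the access path cheap: just set a flag and count it */
	if (!test_and_set_bit(PCG_SHARED, &pc->flags) &&
	    atomic_inc_return(&nr_shared_marked) >= SHARED_SWEEP_THRESH)
		wake_up_process(memcg_sweeper_task);	/* the kernel daemon */
}

/*
 * The daemon would then walk the flagged page_cgroups and migrate each
 * page to the shared memcg, keeping migration cost out of the hot path.
 */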

But hmm, I'm not sure that memcg-for-shared-page-cache is acceptable
to anyone.

Thanks,
-Kame
