Message-Id: <20100226091421.8c15c210.kamezawa.hiroyu@jp.fujitsu.com>
Date:	Fri, 26 Feb 2010 09:14:21 +0900
From:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To:	Andrea Righi <arighi@...eler.com>
Cc:	David Rientjes <rientjes@...gle.com>,
	Vivek Goyal <vgoyal@...hat.com>,
	Balbir Singh <balbir@...ux.vnet.ibm.com>,
	Suleiman Souhlal <suleiman@...gle.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	containers@...ts.linux-foundation.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] memcg: dirty pages instrumentation

On Thu, 25 Feb 2010 15:34:44 +0100
Andrea Righi <arighi@...eler.com> wrote:

> On Tue, Feb 23, 2010 at 02:22:12PM -0800, David Rientjes wrote:
> > On Tue, 23 Feb 2010, Vivek Goyal wrote:
> > 
> > > > > Because you have modified dirtyable_memory() and made it per-cgroup, I
> > > > > think it automatically takes care of the per-cgroup dirty ratio cases
> > > > > I mentioned in my previous mail. So we will use the system-wide dirty
> > > > > ratio to calculate the allowed dirty pages in this cgroup (dirty_ratio *
> > > > > available_memory()), and start writeout if this cgroup has written too
> > > > > many pages?
> > > > 
> > > > OK, if I've understood correctly, you're proposing to use a per-cgroup
> > > > dirty_ratio interface and do something like:
> > > 
> > > I think we can use the system-wide dirty_ratio for each cgroup (instead
> > > of providing a configurable dirty_ratio per cgroup, where each memory
> > > cgroup can have a different dirty ratio; I can't think of a use case
> > > for that immediately).
> > 
> > I think each memcg should have both dirty_bytes and dirty_ratio: 
> > dirty_bytes defaults to 0 (disabled) while dirty_ratio is inherited from 
> > the global vm_dirty_ratio.  Changing vm_dirty_ratio would not change 
> > memcgs already using their own dirty_ratio, but new memcgs would get the 
> > new value by default.  The ratio would act on the amount of memory 
> > available to the cgroup as though it were its own "virtual system" operating 
> > with a subset of the system's RAM and the same global ratio.
> 
> Agreed.
> 
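To make the semantics being proposed above concrete, here is a minimal C
sketch of the dirty-limit calculation described: dirty_bytes takes
precedence when non-zero, otherwise dirty_ratio is applied to the memory
available to the cgroup. The struct and function names here are
hypothetical illustrations, not the actual memcg interfaces.

/* Hypothetical illustration only; names do not match the real memcg code. */
#include <stdio.h>

struct memcg_dirty_param {
	unsigned long long dirty_bytes;  /* 0 means "disabled, use the ratio" */
	unsigned int       dirty_ratio;  /* percent, inherited from vm_dirty_ratio */
};

/*
 * Compute the dirty limit in bytes for one cgroup, treating the cgroup
 * as a "virtual system" whose RAM is its own available memory.
 */
static unsigned long long memcg_dirty_limit(const struct memcg_dirty_param *p,
					     unsigned long long available_bytes)
{
	if (p->dirty_bytes)				/* absolute limit wins if set */
		return p->dirty_bytes;
	return available_bytes * p->dirty_ratio / 100;	/* percent of cgroup memory */
}

int main(void)
{
	struct memcg_dirty_param p = { .dirty_bytes = 0, .dirty_ratio = 20 };
	unsigned long long avail = 512ULL << 20;	/* 512 MiB available to the cgroup */

	printf("dirty limit: %llu bytes\n", memcg_dirty_limit(&p, avail));
	return 0;
}
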
BTW, please add background_dirty_ratio in the same patch series
(or some other mechanism to kick background writeback in a proper manner).

Otherwise, we can't kick background write-back until we hit dirty_ratio.
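As a rough sketch of the check a writer would make with both thresholds in
place (hypothetical names, not the kernel's balance_dirty_pages()
internals): once dirty pages pass the background threshold, asynchronous
writeback is kicked; only past the hard limit is the writer throttled.

/* Hypothetical sketch, not actual kernel code. */
#include <stdio.h>

enum dirty_action { DIRTY_OK, DIRTY_KICK_BG_WRITEBACK, DIRTY_THROTTLE };

static enum dirty_action memcg_check_dirty(unsigned long long dirty,
					    unsigned long long bg_thresh,
					    unsigned long long thresh)
{
	if (dirty >= thresh)
		return DIRTY_THROTTLE;		/* over dirty_ratio: throttle the writer */
	if (dirty >= bg_thresh)
		return DIRTY_KICK_BG_WRITEBACK;	/* over background ratio: start async writeback */
	return DIRTY_OK;
}

int main(void)
{
	/* e.g. background at 10% and hard limit at 20% of 512 MiB */
	unsigned long long bg = 51ULL << 20, hard = 102ULL << 20;

	printf("%d\n", memcg_check_dirty(80ULL << 20, bg, hard)); /* kicks background writeback */
	return 0;
}
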

Thanks,
-Kame
