Message-ID: <20091012113829.GD3007@balbir.in.ibm.com>
Date:	Mon, 12 Oct 2009 17:08:29 +0530
From:	Balbir Singh <balbir@...ux.vnet.ibm.com>
To:	Ying Han <yinghan@...gle.com>
Cc:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
	"nishimura@....nes.nec.co.jp" <nishimura@....nes.nec.co.jp>
Subject: Re: [PATCH 0/2] memcg: improving scalability by reducing lock
 contention at charge/uncharge

* Ying Han <yinghan@...gle.com> [2009-10-11 11:34:39]:

> 2009/10/10 KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
> 
> > Ying Han wrote:
> > > Hi KAMEZAWA-san: I tested your patch set based on 2.6.32-rc3, but I
> > > don't see much improvement in the page-fault rate.
> > > Here are the numbers I got:
> > >
> > > [Before]
> > >  Performance counter stats for './runpause.sh 10' (5 runs):
> > >
> > >   226272.271246  task-clock-msecs         #      3.768 CPUs    ( +-   0.193% )
> > >            4424  context-switches         #      0.000 M/sec   ( +-  14.418% )
> > >              25  CPU-migrations           #      0.000 M/sec   ( +-  23.077% )
> > >        80499059  page-faults              #      0.356 M/sec   ( +-   2.586% )
> > >    499246232482  cycles                   #   2206.396 M/sec   ( +-   0.055% )
> > >    193036122022  instructions             #      0.387 IPC     ( +-   0.281% )
> > >     76548856038  cache-references         #    338.304 M/sec   ( +-   0.832% )
> > >       480196860  cache-misses             #      2.122 M/sec   ( +-   2.741% )
> > >
> > >    60.051646892  seconds time elapsed   ( +-   0.010% )
> > >
> > > [After]
> > >  Performance counter stats for './runpause.sh 10' (5 runs):
> > >
> > >   226491.338475  task-clock-msecs         #      3.772 CPUs    ( +-   0.176% )
> > >            3377  context-switches         #      0.000 M/sec   ( +-  14.713% )
> > >              12  CPU-migrations           #      0.000 M/sec   ( +-  23.077% )
> > >        81867014  page-faults              #      0.361 M/sec   ( +-   3.201% )
> > >    499835798750  cycles                   #   2206.865 M/sec   ( +-   0.036% )
> > >    196685031865  instructions             #      0.393 IPC     ( +-   0.286% )
> > >     81143829910  cache-references         #    358.265 M/sec   ( +-   0.428% )
> > >       119362559  cache-misses             #      0.527 M/sec   ( +-   5.291% )
> > >
> > >    60.048917062  seconds time elapsed   ( +-   0.010% )
> > >
> > > I ran it on a 4-core machine with 16G of RAM, and I modified
> > > runpause.sh to fork 4 page-fault processes instead of 8. I mounted
> > > cgroup with only the memory subsystem and started running the test on
> > > the root cgroup.
> > >
> > > I believe we might have different running environments, including the
> > > cgroup configuration.  Any suggestions?
> > >
> >
> > This patch series only affects "child" cgroups. Sorry, I should have
> > stated that more clearly. It has no effect on the root cgroup.
> >
> 
> Ok, thanks for making it clearer. :) So, would you mind posting the
> cgroup+memcg configuration you are running on your host?
> 
> Thanks
>

Yes, the root cgroup case was fixed by another patchset that is now in
mainline. Another check is to see whether the resource_counter lock shows
up in /proc/lock_stat.
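
For reference, here is roughly what such a run could look like -- a
minimal sketch, assuming a cgroup (v1 style) mount at /cgroup with only
the memory controller, a kernel built with CONFIG_LOCK_STAT=y, and that
the res_counter spinlock appears under a class name containing
"counter->lock"; the mount point, group name and grep pattern are
illustrative, not the exact setup used for the numbers above:

  # Mount the memory controller and create a child cgroup, since the
  # patches only affect non-root cgroups (mount point and group name
  # are assumptions).
  mount -t cgroup -o memory none /cgroup
  mkdir /cgroup/test
  echo $$ > /cgroup/test/tasks      # move this shell (and its children) in

  # Enable lock statistics and clear any previously collected data.
  echo 1 > /proc/sys/kernel/lock_stat
  echo 0 > /proc/lock_stat

  # Re-run the workload under perf, as for the numbers quoted above.
  perf stat -r 5 ./runpause.sh 10

  # See whether the res_counter spinlock shows up among the contended
  # locks (the lock-class name is an assumption).
  grep -B 1 -A 5 'counter->lock' /proc/lock_stat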
 
-- 
	Balbir
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
