Message-ID: <20111124094532.GF6843@cmpxchg.org>
Date:	Thu, 24 Nov 2011 10:45:32 +0100
From:	Johannes Weiner <hannes@...xchg.org>
To:	Balbir Singh <bsingharora@...il.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Michal Hocko <mhocko@...e.cz>, cgroups@...r.kernel.org,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [patch 0/8] mm: memcg fixlets for 3.3

On Thu, Nov 24, 2011 at 11:39:39AM +0530, Balbir Singh wrote:
> On Wed, Nov 23, 2011 at 9:12 PM, Johannes Weiner <hannes@...xchg.org> wrote:
> >
> > Here are some minor memcg-related cleanups and optimizations, nothing
> > too exciting.  The bulk of the diffstat comes from renaming the
> > remaining variables to describe a (struct mem_cgroup *) to "memcg".
> > The rest cuts down on the (un)charge fastpaths, as people start to get
> > annoyed by those functions showing up in the profiles of their
> > non-memcg workloads.  More is to come, but I wanted to get the more
> > obvious bits out of the way.
> 
> Hi, Johannes
> 
> The renaming was a separate patch sent by Raghavendra as well; not
> sure if you've seen it.

I did, and they are already in -mm, but unless I'm missing something,
those were only for memcontrol.[ch].  My patch is for the rest of mm.

> What tests are you using to test these patches?

I usually run concurrent kernbench jobs in separate memcgs as a smoke
test with these tools:

	http://git.cmpxchg.org/?p=statutils.git;a=summary
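
A memcg here is just a child group of the memory controller; in case
you want to set one up by hand, here is a rough Python sketch,
assuming a cgroup-v1 memory controller mounted at
/sys/fs/cgroup/memory (the mount point and the 512M limit are made up
for illustration, and you need root):

	# Create one memcg for a kernbench job and move the current
	# process into it.  Paths and limit are illustrative; needs root.
	import os

	cg = "/sys/fs/cgroup/memory/kernbench-01"
	os.mkdir(cg)
	with open(os.path.join(cg, "memory.limit_in_bytes"), "w") as f:
	    f.write(str(512 << 20))            # 512M hard limit
	with open(os.path.join(cg, "tasks"), "w") as f:
	    f.write(str(os.getpid()))          # move this process into the group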

"runtest" takes a job-spec file that looks a bit like RPM spec to
define works with preparation and cleanup phases, and data collectors.
The memcg kernbench job I use is in the examples directory.  You just
need to put separate kernel source directories into place (linux-`seq
-w 04`) and then launch it like this:

	runtest -s memcg-kernbench.load `uname -r`

which will run the test and collect memory.stat of the parent memcg
every second; the collected data can then be evaluated further with
the other tools:

	readdict < `uname -r`-memory.stat.data | columns 14 15 | plot

for example, where readdict translates the "key value" lines of
memory.stat into a table with one column per key and one row per
snapshot.  Columns 14 and 15 are total_cache and total_rss (find the
indices with cat -n -- yeah, still a bit rough).  You need
python-matplotlib for plot to work.
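
To give an idea of the data shapes involved, here is roughly what
that pipeline does, as an illustrative Python sketch -- not the
actual statutils code, and the key names are just the usual cgroup-v1
memory.stat ones:

	# Illustrative sketch of the readdict | columns | plot pipeline;
	# NOT the actual statutils code.  Input: "key value" lines from
	# periodic memory.stat snapshots, e.g.
	#
	#   cache 94208
	#   rss 16384
	#   ...
	#   total_cache 94208
	#   total_rss 16384
	import sys
	import matplotlib.pyplot as plt

	samples, current = [], {}
	for line in sys.stdin:
	    fields = line.split()
	    if len(fields) != 2:
	        continue
	    key, value = fields
	    if key in current:            # repeated key: next snapshot
	        samples.append(current)
	        current = {}
	    current[key] = int(value)
	if current:
	    samples.append(current)

	# "columns 14 15" picks keys by position; named directly here.
	plt.plot([s["total_cache"] for s in samples], label="total_cache")
	plt.plot([s["total_rss"] for s in samples], label="total_rss")
	plt.legend()
	plt.show()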

Multiple runs can be collected into the same logfiles, and the
ever-increasing counters can then be folded with the "events" tool.
For example, to find
the average fault count, you would do something like this (19 =
total_pgfault, 20 = total_pgmajfault):

	for x in `seq 10`; do runtest -s foo.load foo; done
	readdict < foo-memory.stat.data | columns 19 20 | events | mean -s
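
Conceptually, folding such a counter just means differencing
consecutive samples; a sketch of the idea (again not the real
"events" tool, and the numbers are made up):

	# Fold an ever-increasing counter into per-run deltas so that
	# several runs appended to one logfile can be compared/averaged.
	# Not the real "events" tool; sample values are invented.
	def fold(samples):
	    return [b - a for a, b in zip(samples, samples[1:])]

	pgfaults = [102400, 215000, 330100]    # cumulative total_pgfault
	per_run = fold(pgfaults)               # [112600, 115100]
	print(sum(per_run) / len(per_run))     # the "mean" step: 113850.0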

Oh, and workload runtime is always recorded in NAME.time, so

	events < `uname -r`.time

gives you the timings of each run, which you can then further process
with "mean" or "median" again.
