Message-ID: <4CD1A986.2000508@us.ibm.com>
Date:	Wed, 03 Nov 2010 13:27:18 -0500
From:	Karl Rister <kmr@...ibm.com>
To:	pjt@...gle.com
CC:	linux-kernel@...r.kernel.org,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Ingo Molnar <mingo@...e.hu>,
	Srivatsa Vaddagiri <vatsa@...ibm.com>,
	Chris Friesen <cfriesen@...tel.com>,
	Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>,
	Pierre Bourdon <pbourdon@...ellency.fr>,
	Paul Turner <p@...ibm.com>, habanero@...ibm.com
Subject: Re: [RFC tg_shares_up improvements - v1 00/12] Reducing cost of
 tg->shares distribution

Hi all,

Here is some performance data for the previously posted patches, gathered 
while running a LAMP workload in a cloud-like environment; it shows 
promising reductions in CPU utilization.  In this particular test, 32 
groups totaling 64 KVM guests (each group consists of an Apache server 
guest and a MySQL server guest) run a LAMP workload driven by external 
load drivers.  When using the default values in /etc/cgconfig.conf:

mount {
         cpuset  = /cgroup/cpuset;
         cpu     = /cgroup/cpu;
         cpuacct = /cgroup/cpuacct;
         memory  = /cgroup/memory;
         devices = /cgroup/devices;
         freezer = /cgroup/freezer;
         net_cls = /cgroup/net_cls;
         blkio   = /cgroup/blkio;
}

which enable libvirt's usage of cgroups, the contents of /proc/cgroups 
look like this before launching the guests:

#subsys_name    hierarchy       num_cgroups     enabled
cpuset  1       4       1
ns      0       1       1
cpu     2       4       1
cpuacct 3       4       1
memory  4       4       1
devices 5       4       1
freezer 6       4       1
net_cls 7       1       1
blkio   8       1       1

and like this after launching the guests:

#subsys_name    hierarchy       num_cgroups     enabled
cpuset  1       68      1
ns      0       1       1
cpu     2       68      1
cpuacct 3       68      1
memory  4       68      1
devices 5       68      1
freezer 6       68      1
net_cls 7       1       1
blkio   8       1       1
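
The jump from 4 to 68 cgroups on the libvirt-managed controllers is one 
cgroup per guest (64 guests) on top of the 4 that existed beforehand.  A 
quick way to see the per-controller delta is to snapshot /proc/cgroups 
before and after launching the guests and diff the num_cgroups column; 
the sketch below uses hypothetical snapshot files seeded with a subset of 
the data above:

```shell
# Snapshots of /proc/cgroups; in a real run these would come from
#   cat /proc/cgroups > cgroups.before   (before starting the guests)
#   cat /proc/cgroups > cgroups.after    (after all guests are up)
cat > cgroups.before <<'EOF'
#subsys_name	hierarchy	num_cgroups	enabled
cpuset	1	4	1
cpu	2	4	1
memory	4	4	1
EOF
cat > cgroups.after <<'EOF'
#subsys_name	hierarchy	num_cgroups	enabled
cpuset	1	68	1
cpu	2	68	1
memory	4	68	1
EOF

# First pass records the "before" counts; second pass prints the delta.
awk 'NR==FNR { if (FNR > 1) before[$1] = $3; next }
     FNR > 1 { printf "%s: %d -> %d (+%d)\n", $1, before[$1], $3, $3 - before[$1] }' \
    cgroups.before cgroups.after
```

For the run described above this reports +64 on each controller that 
libvirt manages, matching the 64 launched guests.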

When running the workload, the run with the patches used significantly 
less CPU:

Host CPU utilization with patches: 54.35%
Host CPU utilization without patches: 80.89%

Since the workload uses a fixed injection rate, the achieved throughput 
for both test runs was the same; however, the run with the patches 
applied did achieve better quality-of-service metrics.
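
For reference, the savings implied by the two utilization figures (the 
percentages are from the runs above; the derived numbers are simple 
arithmetic on them):

```shell
# Utilization from the two runs reported above.
with_patches=54.35
without_patches=80.89

# Absolute saving in percentage points, and relative saving vs. the
# unpatched baseline.
awk -v a="$with_patches" -v b="$without_patches" 'BEGIN {
    printf "absolute reduction: %.2f points\n", b - a
    printf "relative reduction: %.1f%%\n", 100 * (b - a) / b
}'
```

That is roughly a one-third reduction in host CPU consumed for the same 
delivered throughput.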

NOTE: The runs were made using kvm.git changeset 
cec8b6b972a572b69d4902f57fb659e8a4c749af.

-- 
Karl Rister <kmr@...ibm.com>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
