Message-Id: <1343384432-19903-1-git-send-email-handai.szj@taobao.com>
Date:	Fri, 27 Jul 2012 18:20:32 +0800
From:	Sha Zhengju <handai.szj@...il.com>
To:	linux-mm@...ck.org, cgroups@...r.kernel.org
Cc:	fengguang.wu@...el.com, gthelen@...gle.com,
	akpm@...ux-foundation.org, yinghan@...gle.com, mhocko@...e.cz,
	linux-kernel@...r.kernel.org, hannes@...xchg.org,
	Sha Zhengju <handai.szj@...bao.com>
Subject: [PATCH V2 0/6] Per-cgroup page stat accounting

From: Sha Zhengju <handai.szj@...bao.com>

Hi, list

This V2 patch series provides the ability for each memory cgroup to keep independent
dirty/writeback page statistics, which can feed per-cgroup direct reclaim and
other users.
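
(For illustration only, a rough sketch of the kind of hook the series adds on the
dirty path; the enum entries and helper names below are placeholders chosen for
this mail, not necessarily the exact symbols used in the patches:)

enum memcg_page_stat_item {
	MEMCG_NR_FILE_DIRTY,		/* dirty pages charged to this memcg */
	MEMCG_NR_FILE_WRITEBACK,	/* pages under writeback in this memcg */
};

static void memcg_account_page_dirtied(struct page *page)
{
	/* keep the existing global counter ... */
	inc_zone_page_state(page, NR_FILE_DIRTY);
	/* ... and also bump the new per-cgroup counter of the owning memcg */
	mem_cgroup_inc_page_stat(page, MEMCG_NR_FILE_DIRTY);
}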

In the first three preparatory patches, we do some cleanup and rework the vfs
set-page-dirty routines so that "modify page info" and "dirty page accounting" stay
in one function as much as possible, for the sake of the bigger memcg lock (test
numbers are in the specific patch).
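
(A minimal sketch of the shape the reworked path aims for; the lock helpers and
function name below are assumed, simplified names, not the exact interfaces in
the patches:)

static int example_set_page_dirty(struct page *page,
				  struct address_space *mapping)
{
	int ret = 0;

	/* assumed memcg page-stat lock helpers, taken once around both steps */
	mem_cgroup_begin_update_page_stat(page);
	if (!TestSetPageDirty(page)) {
		/* "modify page info" and "dirty page accounting" stay together */
		account_page_dirtied(page, mapping);
		mem_cgroup_inc_page_stat(page, MEMCG_NR_FILE_DIRTY);
		ret = 1;
	}
	mem_cgroup_end_update_page_stat(page);

	return ret;
}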

Kame, I tested these patches on mainline v3.5, because I cannot boot the kernel
under linux-next :(. But the patches are built on top of your recent memcg patches
(I backported them to mainline) and I think there is no conflict with the mm tree.
So if there's no other problem, I think they could be considered for merging.



The following is a performance comparison from before and after the series:

Test steps (Mem=24g, ext4):
drop_cache; sync
cat /proc/meminfo | grep Dirty   (=4kB)
fio (buffered/randwrite/bs=4k/size=128m/filesize=1g/numjobs=8/sync)
cat /proc/meminfo | grep Dirty   (=648696kB)

We ran the test 10 times and took the average numbers:
Before:
write: io=1024.0MB, bw=334678 KB/s, iops=83669.2 , runt=  3136 msec
lat (usec): min=1 , max=26203.1 , avg=81.473, stdev=275.754

After:
write: io=1024.0MB, bw=325219 KB/s, iops= 81304.1 , runt=  3226.9 msec
lat (usec): min=1 , max=17224 , avg=86.194, stdev=298.183



There is about a 2.8% performance decrease. But I notice that once memcg is enabled,
the root memcg exists and all allocated pages belong to it, so they all go
through the root memcg statistics routines, which brings some overhead.
Moreover, in the case of memcg enabled && no non-root cgroups, we could get the root
memcg stats directly from the global numbers, which would avoid both the accounting
overhead and many if-test overheads. I'll work on this further later.
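
(A rough sketch of that idea; memcg_only_root_exists() is a hypothetical check and
the per-memcg read helper is simplified:)

static unsigned long root_memcg_dirty_pages(void)
{
	/* only the root cgroup exists: the global counter already is the answer */
	if (memcg_only_root_exists())
		return global_page_state(NR_FILE_DIRTY);

	/* otherwise fall back to the per-memcg counter */
	return mem_cgroup_read_stat(root_mem_cgroup, MEMCG_NR_FILE_DIRTY);
}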

Any comments are welcome. :)



Change log:
v2 <-- v1:
	1. add test numbers
	2. some small fixes and comment updates

Sha Zhengju (6):
	memcg-remove-MEMCG_NR_FILE_MAPPED.patch
	Make-TestSetPageDirty-and-dirty-page-accounting-in-o.patch
	Use-vfs-__set_page_dirty-interface-instead-of-doing-.patch
	memcg-add-per-cgroup-dirty-pages-accounting.patch
	memcg-add-per-cgroup-writeback-pages-accounting.patch
	memcg-Document-cgroup-dirty-writeback-memory-statist.patch

 Documentation/cgroups/memory.txt |    2 +
 fs/buffer.c                      |   36 +++++++++++++++--------
 fs/ceph/addr.c                   |   20 +------------
 include/linux/buffer_head.h      |    2 +
 include/linux/memcontrol.h       |   30 ++++++++++++++-----
 mm/filemap.c                     |    9 ++++++
 mm/memcontrol.c                  |   58 +++++++++++++++++++-------------------
 mm/page-writeback.c              |   48 ++++++++++++++++++++++++++++---
 mm/rmap.c                        |    4 +-
 mm/truncate.c                    |    6 ++++
 10 files changed, 141 insertions(+), 74 deletions(-)

