Date:	Mon, 28 Jul 2014 13:31:22 +0400
From:	Vladimir Davydov <>
To:	<>
CC:	<>, <>, <>,
	<>, <>,
	<>, <>, <>,
Subject: [PATCH -mm 0/6] Per-memcg slab shrinkers

[ It's been a long time since I sent the last version of this set, so
  I'm restarting the versioning. For those who are interested in the
  patch set history, see ]


This patch set introduces support for per-memcg slab shrinkers and
implements per-memcg fs (dcache, icache) shrinkers. It was initially
proposed by Glauber Costa.

The idea behind this is to make the list_lru structure per-memcg,
placing objects belonging to a particular memcg on the corresponding
list. This way, turning a shrinker that uses list_lru for organizing
its reclaimable objects into a memcg-aware one is just a matter of
initializing its list_lru as memcg aware.

Please note that even with this set, the current kmemcg implementation
has serious flaws, which make it unusable in production:

 - Kmem-only reclaim, which would trigger on hitting memory.kmem.limit,
   is not implemented yet. This makes memory.kmem.limit < memory.limit
   setups unusable. We are not quite sure if we really need a separate
   knob for kmem.limit though (see the discussion at

 - Since the kmem cache self-destruction patch set was withdrawn for
   performance reasons (, per-memcg kmem caches that still have objects
   at css offline are leaked. I'm planning to introduce a shrinker for
   such caches.

 - Per-memcg arrays of kmem_cache's and list_lru's can only grow and are
   never shrunk. Since the number of offline memcg's hanging around is
   practically unlimited, these arrays may become really huge and cause
   various problems even if nobody is using cgroups at the moment. I'm
   considering using flex_array's for these caches so that we could
   reclaim parts of them under memory pressure.

That's why I still leave CONFIG_MEMCG_KMEM marked as "only for

The patch set is organized as follows:
 - patches 1 and 2 make the list_lru and fs-private shrinker interfaces
   neater and suitable for extending towards per-memcg reclaim;
 - patch 3 introduces the per-memcg slab shrinker core;
 - patch 4 makes list_lru memcg-aware, and patch 5 marks the dcache and
   icache shrinkers as memcg aware;
 - patch 6 extends the memcg iterator to include offline css's to allow
   kmem reclaim from dead css's.


Vladimir Davydov (6):
  list_lru, shrinkers: introduce list_lru_shrink_{count,walk}
  fs: consolidate {nr,free}_cached_objects args in shrink_control
  vmscan: shrink slab on memcg pressure
  list_lru: add per-memcg lists
  fs: make shrinker memcg aware
  memcg: iterator: do not skip offline css

 fs/dcache.c                |   14 ++-
 fs/gfs2/main.c             |    2 +-
 fs/gfs2/quota.c            |    6 +-
 fs/inode.c                 |    7 +-
 fs/internal.h              |    7 +-
 fs/super.c                 |   45 ++++----
 fs/xfs/xfs_buf.c           |    9 +-
 fs/xfs/xfs_qm.c            |    9 +-
 fs/xfs/xfs_super.c         |    7 +-
 include/linux/fs.h         |    6 +-
 include/linux/list_lru.h   |   82 +++++++++-----
 include/linux/memcontrol.h |   64 +++++++++++
 include/linux/shrinker.h   |   10 +-
 mm/list_lru.c              |  132 +++++++++++++++++++----
 mm/memcontrol.c            |  258 ++++++++++++++++++++++++++++++++++++++++----
 mm/vmscan.c                |   94 ++++++++++++----
 mm/workingset.c            |    9 +-
 17 files changed, 615 insertions(+), 146 deletions(-)

