Message-ID: <cover.1422461573.git.vdavydov@parallels.com>
Date: Wed, 28 Jan 2015 19:22:48 +0300
From: Vladimir Davydov <vdavydov@...allels.com>
To: Andrew Morton <akpm@...ux-foundation.org>
CC: Christoph Lameter <cl@...ux.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...e.cz>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>
Subject: [PATCH -mm v2 0/3] slub: make dead caches discard free slabs immediately
Hi,

The kmem extension of the memory cgroup is almost usable now. Only one
serious issue remains: per-memcg kmem caches may pin the owner cgroup
indefinitely. This is because a slab cache may keep empty slab pages in
its private structures to optimize performance, while we take a css
reference for each charged kmem page.
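
To illustrate the lifetime problem, here is a minimal userspace model
(not kernel code; the structs and helpers below are invented for the
example): every charged kmem page takes a reference on the owner css,
so an empty slab that stays cached in the allocator keeps the css
pinned even after the cgroup is removed.

  #include <stdio.h>

  struct css { int refcnt; };              /* stand-in for cgroup_subsys_state */
  struct slab_page { struct css *memcg; };

  static void charge(struct slab_page *p, struct css *css)
  {
          p->memcg = css;
          css->refcnt++;                   /* css_get() per charged kmem page */
  }

  static void uncharge(struct slab_page *p)
  {
          p->memcg->refcnt--;              /* css_put() when the page is freed */
  }

  int main(void)
  {
          struct css memcg = { .refcnt = 1 };     /* base ref held by the cgroup */
          struct slab_page empty_slab;

          charge(&empty_slab, &memcg);

          /*
           * The cgroup is removed and drops its base ref, but the empty
           * slab is still cached on a partial list, so its page charge
           * keeps refcnt > 0 and the css can never be released.
           */
          memcg.refcnt--;
          printf("after rmdir: refcnt=%d (pinned)\n", memcg.refcnt);

          uncharge(&empty_slab);           /* only discarding the slab helps */
          printf("after discard: refcnt=%d (releasable)\n", memcg.refcnt);
          return 0;
  }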

The issue is only relevant to SLUB, because SLAB periodically reaps
empty slabs. This patch set fixes the issue for SLUB; for details,
please see patch 3.
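
The gist of the fix, reduced to a userspace sketch (invented names, not
the kernel's actual free path): once a cache is marked dead, a slab
that becomes empty on free is discarded right away instead of being
cached for later reuse.

  #include <stdbool.h>
  #include <stdio.h>

  struct slab { int inuse; };              /* objects still allocated */
  struct kmem_cache_model { bool dead; int cached_empty; };

  static void discard_slab(struct kmem_cache_model *c, struct slab *s)
  {
          (void)c; (void)s;
          printf("slab discarded, page uncharged\n"); /* drops the css pin */
  }

  static void slab_free_model(struct kmem_cache_model *c, struct slab *s)
  {
          if (--s->inuse)
                  return;                  /* slab still has live objects */
          if (c->dead)
                  discard_slab(c, s);      /* dead cache: free the page now */
          else
                  c->cached_empty++;       /* live cache: keep it for speed */
  }

  int main(void)
  {
          struct kmem_cache_model dead_cache = { .dead = true };
          struct slab s = { .inuse = 1 };

          slab_free_model(&dead_cache, &s); /* last object freed ->
                                               immediate discard */
          return 0;
  }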

Changes in v2:
 - address Christoph's concerns regarding kmem_cache_shrink
 - fix a race between put_cpu_partial reading ->cpu_partial and
   kmem_cache_shrink updating it, as proposed by Joonsoo (a sketch of
   the pattern follows this changelog)

v1: https://lkml.org/lkml/2015/1/26/317
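
For reference, below is a toy C11 sketch of the kind of ordering that
closes such a race. This is a model of a generic publish-then-drain
pattern under my own assumptions, not the actual fix from patch 3: the
busy-wait stands in for a kernel-side synchronization point (e.g.
something like kick_all_cpus_sync()), and all names are illustrative.

  #include <stdatomic.h>
  #include <stdio.h>

  static atomic_int cpu_partial = ATOMIC_VAR_INIT(30); /* per-cpu partial limit */
  static atomic_int in_flight = ATOMIC_VAR_INIT(0);    /* CPUs inside the put path */

  static void put_cpu_partial_model(void)
  {
          atomic_fetch_add(&in_flight, 1);
          if (atomic_load(&cpu_partial) > 0) {
                  /* cache the slab on the per-cpu partial list */
          } else {
                  /* limit is zero: unfreeze/discard immediately */
          }
          atomic_fetch_sub(&in_flight, 1);
  }

  static void shrink_model(void)
  {
          atomic_store(&cpu_partial, 0);   /* 1. forbid further caching */
          while (atomic_load(&in_flight))  /* 2. wait out in-flight readers */
                  ;                        /*    (kernel: a sync point) */
          /* 3. flush per-cpu partial lists; nothing can re-add after this */
          printf("flushed, cpu_partial=%d\n", atomic_load(&cpu_partial));
  }

  int main(void)
  {
          put_cpu_partial_model();
          shrink_model();
          return 0;
  }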

Thanks,

Vladimir Davydov (3):
  slub: never fail to shrink cache
  slub: fix kmem_cache_shrink return value
  slub: make dead caches discard free slabs immediately

 mm/slab.c        |  4 +--
 mm/slab.h        |  2 +-
 mm/slab_common.c | 15 +++++++--
 mm/slob.c        |  2 +-
 mm/slub.c        | 94 +++++++++++++++++++++++++++++++++++-------------------
 5 files changed, 78 insertions(+), 39 deletions(-)
--
1.7.10.4