Message-ID: <20140521150408.GB23193@esperanza>
Date: Wed, 21 May 2014 19:04:10 +0400
From: Vladimir Davydov <vdavydov@...allels.com>
To: Christoph Lameter <cl@...ux.com>
CC: <hannes@...xchg.org>, <mhocko@...e.cz>,
<akpm@...ux-foundation.org>, <linux-kernel@...r.kernel.org>,
<linux-mm@...ck.org>
Subject: Re: [PATCH RFC 3/3] slub: reparent memcg caches' slabs on memcg
offline
On Wed, May 21, 2014 at 09:41:03AM -0500, Christoph Lameter wrote:
> On Mon, 19 May 2014, Vladimir Davydov wrote:
>
> > 3) Per cpu partial slabs. We can disable this feature for dead caches by
> > adding appropriate check to kmem_cache_has_cpu_partial.
>
> There is already a s->cpu_partial number in kmem_cache. If that is zero
> then no partial cpu slabs should be kept.
>
> > So far, everything looks very simple - it seems we don't have to modify
> > __slab_free at all if we follow the instruction above.
> >
> > However, there is one thing regarding preemptable kernels. The problem
> > is after forbidding the cache store free slabs in per-cpu/node partial
> > lists by setting min_partial=0 and kmem_cache_has_cpu_partial=false
> > (i.e. marking the cache as dead), we have to make sure that all frees
> > that saw the cache as alive are over, otherwise they can occasionally
> > add a free slab to a per-cpu/node partial list *after* the cache was
> > marked dead. For instance,
>
> Ok then let's switch off preempt there? Preemption is not supported by
> most distributions and so will have the least impact.
Do I understand you correctly that the following change looks OK to you?
diff --git a/mm/slub.c b/mm/slub.c
index fdf0fe4da9a9..dc3582c2b5bb 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2676,31 +2676,31 @@ static __always_inline void slab_free(struct kmem_cache *s,
 redo:
 	/*
 	 * Determine the currently cpus per cpu slab.
 	 * The cpu may change afterward. However that does not matter since
 	 * data is retrieved via this pointer. If we are on the same cpu
 	 * during the cmpxchg then the free will succedd.
 	 */
 	preempt_disable();
 	c = this_cpu_ptr(s->cpu_slab);
 	tid = c->tid;
-	preempt_enable();
 	if (likely(page == c->page)) {
 		set_freepointer(s, object, c->freelist);
 		if (unlikely(!this_cpu_cmpxchg_double(
 				s->cpu_slab->freelist, s->cpu_slab->tid,
 				c->freelist, tid,
 				object, next_tid(tid)))) {
 			note_cmpxchg_failure("slab_free", s, tid);
 			goto redo;
 		}
 		stat(s, FREE_FASTPATH);
 	} else
 		__slab_free(s, page, x, addr);
+	preempt_enable();
 }
 
 void kmem_cache_free(struct kmem_cache *s, void *x)
--