Message-ID: <20160205165454.GB22456@esperanza>
Date: Fri, 5 Feb 2016 19:54:54 +0300
From: Vladimir Davydov <vdavydov@...tuozzo.com>
To: Dmitry Safonov <dsafonov@...tuozzo.com>
CC: <akpm@...ux-foundation.org>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>, <0x7f454c46@...il.com>,
Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: [PATCHv6] mm: slab: free kmem_cache_node after destroy sysfs file
On Fri, Feb 05, 2016 at 07:44:33PM +0300, Dmitry Safonov wrote:
...
> >> @@ -2414,8 +2415,6 @@ int __kmem_cache_shrink(struct kmem_cache *cachep, bool deactivate)
> >>  int __kmem_cache_shutdown(struct kmem_cache *cachep)
> >>  {
> >> -	int i;
> >> -	struct kmem_cache_node *n;
> >>  	int rc = __kmem_cache_shrink(cachep, false);
> >>
> >>  	if (rc)
> >> @@ -2423,6 +2422,14 @@ int __kmem_cache_shutdown(struct kmem_cache *cachep)
> >>  	free_percpu(cachep->cpu_cache);
> > And how come ->cpu_cache (and ->cpu_slab in case of SLUB) is special?
> > Can't sysfs access it either? I propose to introduce a method called
> > __kmem_cache_release (instead of __kmem_cache_free_nodes), which would
> > do all freeing, both per-cpu and per-node.
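
To make that concrete, I mean something along these lines for the SLAB
side (untested sketch only, just reusing the per-node teardown that
currently sits in __kmem_cache_shutdown(); to be called by the common
code once the sysfs file is gone):

/*
 * Untested sketch: move all freeing out of __kmem_cache_shutdown()
 * into a new __kmem_cache_release().
 */
void __kmem_cache_release(struct kmem_cache *cachep)
{
	int i;
	struct kmem_cache_node *n;

	free_percpu(cachep->cpu_cache);

	/* NUMA: free the per-node structures */
	for_each_kmem_cache_node(cachep, i, n) {
		kfree(n->shared);
		free_alien_cache(n->alien);
		kfree(n);
		cachep->node[i] = NULL;
	}
}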
> AFAICS, they aren't used by this sysfs.
They are: alloc_calls_show -> list_locations -> flush_all accesses
->cpu_slab.
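
For reference, that path is roughly the following (condensed from
mm/slub.c of this era, CONFIG_SLUB_DEBUG; details approximate). The
point is that a plain sysfs read ends up dereferencing the per-cpu
data, so ->cpu_slab has to stay valid as long as the sysfs files exist:

static ssize_t alloc_calls_show(struct kmem_cache *s, char *buf)
{
	if (!(s->flags & SLAB_STORE_USER))
		return -ENOSYS;
	return list_locations(s, buf, TRACK_ALLOC);	/* calls flush_all(s) */
}

static void flush_all(struct kmem_cache *s)
{
	on_each_cpu_cond(has_cpu_slab, flush_cpu_slab, s, 1, GFP_ATOMIC);
}

static bool has_cpu_slab(int cpu, void *info)
{
	struct kmem_cache *s = info;
	struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu);	/* <-- ->cpu_slab */

	return c->page || c->partial;
}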
> Anyway, seems reasonable, will do.

Thanks,
Vladimir