Message-ID: <alpine.DEB.2.11.1501260944550.15849@gentwo.org>
Date: Mon, 26 Jan 2015 09:48:00 -0600 (CST)
From: Christoph Lameter <cl@...ux.com>
To: Vladimir Davydov <vdavydov@...allels.com>
cc: Andrew Morton <akpm@...ux-foundation.org>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...e.cz>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH -mm 1/3] slub: don't fail kmem_cache_shrink if slab
placement optimization fails
On Mon, 26 Jan 2015, Vladimir Davydov wrote:
> SLUB's kmem_cache_shrink not only removes empty slabs from the cache,
> but also sorts slabs by the number of objects in-use to cope with
> fragmentation. To achieve that, it tries to allocate a temporary array.
> If it fails, it will abort the whole procedure.
I do not think it's worth optimizing this. If we cannot allocate even a
small object then the system is in an extremely bad state anyway.
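For readers outside mm/slub.c, below is a minimal userspace sketch of the
bucket-sort idea the changelog describes: partial slabs go into
per-inuse-count buckets, empty ones are released, and the rest are relinked
with the fullest slabs first. It is illustrative only; the struct, list and
function names are made up and are not the kernel's data structures.

#include <stdio.h>
#include <stdlib.h>

struct slab {
        int inuse;              /* objects allocated from this slab */
        struct slab *next;      /* toy singly linked "partial list" */
};

/*
 * Bucket the partial list by inuse count, free empty slabs and relink
 * the rest with the fullest slabs first, mimicking what
 * __kmem_cache_shrink does with its temporary array.
 */
static struct slab *shrink_partial(struct slab *partial, int objects)
{
        struct slab **buckets = calloc(objects, sizeof(*buckets));
        struct slab *head = NULL;
        int i;

        if (!buckets)
                return partial; /* allocation failed: leave the list alone */

        while (partial) {
                struct slab *s = partial;

                partial = s->next;
                if (!s->inuse) {                /* empty slab: release it */
                        free(s);
                        continue;
                }
                s->next = buckets[s->inuse];    /* bucket by objects in use */
                buckets[s->inuse] = s;
        }

        /* Prepend buckets from least to most used, so the fullest end up first. */
        for (i = 1; i < objects; i++) {
                while (buckets[i]) {
                        struct slab *s = buckets[i];

                        buckets[i] = s->next;
                        s->next = head;
                        head = s;
                }
        }
        free(buckets);
        return head;
}

int main(void)
{
        int counts[] = { 3, 0, 5, 1, 5, 2 };
        struct slab *list = NULL, *s;
        int i;

        for (i = 0; i < 6; i++) {
                s = malloc(sizeof(*s));
                s->inuse = counts[i];
                s->next = list;
                list = s;
        }
        list = shrink_partial(list, 6); /* pretend 6 objects fit per slab */
        while (list) {
                s = list;
                list = s->next;
                printf("%d ", s->inuse);
                free(s);
        }
        printf("\n");   /* prints: 5 5 3 2 1 */
        return 0;
}

Built with any C compiler this prints "5 5 3 2 1": fullest slabs first, the
empty slab freed, which is the ordering kmem_cache_shrink tries to
establish on the real partial lists.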
> @@ -3400,7 +3407,9 @@ int __kmem_cache_shrink(struct kmem_cache *s)
> * list_lock. page->inuse here is the upper limit.
> */
> list_for_each_entry_safe(page, t, &n->partial, lru) {
> - list_move(&page->lru, slabs_by_inuse + page->inuse);
> + if (page->inuse < objects)
> + list_move(&page->lru,
> + slabs_by_inuse + page->inuse);
> if (!page->inuse)
> n->nr_partial--;
> }
The condition is always true. A page that has page->inuse == objects
would not be on the partial list.
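For reference, dropping the check gives back the pre-patch form of the
loop, which relies on exactly that invariant to keep the slabs_by_inuse
index within the array (the comment is added here for illustration, it is
not in the source):

        list_for_each_entry_safe(page, t, &n->partial, lru) {
                /* page->inuse < objects for every slab left on n->partial */
                list_move(&page->lru, slabs_by_inuse + page->inuse);
                if (!page->inuse)
                        n->nr_partial--;
        }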