Message-ID: <20150126170147.GB28978@esperanza>
Date:	Mon, 26 Jan 2015 20:01:47 +0300
From:	Vladimir Davydov <vdavydov@...allels.com>
To:	Christoph Lameter <cl@...ux.com>
CC:	Andrew Morton <akpm@...ux-foundation.org>,
	Pekka Enberg <penberg@...nel.org>,
	David Rientjes <rientjes@...gle.com>,
	Joonsoo Kim <iamjoonsoo.kim@....com>,
	Johannes Weiner <hannes@...xchg.org>,
	Michal Hocko <mhocko@...e.cz>, <linux-mm@...ck.org>,
	<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH -mm 1/3] slub: don't fail kmem_cache_shrink if slab
 placement optimization fails

Hi Christoph,

On Mon, Jan 26, 2015 at 09:48:00AM -0600, Christoph Lameter wrote:
> On Mon, 26 Jan 2015, Vladimir Davydov wrote:
> 
> > SLUB's kmem_cache_shrink not only removes empty slabs from the cache,
> > but also sorts slabs by the number of objects in-use to cope with
> > fragmentation. To achieve that, it tries to allocate a temporary array.
> > If it fails, it will abort the whole procedure.
> 
> I do not think its worth optimizing this. If we cannot allocate even a
> small object then the system is in an extremely bad state anyways.

Hmm, I've just checked my /proc/slabinfo and seen that I have at most 512
objects per slab, so the temporary array will be at most two pages. So
you're right - this kmalloc will never fail on my system, simply because
we never fail GFP_KERNEL allocations of order < 3. However, theoretically
we can have as many as MAX_OBJS_PER_PAGE=32767 objects per slab, which
would result in a huge allocation.
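(For reference, assuming 4 KB pages and a 16-byte struct list_head on
64-bit, the array sizes work out to roughly:

      512 * 16  =    8 KB  ->  an order-1 allocation
    32767 * 16 ~=  512 KB  ->  an order-7 allocation)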

Anyway, I think that silently relying on the allocator never failing
small allocations is fragile. What if this behavior changes one day? So
I'd prefer to either make kmem_cache_shrink fall back to using a variable
on stack if the kmalloc fails, like this patch does, or place an explicit
BUG_ON after it. The latter looks dangerous to me, because, as I
mentioned above, I'm not sure that we always have fewer than 2048 objects
per slab.
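To make the fallback concrete, here is a simplified sketch of what I mean
(not the exact hunk from the patch; the names follow the existing
__kmem_cache_shrink code, and the on-stack list name is just for
illustration):

	int objects = oo_objects(s->max);
	struct list_head empty_slabs;		/* on-stack fallback */
	struct list_head *slabs_by_inuse;

	slabs_by_inuse = kmalloc(sizeof(struct list_head) * objects,
				 GFP_KERNEL);
	if (!slabs_by_inuse) {
		/*
		 * No memory for the full array: give up on sorting the
		 * partial list by inuse, but still reclaim empty slabs
		 * by keeping a single list for them on the stack.
		 */
		slabs_by_inuse = &empty_slabs;
		objects = 1;
	}

With objects reduced to 1, only empty slabs pass the page->inuse < objects
check below and get moved onto that single list.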

> 
> > @@ -3400,7 +3407,9 @@ int __kmem_cache_shrink(struct kmem_cache *s)
> >  		 * list_lock. page->inuse here is the upper limit.
> >  		 */
> >  		list_for_each_entry_safe(page, t, &n->partial, lru) {
> > -			list_move(&page->lru, slabs_by_inuse + page->inuse);
> > +			if (page->inuse < objects)
> > +				list_move(&page->lru,
> > +					  slabs_by_inuse + page->inuse);
> >  			if (!page->inuse)
> >  				n->nr_partial--;
> >  		}
> 
> The condition is always true. A page that has page->inuse == objects
> would not be on the partial list.
> 

This is for the case where we fail to allocate the slabs_by_inuse array.
We then only have a single list (on stack) for empty slabs, so only pages
with page->inuse == 0 pass the check and get moved onto it; partially
filled slabs stay where they are.

Thanks,
Vladimir