Date:	Sat, 31 May 2014 15:04:58 +0400
From:	Vladimir Davydov <vdavydov@...allels.com>
To:	Christoph Lameter <cl@...two.org>
CC:	<akpm@...ux-foundation.org>, <hannes@...xchg.org>,
	<mhocko@...e.cz>, <linux-kernel@...r.kernel.org>,
	<linux-mm@...ck.org>
Subject: Re: [PATCH -mm 7/8] slub: make dead caches discard free slabs
 immediately

On Fri, May 30, 2014 at 09:57:10AM -0500, Christoph Lameter wrote:
> On Fri, 30 May 2014, Vladimir Davydov wrote:
> 
> > (3) is a bit more difficult, because slabs are added to per-cpu partial
> > lists lock-less. Fortunately, we only have to handle the __slab_free
> > case, because, as there shouldn't be any allocation requests dispatched
> > to a dead memcg cache, get_partial_node() should never be called. In
> > __slab_free we use cmpxchg to modify kmem_cache_cpu->partial (see
> > put_cpu_partial) so that setting ->partial to a special value, which
> > will make put_cpu_partial bail out, will do the trick.
> >
> > Note, this shouldn't affect performance, because keeping empty slabs on
> > per node lists as well as using per cpu partials are only worthwhile if
> > the cache is used for allocations, which isn't the case for dead caches.
> 
> This all sounds pretty good to me but we still have some pretty extensive
> modifications that I would rather avoid.
> 
> In put_cpu_partial you can simply check that the memcg is dead right? This
> would avoid all the other modifications I would think and will not require
> a special value for the per cpu partial pointer.

That would be racy. The check whether the memcg is dead and the write to
the per-cpu partial pointer wouldn't proceed as one atomic operation. If
another thread set the dead flag between these two operations,
put_cpu_partial would add a slab to the per-cpu partial list *after* the
cache had been zapped.
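
To illustrate (a rough sketch, not actual patch hunks; memcg_cache_dead()
is just a placeholder name here for whatever predicate tests the dead
flag):

/* (a) check-then-write, as suggested above -- racy */
if (!memcg_cache_dead(s)) {
	/*
	 * Another thread can mark the cache dead and zap it right here,
	 * between the check and the write, so the slab added below
	 * would be leaked.
	 */
	this_cpu_write(s->cpu_slab->partial, page);
}

/* (b) encode the dead state in ->partial itself */
do {
	oldpage = this_cpu_read(s->cpu_slab->partial);
	if (oldpage == CPU_SLAB_PARTIAL_DEAD)
		return;		/* bail out; discard the slab instead */
	page->next = oldpage;
} while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page) != oldpage);

In (b) there is no window: if the cache is zapped after the read,
->partial is changed to the special value, the cmpxchg fails, and the
retry sees the value and bails out.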

But are the modifications this patch introduces really that extensive?

In fact, it just adds a check whether ->partial == CPU_SLAB_PARTIAL_DEAD
in a couple of places, namely put_cpu_partial and unfreeze_partials,
where it looks pretty natural, IMO. The other hunks of this patch just
(1) move some code, without modifying it, to a separate function, (2) add
BUG_ON's to the alloc paths (get_partial_node and __slab_alloc), where we
should never see this value, and (3) add checks to the sysfs/debug paths.
[ I guess I should have split this patch to make it more readable ]
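
A simplified sketch of where the other checks would sit (not the literal
hunks; memcg_cache_dead() is again only a placeholder):

/* unfreeze_partials(): a dead cache ends up with the marker instead of
 * NULL, so concurrent put_cpu_partial() callers back off */
c->partial = memcg_cache_dead(s) ? CPU_SLAB_PARTIAL_DEAD : NULL;

/* __slab_alloc()/get_partial_node(): no allocations should ever be
 * dispatched to a dead cache, so seeing the marker there is a bug */
BUG_ON(c->partial == CPU_SLAB_PARTIAL_DEAD);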

(1) and (2) don't make the code slower or more difficult to understand,
IMO. (3) is a bit cumbersome, but we can make it neater by introducing a
helper for those paths that returns the partial slab only if it wasn't
zapped, something like this:

static struct page *cpu_slab_partial(struct kmem_cache *s, int cpu)
{
	struct page *page = per_cpu_ptr(s->cpu_slab, cpu)->partial;

	/* Hide the dead-cache marker from sysfs/debug callers */
	if (page == CPU_SLAB_PARTIAL_DEAD)
		page = NULL;
	return page;
}

Thus we would only have to check for this special value in three places
in the code, namely put_cpu_partial, unfreeze_partials, and
cpu_slab_partial.
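
E.g. a sysfs/debug path that currently dereferences ->partial directly
could then do something like this (loosely modeled on show_slab_objects;
simplified):

int cpu, objects = 0;

for_each_possible_cpu(cpu) {
	struct page *page = cpu_slab_partial(s, cpu);

	if (page)
		objects += page->pobjects;
}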

Thanks.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
