Date:	Wed, 21 May 2014 19:14:24 +0400
From:	Vladimir Davydov <vdavydov@...allels.com>
To:	Christoph Lameter <cl@...ux.com>
CC:	<hannes@...xchg.org>, <mhocko@...e.cz>,
	<akpm@...ux-foundation.org>, <linux-kernel@...r.kernel.org>,
	<linux-mm@...ck.org>
Subject: Re: [PATCH RFC 3/3] slub: reparent memcg caches' slabs on memcg
 offline

On Wed, May 21, 2014 at 09:45:54AM -0500, Christoph Lameter wrote:
> On Wed, 21 May 2014, Vladimir Davydov wrote:
> 
> > I seem to have found a better way to avoid this race, one that does
> > not involve touching the free hot paths. The idea is to explicitly zap
> > each per-cpu partial list by setting it to point to an invalid ptr.
> > Since put_cpu_partial(), which is called from __slab_free(), uses an
> > atomic cmpxchg to add a new partial slab to a per-cpu partial list, it
> > is enough to check there whether the partials have been zapped and
> > bail out if so.
> >
> > The patch doing the trick is attached. Could you please take a look at
> > it once time permits?
> 
> Well, if you set s->cpu_partial = 0, then the slab should not be added
> to the partial lists. OK, it's put there temporarily, but then it is
> immediately moved to the node partial list in put_cpu_partial().

I don't think so. AFAIU, put_cpu_partial() first checks whether the
per-cpu partial list holds more than s->cpu_partial objects, draining it
if so, but then it adds the newly frozen slab there anyway.

Thanks.
