Message-ID: <20140604094730.GH6013@esperanza>
Date:	Wed, 4 Jun 2014 13:47:33 +0400
From:	Vladimir Davydov <vdavydov@...allels.com>
To:	Joonsoo Kim <js1304@...il.com>
CC:	Joonsoo Kim <iamjoonsoo.kim@....com>,
	Christoph Lameter <cl@...two.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Johannes Weiner <hannes@...xchg.org>,
	Michal Hocko <mhocko@...e.cz>,
	LKML <linux-kernel@...r.kernel.org>,
	Linux Memory Management List <linux-mm@...ck.org>
Subject: Re: [PATCH -mm 7/8] slub: make dead caches discard free slabs
 immediately

On Wed, Jun 04, 2014 at 05:53:29PM +0900, Joonsoo Kim wrote:
> Consider __slab_free(). After put_cpu_partial() in __slab_free() is called,
> we attempt to update the stat. There is a possibility that this operation
> could be a use-after-free with your solution. For now it is just the stat
> operation, but there could be more in the future. I don't like to impose
> this constraint on the slab free path.

We could move the stats update before the object free, I guess, but I admit
this is not a flexible solution, because every future modification to
slab_free would then have to be made with great care, otherwise it may break
things.
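
Something like this, just to illustrate the reordering in __slab_free()
(completely untested, written from memory of the current code):

	if (new.frozen && !was_frozen) {
		/*
		 * Update the stat before handing the page over: once
		 * put_cpu_partial() has run, the last object may have
		 * been freed and s may already be gone, so nothing
		 * must touch s after that call.
		 */
		stat(s, CPU_PARTIAL_FREE);
		put_cpu_partial(s, page, 1);
	}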

> So IMHO, it is better to defer destroying the kmem_cache until the last
> kfree() caller returns. Is that fair enough? :)

Actually, I was thinking about that (I even discussed it with Christoph), but
the problem is that there is currently no way to wait for all in-flight
kfree's to complete, because SLUB's version can be preempted at any time.

One way to solve this is to make slab_free non-preemptable and call
synchronize_sched before kmem_cache_destroy (or use call_rcu_sched).
When I started to implement this approach, I found the resulting code a
bit ugly. Also, Christoph had some concerns about it (see
https://lkml.org/lkml/2014/5/23/524). That's why I tried to go with this
patch set first, but that doesn't mean I'm 100% sure about it :-) I'll
send an implementation of the other approach (with preempt_disable)
soon.
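
For reference, the shape I have in mind is roughly this (untested sketch;
do_slab_free() is just a placeholder for the existing body of slab_free()):

	static __always_inline void slab_free(struct kmem_cache *s,
			struct page *page, void *x, unsigned long addr)
	{
		preempt_disable();
		do_slab_free(s, page, x, addr);	/* existing fast/slow path */
		preempt_enable();
	}

	void kmem_cache_destroy(struct kmem_cache *s)
	{
		...
		/*
		 * Every free that could still dereference s runs with
		 * preemption disabled, so this waits for all of them
		 * to finish before we tear the cache down.
		 */
		synchronize_sched();
		...
	}

Using call_rcu_sched() instead would avoid blocking in destroy, at the cost
of deferring the actual destruction to the callback.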

Thanks.
