Date:   Tue, 19 Jun 2018 12:21:39 -0700
From:   Shakeel Butt <shakeelb@...gle.com>
To:     "Jason A . Donenfeld" <Jason@...c4.com>
Cc:     Dmitry Vyukov <dvyukov@...gle.com>, aryabinin@...tuozzo.com,
        Alexander Potapenko <glider@...gle.com>, cl@...ux.com,
        penberg@...nel.org, rientjes@...gle.com, iamjoonsoo.kim@....com,
        Andrew Morton <akpm@...ux-foundation.org>,
        kasan-dev@...glegroups.com, Linux-MM <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Shakeel Butt <shakeelb@...gle.com>
Subject: Re: Possible regression in "slab, slub: skip unnecessary kasan_cache_shutdown()"

On Tue, Jun 19, 2018 at 8:19 AM Jason A. Donenfeld <Jason@...c4.com> wrote:
>
> On Tue, Jun 19, 2018 at 5:08 PM Shakeel Butt <shakeelb@...gle.com> wrote:
> > > > Are you using SLAB or SLUB? We stress the kernel pretty heavily, but
> > > > with SLAB, and I suspect Shakeel may also be using SLAB. So if you
> > > > are using SLUB, there is a significant chance that the bug is in the
> > > > SLUB part of the change.
> > >
> > > Nice intuition; I am indeed using SLUB rather than SLAB...
> > >
> >
> > Can you try once with SLAB? Just to make sure that it is SLUB specific.
>
> Sorry, I meant to mention that earlier. I tried with SLAB; the crash
> does not occur. This appears to be SLUB-specific.

Jason, can you try the following patch?

---
 mm/slub.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index a3b8467c14af..746cfe4515c2 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3673,9 +3673,17 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
 
 bool __kmem_cache_empty(struct kmem_cache *s)
 {
+	int cpu;
 	int node;
 	struct kmem_cache_node *n;
 
+	for_each_online_cpu(cpu) {
+		struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu);
+
+		if (c->page || slub_percpu_partial(c))
+			return false;
+	}
+
 	for_each_kmem_cache_node(s, node, n)
 		if (n->nr_partial || slabs_node(s, node))
 			return false;
-- 
2.18.0.rc1.244.gcf134e6275-goog
