Date:	Tue, 13 Dec 2011 04:57:06 +0100
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Christoph Lameter <cl@...ux.com>
Cc:	zhihua che <zhihua.che@...il.com>, linux-kernel@...r.kernel.org,
	Pekka Enberg <penberg@...helsinki.fi>
Subject: Re: [Slub allocator] There are chances that
 kmem_cache_cpu->freelist gets lost if the process happens to be rescheduled
 to a different cpu before the local_irq_save() completes in __slab_alloc()

On Monday, 12 December 2011 at 18:39 +0100, Eric Dumazet wrote:
> On Monday, 12 December 2011 at 09:50 -0600, Christoph Lameter wrote:
> 
> > Correct. Issue was introduced in 2.6.39.
> > 
> > Acked-by: Christoph Lameter <cl@...ux.com>
> > 
> 
> Indeed, I reproduced the leak with hackbench and a lot of threads.
> 
> Thanks
> 
> [PATCH] slub: fix a possible memleak in __slab_alloc()
> 
> Zhihua Che reported a possible memleak in slub allocator on
> CONFIG_PREEMPT=y builds.
> 
> It is possible that the current thread migrates right before disabling
> irqs in __slab_alloc(). We must check c->freelist again and perform a
> normal allocation instead of clobbering c->freelist.
> 
> Many thanks to Zhihua Che for spotting this bug, which was introduced
> in 2.6.39.
> 
> Reported-by: zhihua che <zhihua.che@...il.com>
> Signed-off-by: Eric Dumazet <eric.dumazet@...il.com>
> Acked-by: Christoph Lameter <cl@...ux.com>
> CC: Pekka Enberg <penberg@...helsinki.fi>
> CC: stable@...r.kernel.org
> ---
>  mm/slub.c |    6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index ed3334d..923d238 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2166,6 +2166,12 @@ redo:
>  		goto new_slab;
>  	}
>  
> +#ifdef CONFIG_PREEMPT
> +	object = c->freelist;
> +	if (object)
> +		goto load_freelist;
> +#endif
> +
>  	stat(s, ALLOC_SLOWPATH);
>  
>  	do {
> 

Thinking again about the issue, I believe it's not a CONFIG_PREEMPT-only
one.

We can be interrupted, and the IRQ handler can free an object and
populate the freelist too. So the check must always be done.
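
To make the interleaving concrete, here is a small userspace toy model of
the leak. This is not SLUB code, and the names (irq_free, refill_buggy,
refill_fixed) are made up for the illustration: an "IRQ" pushes an object
onto the per-cpu freelist right before the slow path refills it; a blind
refill loses that object, while re-checking the freelist, as the patch
does, keeps it.

#include <stdio.h>

struct object { struct object *next; };

struct cpu_cache { struct object *freelist; };

/* model of a free done from IRQ context: push onto the per-cpu freelist */
static void irq_free(struct cpu_cache *c, struct object *obj)
{
	obj->next = c->freelist;
	c->freelist = obj;
}

/* buggy slow path: blindly overwrite the freelist with a fresh slab */
static void refill_buggy(struct cpu_cache *c, struct object *fresh)
{
	c->freelist = fresh;	/* objects already on the freelist are lost */
}

/* fixed slow path: reuse the freelist if it was repopulated meanwhile */
static void refill_fixed(struct cpu_cache *c, struct object *fresh)
{
	if (c->freelist)	/* the re-check the patch adds */
		return;
	c->freelist = fresh;
}

int main(void)
{
	struct object freed = { 0 }, fresh = { 0 };
	struct cpu_cache c = { 0 };

	/* the fast path saw an empty freelist; before irqs are disabled ... */
	irq_free(&c, &freed);	/* ... an IRQ frees an object */
	refill_buggy(&c, &fresh);
	printf("buggy refill: freed object %s\n",
	       c.freelist == &freed ? "still reachable" : "leaked");

	c.freelist = NULL;
	irq_free(&c, &freed);
	refill_fixed(&c, &fresh);
	printf("fixed refill: freed object %s\n",
	       c.freelist == &freed ? "still reachable" : "leaked");
	return 0;
}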

Thanks

[PATCH v2] slub: fix a possible memleak in __slab_alloc()

Zhihua Che reported a possible memleak in slub allocator on
CONFIG_PREEMPT=y builds.

It is possible that the current thread migrates right before disabling
irqs in __slab_alloc(). We must check c->freelist again and perform a
normal allocation instead of clobbering c->freelist.

Many thanks to Zhihua Che for spotting this bug, which was introduced
in 2.6.39.

V2: It's also possible that an IRQ freed one (or several) object(s) and
populated c->freelist, so it's not a CONFIG_PREEMPT-only problem.

Reported-by: zhihua che <zhihua.che@...il.com>
Signed-off-by: Eric Dumazet <eric.dumazet@...il.com>
CC: Christoph Lameter <cl@...ux.com>
CC: Pekka Enberg <penberg@...helsinki.fi>
---
 mm/slub.c |    5 +++++
 1 file changed, 5 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index ed3334d..1a919f0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2166,6 +2166,11 @@ redo:
 		goto new_slab;
 	}
 
+	/* must check again c->freelist in case of cpu migration or IRQ */
+	object = c->freelist;
+	if (object)
+		goto load_freelist;
+
 	stat(s, ALLOC_SLOWPATH);
 
 	do {

