Message-ID: <alpine.LFD.2.02.1205181223460.3899@tux.localdomain>
Date: Fri, 18 May 2012 12:25:26 +0300 (EEST)
From: Pekka Enberg <penberg@...nel.org>
To: Joonsoo Kim <js1304@...il.com>
cc: Christoph Lameter <cl@...ux-foundation.org>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org
Subject: Re: [PATCH RESEND] slub: fix a memory leak in get_partial_node()
On Thu, 17 May 2012, Joonsoo Kim wrote:
> When the following sequence occurs:
>
> 1. a slab is acquired for the cpu partial list
> 2. an object is freed to it by a remote cpu
> 3. page->freelist = t
>
> a memory leak results.
>
> Change acquire_slab() so it does not zap the freelist when acquiring a slab
> for the cpu partial list. I think this is sufficient to fix the memory leak.
>
> Below is output of 'slabinfo -r kmalloc-256'
> when './perf stat -r 30 hackbench 50 process 4000 > /dev/null' is done.
>
> ***Vanilla***
> Sizes (bytes) Slabs Debug Memory
> ------------------------------------------------------------------------
> Object : 256 Total : 468 Sanity Checks : Off Total: 3833856
> SlabObj: 256 Full : 111 Redzoning : Off Used : 2004992
> SlabSiz: 8192 Partial: 302 Poisoning : Off Loss : 1828864
> Loss : 0 CpuSlab: 55 Tracking : Off Lalig: 0
> Align : 8 Objects: 32 Tracing : Off Lpadd: 0
>
> ***Patched***
> Sizes (bytes) Slabs Debug Memory
> ------------------------------------------------------------------------
> Object : 256 Total : 300 Sanity Checks : Off Total: 2457600
> SlabObj: 256 Full : 204 Redzoning : Off Used : 2348800
> SlabSiz: 8192 Partial: 33 Poisoning : Off Loss : 108800
> Loss : 0 CpuSlab: 63 Tracking : Off Lalig: 0
> Align : 8 Objects: 32 Tracing : Off Lpadd: 0
>
> The Total and Loss numbers show the impact of this patch.
>
> Cc: <stable@...r.kernel.org>
> Acked-by: Christoph Lameter <cl@...ux.com>
> Signed-off-by: Joonsoo Kim <js1304@...il.com>
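
For readers outside the thread: the decision the patch changes sits in
acquire_slab() in mm/slub.c. A minimal standalone sketch of the fixed logic,
with a toy struct page and no cmpxchg/locking (the struct fields and the
function name here are simplifications, not the real kernel definitions):

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the struct page fields involved (hypothetical layout,
 * not the real kernel definition). */
struct page {
	void *freelist;   /* first free object in the slab */
	unsigned inuse;   /* objects currently in use */
	unsigned objects; /* total objects in the slab */
};

/* Sketch of the fixed acquire_slab() decision, per the patch description.
 * `mode` is nonzero when the slab becomes the per-cpu slab, zero when it
 * is taken for the cpu partial list. */
static void *acquire_slab_sketch(struct page *page, int mode)
{
	void *freelist = page->freelist;

	if (mode) {
		/* cpu slab: all objects are handed to the cpu freelist,
		 * so page->freelist can safely be zapped. */
		page->inuse = page->objects;
		page->freelist = NULL;
	}
	/* else: cpu partial list -- leave page->freelist intact, so
	 * objects freed to the page by remote cpus stay reachable.
	 * The leak came from zapping the freelist in this case too. */
	return freelist;
}
```

The key point is the asymmetry: zapping the freelist is only correct when
every object in the slab is accounted to the acquiring cpu.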
Applied, thanks!