Message-ID: <20180927154647.GB31654@kroah.com>
Date: Thu, 27 Sep 2018 17:46:47 +0200
From: Greg KH <gregkh@...ux-foundation.org>
To: zhong jiang <zhongjiang@...wei.com>
Cc: iamjoonsoo.kim@....com, rientjes@...gle.com, cl@...ux.com,
penberg@...nel.org, akpm@...ux-foundation.org, mhocko@...e.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
stable@...r.kernel.org
Subject: Re: [STABLE PATCH] slub: make ->cpu_partial unsigned int
On Thu, Sep 27, 2018 at 10:43:40PM +0800, zhong jiang wrote:
> From: Alexey Dobriyan <adobriyan@...il.com>
>
> /*
>  * cpu_partial determined the maximum number of objects
>  * kept in the per cpu partial lists of a processor.
>  */
>
> Can't be negative.
>
> I hit a real issue that results in a large memory leak. Slabs are freed in
> interrupt context, so put_cpu_partial() can be interrupted more than once.
> Because lru and pobjects share a union in struct page, another core working
> on the page->lru list (for example, remove_partial() in the slab-freeing
> path) can leave pobjects holding a negative value (0xdead0000). As a result,
> a large number of slabs are added to the per-cpu partial list (see the
> sketch below the quoted message).
>
> I had posted this issue to the community before; the detailed description is
> at the link below.
>
> Link: https://www.spinics.net/lists/kernel/msg2870979.html
>
> After applying the patch, the issue is fixed, so the patch is an effective
> bugfix. It should go into stable.
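For reference, the upstream change being requested here is small: the per-cpu
partial limit in struct kmem_cache becomes unsigned, so a corrupted object
count such as the 0xdead0000 value described above can no longer compare as
"below the limit" and keep slabs parked on the per-cpu partial list. What
follows is only a rough sketch of the field and the check involved,
simplified from include/linux/slub_def.h and mm/slub.c; it is not a verbatim
copy of the patch.

	/* include/linux/slub_def.h (simplified sketch) */
	struct kmem_cache {
		/* ... other fields ... */

		/*
		 * Number of per-cpu partial objects to keep around.
		 * Formerly a plain "int"; the fix makes it unsigned,
		 * since it can never legitimately be negative.
		 */
		unsigned int cpu_partial;

		/* ... other fields ... */
	};

	/*
	 * mm/slub.c, put_cpu_partial() (simplified sketch): pobjects is
	 * read from a field that shares a union with page->lru, so a
	 * racing remove_partial() can leave list-poison bits in it.
	 */
	if (drain && pobjects > s->cpu_partial) {
		/* unfreeze and flush the per-cpu partial list */
	}

With a signed cpu_partial, a poisoned pobjects (0xdead0000 is a negative int)
never exceeds the limit, the flush above never runs, and slabs pile up. With
an unsigned cpu_partial the comparison is done in unsigned arithmetic, the
poisoned value is huge, and the per-cpu partial list gets drained.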
<formletter>
This is not the correct way to submit patches for inclusion in the
stable kernel tree. Please read:
https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
for how to do this properly.
</formletter>