Message-ID: <010001661ba398a8-f7e5b6c8-b7ff-4f01-8b18-0ad582344ea7-000000@email.amazonses.com>
Date: Thu, 27 Sep 2018 15:26:38 +0000
From: Christopher Lameter <cl@...ux.com>
To: zhong jiang <zhongjiang@...wei.com>
cc: gregkh@...ux-foundation.org, iamjoonsoo.kim@....com,
rientjes@...gle.com, penberg@...nel.org, akpm@...ux-foundation.org,
mhocko@...e.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
stable@...r.kernel.org
Subject: Re: [STABLE PATCH] slub: make ->cpu_partial unsigned int
On Thu, 27 Sep 2018, zhong jiang wrote:
> From: Alexey Dobriyan <adobriyan@...il.com>
>
> /*
> * cpu_partial determined the maximum number of objects
> * kept in the per cpu partial lists of a processor.
> */
>
> Can't be negative.
True.
> I hit a real issue that results in a large memory leak. Slabs are freed
> in interrupt context, so put_cpu_partial() can be interrupted more than
> once. Because lru and pobjects share a union in struct page, when
> another core manipulates the page->lru list (for example,
> remove_partial() in the slab-freeing path), pobjects can end up with a
> negative value (0xdead0000). As a result, a large number of slabs are
> added to the per-cpu partial list.
That sounds like it needs more investigation. Concurrent use of page
fields for other purposes can cause serious bugs.
>
> I had posted the issue to the community before. The detailed issue description is as follows.
I did not see it. Please make sure to CC the maintainers.