Message-ID: <40c24455-02fd-4b4c-7740-bb7d2af0f5c7@huawei.com>
Date: Mon, 17 Aug 2020 17:19:54 +0800
From: Abel Wu <wuyun.wu@...wei.com>
To: Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
"David Rientjes" <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
"Andrew Morton" <akpm@...ux-foundation.org>
CC: <liu.xiang6@....com.cn>,
"open list:SLAB ALLOCATOR" <linux-mm@...ck.org>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm/slub: make add_full() condition more explicit
ping :)
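For anyone picking this up: the rationale hinges on what
kmem_cache_debug() actually tests. A minimal sketch of the relevant
definitions as I read them (simplified from mm/slab.h and mm/slub.c;
double-check against your tree):

	/*
	 * kmem_cache_debug() is true for any debug flag, not just
	 * SLAB_STORE_USER:
	 *
	 *   #define SLAB_DEBUG_FLAGS \
	 *           (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER)
	 */
	static inline int kmem_cache_debug(struct kmem_cache *s)
	{
	#ifdef CONFIG_SLUB_DEBUG
		return unlikely(s->flags & SLAB_DEBUG_FLAGS);
	#else
		return 0;
	#endif
	}

	/* add_full() itself is a no-op unless SLAB_STORE_USER is set */
	static void add_full(struct kmem_cache *s,
			     struct kmem_cache_node *n, struct page *page)
	{
		if (!(s->flags & SLAB_STORE_USER))
			return;

		lockdep_assert_held(&n->list_lock);
		list_add(&page->lru, &n->full);
	}

So a cache with e.g. only SLAB_RED_ZONE currently takes n->list_lock in
deactivate_slab() for an add_full() that does nothing; testing
SLAB_STORE_USER directly is the narrower condition.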
On 2020/8/11 10:02, wuyun.wu@...wei.com wrote:
> From: Abel Wu <wuyun.wu@...wei.com>
>
> Commit a4d3f8916c65 ("slub: remove useless kmem_cache_debug() before
> remove_full()") is incomplete: it dropped the check before remove_full()
> but didn't handle the corresponding add_full() path.
>
> This patch checks for SLAB_STORE_USER instead of kmem_cache_debug(),
> since SLAB_STORE_USER should be the only flag under which add_full()
> needs the list_lock.
>
> Signed-off-by: Abel Wu <wuyun.wu@...wei.com>
> ---
> mm/slub.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index f226d66408ee..df93a5a0e9a4 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2182,7 +2182,8 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
> }
> } else {
> m = M_FULL;
> - if (kmem_cache_debug(s) && !lock) {
> +#ifdef CONFIG_SLUB_DEBUG
> + if ((s->flags & SLAB_STORE_USER) && !lock) {
> lock = 1;
> /*
> * This also ensures that the scanning of full
> @@ -2191,6 +2192,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
> */
> spin_lock(&n->list_lock);
> }
> +#endif
> }
>
> if (l != m) {
>