Message-ID: <20200629024253.GB25523@casper.infradead.org>
Date: Mon, 29 Jun 2020 03:42:53 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Long Li <lonuxli.64@...il.com>
Cc: cl@...ux.com, penberg@...nel.org, rientjes@...gle.com,
iamjoonsoo.kim@....com, akpm@...ux-foundation.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1] mm: free unused pages in kmalloc_order
On Sat, Jun 27, 2020 at 04:55:07AM +0000, Long Li wrote:
> In my environment (SLUB allocator, ARM32 board with 1G of memory),
> kmalloc(1024, GFP_HIGHUSER) allocates memory normally, but
> kmalloc(64*1024, GFP_HIGHUSER) causes a memory leak: alloc_pages()
> returns highmem physical pages, which cannot be directly converted
> to a virtual address, so kmalloc() returns NULL while the pages are
> never released. Driver developers will not usually pass the
> GFP_HIGHUSER flag to kmalloc(), but I still think this memory leak
> should be fixed. This is the first time I have posted a patch, so
> there may be something wrong with it.
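To spell out the failure path you're describing (roughly, following
mm/slab_common.c with SLUB; the trace variants are elided):

	kmalloc(64 * 1024, GFP_HIGHUSER)
	  -> kmalloc_large()
	    -> kmalloc_order()
	      -> alloc_pages(GFP_HIGHUSER | __GFP_COMP, order)
	           /* may return a highmem page on a 1G ARM32 box */
	      -> ret = page_address(page)
	           /* NULL: a highmem page has no lowmem mapping */
	      -> return NULL
	           /* caller sees failure, pages are never freed */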
Slab used to disallow GFP_HIGHMEM allocations earlier in the call path
than this, so you'd never get here. See one moron's patch here, 20
years ago ...
https://lore.kernel.org/lkml/20001228145801.C19693@parcelfarce.linux.theplanet.co.uk/
Things changed a bit since then. SLAB_LEVEL_MASK became GFP_SLAB_BUG_MASK,
then GFP_SLAB_BUG_MASK moved to mm/internal.h. Nowadays, GFP_SLAB_BUG_MASK
is checked only in new_slab(), and it is definitely skipped when we go through
the kmalloc_large() path.
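For reference, here is roughly what that looks like today (the mask
from mm/internal.h, the check from new_slab() in mm/slub.c):

	/* Do not use these with a slab allocator */
	#define GFP_SLAB_BUG_MASK (__GFP_DMA32|__GFP_HIGHMEM|~__GFP_BITS_MASK)

	static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
	{
		if (unlikely(flags & GFP_SLAB_BUG_MASK)) {
			gfp_t invalid_mask = flags & GFP_SLAB_BUG_MASK;

			flags &= ~GFP_SLAB_BUG_MASK;
			pr_warn("Unexpected gfp: %#x (%pGg). Fixing up to gfp: %#x (%pGg). Fix your code!\n",
					invalid_mask, &invalid_mask, flags, &flags);
			dump_stack();
		}

		return allocate_slab(s,
			flags & (GFP_RECLAIM_MASK | GFP_CONSTRAINT_MASK), node);
	}

None of that runs for a large allocation, because kmalloc_large()
calls kmalloc_order() directly and never goes near new_slab().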
Could you send a replacement patch which checks GFP_SLAB_BUG_MASK before
calling alloc_pages()?
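Something like this untested sketch, which mirrors the new_slab()
check (it assumes GFP_SLAB_BUG_MASK is made visible to
mm/slab_common.c, e.g. by including mm/internal.h):

	void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
	{
		void *ret = NULL;
		struct page *page;

		if (unlikely(flags & GFP_SLAB_BUG_MASK)) {
			gfp_t invalid_mask = flags & GFP_SLAB_BUG_MASK;

			/* Strip the offending bits and complain loudly */
			flags &= ~GFP_SLAB_BUG_MASK;
			pr_warn("Unexpected gfp: %#x (%pGg). Fixing up to gfp: %#x (%pGg). Fix your code!\n",
					invalid_mask, &invalid_mask, flags, &flags);
		}

		flags |= __GFP_COMP;
		page = alloc_pages(flags, order);
		if (likely(page)) {
			ret = page_address(page);
			mod_node_page_state(page_pgdat(page),
					    NR_SLAB_UNRECLAIMABLE_B,
					    PAGE_SIZE << order);
		}
		ret = kasan_kmalloc_large(ret, size, flags);
		/* As ret might get tagged, call kmemleak hook after KASAN. */
		kmemleak_alloc(ret, size, 1, flags);
		return ret;
	}

Filtering the bogus flags before alloc_pages() means a highmem page is
never allocated in the first place, so there is nothing to free on the
error path, and the behaviour matches what new_slab() already does for
regular slab allocations.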
> +++ b/mm/slab_common.c
> @@ -819,8 +819,12 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
> page = alloc_pages(flags, order);
> if (likely(page)) {
> ret = page_address(page);
> - mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE_B,
> - PAGE_SIZE << order);
> + if (ret)
> + mod_node_page_state(page_pgdat(page),
> + NR_SLAB_UNRECLAIMABLE_B,
> + PAGE_SIZE << order);
> + else
> + __free_pages(page, order);
> }
> ret = kasan_kmalloc_large(ret, size, flags);
> /* As ret might get tagged, call kmemleak hook after KASAN. */
> --
> 2.17.1