Message-ID: <4bd5ae2b-4fc6-73dc-b83b-e71826990946@suse.cz>
Date: Mon, 9 Nov 2020 12:33:46 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Mike Rapoport <rppt@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Albert Ou <aou@...s.berkeley.edu>,
Andy Lutomirski <luto@...nel.org>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Borislav Petkov <bp@...en8.de>,
Catalin Marinas <catalin.marinas@....com>,
Christian Borntraeger <borntraeger@...ibm.com>,
Christoph Lameter <cl@...ux.com>,
"David S. Miller" <davem@...emloft.net>,
Dave Hansen <dave.hansen@...ux.intel.com>,
David Hildenbrand <david@...hat.com>,
David Rientjes <rientjes@...gle.com>,
"Edgecombe, Rick P" <rick.p.edgecombe@...el.com>,
"H. Peter Anvin" <hpa@...or.com>,
Heiko Carstens <hca@...ux.ibm.com>,
Ingo Molnar <mingo@...hat.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
"Kirill A. Shutemov" <kirill@...temov.name>,
Len Brown <len.brown@...el.com>,
Michael Ellerman <mpe@...erman.id.au>,
Mike Rapoport <rppt@...ux.ibm.com>,
Palmer Dabbelt <palmer@...belt.com>,
Paul Mackerras <paulus@...ba.org>,
Paul Walmsley <paul.walmsley@...ive.com>,
Pavel Machek <pavel@....cz>, Pekka Enberg <penberg@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
Thomas Gleixner <tglx@...utronix.de>,
Vasily Gorbik <gor@...ux.ibm.com>,
Will Deacon <will@...nel.org>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, linux-pm@...r.kernel.org,
linux-riscv@...ts.infradead.org, linux-s390@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org, sparclinux@...r.kernel.org,
x86@...nel.org
Subject: Re: [PATCH v5 1/5] mm: introduce debug_pagealloc_{map,unmap}_pages()
helpers
On 11/8/20 7:57 AM, Mike Rapoport wrote:
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -1428,21 +1428,19 @@ static bool is_debug_pagealloc_cache(struct kmem_cache *cachep)
> return false;
> }
>
> -#ifdef CONFIG_DEBUG_PAGEALLOC
> static void slab_kernel_map(struct kmem_cache *cachep, void *objp, int map)
> {
> if (!is_debug_pagealloc_cache(cachep))
> return;
Hmm, I didn't notice earlier, sorry.

The is_debug_pagealloc_cache() above already includes a
debug_pagealloc_enabled_static() check, so it should be fine to use
__kernel_map_pages() directly below. Otherwise we generate two static key
checks for the same key needlessly.
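
I.e. something like this (untested sketch; it assumes the new
debug_pagealloc_map_pages()/debug_pagealloc_unmap_pages() helpers are just
static-key-guarded wrappers around __kernel_map_pages(), as in this series):

static void slab_kernel_map(struct kmem_cache *cachep, void *objp, int map)
{
	if (!is_debug_pagealloc_cache(cachep))
		return;

	/*
	 * is_debug_pagealloc_cache() already tested the static key via
	 * debug_pagealloc_enabled_static(), so call __kernel_map_pages()
	 * directly instead of going through the helpers and testing it again.
	 */
	__kernel_map_pages(virt_to_page(objp), cachep->size / PAGE_SIZE, map);
}

That way the key is tested once per call instead of twice.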
>
> - kernel_map_pages(virt_to_page(objp), cachep->size / PAGE_SIZE, map);
> + if (map)
> + debug_pagealloc_map_pages(virt_to_page(objp),
> + cachep->size / PAGE_SIZE);
> + else
> + debug_pagealloc_unmap_pages(virt_to_page(objp),
> + cachep->size / PAGE_SIZE);
> }
>
> -#else
> -static inline void slab_kernel_map(struct kmem_cache *cachep, void *objp,
> - int map) {}
> -
> -#endif
> -
> static void poison_obj(struct kmem_cache *cachep, void *addr, unsigned char val)
> {
> int size = cachep->object_size;
> @@ -2062,7 +2060,7 @@ int __kmem_cache_create(struct kmem_cache *cachep, slab_flags_t flags)
>
> #if DEBUG
> /*
> - * If we're going to use the generic kernel_map_pages()
> + * If we're going to use the generic debug_pagealloc_map_pages()
> * poisoning, then it's going to smash the contents of
> * the redzone and userword anyhow, so switch them off.
> */
>