Message-ID: <CAJuCfpGLZ88KMTH93gxfFC++AMCEcObyS1FG_S_w6Ce+koai9A@mail.gmail.com>
Date: Thu, 25 Apr 2024 20:46:13 -0700
From: Suren Baghdasaryan <surenb@...gle.com>
To: Kent Overstreet <kent.overstreet@...ux.dev>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Kees Cook <keescook@...omium.org>,
Catalin Marinas <catalin.marinas@....com>, Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>, David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>, Vlastimil Babka <vbabka@...e.cz>,
Roman Gushchin <roman.gushchin@...ux.dev>, Hyeonggon Yoo <42.hyeyoo@...il.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linux-hardening@...r.kernel.org
Subject: Re: [PATCH] mm/slub: Avoid recursive loop with kmemleak

On Thu, Apr 25, 2024 at 5:19 PM Kent Overstreet
<kent.overstreet@...ux.dev> wrote:
>
> On Thu, Apr 25, 2024 at 04:49:17PM -0700, Andrew Morton wrote:
> > On Thu, 25 Apr 2024 14:30:55 -0700 Suren Baghdasaryan <surenb@...gle.com> wrote:
> >
> > > > > --- a/mm/kmemleak.c
> > > > > +++ b/mm/kmemleak.c
> > > > > @@ -463,7 +463,7 @@ static struct kmemleak_object *mem_pool_alloc(gfp_t gfp)
> > > > >
> > > > > /* try the slab allocator first */
> > > > > if (object_cache) {
> > > > > - object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
> > > > > + object = kmem_cache_alloc_noprof(object_cache, gfp_kmemleak_mask(gfp));
> > > >
> > > > What do these get accounted to, or does this now pop a warning with
> > > > CONFIG_MEM_ALLOC_PROFILING_DEBUG?
> > >
> > > Thanks for the fix, Kees!
> > > I'll look into this recursion more closely to see if there is a better
> > > way to break it. As a stopgap measure, this seems ok to me. I also think
> > > it's unlikely that one would use both tracking mechanisms on the same
> > > system.
> >
> > I'd really like to start building mm-stable without having to route
> > around memprofiling. How about I include Kees's patch in that for now?
>
> Agreed
Yes, please. When I figure out a better way, I'll post a separate patch. Thanks!
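
For context, the recursion at issue has the general shape of an
allocation hook that itself allocates memory and so re-enters itself.
Below is a minimal userspace sketch of that shape and of the
_noprof-style escape hatch; all names in it are hypothetical
illustrations of the pattern, not the kernel's actual API.

/*
 * Illustrative sketch only, not kernel code: an allocation tracker
 * must use an untracked ("noprof") path for its own metadata, or
 * the tracking hook re-enters itself indefinitely.
 */
#include <stdio.h>
#include <stdlib.h>

static void track_alloc(void *ptr, size_t size);

/* Untracked path: plain allocation, no hook. */
static void *alloc_noprof(size_t size)
{
	return malloc(size);
}

/* Tracked path: every allocation passes through the hook. */
static void *alloc_profiled(size_t size)
{
	void *ptr = malloc(size);

	track_alloc(ptr, size);
	return ptr;
}

struct track_record {
	void *ptr;
	size_t size;
};

static void track_alloc(void *ptr, size_t size)
{
	/*
	 * The tracker needs memory for its own record. Calling
	 * alloc_profiled() here would recurse into track_alloc()
	 * forever; the untracked path breaks the loop, at the cost
	 * of the tracker's metadata being invisible to itself.
	 */
	struct track_record *rec = alloc_noprof(sizeof(*rec));

	if (!rec)
		return;
	rec->ptr = ptr;
	rec->size = size;
	printf("tracked %zu bytes at %p\n", size, ptr);
	free(rec); /* a real tracker would keep this in a table */
}

int main(void)
{
	void *p = alloc_profiled(128);

	free(p);
	return 0;
}

As far as I can tell, that is what switching mem_pool_alloc() to
kmem_cache_alloc_noprof() buys here: kmemleak's own object_cache
allocations skip the profiling hook, so the two debugging facilities
stop chasing each other. The trade-off, per Kent's question upthread,
is that those allocations are then attributed to no tag at all.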