Message-ID: <20190329161637.GC48010@arrakis.emea.arm.com>
Date: Fri, 29 Mar 2019 16:16:38 +0000
From: Catalin Marinas <catalin.marinas@....com>
To: Michal Hocko <mhocko@...nel.org>
Cc: Matthew Wilcox <willy@...radead.org>, Qian Cai <cai@....pw>,
akpm@...ux-foundation.org, cl@...ux.com, penberg@...nel.org,
rientjes@...gle.com, iamjoonsoo.kim@....com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4] kmemleak: survive in a low-memory situation
On Fri, Mar 29, 2019 at 01:02:37PM +0100, Michal Hocko wrote:
> On Thu 28-03-19 14:59:17, Catalin Marinas wrote:
> [...]
> > From 09eba8f0235eb16409931e6aad77a45a12bedc82 Mon Sep 17 00:00:00 2001
> > From: Catalin Marinas <catalin.marinas@....com>
> > Date: Thu, 28 Mar 2019 13:26:07 +0000
> > Subject: [PATCH] mm: kmemleak: Use mempool allocations for kmemleak objects
> >
> > This patch adds mempool allocations for struct kmemleak_object and
> > kmemleak_scan_area, as these are slightly more resilient than
> > kmem_cache_alloc() under memory pressure. The patch also masks out
> > all the gfp flags passed to kmemleak other than GFP_KERNEL|GFP_ATOMIC.
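For illustration, the masking and the pool creation look roughly like
this (the mask follows kmemleak's existing gfp_kmemleak_mask(); the
pool size and the init function name below are just placeholders, not
what the patch actually does):

	/* Clamp caller gfp flags: only the GFP_KERNEL/GFP_ATOMIC bits
	 * survive, and the extra flags keep a failing allocation cheap
	 * and quiet so the mempool fallback kicks in quickly. */
	#define gfp_kmemleak_mask(gfp)	(((gfp) & (GFP_KERNEL | GFP_ATOMIC)) | \
					 __GFP_NORETRY | __GFP_NOMEMALLOC | \
					 __GFP_NOWARN)

	static struct kmem_cache *object_cache;
	static mempool_t *object_mempool;

	static int __init kmemleak_pool_init(void)	/* placeholder name */
	{
		object_cache = KMEM_CACHE(kmemleak_object, SLAB_NOLEAKTRACE);
		if (!object_cache)
			return -ENOMEM;

		/* 256 pre-allocated objects is an arbitrary example;
		 * mempool_alloc() falls back to these when
		 * kmem_cache_alloc() fails under memory pressure. */
		object_mempool = mempool_create_slab_pool(256, object_cache);
		return object_mempool ? 0 : -ENOMEM;
	}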
>
> Using the mempool allocator is better than inventing our own
> implementation, but there is one thing to be slightly careful/worried
> about.
>
> This allocator expects that somebody will refill the pool in a finite
> time. Most users are OK with that because objects in flight are going
> to return to the pool in a relatively short time (think of an IO), but
> kmemleak is not guaranteed to comply with that AFAIU. Sure, ephemeral
> allocations are happening all the time, so there should be some churn
> in the pool all the time, but if we go to an extreme where there is a
> serious memory leak then I suspect we might get stuck here without any
> way forward. The page/slab allocator would eventually back off (even
> though small allocations never fail, a user context would get killed
> sooner or later), but there is no fatal_signal_pending backoff in the
> mempool alloc path.
We could improve the mempool code slightly to refill itself (from a
workqueue, or during a mempool_alloc() that allows blocking), but it's
really just best effort for a debug tool under OOM conditions. It may
be sufficient just to make the mempool size tunable (via
/sys/kernel/debug/kmemleak).
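A rough sketch of such a knob, purely as an illustration (the
object_mempool variable and the file name are hypothetical; today
/sys/kernel/debug/kmemleak is a single control file):

	#include <linux/debugfs.h>
	#include <linux/mempool.h>

	static mempool_t *object_mempool;	/* hypothetical object pool */

	static int pool_size_get(void *data, u64 *val)
	{
		*val = object_mempool->min_nr;
		return 0;
	}

	static int pool_size_set(void *data, u64 val)
	{
		/* mempool_resize() may sleep while allocating the new
		 * elements and can fail with -ENOMEM. */
		return mempool_resize(object_mempool, (int)val);
	}

	DEFINE_DEBUGFS_ATTRIBUTE(pool_size_fops, pool_size_get,
				 pool_size_set, "%llu\n");

	/* e.g. debugfs_create_file("kmemleak_pool_size", 0644, NULL,
	 *			     NULL, &pool_size_fops); */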
> Anyway, I believe this is a step in the right direction, and should
> the above ever materialize as a relevant problem we can tune the
> mempool to back off for _some_ callers or do something similar.
>
> Btw. there is a kmemleak_update_trace() call in mempool_alloc(); is
> this OK for the kmemleak allocation path?
It's not a problem, maybe only a small overhead from searching the
rbtree in kmemleak, but the lookup cannot find anything since the
kmemleak metadata for these objects is not tracked. And this only
happens if a normal allocation fails and mempool_alloc() takes an
existing object from the pool.
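For reference, that path in mempool_alloc() looks roughly like this
(heavily simplified from mm/mempool.c; locking, gfp adjustments and
the wait/retry logic are all omitted):

	void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
	{
		void *element;

		/* Try the underlying allocator first... */
		element = pool->alloc(gfp_mask, pool->pool_data);
		if (element)
			return element;

		/* ...and only fall back to a pre-allocated element when
		 * that fails.  This is where kmemleak_update_trace() is
		 * called; for kmemleak's own pools the rbtree search
		 * simply finds no metadata and returns. */
		element = remove_element(pool);
		kmemleak_update_trace(element);
		return element;
	}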
I thought about passing the mempool back into kmemleak and checking
whether it's one of the two pools it uses, but concluded that it's not
worth it.
--
Catalin