Message-ID: <68cff59d-2b0e-5a7b-bca9-36784522059b@lca.pw>
Date: Wed, 27 Mar 2019 09:05:31 -0400
From: Qian Cai <cai@....pw>
To: Michal Hocko <mhocko@...nel.org>
Cc: akpm@...ux-foundation.org, catalin.marinas@....com, cl@...ux.com,
willy@...radead.org, penberg@...nel.org, rientjes@...gle.com,
iamjoonsoo.kim@....com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4] kmemleak: survive in a low-memory situation
On 3/27/19 7:44 AM, Michal Hocko wrote:
> What? Normal spin lock implementation doesn't disable interrupts. So
> either I misunderstand what you are saying or you seem to be confused.
> The thing is that in_atomic relies on preempt_count to work properly and
> if you have CONFIG_PREEMPT_COUNT=n then you simply never know whether
> preemption is disabled so you do not know that a spin_lock is held.
> irqs_disabled on the other hand checks whether arch specific flag for
> IRQs handling is set (or cleared). So you would only catch irq safe spin
> locks with the above check.
Exactly. kmemleak_alloc() is called from only a few sites: slab allocation,
neigh_hash_alloc(), alloc_page_ext(), sg_kmalloc(), early_amd_iommu_init()
and blk_mq_alloc_rqs(). My review did not find any of them holding an
irq-unsafe spinlock.
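
To put the limitation you describe into code form, here is a minimal
sketch (my illustration, not the actual patch; kmemleak_gfp() is a
hypothetical helper) of the kind of check under discussion:

#include <linux/gfp.h>
#include <linux/irqflags.h>
#include <linux/preempt.h>

/*
 * Hypothetical helper: pick GFP flags for kmemleak's internal object
 * allocation.  With CONFIG_PREEMPT_COUNT=n, spin_lock() does not bump
 * preempt_count, so in_atomic() cannot tell that a plain (irq-unsafe)
 * spinlock is held.  irqs_disabled() only reads the arch IRQ flag, so
 * it catches spin_lock_irqsave() sections but nothing else.
 */
static gfp_t kmemleak_gfp(gfp_t gfp)
{
	if (in_atomic() || irqs_disabled())
		return GFP_ATOMIC;	/* must not sleep */
	return gfp;			/* sleeping allocation is fine */
}

So the check is safe under irq-safe locks but blind to irq-unsafe
ones, which is why the call-site review above matters.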
Could a future code change call kmemleak_alloc() with an irq-unsafe
spinlock held? Always possible, but unlikely. I could add a comment to
kmemleak_alloc() about this, though.
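
Something along these lines, say (wording is my sketch, not taken from
the patch):

/*
 * Note for callers: kmemleak_alloc() may itself allocate memory.  With
 * CONFIG_PREEMPT_COUNT=n it cannot detect that an irq-unsafe spinlock
 * is held, so do not add a call site that holds one unless the gfp
 * flags passed in are atomic.
 */
void __ref kmemleak_alloc(const void *ptr, size_t size, int min_count,
			  gfp_t gfp)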