Message-ID: <20181130181956.eewrlaabtceekzyu@linutronix.de>
Date: Fri, 30 Nov 2018 19:19:57 +0100
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: He Zhe <zhe.he@...driver.com>
Cc: catalin.marinas@....com, tglx@...utronix.de, rostedt@...dmis.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
linux-rt-users@...r.kernel.org
Subject: Re: [PATCH v2] kmemleak: Turn kmemleak_lock to raw spinlock on RT
On 2018-11-24 22:26:46 [+0800], He Zhe wrote:
> On latest v4.19.1-rt3, both of the call traces can be reproduced with kmemleak
> enabled, and neither can be reproduced with kmemleak disabled.
okay. So it needs attention.
> On the latest mainline tree, neither can be reproduced, no matter whether
> kmemleak is enabled or disabled.
>
> I don't get why kfree from a preempt-disabled section should cause a warning
> without kmemleak, since kfree can't sleep.
it might. It will acquire a sleeping lock if it has to go down to the
memory allocator to actually give memory back.
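Roughly this kind of pattern is what trips it (the object name is made up,
and whether a sleeping lock is actually hit depends on the slab/page state):

        preempt_disable();
        kfree(obj);     /* may hand pages back to the page allocator,
                         * which takes spinlock_t based locks that
                         * sleep on RT */
        preempt_enable();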
> If I understand correctly, the call trace above is caused by trying to schedule
> after preemption is disabled, which cannot happen in the mainline kernel. So
> we might need to switch to a raw lock to keep preemption disabled.
The buddy allocator runs with spinlocks, so on !RT it is fine to call
kfree() with preemption or interrupts disabled.
I don't think we want to use raw locks in the buddy allocator.
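Just to spell out the distinction (sketch only, the lock and function
names below are made up):

        /* On !RT both lock types disable preemption and spin. On RT a
         * spinlock_t becomes a sleeping rtmutex while a raw_spinlock_t
         * keeps the spinning behaviour.
         */
        static DEFINE_RAW_SPINLOCK(example_raw_lock);

        static void example_atomic_path(void)
        {
                unsigned long flags;

                raw_spin_lock_irqsave(&example_raw_lock, flags);
                /* safe even with preemption/interrupts disabled on RT */
                raw_spin_unlock_irqrestore(&example_raw_lock, flags);
        }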
> From what I found above, this is RT-only and happens on v4.18 and v4.19.
>
> The call trace above is caused by grabbing kmemleak_lock, then getting
> scheduled out, and then re-grabbing kmemleak_lock. Using a raw lock can also
> solve this problem.
But this is a reader/writer lock, and if I understand the other part of
the thread correctly, it needs to allow multiple readers.
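From memory, the reader side in kmemleak.c looks roughly like this (not a
verbatim quote), and several contexts can hold kmemleak_lock as readers at
the same time:

        static struct kmemleak_object *lookup_sketch(unsigned long ptr,
                                                     int alias)
        {
                struct kmemleak_object *object;
                unsigned long flags;

                read_lock_irqsave(&kmemleak_lock, flags);
                object = lookup_object(ptr, alias);  /* object tree search */
                read_unlock_irqrestore(&kmemleak_lock, flags);
                return object;
        }

A raw_spinlock_t would serialise all of those readers as well.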
Couldn't we just get rid of that kfree() or move it somewhere else?
I mean, if we free() the memory on CPU-down and allocate it again on
CPU-up, then we could skip that, right? Just allocate it and don't free
it, because the CPU will likely come up again.
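Something along these lines, with all names made up, just to show the
shape of the idea:

        static DEFINE_PER_CPU(void *, foo_buf);

        static int foo_cpu_online(unsigned int cpu)
        {
                /* reuse the buffer from a previous online/offline cycle */
                if (!per_cpu(foo_buf, cpu))
                        per_cpu(foo_buf, cpu) = kmalloc(PAGE_SIZE, GFP_KERNEL);
                return per_cpu(foo_buf, cpu) ? 0 : -ENOMEM;
        }

        static int foo_cpu_offline(unsigned int cpu)
        {
                /* intentionally no kfree(); the CPU will likely come back */
                return 0;
        }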
> Thanks,
> Zhe
Sebastian