Message-ID: <13f78472-2fa8-4af9-9d5f-a93cb16cc7ca@redhat.com>
Date: Mon, 7 Oct 2024 11:54:54 -0400
From: Waiman Long <llong@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>, Waiman Long <llong@...hat.com>
Cc: Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Boqun Feng <boqun.feng@...il.com>, linux-kernel@...r.kernel.org,
Thomas Gleixner <tglx@...utronix.de>, Luis Goncalves <lgoncalv@...hat.com>,
Chunyu Hu <chuhu@...hat.com>
Subject: Re: [PATCH] locking/rtmutex: Always use trylock in rt_mutex_trylock()
On 10/7/24 11:33 AM, Peter Zijlstra wrote:
> On Mon, Oct 07, 2024 at 11:23:32AM -0400, Waiman Long wrote:
>
>>> Is the problem that:
>>>
>>> sched_tick()
>     raw_spin_lock(&rq->__lock);
>>>   task_tick_mm_cid()
>>>     task_work_add()
>>>       kasan_save_stack()
>>> idiotic crap while holding rq->__lock ?
>>>
>>> Because afaict that is completely insane. And has nothing to do with
>>> rtmutex.
>>>
>>> We are not going to change rtmutex because instrumentation shit is shit.
>> Yes, it is KASAN that causes the page allocation while holding the
>> rq->__lock. Maybe we can blame KASAN for this. It is actually not a problem
>> for a non-PREEMPT_RT kernel because only trylocks are used there. However,
>> on a PREEMPT_RT kernel, rt_spin_trylock() does not use trylock all the
>> way down.
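
For reference, the reason the trylock does not stay a trylock on PREEMPT_RT:
rt_spin_trylock() falls back to the rt_mutex slow path on contention, and that
slow path takes the rtmutex wait_lock unconditionally. Roughly the relevant
piece (abridged sketch, not the exact source in kernel/locking/rtmutex.c):

	/*
	 * Abridged sketch of the PREEMPT_RT trylock slow path: the fast path
	 * is a cmpxchg on the owner field; when that fails we end up here,
	 * and wait_lock is taken with a plain raw_spin_lock_irqsave() rather
	 * than a trylock -- which is the acquisition that nests under
	 * rq->__lock in the splat.
	 */
	static int rt_mutex_slowtrylock(struct rt_mutex_base *lock)
	{
		unsigned long flags;
		int ret;

		if (rt_mutex_owner(lock))
			return 0;

		/* Not a trylock: acquires wait_lock unconditionally. */
		raw_spin_lock_irqsave(&lock->wait_lock, flags);
		ret = __rt_mutex_slowtrylock(lock);
		raw_spin_unlock_irqrestore(&lock->wait_lock, flags);

		return ret;
	}
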
> It has nothing to do with trylock, and everything to do with scheduler
> locks being special.
>
> But even so, trying to squirrel a spinlock inside a raw_spinlock is
> dodgy at the best of times; yes, it mostly works, but it should be avoided
> whenever possible.
>
> And instrumentation just doesn't count.
>
>> This is certainly a problem that we need to fix as there
>> may be other similar cases not involving rq->__lock lurking somewhere.
> There cannot be; the lock order is:
>
>   rtmutex->wait_lock
>     task->pi_lock
>       rq->__lock
>
> Trying to subvert that order gets you a splat; any other:
>
>   raw_spin_lock(&foo);
>   spin_trylock(&bar);
>
> will 'work', despite probably not being a very good idea.
>
> Any case involving the scheduler locks needs to be eradicated, not
> worked around.
OK, I will see what I can do to work around this issue.
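
For completeness, my reading of the nesting that triggers the splat, as a
purely hypothetical sketch (tick_path_sketch() and 'aux' are made-up names,
not code from the tree):

	/*
	 * The documented order is wait_lock -> pi_lock -> rq->__lock. With
	 * rq->__lock already held, a spinlock_t trylock on PREEMPT_RT still
	 * takes the underlying rtmutex wait_lock (see the slow path sketch
	 * above), giving the reversed rq->__lock -> wait_lock nesting even
	 * though the trylock itself can never block on 'aux'.
	 */
	static void tick_path_sketch(struct rq *rq, spinlock_t *aux)
	{
		raw_spin_lock(&rq->__lock);	/* e.g. the sched_tick() path */

		if (spin_trylock(aux))		/* e.g. reached via the KASAN allocation path */
			spin_unlock(aux);

		raw_spin_unlock(&rq->__lock);
	}
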
Cheers,
Longman