Message-ID: <874kuipsbw.fsf@nanos.tec.linutronix.de>
Date: Fri, 20 Mar 2020 23:36:03 +0100
From: Thomas Gleixner <tglx@...utronix.de>
To: paulmck@...nel.org
Cc: LKML <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Ingo Molnar <mingo@...nel.org>, Will Deacon <will@...nel.org>,
Joel Fernandes <joel@...lfernandes.org>,
Steven Rostedt <rostedt@...dmis.org>,
Randy Dunlap <rdunlap@...radead.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Logan Gunthorpe <logang@...tatee.com>,
Kurt Schwemmer <kurt.schwemmer@...rosemi.com>,
Bjorn Helgaas <bhelgaas@...gle.com>, linux-pci@...r.kernel.org,
Felipe Balbi <balbi@...nel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
linux-usb@...r.kernel.org, Kalle Valo <kvalo@...eaurora.org>,
"David S. Miller" <davem@...emloft.net>,
linux-wireless@...r.kernel.org, netdev@...r.kernel.org,
Oleg Nesterov <oleg@...hat.com>,
Davidlohr Bueso <dave@...olabs.net>,
Michael Ellerman <mpe@...erman.id.au>,
Arnd Bergmann <arnd@...db.de>, linuxppc-dev@...ts.ozlabs.org
Subject: Re: [patch V2 08/15] Documentation: Add lock ordering and nesting documentation
"Paul E. McKenney" <paulmck@...nel.org> writes:
> On Fri, Mar 20, 2020 at 08:51:44PM +0100, Thomas Gleixner wrote:
>> "Paul E. McKenney" <paulmck@...nel.org> writes:
>> >
>> > - The soft interrupt related suffix (_bh()) still disables softirq
>> > handlers. However, unlike non-PREEMPT_RT kernels (which disable
>> > preemption to get this effect), PREEMPT_RT kernels use a per-CPU
>> > lock to exclude softirq handlers.
>>
>> I've made that:
>>
>> - The soft interrupt related suffix (_bh()) still disables softirq
>> handlers.
>>
>> Non-PREEMPT_RT kernels disable preemption to get this effect.
>>
>> PREEMPT_RT kernels use a per-CPU lock for serialization. The lock
>> disables softirq handlers and prevents reentrancy by a preempting
>> task.
>
> That works! At the end, I would instead say "prevents reentrancy
> due to task preemption", but what you have works.
Yours is better.
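
Roughly, the difference boils down to the sketch below. This is
illustrative pseudo-C only; the per-CPU lock name and the details are
not the exact identifiers used in the RT tree:

#ifndef CONFIG_PREEMPT_RT
void local_bh_disable(void)
{
	/*
	 * Bumping the softirq count in preempt_count disables both
	 * softirq handlers and preemption on this CPU.
	 */
	preempt_count_add(SOFTIRQ_DISABLE_OFFSET);
}
#else
/* Per-CPU lock serializing softirq handlers and _bh sections */
static DEFINE_PER_CPU(local_lock_t, bh_lock);

void local_bh_disable(void)
{
	/*
	 * Acquiring the per-CPU lock excludes softirq handlers and
	 * prevents a preempting task on this CPU from entering a
	 * _bh serialized section, but the task stays preemptible.
	 */
	local_lock(&bh_lock);
}
#endif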
>> - Task state is preserved across spinlock acquisition, ensuring that the
>> task-state rules apply to all kernel configurations. Non-PREEMPT_RT
>> kernels leave task state untouched. However, PREEMPT_RT must change
>> task state if the task blocks during acquisition. Therefore, it
>> saves the current task state before blocking and the corresponding
>> lock wakeup restores it. A regular (not lock related) wakeup sets
>> the task state to RUNNING. If this happens while the task is blocked
>> on a spinlock then the saved task state is changed so that the
>> correct state is restored on lock wakeup.
>>
>> Hmm?
>
> I of course cannot resist editing the last two sentences:
>
> ... Other types of wakeups unconditionally set task state to RUNNING.
> If this happens while a task is blocked while acquiring a spinlock,
> then the task state is restored to its pre-acquisition value at
> lock-wakeup time.
Errm no. That would mean
state = UNINTERRUPTIBLE
lock()
block()
real_state = state
state = SLEEPONLOCK
non lock wakeup
state = RUNNING <--- FAIL #1
lock wakeup
state = real_state <--- FAIL #2
How it works is:
state = UNINTERRUPTIBLE
lock()
block()
real_state = state
state = SLEEPONLOCK
non lock wakeup
real_state = RUNNING
lock wakeup
state = real_state == RUNNING
If there is no 'non lock wakeup' before the lock wakeup:
state = UNINTERRUPTIBLE
lock()
block()
real_state = state
state = SLEEPONLOCK
lock wakeup
state = real_state == UNINTERRUPTIBLE
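In pseudo-C the mechanism looks about like this. Again illustrative
only; the function and field names mirror the sequences above and do
not match the actual RT code one to one:

struct task_struct {
	long	state;		/* current task state */
	long	real_state;	/* state saved across a spinlock block */
	/* ... */
};

void rt_spin_lock_slowlock(struct rt_mutex *lock)
{
	/* Preserve the caller supplied state, e.g. UNINTERRUPTIBLE */
	current->real_state = current->state;
	current->state = SLEEPONLOCK;

	block_on(lock);

	/* Lock wakeup: restore whatever real_state holds by now */
	current->state = current->real_state;
}

int wake_up_process(struct task_struct *p)
{
	if (p->state == SLEEPONLOCK) {
		/*
		 * Regular (not lock related) wakeup while the task is
		 * blocked on a spinlock: redirect it to the saved state,
		 * so the subsequent lock wakeup restores RUNNING.
		 */
		p->real_state = TASK_RUNNING;
		return 1;
	}
	p->state = TASK_RUNNING;
	return 1;
}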
I agree that what I tried to express is hard to parse, but it's at least
halfway correct :)
Thanks,
tglx