Message-ID: <87r1g0mqir.mognet@arm.com>
Date: Thu, 15 Jul 2021 00:20:28 +0100
From: Valentin Schneider <valentin.schneider@....com>
To: Thomas Gleixner <tglx@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Juri Lelli <juri.lelli@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Will Deacon <will@...nel.org>,
Waiman Long <longman@...hat.com>,
Boqun Feng <boqun.feng@...il.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Davidlohr Bueso <dave@...olabs.net>
Subject: Re: [patch 03/50] sched: Prepare for RT sleeping spin/rwlocks

Hi,

On 13/07/21 17:10, Thomas Gleixner wrote:
> From: Thomas Gleixner <tglx@...utronix.de>
>
> Waiting for spinlocks and rwlocks on non RT enabled kernels is task::state
> preserving. Any wakeup which matches the state is valid.
>
> RT enabled kernels substitute them with 'sleeping' spinlocks. This creates
> an issue vs. task::state.
>
> In order to block on the lock the task has to overwrite task::state and a
> consecutive wakeup issued by the unlocker sets the state back to
> TASK_RUNNING. As a consequence the task loses the state which was set
> before the lock acquire and also any regular wakeup targeted at the task
> while it is blocked on the lock.
>
I'm not sure I get this for spinlocks - p->__state != TASK_RUNNING means the
task is stopped (or about to be), and IMO that doesn't go with spinning. I
was thinking perhaps ptrace could be an issue, but I don't have a clear
picture of that either. What am I missing?
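
For reference, the pattern I read the changelog as describing (a sketch of my
own, not taken from the patch; struct foo, foo_wait() and the ->ready flag
are invented for illustration) would be a waiter that sets its state before
grabbing a spinlock protecting the wait condition:

  #include <linux/sched.h>
  #include <linux/spinlock.h>

  /* Made-up example structure, not from the patch */
  struct foo {
          spinlock_t lock;
          bool ready;
  };

  /* Simplified waiter: no re-check loop, no signal handling */
  static void foo_wait(struct foo *f)
  {
          set_current_state(TASK_INTERRUPTIBLE);
          spin_lock(&f->lock);            /* On RT this is a sleeping lock,
                                           * so blocking here has to
                                           * overwrite task::state, and the
                                           * unlocker's wakeup sets it back
                                           * to TASK_RUNNING. */
          if (!f->ready) {
                  spin_unlock(&f->lock);
                  schedule();             /* The TASK_INTERRUPTIBLE set
                                           * above may already be lost. */
                  return;
          }
          spin_unlock(&f->lock);
          __set_current_state(TASK_RUNNING);
  }

With !RT spinlocks the lock acquisition never touches task::state, which is
(as I understand it) why the saved_state dance below is only needed on RT.
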
> @@ -213,6 +234,47 @@ struct task_group;
> 	raw_spin_unlock_irqrestore(&current->pi_lock, flags);	\
> } while (0)
>
> +/*
> + * PREEMPT_RT specific variants for "sleeping" spin/rwlocks
> + *
> + * RT's spin/rwlock substitutions are state preserving. The state of the
> + * task when blocking on the lock is saved in task_struct::saved_state and
> + * restored after the lock has been acquired. These operations are
> + * serialized by task_struct::pi_lock against try_to_wake_up(). Any non RT
> + * lock related wakeups while the task is blocked on the lock are
> + * redirected to operate on task_struct::saved_state to ensure that these
> + * are not dropped. On restore task_struct::saved_state is set to
> + * TASK_RUNNING so any wakeup attempt redirected to saved_state will fail.
> + *
> + * The lock operation looks like this:
> + *
> + * current_save_and_set_rtlock_wait_state();
> + * for (;;) {
> + * if (try_lock())
> + * break;
> + * raw_spin_unlock_irq(&lock->wait_lock);
> + * schedule_rtlock();
> + * raw_spin_lock_irq(&lock->wait_lock);
> + * set_current_state(TASK_RTLOCK_WAIT);
> + * }
> + * current_restore_rtlock_saved_state();
> + */
> +#define current_save_and_set_rtlock_wait_state() \
> + do { \
> +		raw_spin_lock(&current->pi_lock);			\
> + current->saved_state = current->state; \
^^^^^
That one somehow survived the s/state/__state/ renaming.
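
Presumably (assuming the p->__state rename from the earlier rework) that
line wants to be something like:

  		current->saved_state = current->__state;		\

(just a sketch of the one-line fix, the rest of the macro unchanged).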