Message-ID: <20170104100720.GA4985@tardis.cn.ibm.com>
Date: Wed, 4 Jan 2017 18:07:20 +0800
From: Boqun Feng <boqun.feng@...il.com>
To: Waiman Long <longman@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 6/7] locking/rtqspinlock: Voluntarily yield CPU when
need_resched()
On Tue, Jan 03, 2017 at 01:00:29PM -0500, Waiman Long wrote:
> Ideally we want the CPU to be preemptible even when inside or waiting
> for a lock. We cannot make it preemptible when inside a lock critical
> section, but we can try to make the task voluntarily yield the CPU
> when waiting for a lock.
>
> This patch checks the need_resched() flag and yields the CPU when the
> preemption count is 1, i.e. the spin_lock() call wasn't made in a
> region that already had preemption disabled. Otherwise, it will just
> perform RT spinning with a minimum priority of 1.
>
> Signed-off-by: Waiman Long <longman@...hat.com>
> ---
> kernel/locking/qspinlock_rt.h | 68 +++++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 65 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/locking/qspinlock_rt.h b/kernel/locking/qspinlock_rt.h
> index 0c4d051..18ec1f8 100644
> --- a/kernel/locking/qspinlock_rt.h
> +++ b/kernel/locking/qspinlock_rt.h
> @@ -43,6 +43,16 @@
> * it will have to break out of the MCS wait queue just like what is done
> * in the OSQ lock. Then it has to retry RT spinning if it has been boosted
> * to RT priority.
> + *
> + * Another RT requirement is that the CPU needs to be preemptible even when
> + * waiting for a spinlock. If the task has already acquired the lock, we
> + * will let it run to completion to release the lock and reenable preemption.
> + * For a non-nested spinlock, a spinlock waiter will periodically check the
> + * need_resched flag to see if it should break out of the waiting loop and
> + * yield the CPU, as long as the preemption count indicates just one
> + * preempt_disable(). For a nested spinlock with the outer lock acquired, it
> + * will boost its priority to the highest RT priority level to try to acquire
> + * the inner lock, finish up its work, release the locks and reenable preemption.
> */
> #include <linux/sched.h>
>
> @@ -51,6 +61,15 @@
> #endif
>
> /*
> + * Rescheduling is only needed when running in task context, the
> + * PREEMPT_NEED_RESCHED flag is set and the preemption count is one.
> + * If only the TIF_NEED_RESCHED flag is set, it will be moved to RT
> + * spinning with a minimum priority of 1.
> + */
> +#define rt_should_resched() (preempt_count() == \
> + (PREEMPT_OFFSET | PREEMPT_NEED_RESCHED))
> +
Maybe I am missing something... but
On x86, PREEMPT_NEED_RESCHED is used in an inverted sense, i.e. the bit
being 0 indicates "need to reschedule", and preempt_count() masks this
very bit away, which makes rt_should_resched() always false. So...
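To illustrate, here is a quick standalone userspace sketch of the x86
semantics (the helpers below are simplified stand-ins loosely modelled
on arch/x86/include/asm/preempt.h, not the kernel's actual code, and
the two rt_should_resched_* names are made up for the comparison):

/*
 * Mock of the x86 preempt_count()/PREEMPT_NEED_RESCHED semantics,
 * showing why the proposed rt_should_resched() can never be true there.
 */
#include <stdbool.h>
#include <stdio.h>

#define PREEMPT_OFFSET		1U
#define PREEMPT_NEED_RESCHED	0x80000000U	/* bit 31, inverted sense on x86 */

/*
 * Stand-in for the per-CPU __preempt_count: the NEED_RESCHED bit is
 * *set* while no reschedule is needed and *cleared* when one is.
 */
static unsigned int __preempt_count = PREEMPT_NEED_RESCHED;

/* Like the x86 preempt_count(): the folded need-resched bit is masked away. */
static unsigned int preempt_count(void)
{
	return __preempt_count & ~PREEMPT_NEED_RESCHED;
}

/* Like set_preempt_need_resched(): clearing the bit marks "reschedule needed". */
static void set_preempt_need_resched(void)
{
	__preempt_count &= ~PREEMPT_NEED_RESCHED;
}

/*
 * The check as proposed in the patch: never true, because preempt_count()
 * can never have PREEMPT_NEED_RESCHED set.
 */
static bool rt_should_resched_patch(void)
{
	return preempt_count() == (PREEMPT_OFFSET | PREEMPT_NEED_RESCHED);
}

/*
 * One possible alternative, in the spirit of the x86 should_resched():
 * compare the raw count, where "need resched" shows up as the bit
 * being clear.
 */
static bool rt_should_resched_fixed(void)
{
	return __preempt_count == PREEMPT_OFFSET;
}

int main(void)
{
	__preempt_count += PREEMPT_OFFSET;	/* spin_lock(): preempt_disable() */
	set_preempt_need_resched();		/* scheduler requests a reschedule */

	printf("patch check: %d, alternative check: %d\n",
	       rt_should_resched_patch(), rt_should_resched_fixed());
	/* prints "patch check: 0, alternative check: 1" */
	return 0;
}

With the bit folded in the inverted sense, "reschedule needed while
holding exactly one preempt_disable()" shows up as the raw per-CPU
count being exactly PREEMPT_OFFSET, which is what
should_resched(PREEMPT_OFFSET) tests on x86.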
Regards,
Boqun