Date:   Wed, 4 Jan 2017 16:57:16 -0500
From:   Waiman Long <longman@...hat.com>
To:     Boqun Feng <boqun.feng@...il.com>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        "H. Peter Anvin" <hpa@...or.com>, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 6/7] locking/rtqspinlock: Voluntarily yield CPU when
 need_sched()

On 01/04/2017 05:07 AM, Boqun Feng wrote:
> On Tue, Jan 03, 2017 at 01:00:29PM -0500, Waiman Long wrote:
>> Ideally we want the CPU to be preemptible even when inside or waiting
>> for a lock. We cannot make it preemptible when inside a lock critical
>> section, but we can try to make the task voluntarily yield the CPU
>> when waiting for a lock.
>>
>> This patch checks the need_resched() flag and yields the CPU when the
>> preemption count is 1, i.e. when the spin_lock() call was not made
>> from a region that already disables preemption. Otherwise, it just
>> performs RT spinning with a minimum priority of 1.
>>
>> Signed-off-by: Waiman Long <longman@...hat.com>
>> ---
>>  kernel/locking/qspinlock_rt.h | 68 +++++++++++++++++++++++++++++++++++++++++--
>>  1 file changed, 65 insertions(+), 3 deletions(-)
>>
>> diff --git a/kernel/locking/qspinlock_rt.h b/kernel/locking/qspinlock_rt.h
>> index 0c4d051..18ec1f8 100644
>> --- a/kernel/locking/qspinlock_rt.h
>> +++ b/kernel/locking/qspinlock_rt.h
>> @@ -43,6 +43,16 @@
>>   * it will have to break out of the MCS wait queue just like what is done
>>   * in the OSQ lock. Then it has to retry RT spinning if it has been boosted
>>   * to RT priority.
>> + *
>> + * Another RT requirement is that the CPU needs to be preemptible even when
>> + * waiting for a spinlock. If the task has already acquired the lock, we
>> + * will let it run to completion to release the lock and reenable preemption.
>> + * For a non-nested spinlock, a lock waiter will periodically check the
>> + * need_resched flag to see if it should break out of the waiting loop and
>> + * yield the CPU, as long as the preemption count indicates just one
>> + * preempt_disable(). For a nested spinlock with the outer lock acquired, it
>> + * will boost its priority to the highest RT priority level to try to acquire
>> + * the inner lock, finish its work, release the locks and reenable preemption.
>>   */
>>  #include <linux/sched.h>
>>  
>> @@ -51,6 +61,15 @@
>>  #endif
>>  
>>  /*
>> + * Rescheduling is only needed when we are in task context, the
>> + * PREEMPT_NEED_RESCHED flag is set and the preemption count is one.
>> + * If only the TIF_NEED_RESCHED flag is set, the waiter will instead
>> + * move to RT spinning with a minimum priority of 1.
>> + */
>> +#define rt_should_resched()	(preempt_count() == \
>> +				(PREEMPT_OFFSET | PREEMPT_NEED_RESCHED))
>> +
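
For reference, the waiter behaviour the patch description aims for can be
sketched roughly as follows. This is only an illustrative sketch, not the
code in the patch; rt_try_acquire() and rt_spin_once() are hypothetical
placeholders for the trylock step and for one pass of RT spinning at a
minimum priority of 1.

#include <linux/preempt.h>
#include <linux/sched.h>

/*
 * Sketch only: yield voluntarily when rescheduling is requested and the
 * only preempt_count contribution is the lock call itself; otherwise keep
 * RT spinning with a minimum priority of 1.
 */
static void rt_spin_lock_sketch(struct qspinlock *lock)
{
	while (!rt_try_acquire(lock)) {			/* hypothetical helper */
		if (need_resched() && preempt_count() == PREEMPT_OFFSET) {
			preempt_enable();	/* drop the count taken for this lock */
			schedule();		/* voluntarily yield the CPU */
			preempt_disable();
		} else {
			rt_spin_once(lock, 1);	/* hypothetical: one spin pass at RT prio >= 1 */
		}
	}
}
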
> Maybe I am missing something... but
>
> On x86, PREEMPT_NEED_RESCHED is used in an inverted style, i.e. 0
> indicates "need to reschedule", and preempt_count() masks away this very
> bit, which makes rt_should_resched() always false. So...
>
> Regards,
> Boqun

You are right. I misunderstood what the preemption code is doing and
will need to revise the patch to fix that. Thanks for spotting this.
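
To make that concrete, the relevant x86 definitions look roughly like the
following (paraphrased from arch/x86/include/asm/preempt.h, not verbatim):

/*
 * On x86 the need-resched state is folded into the per-cpu preempt count
 * and stored inverted: a cleared bit means "reschedule needed".
 */
#define PREEMPT_NEED_RESCHED	0x80000000

static __always_inline int preempt_count(void)
{
	/* The bit is masked away here, so the value returned by
	 * preempt_count() can never have PREEMPT_NEED_RESCHED set ... */
	return raw_cpu_read_4(__preempt_count) & ~PREEMPT_NEED_RESCHED;
}

/*
 * ... which makes
 *
 *	preempt_count() == (PREEMPT_OFFSET | PREEMPT_NEED_RESCHED)
 *
 * unconditionally false. Something along the lines of
 *
 *	need_resched() && (preempt_count() == PREEMPT_OFFSET)
 *
 * would express a "task context, resched requested, exactly one
 * preempt_disable()" check without depending on the architecture-specific
 * encoding of the need-resched bit.
 */
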

Cheers,
Longman

