Message-Id: <5771EDCD.5070400@linux.vnet.ibm.com>
Date:	Tue, 28 Jun 2016 11:23:57 +0800
From:	xinhui <xinhui.pan@...ux.vnet.ibm.com>
To:	Peter Zijlstra <peterz@...radead.org>
CC:	linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org,
	paulmck@...ux.vnet.ibm.com, mingo@...hat.com, mpe@...erman.id.au,
	paulus@...ba.org, benh@...nel.crashing.org, Waiman.Long@....com,
	boqun.feng@...il.com, will.deacon@....com, dave@...olabs.net
Subject: Re: [PATCH 2/3] powerpc/spinlock: support vcpu preempted check



On 2016-06-27 22:17, Peter Zijlstra wrote:
> On Mon, Jun 27, 2016 at 01:41:29PM -0400, Pan Xinhui wrote:
>> diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
>> index 523673d..ae938ee 100644
>> --- a/arch/powerpc/include/asm/spinlock.h
>> +++ b/arch/powerpc/include/asm/spinlock.h
>> @@ -52,6 +52,21 @@
>>   #define SYNC_IO
>>   #endif
>>
>> +/* For fixing some spinning issues in a guest.
>> + * kernel would check if vcpu is preempted during a spin loop.
>> + * we support that.
>> + */
>
> If you look around in that file you'll notice that the above comment
> style is inconsistent.
>
> Nor is the comment really clarifying things, for one you fail to mention
> the problem by its known name. You also forget to explain how this
> interface will help. How about something like this:
>
> /*
>   * In order to deal with a various lock holder preemption issues provide
>   * an interface to see if a vCPU is currently running or not.
>   *
>   * This allows us to terminate optimistic spin loops and block,
>   * analogous to the native optimistic spin heuristic of testing if the
>   * lock owner task is running or not.
>   */
Thanks!

>
> Also, since you now have a useful comment, which is not architecture
> specific, I would place it with the common vcpu_is_preempted()
> definition in sched.h.
>
Agreed, I will do that. I will also add a Suggested-by tag for you.
Thanks.

> Hmm?
>
>> +#define arch_vcpu_is_preempted arch_vcpu_is_preempted
>> +static inline bool arch_vcpu_is_preempted(int cpu)
>> +{
>> +	struct lppaca *lp = &lppaca_of(cpu);
>> +
>> +	if (unlikely(!(lppaca_shared_proc(lp) ||
>> +			lppaca_dedicated_proc(lp))))
>> +		return false;
>> +	return !!(be32_to_cpu(lp->yield_count) & 1);
>> +}
>> +
>>   static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
>>   {
>>   	return lock.slock == 0;
>> --
>> 2.4.11
>>
>
