Message-Id: <d640763a-ed52-feea-41d6-b570794018b4@de.ibm.com>
Date: Thu, 29 Sep 2016 15:21:58 +0200
From: Christian Borntraeger <borntraeger@...ibm.com>
To: Martin Schwidefsky <schwidefsky@...ibm.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Heiko Carstens <heiko.carstens@...ibm.com>,
Pan Xinhui <xinhui.pan@...ux.vnet.ibm.com>,
linux-kernel@...r.kernel.org,
virtualization@...ts.linux-foundation.org,
linux-s390@...r.kernel.org, kvm@...r.kernel.org,
xen-devel-request@...ts.xenproject.org, benh@...nel.crashing.org,
paulus@...ba.org, mpe@...erman.id.au, mingo@...hat.com,
paulmck@...ux.vnet.ibm.com, will.deacon@....com,
kernellwp@...il.com, jgross@...e.com, pbonzini@...hat.com,
bsingharora@...il.com
Subject: Re: [PATCH] s390x/spinlock: Provide vcpu_is_preempted globally
On 09/29/2016 03:11 PM, Martin Schwidefsky wrote:
> On Thu, 29 Sep 2016 13:54:16 +0200
> Christian Borntraeger <borntraeger@...ibm.com> wrote:
>
>> This implements the s390 backend for commit
>> "kernel/sched: introduce vcpu preempted check interface"
>> by simply reusing the existing cpu_is_preempted function.
>>
>> Signed-off-by: Christian Borntraeger <borntraeger@...ibm.com>
>> ---
>> Martin, Heiko,
>>
>> This patch keeps the change minimal by not touching the existing
>> users of cpu_is_preempted in spinlock.c. If you want it done
>> differently, let me know.
>>
>>
>> arch/s390/include/asm/spinlock.h | 7 +++++++
>> arch/s390/lib/spinlock.c | 3 ++-
>> 2 files changed, 9 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/s390/include/asm/spinlock.h b/arch/s390/include/asm/spinlock.h
>> index 63ebf37..6e82986 100644
>> --- a/arch/s390/include/asm/spinlock.h
>> +++ b/arch/s390/include/asm/spinlock.h
>> @@ -21,6 +21,13 @@ _raw_compare_and_swap(unsigned int *lock, unsigned int old, unsigned int new)
>> return __sync_bool_compare_and_swap(lock, old, new);
>> }
>>
>> +int arch_vcpu_is_preempted(int cpu);
>> +#define vcpu_is_preempted cpu_is_preempted
>> +static inline bool cpu_is_preempted(int cpu)
>> +{
>> + return arch_vcpu_is_preempted(cpu);
>> +}
>> +
>> /*
>> * Simple spin lock operations. There are two variants, one clears IRQ's
>> * on the local processor, one does not.
>> diff --git a/arch/s390/lib/spinlock.c b/arch/s390/lib/spinlock.c
>> index e5f50a7..9f473c8 100644
>> --- a/arch/s390/lib/spinlock.c
>> +++ b/arch/s390/lib/spinlock.c
>> @@ -37,7 +37,7 @@ static inline void _raw_compare_and_delay(unsigned int *lock, unsigned int old)
>> asm(".insn rsy,0xeb0000000022,%0,0,%1" : : "d" (old), "Q" (*lock));
>> }
>>
>> -static inline int cpu_is_preempted(int cpu)
>> +int arch_vcpu_is_preempted(int cpu)
>> {
>> if (test_cpu_flag_of(CIF_ENABLED_WAIT, cpu))
>> return 0;
>> @@ -45,6 +45,7 @@ static inline int cpu_is_preempted(int cpu)
>> return 0;
>> return 1;
>> }
>> +EXPORT_SYMBOL(arch_vcpu_is_preempted);
>>
>> void arch_spin_lock_wait(arch_spinlock_t *lp)
>> {
>
> Hmm, if I look at the code, we now have an additional function call in
> the spinlock loops. They call arch_vcpu_is_preempted(), which tests
> CIF_ENABLED_WAIT and then calls smp_vcpu_scheduled(). The test
> used to be inline.
>
> A better solution would be to move the CIF_ENABLED_WAIT test into the
> smp_vcpu_scheduled() function, rename it to arch_vcpu_is_preempted(),
> and then export that function. The cpu_is_preempted() function is
> replaced by arch_vcpu_is_preempted(), which does make a lot of sense,
> no?
>
Yes, that makes sense, I will spin a v2.
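
For reference, a minimal sketch of the rework you suggest. The placement
in arch/s390/kernel/smp.c and the direct pcpu_running() call (the body
of today's smp_vcpu_scheduled()) are my assumptions, not the actual v2:

	/* arch/s390/kernel/smp.c - sketch only, not the actual v2 */
	bool arch_vcpu_is_preempted(int cpu)
	{
		/* A vcpu in enabled wait gave up the cpu voluntarily ... */
		if (test_cpu_flag_of(CIF_ENABLED_WAIT, cpu))
			return false;
		/* ... and a vcpu the hypervisor still runs is not preempted. */
		if (pcpu_running(pcpu_devices + cpu))
			return false;
		return true;
	}
	EXPORT_SYMBOL(arch_vcpu_is_preempted);

The spinlock.h side would then shrink to a declaration plus the define,
with no extra wrapper function:

	/* arch/s390/include/asm/spinlock.h - sketch only */
	bool arch_vcpu_is_preempted(int cpu);
	#define vcpu_is_preempted arch_vcpu_is_preempted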