Message-ID: <f0d2671a-1ce7-d499-47cf-8dc9163f1e17@loongson.cn>
Date: Fri, 30 Jan 2026 09:22:01 +0800
From: Bibo Mao <maobibo@...ngson.cn>
To: Huacai Chen <chenhuacai@...nel.org>
Cc: Juergen Gross <jgross@...e.com>, Tianrui Zhao <zhaotianrui@...ngson.cn>,
WANG Xuerui <kernel@...0n.name>, kvm@...r.kernel.org,
loongarch@...ts.linux.dev, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 2/2] LoongArch: Add paravirt support with
vcpu_is_preempted() in guest side
On 2026/1/29 8:55 PM, Huacai Chen wrote:
> Hi, Bibo,
>
> On Fri, Dec 19, 2025 at 2:30 PM Bibo Mao <maobibo@...ngson.cn> wrote:
>>
>> Function vcpu_is_preempted() is used to check whether a vCPU is preempted
>> or not. Add an implementation of vcpu_is_preempted() when option
>> CONFIG_PARAVIRT is enabled.
>>
>> Signed-off-by: Bibo Mao <maobibo@...ngson.cn>
>> Acked-by: Juergen Gross <jgross@...e.com>
>> ---
>> arch/loongarch/include/asm/qspinlock.h | 3 +++
>> arch/loongarch/kernel/paravirt.c | 21 ++++++++++++++++++++-
>> 2 files changed, 23 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/loongarch/include/asm/qspinlock.h b/arch/loongarch/include/asm/qspinlock.h
>> index e76d3aa1e1eb..fa3eaf7e48f2 100644
>> --- a/arch/loongarch/include/asm/qspinlock.h
>> +++ b/arch/loongarch/include/asm/qspinlock.h
>> @@ -34,6 +34,9 @@ static inline bool virt_spin_lock(struct qspinlock *lock)
>> return true;
>> }
>>
>> +#define vcpu_is_preempted vcpu_is_preempted
>> +bool vcpu_is_preempted(int cpu);
>> +
>> #endif /* CONFIG_PARAVIRT */
>>
>> #include <asm-generic/qspinlock.h>
>> diff --git a/arch/loongarch/kernel/paravirt.c b/arch/loongarch/kernel/paravirt.c
>> index b1b51f920b23..a81a3e871dd1 100644
>> --- a/arch/loongarch/kernel/paravirt.c
>> +++ b/arch/loongarch/kernel/paravirt.c
>> @@ -12,6 +12,7 @@ static int has_steal_clock;
>> struct static_key paravirt_steal_enabled;
>> struct static_key paravirt_steal_rq_enabled;
>> static DEFINE_PER_CPU(struct kvm_steal_time, steal_time) __aligned(64);
>> +static DEFINE_STATIC_KEY_FALSE(virt_preempt_key);
>> DEFINE_STATIC_KEY_FALSE(virt_spin_lock_key);
>>
>> static u64 native_steal_clock(int cpu)
>> @@ -267,6 +268,18 @@ static int pv_time_cpu_down_prepare(unsigned int cpu)
>>
>> return 0;
>> }
>> +
>> +bool notrace vcpu_is_preempted(int cpu)
> Is "notrace" really needed? Only S390 do this.
The prefix "notrace" is copied from S390, it is inline function on x86.
Here is git log information with arch/s390/kernel/smp.c
commit 8ebf6da9db1b2a20bb86cc1bee2552e894d03308
Author: Philipp Rudo <prudo@...ux.ibm.com>
Date: Mon Apr 6 20:47:48 2020
s390/ftrace: fix potential crashes when switching tracers
Switching tracers includes instruction patching. To prevent an
instruction from being patched while it is being read, the instruction
patching is done in stop_machine 'context'. This also means that any
function called during stop_machine must not be traced. Thus add
'notrace' to all functions called within stop_machine.
Fixes: 1ec2772e0c3c ("s390/diag: add a statistic for diagnose calls")
Fixes: 38f2c691a4b3 ("s390: improve wait logic of stop_machine")
Fixes: 4ecf0a43e729 ("processor: get rid of cpu_relax_yield")
Signed-off-by: Philipp Rudo <prudo@...ux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@...ux.ibm.com>
However, I am not familiar with the tracer code and have no strong
opinion about this; either way is OK to me. You are a Linux kernel
expert, what is your opinion about the notrace prefix?
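
To make the pattern concrete, here is a minimal sketch (hypothetical
code, not from this patch) of why the s390 commit adds notrace: any
function reachable from a stop_machine() callback must not be traced,
because ftrace may be patching instructions while that callback runs.

#include <linux/stop_machine.h>

/*
 * Runs on one CPU while all other CPUs spin in stop_machine(); any
 * traced function executed here could hit half-patched instructions,
 * so the callback and everything it calls is marked notrace.
 */
static notrace int patch_text_cb(void *data)
{
	/* ... rewrite instructions here ... */
	return 0;
}

static void patch_text_example(void)
{
	stop_machine(patch_text_cb, NULL, NULL);
}

Whether the LoongArch vcpu_is_preempted() can be reached from such a
stop_machine path is what would decide if the annotation is needed here.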
Regards
Bibo Mao
>
> Huacai
>
>> +{
>> + struct kvm_steal_time *src;
>> +
>> + if (!static_branch_unlikely(&virt_preempt_key))
>> + return false;
>> +
>> + src = &per_cpu(steal_time, cpu);
>> + return !!(src->preempted & KVM_VCPU_PREEMPTED);
>> +}
>> +EXPORT_SYMBOL(vcpu_is_preempted);
>> #endif
>>
>> static void pv_cpu_reboot(void *unused)
>> @@ -308,6 +321,9 @@ int __init pv_time_init(void)
>> pr_err("Failed to install cpu hotplug callbacks\n");
>> return r;
>> }
>> +
>> + if (kvm_para_has_feature(KVM_FEATURE_PREEMPT))
>> + static_branch_enable(&virt_preempt_key);
>> #endif
>>
>> static_call_update(pv_steal_clock, paravt_steal_clock);
>> @@ -318,7 +334,10 @@ int __init pv_time_init(void)
>> static_key_slow_inc(&paravirt_steal_rq_enabled);
>> #endif
>>
>> - pr_info("Using paravirt steal-time\n");
>> + if (static_key_enabled(&virt_preempt_key))
>> + pr_info("Using paravirt steal-time with preempt enabled\n");
>> + else
>> + pr_info("Using paravirt steal-time with preempt disabled\n");
>>
>> return 0;
>> }
>> --
>> 2.39.3
>>
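
For context, below is a minimal guest-side sketch (illustrative only,
hypothetical helper names, not part of the patch) of how spin-wait code
typically consumes vcpu_is_preempted(): a waiter stops busy-waiting once
the lock holder's vCPU is known to be preempted by the host, since
spinning on a descheduled vCPU only burns cycles.

#include <linux/sched.h>	/* need_resched(), generic vcpu_is_preempted() fallback */

/*
 * Spin while the owner is still running; give up and take the slow
 * (sleeping) path if we need to reschedule or the owner's vCPU has
 * been preempted. Modeled on the owner-spinning loops in kernel/locking/.
 */
static bool spin_on_owner(int owner_cpu, bool (*owner_running)(void))
{
	while (owner_running()) {
		if (need_resched() || vcpu_is_preempted(owner_cpu))
			return false;
		cpu_relax();
	}
	return true;
}

With KVM_FEATURE_PREEMPT exposed to the guest, the vcpu_is_preempted()
added by this patch lets such loops see the KVM_VCPU_PREEMPTED bit in
kvm_steal_time instead of always getting false.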