Message-ID: <4039d1ff-8e9f-42cf-a8fa-b326102fcbf5@xen0n.name>
Date: Wed, 6 Mar 2024 17:22:43 +0800
From: WANG Xuerui <kernel@...0n.name>
To: maobibo <maobibo@...ngson.cn>, Tianrui Zhao <zhaotianrui@...ngson.cn>,
Juergen Gross <jgross@...e.com>, Paolo Bonzini <pbonzini@...hat.com>,
Jonathan Corbet <corbet@....net>
Cc: loongarch@...ts.linux.dev, linux-kernel@...r.kernel.org,
virtualization@...ts.linux.dev, kvm@...r.kernel.org
Subject: Re: [PATCH v6 7/7] Documentation: KVM: Add hypercall for LoongArch
On 3/6/24 11:28, maobibo wrote:
> On 2024/3/6 2:26 AM, WANG Xuerui wrote:
>> On 3/4/24 17:10, maobibo wrote:
>>> On 2024/3/2 5:41 PM, WANG Xuerui wrote:
>>>> On 3/2/24 16:47, Bibo Mao wrote:
>>>>> [snip]
>>>>> +
>>>>> +KVM hypercall ABI
>>>>> +=================
>>>>> +
>>>>> +The hypercall ABI on KVM is simple: only one scratch register,
>>>>> +a0 (v0), and at most five generic registers are used as input
>>>>> +parameters. FP and vector registers are not used as input
>>>>> +registers and should not be modified during a hypercall.
>>>>> +Hypercall functions can be inlined since there is only one
>>>>> +scratch register.
>>>>
>>>> It should be pointed out explicitly that on hypercall return all
>>> Well, a return value description will be added. What do you think
>>> about the meaning of the return value for the KVM_HCALL_FUNC_PV_IPI
>>> hypercall? The number of CPUs with the IPI delivered successfully,
>>> like KVM x86, or simply success/failure?
I just noticed I'd forgotten to comment on this question. FYI, RISC-V
SBI's equivalent [1] doesn't even indicate errors. And from my
perspective, we can always add a new hypercall returning more info
should that info be needed in the future; for now I don't have a
problem with the return type being void, bool, or the number of CPUs
that were successfully reached.
[1]:
https://github.com/riscv-non-isa/riscv-sbi-doc/blob/v2.0/src/ext-ipi.adoc
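For concreteness, here is a minimal sketch of what the guest-side
wrapper could look like. The hvcl immediate (0x100), the
KVM_HCALL_FUNC_PV_IPI encoding, and the argument layout below are all
hypothetical, picked only for illustration; the real encodings are
whatever the series defines. The function ID goes in a0, the arguments
in a1-a3, and a0 carries back whichever return type we settle on:

/* Minimal sketch, not the actual implementation: a guest-side PV IPI
 * hypercall wrapper.  a0 holds the function ID on entry and the
 * return value on exit; a1-a3 carry the arguments, and everything
 * else is preserved, per the ABI text above. */
#define KVM_HCALL_FUNC_PV_IPI	1	/* hypothetical encoding */

static __always_inline long kvm_hypercall3(unsigned long fid,
		unsigned long arg0, unsigned long arg1, unsigned long arg2)
{
	register long ret asm("a0");
	register unsigned long fun asm("a0") = fid;
	register unsigned long a1 asm("a1") = arg0;
	register unsigned long a2 asm("a2") = arg1;
	register unsigned long a3 asm("a3") = arg2;

	asm volatile("hvcl 0x100"	/* hypothetical hypercall code */
		     : "=r" (ret)
		     : "r" (fun), "r" (a1), "r" (a2), "r" (a3)
		     : "memory");
	return ret;
}

If the count variant is chosen, a caller compares the returned value
against the number of CPUs it targeted; with void, nothing changes for
callers, which is why adding a richer hypercall later stays cheap.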
>>>> architectural state except ``$a0`` is preserved. Or is the whole
>>>> ``$a0 - $t8`` range clobbered, just like with Linux syscalls?
>>>>
>>> What is the advantage of having $a0 - $t8 clobbered?
>>
>> Because then a hypercall behaves identically to an ordinary C
>> function call, which is easy for people and compilers to understand.
>>
> If you look at the detailed behavior of hypercalls/syscalls, the
> conclusion may be different.
>
> If T0 - T8 were clobbered by the hypercall instruction, the hypercall
> caller would need to save the clobbered registers itself; but right
> now the hypercall exception saves/restores all the registers during
> VM exits. In that case the hypercall caller need not save the general
> registers, and there is no need to mark them as scratch in the
> hypercall ABI.
>
> Until now, all the discussion has been at the macro level, with no
> code-level detail.
>
> Can you show me some example code where T0-T8 need not be
> saved/restored during the LoongArch hypercall exception?
I was emphasizing that consistency is generally good, and yes that's
"macroscopic" level talk. Of course, the hypercall client code would
have to do *less* work if *more* registers than the minimum are
preserved -- if right now everything is already preserved, nothing needs
to change.
But please also note that the context switch cost is paid on every
hypercall, and once the spec promises that a register is preserved, we
can't later stop preserving it without breaking compatibility. So I
think we can keep the current implementation behavior, but promise less
in the spec: this way we keep open the possibility of reducing the
context switch overhead later.
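To make the "promise less" option concrete: below is a sketch of the
same hypothetical wrapper (same made-up hvcl encoding as above), but
written against a spec that declares t0-t8 clobbered. The compiler
then spills any live temporaries around the hvcl itself, and the host
is free to stop restoring those registers in a future fast path
without breaking guests built against this spec:

/* Sketch only.  With t0-t8 (and the argument registers) declared
 * clobbered in the documented ABI, the compiler assumes they die
 * across the hypercall; the current host can still save/restore
 * everything, but is no longer obliged to. */
static __always_inline long kvm_hypercall1_scratchy(unsigned long fid,
						    unsigned long arg0)
{
	register long ret asm("a0") = fid;	/* fid in, result out */
	register unsigned long a1 asm("a1") = arg0;

	asm volatile("hvcl 0x100"	/* hypothetical hypercall code */
		     : "+r" (ret), "+r" (a1)	/* a1 dies too */
		     :
		     : "t0", "t1", "t2", "t3", "t4",
		       "t5", "t6", "t7", "t8", "memory");
	return ret;
}

The two wrappers generate the same hvcl today; the only difference is
what the compiler is allowed to assume around it, which is exactly the
spec-vs-implementation distinction I'm arguing for.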
--
WANG "xen0n" Xuerui
Linux/LoongArch mailing list: https://lore.kernel.org/loongarch/