Message-ID: <CAAhV-H47DCmYeT0jx3zkk=48amaMTxHDbDyGB0=WNPbsDR=dXQ@mail.gmail.com>
Date: Tue, 27 Feb 2024 22:57:05 +0800
From: Huacai Chen <chenhuacai@...nel.org>
To: WANG Xuerui <kernel@...0n.name>, Paolo Bonzini <pbonzini@...hat.com>
Cc: maobibo <maobibo@...ngson.cn>, Jiaxun Yang <jiaxun.yang@...goat.com>,
Tianrui Zhao <zhaotianrui@...ngson.cn>, Juergen Gross <jgross@...e.com>, loongarch@...ts.linux.dev,
linux-kernel@...r.kernel.org, virtualization@...ts.linux.dev,
kvm@...r.kernel.org
Subject: Re: [PATCH v5 3/6] LoongArch: KVM: Add cpucfg area for kvm hypervisor
Hi, Paolo,

Sorry to bother you. We have a disagreement about how a guest should
query hypervisor features (PV IPI, PV timer and maybe PV spinlock) on
LoongArch, and we have two choices:
1. Read them from CPUCFG registers;
2. Get them from a hypercall.

CPUCFG is an unprivileged instruction whose configuration words can be
read from user space, so the first method may unnecessarily leak
information to userspace; on the other hand, it is more or less similar
to x86's CPUID solution. A hypercall is of course a privileged method
(so there are no info-leak issues), and this is what ARM/ARM64 and most
other architectures use.
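
To make the two options concrete, here is a rough guest-side sketch.
The cpucfg part is how the kernel already reads CPUCFG words; the
hypercall part assumes an hvcl-based convention with the function ID
passed in $a0 and the result returned in $a0 (the hypercall code
number and the exact ABI are among the things still being discussed):

    /*
     * Option 1: unprivileged CPUCFG read; accesses to the
     * 0x40000000 area trap to the hypervisor inside a guest.
     */
    static inline unsigned int read_cpucfg(unsigned int reg)
    {
        unsigned int val;

        asm volatile("cpucfg %0, %1" : "=r"(val) : "r"(reg));
        return val;
    }

    /*
     * Option 2: privileged hypercall; only usable from guest
     * kernel mode, and an illegal instruction when there is no
     * hypervisor (e.g. plain TCG). The code 0 is a placeholder.
     */
    static inline long kvm_hypercall0(unsigned long fid)
    {
        register long ret asm("a0");
        register unsigned long fun asm("a0") = fid;

        asm volatile("hvcl 0" : "=r"(ret) : "r"(fun) : "memory");
        return ret;
    }
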
Besides, LoongArch's CPUCFG is supposed to describe per-core features,
while not every hypervisor feature is per-core information (PV IPI
certainly is, others may or may not be, and 'extioi' obviously is not).
Bibo thinks that only CPUCFG has enough space to hold all hypervisor
features (the CSR and IOCSR spaces are not big enough). However, we
don't actually need any register space to hold these features, because
they live in the hypervisor's memory. The only information that truly
needs register space is "am I running in a VM?" (for that question we
obviously cannot use a hypercall), and that bit already exists in IOCSR
(LOONGARCH_IOCSR_FEATURES).
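
With that bit, detecting "am I in a VM" from the guest is a one-liner;
if I remember the names in the kernel's asm/loongarch.h correctly, it
is just:

    #include <asm/loongarch.h>

    static bool running_in_vm(void)
    {
        /* the hypervisor sets IOCSRF_VM in the features IOCSR */
        return iocsr_read64(LOONGARCH_IOCSR_FEATURES) & IOCSRF_VM;
    }
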
Now my question is: for a new architecture, which method is more
preferable, maintainable and extensible? Of course both are OK for the
current purpose of this patch, but I think you can give us useful
guidance from a professional's point of view.

More details are available in this thread, especially around the 3rd
patch. Any suggestions are welcome.

Huacai
On Tue, Feb 27, 2024 at 6:19 PM WANG Xuerui <kernel@...0n.name> wrote:
>
> On 2/27/24 18:12, maobibo wrote:
> >
> >
> > On 2024/2/27 5:10 PM, WANG Xuerui wrote:
> >> On 2/27/24 11:14, maobibo wrote:
> >>>
> >>>
> >>> On 2024/2/27 4:02 AM, Jiaxun Yang wrote:
> >>>>
> >>>>
> >>>> On Feb 26, 2024, at 8:04 AM, maobibo wrote:
> >>>>> On 2024/2/26 2:12 PM, Huacai Chen wrote:
> >>>>>> On Mon, Feb 26, 2024 at 10:04 AM maobibo <maobibo@...ngson.cn> wrote:
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>> On 2024/2/24 5:13 PM, Huacai Chen wrote:
> >>>>>>>> Hi, Bibo,
> >>>>>>>>
> >>>>>>>> On Thu, Feb 22, 2024 at 11:28 AM Bibo Mao <maobibo@...ngson.cn>
> >>>>>>>> wrote:
> >>>>>>>>>
> >>>>>>>>> The cpucfg instruction can be used to get processor features,
> >>>>>>>>> and it traps with an exception when executed in VM mode, so it
> >>>>>>>>> can also be used to provide CPU features to a VM. On real
> >>>>>>>>> hardware only cpucfg areas 0 - 20 are used. Here the dedicated
> >>>>>>>>> area 0x40000000 -- 0x400000ff is used by the KVM hypervisor to
> >>>>>>>>> provide PV features, and the area can be extended for other
> >>>>>>>>> hypervisors in the future. This area will never be used by real
> >>>>>>>>> HW; it is only used by software.
> >>>>>>>> After reading and thinking about it, I find that the hypercall
> >>>>>>>> method, which is used in our production kernel, is better than
> >>>>>>>> this cpucfg method, because a hypercall is simpler and more
> >>>>>>>> straightforward, and we don't need to worry about conflicts
> >>>>>>>> with real hardware.
> >>>>>>> No, I do not think so. cpucfg is simpler than a hypercall, and a
> >>>>>>> hypercall only takes effect when the system runs in guest mode.
> >>>>>>> In some scenarios, like TCG mode, a hypercall is an illegal
> >>>>>>> instruction, whereas cpucfg still works.
> >>>>>> Nearly all architectures use hypercall except x86, for historical
> >>>>>> reasons.
> >>>>> Only x86 supports multiple hypervisors, and it is the only
> >>>>> architecture that does. That is an advantage, not a historical
> >>>>> accident.
> >>>>
> >>>> I do believe that none of this stuff should be exposed to guest
> >>>> user space, for security reasons.
> >>> Can we add a PLV check when cpucfg 0x40000000-0x400000FF is
> >>> emulated? If the guest is in user mode the returned value would be
> >>> zero, and if it is in kernel mode the emulated value would be
> >>> returned. That would avoid the information leak.
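> >>> For example (untested, and the helper names are only illustrative):
> >>>
> >>>     index = vcpu->arch.gprs[rj];
> >>>     if (index >= CPUCFG_KVM_BASE && index <= CPUCFG_KVM_MAX) {
> >>>         /* not guest kernel mode (PLV0): hide the PV feature bits */
> >>>         if (kvm_guest_plv(vcpu) != 0)
> >>>             vcpu->arch.gprs[rd] = 0;
> >>>         else
> >>>             vcpu->arch.gprs[rd] = kvm_pv_feature_word(vcpu, index);
> >>>     }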
> >>
> >> I've suggested this approach in another reply [1], but I've rechecked
> >> the manual, and it turns out this behavior is not permitted by the
> >> current wording. See LoongArch Reference Manual v1.10, Volume 1,
> >> Section 2.2.10.5 "CPUCFG":
> >>
> >> > CPUCFG 访问未定义的配置字将读回全 0 值。
> >> >
> >> > Reads of undefined CPUCFG configuration words shall return all-zeroes.
> >>
> >> This sentence mentions no distinction based on privilege modes, so it
> >> can only mean the behavior applies universally regardless of privilege
> >> modes.
> >>
> >> I think if you want to make CPUCFG behavior PLV-dependent, you may
> >> have to ask the LoongArch spec editors, internally or in public, for a
> >> new spec revision.
> > No, the behavior of CPUCFG0-CPUCFG21 is unchanged; only the CPUCFG
> > 0x40000000 area gets software-defined behavior, because that area is
> > reserved for software use.
>
> The 0x40000000 range is not mentioned in the manuals. I know you've
> confirmed it privately with the HW team, but it needs to be properly
> documented before public projects can rely on it.
>
> >> (There are already multiple third-party LoongArch implementers as of
> >> late 2023, so any ISA-level change like this would best be
> >> coordinated, to minimize surprises.)
> > See the Intel SDM, Vol 4-23:
> > https://www.intel.com/content/dam/develop/external/us/en/documents/335592-sdm-vol-4.pdf
> >
> > There is one line "MSR address range between 40000000H - 400000FFH is
> > marked as a specially reserved range. All existing and
> > future processors will not implement any features using any MSR in this
> > range."
>
> Thanks for providing this info, now at least we know why it's this
> specific range of 0x400000XX that's chosen.
>
> >
> > It only says that the range is reserved; it does not specify detailed
> > software behavior. Software behavior is defined by each hypervisor,
> > for example:
> > https://github.com/MicrosoftDocs/Virtualization-Documentation/blob/main/tlfs/Requirements%20for%20Implementing%20the%20Microsoft%20Hypervisor%20Interface.pdf
> > https://kb.vmware.com/s/article/1009458
> >
> > If the hypercall method is used, there should also be a documented
> > ABI, like aarch64 has:
> > https://documentation-service.arm.com/static/6013e5faeee5236980d08619
>
> Yes, proper documentation of a public API surface is always necessary
> *before* doing the real work. Since right now the hypercall provider is
> Linux KVM, maybe we can document the existing and planned hypercall
> usage and ABI in the kernel docs, along with the code changes.
>
> --
> WANG "xen0n" Xuerui
>
> Linux/LoongArch mailing list: https://lore.kernel.org/loongarch/
>