Message-ID: <f8d8bf00-3965-d4a1-c464-59ffcf20bfa3@linux.ibm.com>
Date: Wed, 1 Sep 2021 11:43:00 +0200
From: Pierre Morel <pmorel@...ux.ibm.com>
To: David Hildenbrand <david@...hat.com>, kvm@...r.kernel.org
Cc: linux-s390@...r.kernel.org, linux-kernel@...r.kernel.org,
borntraeger@...ibm.com, frankja@...ux.ibm.com, cohuck@...hat.com,
thuth@...hat.com, imbrenda@...ux.ibm.com, hca@...ux.ibm.com,
gor@...ux.ibm.com
Subject: Re: [PATCH v3 1/3] s390x: KVM: accept STSI for CPU topology
information
On 8/31/21 3:59 PM, David Hildenbrand wrote:
> On 03.08.21 10:26, Pierre Morel wrote:
>> STSI(15.1.x) gives information on the CPU configuration topology.
>> Let's accept the interception of STSI with the function code 15 and
>> let the userland part of the hypervisor handle it when userland
>> supports the CPU Topology facility.
>>
>> Signed-off-by: Pierre Morel <pmorel@...ux.ibm.com>
>> ---
>> arch/s390/kvm/priv.c | 7 ++++++-
>> 1 file changed, 6 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
>> index 9928f785c677..8581b6881212 100644
>> --- a/arch/s390/kvm/priv.c
>> +++ b/arch/s390/kvm/priv.c
>> @@ -856,7 +856,8 @@ static int handle_stsi(struct kvm_vcpu *vcpu)
>>      if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE)
>>          return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP);
>>
>> -    if (fc > 3) {
>> +    if ((fc > 3 && fc != 15) ||
>> +        (fc == 15 && !test_kvm_facility(vcpu->kvm, 11))) {
>>          kvm_s390_set_psw_cc(vcpu, 3);
>>          return 0;
>>      }
>> @@ -893,6 +894,10 @@ static int handle_stsi(struct kvm_vcpu *vcpu)
>>              goto out_no_data;
>>          handle_stsi_3_2_2(vcpu, (void *) mem);
>>          break;
>> +    case 15:
>> +        trace_kvm_s390_handle_stsi(vcpu, fc, sel1, sel2, operand2);
>> +        insert_stsi_usr_data(vcpu, operand2, ar, fc, sel1, sel2);
>> +        return -EREMOTE;
>>      }
>>      if (kvm_s390_pv_cpu_is_protected(vcpu)) {
>>          memcpy((void *)sida_origin(vcpu->arch.sie_block), (void *)mem,
>>
>
> Sorry, I'm a bit rusty on s390x kvm facility handling.
>
>
> For test_kvm_facility() to succeed, the facility has to be in both:
>
> a) fac_mask: actually available on the HW and supported by KVM
> (kvm_s390_fac_base via FACILITIES_KVM, kvm_s390_fac_ext via
> FACILITIES_KVM_CPUMODEL)
>
> b) fac_list: enabled for a VM
>
> AFAIU, facility 11 is neither in FACILITIES_KVM nor
> FACILITIES_KVM_CPUMODEL, and I remember it's a hypervisor-managed bit.
>
> So unless we unlock facility 11 in FACILITIES_KVM_CPUMODEL, will
> test_kvm_facility(vcpu->kvm, 11) ever successfully trigger here?
>
>
> I'm pretty sure I am messing something up :)
>
I think this is the same remark Christian made when he wanted me to use
arch/s390/tools/gen_facilities.c to activate the facility.
The point is that CONFIGURATION_TOPOLOGY (STFL bit 11) has been defined
inside QEMU since full_GEN10_GA1, so test_kvm_facility() will succeed
once the next patch sets facility 11 in the mask when
KVM_CAP_S390_CPU_TOPOLOGY is enabled from userland.
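(For reference, test_kvm_facility() needs the bit in both the mask and
the list; quoting arch/s390/kvm/kvm-s390.h from memory, so take the
exact wording with a grain of salt:

    static inline int test_kvm_facility(struct kvm *kvm, unsigned long nr)
    {
            /* the facility must be available/allowed ... */
            return __test_facility(nr, kvm->arch.model.fac_mask) &&
                   /* ... and enabled for this VM */
                   __test_facility(nr, kvm->arch.model.fac_list);
    }

So once the capability sets bit 11 in fac_mask and QEMU, which already
knows the facility, puts it in the CPU-model facility list, the check
above passes.)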
But if we activate it in KVM via any of the FACILITIES_KVM_xxx lists in
gen_facilities.c, we activate it for the guest whatever userland
hypervisor we have, including an old QEMU, which will generate an
exception.
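Concretely, that would mean adding the bit unconditionally in
gen_facilities.c, something like (just a sketch, the comment text is
mine):

    {
            .name = "FACILITIES_KVM_CPUMODEL",
            .bits = (int[]){
                    ...
                    11, /* configuration topology */
                    ...
                    -1  /* END */
            }
    },

which is exactly the unconditional activation I describe above.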
In these circumstances we have the choice between:
- use FACILITIES_KVM and handle everything in the kernel
- use FACILITIES_KVM plus an extra CAPABILITY, handling part in the
  kernel (to avoid a guest crash) and part in userland
- use only the extra CAPABILITY and handle almost everything in userland
I want to avoid kernel code when it is not really necessary, so I
eliminated the first option.
The last two are not very different, but I found a better integration
with the last one, which allows using the standard test_[kvm_]facility().
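To make that last option concrete: the "return -EREMOTE" in the hunk
above takes the same exit path that the existing user_stsi handling
already uses, i.e. insert_stsi_usr_data() records the STSI parameters
in kvm_run and the vcpu exits to userland with KVM_EXIT_S390_STSI
(the helper already exists in priv.c, reproduced here from memory):

    static void insert_stsi_usr_data(struct kvm_vcpu *vcpu, u64 addr, u8 ar,
                                     u8 fc, u8 sel1, u16 sel2)
    {
            /* hand the STSI request over to userland as-is */
            vcpu->run->exit_reason = KVM_EXIT_S390_STSI;
            vcpu->run->s390_stsi.addr = addr;
            vcpu->run->s390_stsi.ar = ar;
            vcpu->run->s390_stsi.fc = fc;
            vcpu->run->s390_stsi.sel1 = sel1;
            vcpu->run->s390_stsi.sel2 = sel2;
    }

QEMU then sees exit_reason == KVM_EXIT_S390_STSI with fc == 15 and
builds the 15.1.x SYSIB entirely in userland.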
Thanks for reviewing.
Regards,
Pierre
--
Pierre Morel
IBM Lab Boeblingen