Message-ID: <606b9734-7f1f-5fc6-69e0-8f2c72f9fc55@huawei.com>
Date:   Mon, 2 Jul 2018 17:13:01 +0800
From:   gengdongjiu <gengdongjiu@...wei.com>
To:     James Morse <james.morse@....com>
CC:     <rkrcmar@...hat.com>, <corbet@....net>, <christoffer.dall@....com>,
        <marc.zyngier@....com>, <linux@...linux.org.uk>,
        <catalin.marinas@....com>, <will.deacon@....com>,
        <kvm@...r.kernel.org>, <linux-doc@...r.kernel.org>,
        <linux-arm-kernel@...ts.infradead.org>,
        <linux-kernel@...r.kernel.org>, <linux-acpi@...r.kernel.org>
Subject: Re: [PATCH v5 1/2] arm/arm64: KVM: Add KVM_GET/SET_VCPU_EVENTS

Hi James,

On 2018/6/29 23:59, James Morse wrote:
> Hi Dongjiu Geng,
> 
> On 25/06/18 21:58, Dongjiu Geng wrote:
>> For migrating VMs, user space may need to know the exception
>> state. For example, on machine A KVM makes an SError pending;
>> when migrating to B, KVM also needs to pend an SError.
>>
>> This new IOCTL exports user-invisible states related to SError.
>> Together with appropriate user space changes, user space can get/set
>> the SError exception state to do migrate/snapshot/suspend.
> 
> 
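For reference, user space (e.g. QEMU) is expected to drive the two new ioctls during
migration roughly as in the sketch below. This is only illustrative, not actual QEMU
code: the vCPU fd names are placeholders, and in practice the struct is serialized
and sent from the source host to the destination host.

#include <err.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static void migrate_serror_state(int src_vcpu_fd, int dst_vcpu_fd)
{
	struct kvm_vcpu_events events;

	/* Source: snapshot the otherwise user-invisible SError state. */
	if (ioctl(src_vcpu_fd, KVM_GET_VCPU_EVENTS, &events) < 0)
		err(1, "KVM_GET_VCPU_EVENTS");

	/* Destination: re-pend the SError, with its ESR if one was set. */
	if (ioctl(dst_vcpu_fd, KVM_SET_VCPU_EVENTS, &events) < 0)
		err(1, "KVM_SET_VCPU_EVENTS");
}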
>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>> index 469de8a..357304a 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -335,6 +335,11 @@ unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu);
>>  int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices);
>>  int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
>>  int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
>> +int kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu,
>> +			struct kvm_vcpu_events *events);
>> +
>> +int kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
>> +			struct kvm_vcpu_events *events);
>>  
> 
> (Nit: funny indentation)

The indentation was to keep the lines under 80 characters. I will change it as below to make it more reasonable:

 +int kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu,
 +			      struct kvm_vcpu_events *events);
 +
 +int kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
 +			      struct kvm_vcpu_events *events);

> 
> 
>> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
>> index 56a0260..8be14cc 100644
>> --- a/arch/arm64/kvm/guest.c
>> +++ b/arch/arm64/kvm/guest.c
>> @@ -289,6 +289,49 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
> 
>> +int kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
>> +			struct kvm_vcpu_events *events)
>> +{
>> +	int i;
>> +	bool serror_pending = events->exception.serror_pending;
>> +	bool has_esr = events->exception.serror_has_esr;
>> +
>> +	/* check whether the reserved field is zero */
>> +	for (i = 0; i < ARRAY_SIZE(events->reserved); i++)
>> +		if (events->reserved[i])
>> +			return -EINVAL;
>> +
>> +	/* check whether the pad field is zero */
>> +	for (i = 0; i < ARRAY_SIZE(events->exception.pad); i++)
>> +		if (events->exception.pad[i])
>> +			return -EINVAL;
>> +
>> +	if (serror_pending && has_esr) {
>> +		if (!cpus_have_const_cap(ARM64_HAS_RAS_EXTN))
>> +			return -EINVAL;
>> +
> 
>> +		kvm_set_sei_esr(vcpu, events->exception.serror_esr);
> 
> This silently discards all but the bottom 24 bits of serror_esr.
> 
> It makes sense that this field is 64 bit, because the register is 64 bit, and it
> would let us use this API to migrate any new state that appears in the higher
> bits... But those bits will come with an ID/feature field; we shouldn't accept
> an attempt to restore them on a CPU that doesn't support the feature. If that
> happens here, it silently succeeds, but the kernel just threw the extra bits away.
> 
> You added documentation that only the bottom 24 bits can be set; can we add
> checks to enforce this, so that the bits can be used later?
Yes, sure! I will add the check. Thanks for the suggestion.
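Roughly like this, on top of the hunk above (only a sketch to show the intent; the
exact form may differ in the next version):

	if (serror_pending && has_esr) {
		if (!cpus_have_const_cap(ARM64_HAS_RAS_EXTN))
			return -EINVAL;

		/* Only the documented bottom 24 bits may be set for now. */
		if (events->exception.serror_esr >> 24)
			return -EINVAL;

		kvm_set_sei_esr(vcpu, events->exception.serror_esr);
	} else if (serror_pending) {
		kvm_inject_vabt(vcpu);
	}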


> 
> 
>> +	} else if (serror_pending) {
>> +		kvm_inject_vabt(vcpu);
>> +	}
>> +
>> +	return 0;
>> +}
> 
>> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
>> index a4c1b76..4e6f366 100644
>> --- a/virt/kvm/arm/arm.c
>> +++ b/virt/kvm/arm/arm.c
>> @@ -1107,6 +1107,27 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
>>  		r = kvm_arm_vcpu_has_attr(vcpu, &attr);
>>  		break;
>>  	}
>> +#ifdef __KVM_HAVE_VCPU_EVENTS
> 
> So it's this #ifdef, or a uapi struct for a feature 32-bit doesn't support.
> I think the right thing to do is wire this up for 32-bit; it also calls
> kvm_inject_vabt() in handle_exit.c, so it must have the same migration problems.
> 
> I'll post a patch to do this as I've got something I can test it on.
OK, so for now I will use "#ifdef __KVM_HAVE_VCPU_EVENTS" here to avoid build errors on 32-bit platforms;
when you post your patch, you can remove the "#ifdef". Thanks.


> 
> 
>> +	case KVM_GET_VCPU_EVENTS: {
>> +		struct kvm_vcpu_events events;
>> +
>> +		if (kvm_arm_vcpu_get_events(vcpu, &events))
>> +			return -EINVAL;
>> +
>> +		if (copy_to_user(argp, &events, sizeof(events)))
>> +			return -EFAULT;
>> +
>> +		return 0;
>> +	}
>> +	case KVM_SET_VCPU_EVENTS: {
>> +		struct kvm_vcpu_events events;
>> +
>> +		if (copy_from_user(&events, argp, sizeof(events)))
>> +			return -EFAULT;
>> +
>> +		return kvm_arm_vcpu_set_events(vcpu, &events);
>> +	}
>> +#endif
> 
> (It bugs me that the architecture has some rules about merging multiple
> architected ESR values, that we neither enforce, nor document as user-space's
> problem. It doesn't matter for RAS, but might for any future ESR encodings. But
> I guess user-space wouldn't be aware of them anyway, and it can already put
> bogus values in SPSR/ESR/ELR etc.)
> 
> 
> With a check against the top bits of ESR:
> Reviewed-by: James Morse <james.morse@....com>
Thanks for the Reviewed-by. I will add the check against the top bits of the ESR.

> 
> 
> Thanks,
> 
> James
> 
> .
> 
