Date:   Tue, 16 Feb 2021 15:20:15 -0800
From:   Sean Christopherson <seanjc@...gle.com>
To:     Ashish Kalra <Ashish.Kalra@....com>
Cc:     pbonzini@...hat.com, tglx@...utronix.de, mingo@...hat.com,
        hpa@...or.com, rkrcmar@...hat.com, joro@...tes.org, bp@...e.de,
        thomas.lendacky@....com, x86@...nel.org, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org, srutherford@...gle.com,
        venu.busireddy@...cle.com, brijesh.singh@....com
Subject: Re: [PATCH v10 12/16] KVM: x86: Introduce new
 KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR.

On Thu, Feb 04, 2021, Ashish Kalra wrote:
> diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
> index 950afebfba88..f6bfa138874f 100644
> --- a/arch/x86/include/uapi/asm/kvm_para.h
> +++ b/arch/x86/include/uapi/asm/kvm_para.h
> @@ -33,6 +33,7 @@
>  #define KVM_FEATURE_PV_SCHED_YIELD	13
>  #define KVM_FEATURE_ASYNC_PF_INT	14
>  #define KVM_FEATURE_MSI_EXT_DEST_ID	15
> +#define KVM_FEATURE_SEV_LIVE_MIGRATION	16
>  
>  #define KVM_HINTS_REALTIME      0
>  
> @@ -54,6 +55,7 @@
>  #define MSR_KVM_POLL_CONTROL	0x4b564d05
>  #define MSR_KVM_ASYNC_PF_INT	0x4b564d06
>  #define MSR_KVM_ASYNC_PF_ACK	0x4b564d07
> +#define MSR_KVM_SEV_LIVE_MIGRATION	0x4b564d08
>  
>  struct kvm_steal_time {
>  	__u64 steal;
> @@ -136,4 +138,6 @@ struct kvm_vcpu_pv_apf_data {
>  #define KVM_PV_EOI_ENABLED KVM_PV_EOI_MASK
>  #define KVM_PV_EOI_DISABLED 0x0
>  
> +#define KVM_SEV_LIVE_MIGRATION_ENABLED BIT_ULL(0)
> +
>  #endif /* _UAPI_ASM_X86_KVM_PARA_H */
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index b0d324aed515..93f42b3d3e33 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -1627,6 +1627,16 @@ int svm_page_enc_status_hc(struct kvm *kvm, unsigned long gpa,
>  	return ret;
>  }
>  
> +void sev_update_migration_flags(struct kvm *kvm, u64 data)
> +{

I don't see the point of a helper.  It's actually going to make the code
less readable once proper error handling is added.  Given that it's not static
and is exposed via svm.h despite having no external user, I assume this got
left behind when the implicit enabling was removed.

> +	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> +
> +	if (!sev_guest(kvm))

I 100% agree with Steve: this needs to check guest_cpuid_has() in addition to
sev_guest().  And it should return '1', i.e. signal #GP to the guest, not
silently eat the bad WRMSR.

> +		return;
> +
> +	sev->live_migration_enabled = !!(data & KVM_SEV_LIVE_MIGRATION_ENABLED);

The value needs to be checked as well, i.e. all bits except LIVE_MIGRATION...
should be reserved to zero.
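
E.g. fold everything into svm_set_msr(), something like this (completely
untested, and assuming guest_pv_has() is the right way to query the KVM PV
feature bit that userspace put in the guest's CPUID):

	case MSR_KVM_SEV_LIVE_MIGRATION:
		if (!sev_guest(vcpu->kvm) ||
		    !guest_pv_has(vcpu, KVM_FEATURE_SEV_LIVE_MIGRATION))
			return 1;

		/* Bits other than the enable bit are reserved to zero. */
		if (data & ~KVM_SEV_LIVE_MIGRATION_ENABLED)
			return 1;

		to_kvm_svm(vcpu->kvm)->sev_info.live_migration_enabled =
			!!(data & KVM_SEV_LIVE_MIGRATION_ENABLED);
		break;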

> +}
> +
>  int svm_get_shared_pages_list(struct kvm *kvm,
>  			      struct kvm_shared_pages_list *list)
>  {
> @@ -1639,6 +1649,9 @@ int svm_get_shared_pages_list(struct kvm *kvm,
>  	if (!sev_guest(kvm))
>  		return -ENOTTY;
>  
> +	if (!sev->live_migration_enabled)
> +		return -EINVAL;

EINVAL is a weird return value for something that is controlled by the guest,
especially since it's possible for the guest to support migration, just not
yet.  EBUSY maybe?  EOPNOTSUPP?
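
E.g. (keeping the flag from this patch as the gate):

	/* The guest hasn't (yet) opted in to live migration. */
	if (!sev->live_migration_enabled)
		return -EBUSY;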

> +
>  	if (!list->size)
>  		return -EINVAL;
>  
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 58f89f83caab..43ea5061926f 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -2903,6 +2903,9 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
>  		svm->msr_decfg = data;
>  		break;
>  	}
> +	case MSR_KVM_SEV_LIVE_MIGRATION:
> +		sev_update_migration_flags(vcpu->kvm, data);
> +		break;

There should be an svm_get_msr() entry as well; I don't see any reason to
prevent the guest from reading the MSR.
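
E.g. (untested, and assuming the live_migration_enabled flag from this patch
stays around as the backing state):

	case MSR_KVM_SEV_LIVE_MIGRATION:
		/* Reflect the current enable bit back to the guest. */
		msr_info->data = to_kvm_svm(vcpu->kvm)->sev_info.live_migration_enabled ?
				 KVM_SEV_LIVE_MIGRATION_ENABLED : 0;
		break;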

>  	case MSR_IA32_APICBASE:
>  		if (kvm_vcpu_apicv_active(vcpu))
>  			avic_update_vapic_bar(to_svm(vcpu), data);
> @@ -3976,6 +3979,19 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
>  			vcpu->arch.cr3_lm_rsvd_bits &= ~(1UL << (best->ebx & 0x3f));
>  	}
>  
> +	/*
> +	 * If SEV guest then enable the Live migration feature.
> +	 */
> +	if (sev_guest(vcpu->kvm)) {
> +		struct kvm_cpuid_entry2 *best;
> +
> +		best = kvm_find_cpuid_entry(vcpu, KVM_CPUID_FEATURES, 0);
> +		if (!best)
> +			return;
> +
> +		best->eax |= (1 << KVM_FEATURE_SEV_LIVE_MIGRATION);

Again echoing Steve's concern, userspace is the ultimate authority on what
features are exposed to the VM.  I don't see any motivation for forcing live
migration to be enabled.

And as I believe was pointed out elsewhere, this bit needs to be advertised to
userspace via kvm_cpu_caps.
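
I.e. report the bit via KVM_GET_SUPPORTED_CPUID and let userspace decide
whether to actually set it in the guest's CPUID.  Rough, untested sketch of
one way to do the reporting side, following the pattern used for the other
KVM_FEATURE_* bits in __do_cpuid_func(); plumbing it through kvm_cpu_caps as
mentioned above may end up looking different, and whether it should be gated
on SEV being enabled is a separate question:

	case KVM_CPUID_FEATURES:
		entry->eax = (1 << KVM_FEATURE_CLOCKSOURCE) |
			     /* ... existing KVM_FEATURE_* bits ... */
			     (1 << KVM_FEATURE_ASYNC_PF_INT) |
			     (1 << KVM_FEATURE_SEV_LIVE_MIGRATION);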

> +	}
> +
>  	if (!kvm_vcpu_apicv_active(vcpu))
>  		return;
>  
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index 066ca2a9f1e6..e1bffc11e425 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -79,6 +79,7 @@ struct kvm_sev_info {
>  	unsigned long pages_locked; /* Number of pages locked */
>  	struct list_head regions_list;  /* List of registered regions */
>  	u64 ap_jump_table;	/* SEV-ES AP Jump Table address */
> +	bool live_migration_enabled;
>  	/* List and count of shared pages */
>  	int shared_pages_list_count;
>  	struct list_head shared_pages_list;
> @@ -592,6 +593,7 @@ int svm_unregister_enc_region(struct kvm *kvm,
>  void pre_sev_run(struct vcpu_svm *svm, int cpu);
>  void __init sev_hardware_setup(void);
>  void sev_hardware_teardown(void);
> +void sev_update_migration_flags(struct kvm *kvm, u64 data);
>  void sev_free_vcpu(struct kvm_vcpu *vcpu);
>  int sev_handle_vmgexit(struct vcpu_svm *svm);
>  int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in);
> -- 
> 2.17.1
> 
