Date:	Thu, 28 May 2015 11:00:44 +0200
From:	Paolo Bonzini <pbonzini@...hat.com>
To:	linux-kernel@...r.kernel.org, kvm@...r.kernel.org
CC:	guangrong.xiao@...ux.intel.com, rkrcmar@...hat.com, bdas@...hat.com
Subject: Re: [PATCH v2 04/13] KVM: x86: API changes for SMM support



On 27/05/2015 19:05, Paolo Bonzini wrote:
> This patch includes changes to the external API for SMM support.
> All the changes are predicated by the availability of a new
> capability, KVM_CAP_X86_SMM, which is added at the end of the
> patch series.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>

Radim, I forgot to change flags to u8 in this patch.  However, I am
thinking that it's just as good to leave it as u16.  The reason is that
the other two fields in this 32-bit word were just 1-bit flags that were
expanded to 8 bits for no particular reason.

If we make it now

		u8 flags;
		u8 padding2;

chances are that in due time it will become simply

		u8 flags;
		u8 flags2;

and then it's nicer to just have a u16.  On the other hand, your
argument that KVM_RUN has very little free space is also a good one.
What do you think?  I have already done the change in my local repo, but
I can change it back too.

Paolo

> ---
> 	RFC->v1: add smi.pending and smi.rsm_unmasks_nmi fields, reduce padding
> 		 in struct kvm_vcpu_events; remove memset of events struct,
> 		 instead zero smi.pad.  KVM_CAP_X86_SMM frozen at 117.
> ---
>  Documentation/virtual/kvm/api.txt | 40 +++++++++++++++++++++++++++++++++------
>  arch/x86/include/asm/kvm_host.h   |  3 +++
>  arch/x86/include/uapi/asm/kvm.h   | 11 ++++++++++-
>  arch/x86/kvm/kvm_cache_regs.h     |  5 +++++
>  arch/x86/kvm/x86.c                | 34 +++++++++++++++++++++++++++++++--
>  include/uapi/linux/kvm.h          |  5 ++++-
>  6 files changed, 88 insertions(+), 10 deletions(-)
> 
> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> index 695544420ff2..51523b70b6b2 100644
> --- a/Documentation/virtual/kvm/api.txt
> +++ b/Documentation/virtual/kvm/api.txt
> @@ -820,11 +820,21 @@ struct kvm_vcpu_events {
>  	} nmi;
>  	__u32 sipi_vector;
>  	__u32 flags;
> +	struct {
> +		__u8 smm;
> +		__u8 pending;
> +		__u8 smm_inside_nmi;
> +		__u8 pad;
> +	} smi;
>  };
>  
> -KVM_VCPUEVENT_VALID_SHADOW may be set in the flags field to signal that
> -interrupt.shadow contains a valid state. Otherwise, this field is undefined.
> +Only two bits are defined in the flags field:
> +
> +- KVM_VCPUEVENT_VALID_SHADOW may be set in the flags field to signal that
> +  interrupt.shadow contains a valid state.
>  
> +- KVM_VCPUEVENT_VALID_SMM may be set in the flags field to signal that
> +  smi contains a valid state.
>  
>  4.32 KVM_SET_VCPU_EVENTS
>  
> @@ -841,17 +851,20 @@ vcpu.
>  See KVM_GET_VCPU_EVENTS for the data structure.
>  
>  Fields that may be modified asynchronously by running VCPUs can be excluded
> -from the update. These fields are nmi.pending and sipi_vector. Keep the
> -corresponding bits in the flags field cleared to suppress overwriting the
> -current in-kernel state. The bits are:
> +from the update. These fields are nmi.pending, sipi_vector, smi.smm,
> +and smi.pending. Keep the corresponding bits in the flags field cleared to
> +suppress overwriting the current in-kernel state. The bits are:
>  
>  KVM_VCPUEVENT_VALID_NMI_PENDING - transfer nmi.pending to the kernel
>  KVM_VCPUEVENT_VALID_SIPI_VECTOR - transfer sipi_vector
> +KVM_VCPUEVENT_VALID_SMM         - transfer the smi sub-struct
>  
>  If KVM_CAP_INTR_SHADOW is available, KVM_VCPUEVENT_VALID_SHADOW can be set in
>  the flags field to signal that interrupt.shadow contains a valid state and
>  shall be written into the VCPU.
>  
> +KVM_VCPUEVENT_VALID_SMM can only be set if KVM_CAP_X86_SMM is available.
> +
>  
>  4.33 KVM_GET_DEBUGREGS
>  
> @@ -2979,6 +2992,16 @@ len must be a multiple of sizeof(struct kvm_s390_irq). It must be > 0
>  and it must not exceed (max_vcpus + 32) * sizeof(struct kvm_s390_irq),
>  which is the maximum number of possibly pending cpu-local interrupts.
>  
> +4.90 KVM_SMI
> +
> +Capability: KVM_CAP_X86_SMM
> +Architectures: x86
> +Type: vcpu ioctl
> +Parameters: none
> +Returns: 0 on success, -1 on error
> +
> +Queues an SMI on the thread's vcpu.
> +
>  5. The kvm_run structure
>  ------------------------
>  
> @@ -3014,7 +3037,12 @@ an interrupt can be injected now with KVM_INTERRUPT.
>  The value of the current interrupt flag.  Only valid if in-kernel
>  local APIC is not used.
>  
> -	__u8 padding2[2];
> +	__u16 flags;
> +
> +More architecture-specific flags detailing state of the VCPU that may
> +affect the device's behavior.  The only currently defined flag is
> +KVM_RUN_X86_SMM, which is valid on x86 machines and is set if the
> +VCPU is in system management mode.
>  
>  	/* in (pre_kvm_run), out (post_kvm_run) */
>  	__u64 cr8;
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 4e299fcd0eb6..d52d7aea375f 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -471,6 +471,7 @@ struct kvm_vcpu_arch {
>  	atomic_t nmi_queued;  /* unprocessed asynchronous NMIs */
>  	unsigned nmi_pending; /* NMI queued after currently running handler */
>  	bool nmi_injected;    /* Trying to inject an NMI this entry */
> +	bool smi_pending;    /* SMI queued after currently running handler */
>  
>  	struct mtrr_state_type mtrr_state;
>  	u64 pat;
> @@ -1115,6 +1116,8 @@ enum {
>  #define HF_NMI_MASK		(1 << 3)
>  #define HF_IRET_MASK		(1 << 4)
>  #define HF_GUEST_MASK		(1 << 5) /* VCPU is in guest-mode */
> +#define HF_SMM_MASK		(1 << 6)
> +#define HF_SMM_INSIDE_NMI_MASK	(1 << 7)
>  
>  /*
>   * Hardware virtualization extension instructions may fault if a
> diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
> index 2fec75e4b1e1..30100a3c1bed 100644
> --- a/arch/x86/include/uapi/asm/kvm.h
> +++ b/arch/x86/include/uapi/asm/kvm.h
> @@ -106,6 +106,8 @@ struct kvm_ioapic_state {
>  #define KVM_IRQCHIP_IOAPIC       2
>  #define KVM_NR_IRQCHIPS          3
>  
> +#define KVM_RUN_X86_SMM		 (1 << 0)
> +
>  /* for KVM_GET_REGS and KVM_SET_REGS */
>  struct kvm_regs {
>  	/* out (KVM_GET_REGS) / in (KVM_SET_REGS) */
> @@ -281,6 +283,7 @@ struct kvm_reinject_control {
>  #define KVM_VCPUEVENT_VALID_NMI_PENDING	0x00000001
>  #define KVM_VCPUEVENT_VALID_SIPI_VECTOR	0x00000002
>  #define KVM_VCPUEVENT_VALID_SHADOW	0x00000004
> +#define KVM_VCPUEVENT_VALID_SMM		0x00000008
>  
>  /* Interrupt shadow states */
>  #define KVM_X86_SHADOW_INT_MOV_SS	0x01
> @@ -309,7 +312,13 @@ struct kvm_vcpu_events {
>  	} nmi;
>  	__u32 sipi_vector;
>  	__u32 flags;
> -	__u32 reserved[10];
> +	struct {
> +		__u8 smm;
> +		__u8 pending;
> +		__u8 smm_inside_nmi;
> +		__u8 pad;
> +	} smi;
> +	__u32 reserved[9];
>  };
>  
>  /* for KVM_GET/SET_DEBUGREGS */
> diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
> index 544076c4f44b..e1e89ee4af75 100644
> --- a/arch/x86/kvm/kvm_cache_regs.h
> +++ b/arch/x86/kvm/kvm_cache_regs.h
> @@ -99,4 +99,9 @@ static inline bool is_guest_mode(struct kvm_vcpu *vcpu)
>  	return vcpu->arch.hflags & HF_GUEST_MASK;
>  }
>  
> +static inline bool is_smm(struct kvm_vcpu *vcpu)
> +{
> +	return vcpu->arch.hflags & HF_SMM_MASK;
> +}
> +
>  #endif
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 70072f94318e..0deb83f526c0 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -3097,6 +3097,11 @@ static int kvm_vcpu_ioctl_nmi(struct kvm_vcpu *vcpu)
>  	return 0;
>  }
>  
> +static int kvm_vcpu_ioctl_smi(struct kvm_vcpu *vcpu)
> +{
> +	return 0;
> +}
> +
>  static int vcpu_ioctl_tpr_access_reporting(struct kvm_vcpu *vcpu,
>  					   struct kvm_tpr_access_ctl *tac)
>  {
> @@ -3202,8 +3207,15 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
>  
>  	events->sipi_vector = 0; /* never valid when reporting to user space */
>  
> +	events->smi.smm = is_smm(vcpu);
> +	events->smi.pending = vcpu->arch.smi_pending;
> +	events->smi.smm_inside_nmi =
> +		!!(vcpu->arch.hflags & HF_SMM_INSIDE_NMI_MASK);
> +	events->smi.pad = 0;
> +
>  	events->flags = (KVM_VCPUEVENT_VALID_NMI_PENDING
> -			 | KVM_VCPUEVENT_VALID_SHADOW);
> +			 | KVM_VCPUEVENT_VALID_SHADOW
> +			 | KVM_VCPUEVENT_VALID_SMM);
>  	memset(&events->reserved, 0, sizeof(events->reserved));
>  }
>  
> @@ -3212,7 +3224,8 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
>  {
>  	if (events->flags & ~(KVM_VCPUEVENT_VALID_NMI_PENDING
>  			      | KVM_VCPUEVENT_VALID_SIPI_VECTOR
> -			      | KVM_VCPUEVENT_VALID_SHADOW))
> +			      | KVM_VCPUEVENT_VALID_SHADOW
> +			      | KVM_VCPUEVENT_VALID_SMM))
>  		return -EINVAL;
>  
>  	process_nmi(vcpu);
> @@ -3237,6 +3250,18 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
>  	    kvm_vcpu_has_lapic(vcpu))
>  		vcpu->arch.apic->sipi_vector = events->sipi_vector;
>  
> +	if (events->flags & KVM_VCPUEVENT_VALID_SMM) {
> +		if (events->smi.smm)
> +			vcpu->arch.hflags |= HF_SMM_MASK;
> +		else
> +			vcpu->arch.hflags &= ~HF_SMM_MASK;
> +		vcpu->arch.smi_pending = events->smi.pending;
> +		if (events->smi.smm_inside_nmi)
> +			vcpu->arch.hflags |= HF_SMM_INSIDE_NMI_MASK;
> +		else
> +			vcpu->arch.hflags &= ~HF_SMM_INSIDE_NMI_MASK;
> +	}
> +
>  	kvm_make_request(KVM_REQ_EVENT, vcpu);
>  
>  	return 0;
> @@ -3496,6 +3521,10 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
>  		r = kvm_vcpu_ioctl_nmi(vcpu);
>  		break;
>  	}
> +	case KVM_SMI: {
> +		r = kvm_vcpu_ioctl_smi(vcpu);
> +		break;
> +	}
>  	case KVM_SET_CPUID: {
>  		struct kvm_cpuid __user *cpuid_arg = argp;
>  		struct kvm_cpuid cpuid;
> @@ -6178,6 +6207,7 @@ static void post_kvm_run_save(struct kvm_vcpu *vcpu)
>  	struct kvm_run *kvm_run = vcpu->run;
>  
>  	kvm_run->if_flag = (kvm_get_rflags(vcpu) & X86_EFLAGS_IF) != 0;
> +	kvm_run->flags = is_smm(vcpu) ? KVM_RUN_X86_SMM : 0;
>  	kvm_run->cr8 = kvm_get_cr8(vcpu);
>  	kvm_run->apic_base = kvm_get_apic_base(vcpu);
>  	if (irqchip_in_kernel(vcpu->kvm))
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 75bd9f7fd846..eace8babd227 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -202,7 +202,7 @@ struct kvm_run {
>  	__u32 exit_reason;
>  	__u8 ready_for_interrupt_injection;
>  	__u8 if_flag;
> -	__u8 padding2[2];
> +	__u16 flags;
>  
>  	/* in (pre_kvm_run), out (post_kvm_run) */
>  	__u64 cr8;
> @@ -815,6 +815,7 @@ struct kvm_ppc_smmu_info {
>  #define KVM_CAP_S390_IRQ_STATE 114
>  #define KVM_CAP_PPC_HWRNG 115
>  #define KVM_CAP_DISABLE_QUIRKS 116
> +#define KVM_CAP_X86_SMM 117
>  
>  #ifdef KVM_CAP_IRQ_ROUTING
>  
> @@ -1200,6 +1201,8 @@ struct kvm_s390_ucas_mapping {
>  /* Available with KVM_CAP_S390_IRQ_STATE */
>  #define KVM_S390_SET_IRQ_STATE	  _IOW(KVMIO, 0xb5, struct kvm_s390_irq_state)
>  #define KVM_S390_GET_IRQ_STATE	  _IOW(KVMIO, 0xb6, struct kvm_s390_irq_state)
> +/* Available with KVM_CAP_X86_SMM */
> +#define KVM_SMI                   _IO(KVMIO,   0xb7)
>  
>  #define KVM_DEV_ASSIGN_ENABLE_IOMMU	(1 << 0)
>  #define KVM_DEV_ASSIGN_PCI_2_3		(1 << 1)
> 
