Date:   Tue, 19 Sep 2017 13:06:11 +0200
From:   Borislav Petkov <bp@...e.de>
To:     Brijesh Singh <brijesh.singh@....com>
Cc:     linux-kernel@...r.kernel.org, x86@...nel.org, kvm@...r.kernel.org,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>,
        "H . Peter Anvin" <hpa@...or.com>,
        Andy Lutomirski <luto@...nel.org>,
        Tom Lendacky <thomas.lendacky@....com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Radim Krčmář <rkrcmar@...hat.com>
Subject: Re: [Part1 PATCH v4 16/17] X86/KVM: Unencrypt shared per-cpu
 variables when SEV is active

On Sat, Sep 16, 2017 at 07:34:17AM -0500, Brijesh Singh wrote:
> When SEV is active, guest memory is encrypted with guest-specific key, a

						with a guest-specific key

> guest memory region shared with hypervisor must be mapped as unencrypted

			     with the hypervisor must be mapped as decrypted

> before we share it.

before we can share it.

> Cc: Thomas Gleixner <tglx@...utronix.de>
> Cc: Ingo Molnar <mingo@...hat.com>
> Cc: "H. Peter Anvin" <hpa@...or.com>
> Cc: Borislav Petkov <bp@...e.de>
> Cc: Paolo Bonzini <pbonzini@...hat.com>
> Cc: "Radim Krčmář" <rkrcmar@...hat.com>
> Cc: Tom Lendacky <thomas.lendacky@....com>
> Cc: x86@...nel.org
> Cc: linux-kernel@...r.kernel.org
> Cc: kvm@...r.kernel.org
> Signed-off-by: Brijesh Singh <brijesh.singh@....com>
> ---
>  arch/x86/kernel/kvm.c | 46 +++++++++++++++++++++++++++++++++++++++++++---
>  1 file changed, 43 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 874827b0d7ca..9ccb48b027e4 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -75,8 +75,8 @@ static int parse_no_kvmclock_vsyscall(char *arg)
>  
>  early_param("no-kvmclock-vsyscall", parse_no_kvmclock_vsyscall);
>  
> -static DEFINE_PER_CPU(struct kvm_vcpu_pv_apf_data, apf_reason) __aligned(64);
> -static DEFINE_PER_CPU(struct kvm_steal_time, steal_time) __aligned(64);
> +static DEFINE_PER_CPU_UNENCRYPTED(struct kvm_vcpu_pv_apf_data, apf_reason) __aligned(64);
> +static DEFINE_PER_CPU_UNENCRYPTED(struct kvm_steal_time, steal_time) __aligned(64);
>  static int has_steal_clock = 0;
>  
>  /*
> @@ -305,7 +305,7 @@ static void kvm_register_steal_time(void)
>  		cpu, (unsigned long long) slow_virt_to_phys(st));
>  }
>  
> -static DEFINE_PER_CPU(unsigned long, kvm_apic_eoi) = KVM_PV_EOI_DISABLED;
> +static DEFINE_PER_CPU_UNENCRYPTED(unsigned long, kvm_apic_eoi) = KVM_PV_EOI_DISABLED;
>  
>  static notrace void kvm_guest_apic_eoi_write(u32 reg, u32 val)
>  {
> @@ -419,9 +419,46 @@ void kvm_disable_steal_time(void)
>  	wrmsr(MSR_KVM_STEAL_TIME, 0, 0);
>  }
>  
> +static inline void __init __set_percpu_var_unencrypted(
> +		void *var, int size)

Yuck, line ending with opening brace. Do the obvious thing:

static inline void __init __set_percpu_decrypted(void *var, int size)

> +{
> +	unsigned long pa = slow_virt_to_phys(var);
> +
> +	/* decrypt the memory in-place */
> +	sme_early_decrypt(pa, size);
> +
> +	/* clear the C-bit from the page table */
> +	early_set_memory_decrypted(pa, size);

So those two do a lot of work (TLB flushing and WBINVD) for each
per-CPU variable. Normally I'd say do this in one go instead of
variable by variable and thus save yourself the subsequent expensive
invalidation calls, but we do it only once during boot, so maybe
that's something to think about later, when there's more time and
boredom.

:)
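
Just to illustrate the "one go" idea (nothing for this patch, and the
struct below is made up for the example): if the shared per-CPU data
were grouped, a single decrypt + C-bit clear per CPU would cover all
of it, and the existing users of apf_reason et al. would then go
through the struct:

struct kvm_shared_data {
	struct kvm_vcpu_pv_apf_data apf_reason;
	struct kvm_steal_time steal_time;
	unsigned long kvm_apic_eoi;
} __aligned(64);

static DEFINE_PER_CPU_UNENCRYPTED(struct kvm_shared_data, kvm_shared_data);

static void __init set_percpu_decrypted(void)
{
	int cpu;

	for_each_possible_cpu(cpu)
		__set_percpu_decrypted(&per_cpu(kvm_shared_data, cpu),
				       sizeof(struct kvm_shared_data));
}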

> +}
> +
> +/*
> + * Iterate through all possible CPUs and map the memory region pointed
> + * by apf_reason, steal_time and kvm_apic_eoi as unencrypted at once.

s/unencrypted/decrypted/g

> + *
> + * Note: we iterate through all possible CPUs to ensure that CPUs
> + * hotplugged will have their per-cpu variable already mapped as
> + * unencrypted.
> + */
> +static void __init set_percpu_unencrypted(void)
> +{
> +	int cpu;
> +
> +	for_each_possible_cpu(cpu) {
> +		__set_percpu_var_unencrypted(&per_cpu(apf_reason, cpu),
> +			sizeof(struct kvm_vcpu_pv_apf_data));
> +		__set_percpu_var_unencrypted(&per_cpu(steal_time, cpu),
> +			sizeof(struct kvm_steal_time));
> +		__set_percpu_var_unencrypted(&per_cpu(kvm_apic_eoi, cpu),
> +			sizeof(unsigned long));
> +	}

Let it stick out and shorten the function name:

	for_each_possible_cpu(cpu) {
                __set_percpu_decrypted(&per_cpu(apf_reason, cpu),   sizeof(struct kvm_vcpu_pv_apf_data));
                __set_percpu_decrypted(&per_cpu(steal_time, cpu),   sizeof(struct kvm_steal_time));
                __set_percpu_decrypted(&per_cpu(kvm_apic_eoi, cpu), sizeof(unsigned long));
        }

Also, we agreed to call everything that's not encrypted "decrypted" so
that we have only two different states, encrypted and decrypted, and
thus less confusion.

> +}
> +
>  #ifdef CONFIG_SMP
>  static void __init kvm_smp_prepare_boot_cpu(void)
>  {
> +	if (sev_active())
> +		set_percpu_unencrypted();
> +

Move that sev_active() check into the function and call the latter
sev_map_percpu_data().
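
IOW, something like this, untested, just to show the shape I mean
(using the shortened helper name from above):

static void __init sev_map_percpu_data(void)
{
	int cpu;

	if (!sev_active())
		return;

	for_each_possible_cpu(cpu) {
		__set_percpu_decrypted(&per_cpu(apf_reason, cpu),   sizeof(struct kvm_vcpu_pv_apf_data));
		__set_percpu_decrypted(&per_cpu(steal_time, cpu),   sizeof(struct kvm_steal_time));
		__set_percpu_decrypted(&per_cpu(kvm_apic_eoi, cpu), sizeof(unsigned long));
	}
}

and then the call site in kvm_smp_prepare_boot_cpu() becomes simply:

	sev_map_percpu_data();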

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)