Date: Tue, 28 May 2024 15:28:28 -0500
From: Tom Lendacky <thomas.lendacky@....com>
To: Borislav Petkov <bp@...en8.de>
Cc: linux-kernel@...r.kernel.org, x86@...nel.org, linux-coco@...ts.linux.dev,
 svsm-devel@...onut-svsm.dev, Thomas Gleixner <tglx@...utronix.de>,
 Ingo Molnar <mingo@...hat.com>, Dave Hansen <dave.hansen@...ux.intel.com>,
 "H. Peter Anvin" <hpa@...or.com>, Andy Lutomirski <luto@...nel.org>,
 Peter Zijlstra <peterz@...radead.org>,
 Dan Williams <dan.j.williams@...el.com>, Michael Roth
 <michael.roth@....com>, Ashish Kalra <ashish.kalra@....com>
Subject: Re: [PATCH v4 07/15] x86/sev: Use the SVSM to create a vCPU when not
 in VMPL0

On 5/27/24 07:33, Borislav Petkov wrote:
> On Wed, Apr 24, 2024 at 10:58:03AM -0500, Tom Lendacky wrote:
>> -static int snp_set_vmsa(void *va, bool vmsa)
>> +static int base_snp_set_vmsa(void *va, bool vmsa)
> 
> s/base_/__/

Ok.

> 
> The svsm_-prefixed ones are already a good enough distinction...
> 
>>   {
>>   	u64 attrs;
>>   
>> @@ -1013,6 +1013,40 @@ static int snp_set_vmsa(void *va, bool vmsa)
>>   	return rmpadjust((unsigned long)va, RMP_PG_SIZE_4K, attrs);
>>   }
>>   
>> +static int svsm_snp_set_vmsa(void *va, void *caa, int apic_id, bool vmsa)
> 								  ^^^^^^^^^^^
> 
> bool create_vmsa or so, to denote what this arg means.

Ok. I'll change it in the original function, too.
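
To be concrete, with both renames applied (the __ prefix and the
create_vmsa flag) the prototypes would end up roughly like this; just
my sketch of the suggested naming, not the respun patch:

static int __snp_set_vmsa(void *va, bool create_vmsa);
static int svsm_snp_set_vmsa(void *va, void *caa, int apic_id, bool create_vmsa);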

> 
>> +{
>> +	struct svsm_call call = {};
>> +	unsigned long flags;
>> +	int ret;
>> +
>> +	local_irq_save(flags);
>> +
>> +	call.caa = this_cpu_read(svsm_caa);
>> +	call.rcx = __pa(va);
>> +
>> +	if (vmsa) {
>> +		/* Protocol 0, Call ID 2 */
>> +		call.rax = SVSM_CORE_CALL(SVSM_CORE_CREATE_VCPU);
>> +		call.rdx = __pa(caa);
>> +		call.r8  = apic_id;
>> +	} else {
>> +		/* Protocol 0, Call ID 3 */
>> +		call.rax = SVSM_CORE_CALL(SVSM_CORE_DELETE_VCPU);
>> +	}
>> +
>> +	ret = svsm_protocol(&call);
>> +
>> +	local_irq_restore(flags);
>> +
>> +	return ret;
>> +}
>> +
>> +static int snp_set_vmsa(void *va, void *caa, int apic_id, bool vmsa)
>> +{
>> +	return vmpl ? svsm_snp_set_vmsa(va, caa, apic_id, vmsa)
>> +		    : base_snp_set_vmsa(va, vmsa);
> 
> Why do you even need helpers if you're not going to use them somewhere
> else? Just put the whole logic inside snp_set_vmsa().

I just think it's easier to follow with a specific function for each 
situation and less indentation. But if you want, I can put it all in one 
function.
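
For reference, here's a sketch of what the single-function version
could look like: the two helpers above folded into snp_set_vmsa(), with
the vmpl check inline, the flag renamed per the earlier comment, and
the VMPL0 path keeping the existing RMPADJUST logic. Just a sketch, not
the respun patch:

static int snp_set_vmsa(void *va, void *caa, int apic_id, bool create_vmsa)
{
	struct svsm_call call = {};
	unsigned long flags;
	u64 attrs;
	int ret;

	if (!vmpl) {
		/* No SVSM: running at VMPL0, adjust the VMSA bit directly. */
		attrs = 1;
		if (create_vmsa)
			attrs |= RMPADJUST_VMSA_PAGE_BIT;

		return rmpadjust((unsigned long)va, RMP_PG_SIZE_4K, attrs);
	}

	/* An SVSM is present: ask it to create or delete the vCPU's VMSA. */
	local_irq_save(flags);

	call.caa = this_cpu_read(svsm_caa);
	call.rcx = __pa(va);

	if (create_vmsa) {
		/* Protocol 0, Call ID 2 */
		call.rax = SVSM_CORE_CALL(SVSM_CORE_CREATE_VCPU);
		call.rdx = __pa(caa);
		call.r8  = apic_id;
	} else {
		/* Protocol 0, Call ID 3 */
		call.rax = SVSM_CORE_CALL(SVSM_CORE_DELETE_VCPU);
	}

	ret = svsm_protocol(&call);

	local_irq_restore(flags);

	return ret;
}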

> 
>> +}
>> +
>>   #define __ATTR_BASE		(SVM_SELECTOR_P_MASK | SVM_SELECTOR_S_MASK)
>>   #define INIT_CS_ATTRIBS		(__ATTR_BASE | SVM_SELECTOR_READ_MASK | SVM_SELECTOR_CODE_MASK)
>>   #define INIT_DS_ATTRIBS		(__ATTR_BASE | SVM_SELECTOR_WRITE_MASK)
>> @@ -1044,11 +1078,11 @@ static void *snp_alloc_vmsa_page(int cpu)
>>   	return page_address(p + 1);
>>   }
>>   
>> -static void snp_cleanup_vmsa(struct sev_es_save_area *vmsa)
>> +static void snp_cleanup_vmsa(struct sev_es_save_area *vmsa, int apic_id)
>>   {
>>   	int err;
>>   
>> -	err = snp_set_vmsa(vmsa, false);
>> +	err = snp_set_vmsa(vmsa, NULL, apic_id, false);
>>   	if (err)
>>   		pr_err("clear VMSA page failed (%u), leaking page\n", err);
>>   	else
>> @@ -1059,6 +1093,7 @@ static int wakeup_cpu_via_vmgexit(u32 apic_id, unsigned long start_ip)
>>   {
>>   	struct sev_es_save_area *cur_vmsa, *vmsa;
>>   	struct ghcb_state state;
>> +	struct svsm_ca *caa;
>>   	unsigned long flags;
>>   	struct ghcb *ghcb;
>>   	u8 sipi_vector;
>> @@ -1105,6 +1140,12 @@ static int wakeup_cpu_via_vmgexit(u32 apic_id, unsigned long start_ip)
>>   	if (!vmsa)
>>   		return -ENOMEM;
>>   
>> +	/*
>> +	 * If an SVSM is present, then the SVSM CAA per-CPU variable will
>> +	 * have a value, otherwise it will be NULL.
>> +	 */
> 
> 	/* If an SVSM is present, the SVSM per-CPU CAA will be !NULL. */
> 
> Shorter.

Yep.

Thanks,
Tom

> 
