Message-ID: <20241128160742.GAZ0iVTp1thcQA5jFM@fat_crate.local>
Date: Thu, 28 Nov 2024 17:07:42 +0100
From: Borislav Petkov <bp@...en8.de>
To: Tom Lendacky <thomas.lendacky@....com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org, x86@...nel.org,
	linux-coco@...ts.linux.dev, Paolo Bonzini <pbonzini@...hat.com>,
	Sean Christopherson <seanjc@...gle.com>,
	Dave Hansen <dave.hansen@...ux.intel.com>,
	Ingo Molnar <mingo@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Michael Roth <michael.roth@....com>,
	Ashish Kalra <ashish.kalra@....com>, Joerg Roedel <jroedel@...e.de>,
	Roy Hopkins <roy.hopkins@...e.com>
Subject: Re: [RFC PATCH 1/7] KVM: SVM: Implement GET_AP_APIC_IDS NAE event

On Tue, Aug 27, 2024 at 04:59:25PM -0500, Tom Lendacky wrote:
> @@ -4124,6 +4130,77 @@ static int snp_handle_ext_guest_req(struct vcpu_svm *svm, gpa_t req_gpa, gpa_t r
>  	return 1; /* resume guest */
>  }
>  
> +struct sev_apic_id_desc {
> +	u32	num_entries;

"count" - like in the spec. :-P

> +	u32	apic_ids[];
> +};
> +
> +static void sev_get_apic_ids(struct vcpu_svm *svm)
> +{
> +	struct ghcb *ghcb = svm->sev_es.ghcb;
> +	struct kvm_vcpu *vcpu = &svm->vcpu, *loop_vcpu;
> +	struct kvm *kvm = vcpu->kvm;
> +	unsigned int id_desc_size;
> +	struct sev_apic_id_desc *desc;
> +	kvm_pfn_t pfn;
> +	gpa_t gpa;
> +	u64 pages;
> +	unsigned long i;
> +	int n;
> +
> +	pages = vcpu->arch.regs[VCPU_REGS_RAX];

Probably should be "num_pages" and a comment should explain what it is:

"State to Hypervisor: is the
number of guest contiguous pages
provided to hold the list of APIC
IDs"

Makes it much easier to follow the code.
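
IOW, something like this (untested):

	/*
	 * "State to Hypervisor": the number of guest-contiguous pages
	 * provided to hold the list of APIC IDs.
	 */
	num_pages = vcpu->arch.regs[VCPU_REGS_RAX];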

> +	/* Each APIC ID is 32-bits in size, so make sure there is room */
> +	n = atomic_read(&kvm->online_vcpus);
> +	/*TODO: is this possible? */
> +	if (n < 0)
> +		return;

It doesn't look like it but if you wanna be real paranoid you can slap
a WARN_ONCE() here or so to scream loudly.
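
Something like this, say (untested):

	n = atomic_read(&kvm->online_vcpus);
	if (WARN_ONCE(n < 0, "SEV: negative online_vcpus\n"))
		return;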

> +	id_desc_size = sizeof(*desc);
> +	id_desc_size += n * sizeof(desc->apic_ids[0]);
> +	if (id_desc_size > (pages * PAGE_SIZE)) {
> +		vcpu->arch.regs[VCPU_REGS_RAX] = PFN_UP(id_desc_size);
> +		return;
> +	}
> +
> +	gpa = svm->vmcb->control.exit_info_1;
> +
> +	ghcb_set_sw_exit_info_1(ghcb, 2);
> +	ghcb_set_sw_exit_info_2(ghcb, 5);

Uuh, more magic numbers. I guess we need this:

https://lore.kernel.org/r/20241113204425.889854-1-huibo.wang@amd.com

and more.

And can we write those only once at the end of the function?
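
I.e., start out with the error values and overwrite them on success, at
a single exit point (completely untested sketch):

	u64 exit_info_1 = 2, exit_info_2 = 5;	/* assume failure */

	...

	if (!kvm_write_guest(kvm, gpa, desc, id_desc_size))
		exit_info_1 = exit_info_2 = 0;	/* IDs written successfully */

	kvfree(desc);

out:
	ghcb_set_sw_exit_info_1(ghcb, exit_info_1);
	ghcb_set_sw_exit_info_2(ghcb, exit_info_2);

with the function's early returns turned into "goto out".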

> +	if (!page_address_valid(vcpu, gpa))
> +		return;
> +
> +	pfn = gfn_to_pfn(kvm, gpa_to_gfn(gpa));

Looking at the tree, that gfn_to_pfn() thing is gone now and we're
supposed to do it this way. Not tested, ofc:

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 5af227ba15a3..47e1f72a574d 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -4134,7 +4134,7 @@ static void sev_get_apic_ids(struct vcpu_svm *svm)
 	struct kvm *kvm = vcpu->kvm;
 	unsigned int id_desc_size;
 	struct sev_apic_id_desc *desc;
-	kvm_pfn_t pfn;
+	struct page *page;
 	gpa_t gpa;
 	u64 pages;
 	unsigned long i;
@@ -4163,8 +4163,8 @@ static void sev_get_apic_ids(struct vcpu_svm *svm)
 	if (!page_address_valid(vcpu, gpa))
 		return;
 
-	pfn = gfn_to_pfn(kvm, gpa_to_gfn(gpa));
-	if (is_error_noslot_pfn(pfn))
+	page = gfn_to_page(kvm, gpa_to_gfn(gpa));
+	if (!page)
 		return;
 
 	if (!pages)

> +	if (is_error_noslot_pfn(pfn))
> +		return;
> +
> +	if (!pages)
> +		return;

That test needs to go right under the assignment of "pages".
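
I.e. (keeping the num_pages rename from above):

	num_pages = vcpu->arch.regs[VCPU_REGS_RAX];
	if (!num_pages)
		return;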

> +	/* Allocate a buffer to hold the APIC IDs */
> +	desc = kvzalloc(id_desc_size, GFP_KERNEL_ACCOUNT);
> +	if (!desc)
> +		return;
> +
> +	desc->num_entries = n;
> +	kvm_for_each_vcpu(i, loop_vcpu, kvm) {
> +		/*TODO: is this possible? */

Well:

#define kvm_for_each_vcpu(idx, vcpup, kvm)                 \
        xa_for_each_range(&kvm->vcpu_array, idx, vcpup, 0, \
                          (atomic_read(&kvm->online_vcpus) - 1))
			   ^^^^^^^^^^^^^^

but what's stopping kvm_vm_ioctl_create_vcpu() from incrementing it?

I'm guessing this would only happen when you start the guest, but I haz
no idea.

> +		if (i > n)
> +			break;
> +
> +		desc->apic_ids[i] = loop_vcpu->vcpu_id;
> +	}
> +
> +	if (!kvm_write_guest(kvm, gpa, desc, id_desc_size)) {
> +		/* IDs were successfully written */
> +		ghcb_set_sw_exit_info_1(ghcb, 0);
> +		ghcb_set_sw_exit_info_2(ghcb, 0);
> +	}
> +
> +	kvfree(desc);
> +}


-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette
