Message-ID: <CAMkAt6rfnjqPZHUqJM1r6GjnBMy-Gbw8VtGv4gd-G0Rjq+ALnQ@mail.gmail.com>
Date: Mon, 10 May 2021 13:59:08 -0600
From: Peter Gonda <pgonda@...gle.com>
To: Brijesh Singh <brijesh.singh@....com>
Cc: x86@...nel.org, linux-kernel@...r.kernel.org,
kvm list <kvm@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Borislav Petkov <bp@...en8.de>, jroedel@...e.de,
"Lendacky, Thomas" <thomas.lendacky@....com>,
Paolo Bonzini <pbonzini@...hat.com>,
Ingo Molnar <mingo@...hat.com>,
Dave Hansen <dave.hansen@...el.com>,
David Rientjes <rientjes@...gle.com>,
Sean Christopherson <seanjc@...gle.com>, peterz@...radead.org,
"H. Peter Anvin" <hpa@...or.com>, tony.luck@...el.com
Subject: Re: [PATCH Part2 RFC v2 32/37] KVM: SVM: Add support to handle MSR
based Page State Change VMGEXIT
On Mon, May 10, 2021 at 11:51 AM Brijesh Singh <brijesh.singh@....com> wrote:
>
> Hi Peter,
>
> On 5/10/21 12:30 PM, Peter Gonda wrote:
> >> +static int snp_make_page_shared(struct kvm_vcpu *vcpu, gpa_t gpa, kvm_pfn_t pfn, int level)
> >> +{
> >> + struct rmpupdate val;
> >> + int rc, rmp_level;
> >> + struct rmpentry *e;
> >> +
> >> + e = snp_lookup_page_in_rmptable(pfn_to_page(pfn), &rmp_level);
> >> + if (!e)
> >> + return -EINVAL;
> >> +
> >> + if (!rmpentry_assigned(e))
> >> + return 0;
> >> +
> >> + /* Log if the entry is validated */
> >> + if (rmpentry_validated(e))
> >> + pr_debug_ratelimited("Remove RMP entry for a validated gpa 0x%llx\n", gpa);
> >> +
> >> + /*
> >> + * Is the page part of an existing 2MB RMP entry? Split the 2MB entry
> >> + * into multiple 4K-page entries before making the memory shared.
> >> + */
> >> + if ((level == PG_LEVEL_4K) && (rmp_level == PG_LEVEL_2M)) {
> >> + rc = snp_rmptable_psmash(vcpu, pfn);
> >> + if (rc)
> >> + return rc;
> >> + }
> >> +
> >> + memset(&val, 0, sizeof(val));
> >> + val.pagesize = X86_TO_RMP_PG_LEVEL(level);
> > This is slightly different from Rev 2.00 of the GHCB spec. This
> > defaults to 2MB page sizes, when the spec says the only valid settings
> > for level are 0 -> 4k pages or 1 -> 2MB pages. Should this enforce the
> > same strictness as the spec?
>
>
> The caller of snp_make_page_shared() must pass the x86 page level. We
> should reach here only after all the guest-provided values have passed
> through the checks.
>
> The call sequence in this case should be:
>
> snp_handle_vmgexit_msr_protocol()
>
> __snp_handle_page_state_change(vcpu, gfn_to_gpa(gfn), PG_LEVEL_4K)
>
> snp_make_page_shared(..., level)
>
> Am I missing something ?
Thanks Brijesh. I think my comment was misplaced. Looking at 33/37:

+static unsigned long snp_handle_page_state_change(struct vcpu_svm *svm, struct ghcb *ghcb)
+{
...
+ while (info->header.cur_entry <= info->header.end_entry) {
+ entry = &info->entry[info->header.cur_entry];
+ gpa = gfn_to_gpa(entry->gfn);
+ level = RMP_TO_X86_PG_LEVEL(entry->pagesize);
+ op = entry->operation;
This call to RMP_TO_X86_PG_LEVEL is not as strict as the spec. Is that OK?
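
To make the question concrete: per GHCB spec 2.00 the Page Size field in a
Page State Change entry only allows 0 = 4K and 1 = 2MB, so a stricter
version of that conversion could look roughly like the sketch below
(PSC_INVALID_ENTRY is just an illustrative error value, not something from
the patch):

		/*
		 * Sketch only: accept just the two page sizes the GHCB spec
		 * defines instead of treating every non-zero value as 2MB.
		 */
		switch (entry->pagesize) {
		case 0:
			level = PG_LEVEL_4K;
			break;
		case 1:
			level = PG_LEVEL_2M;
			break;
		default:
			return PSC_INVALID_ENTRY; /* illustrative error code */
		}
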
>
> >> + return rmpupdate(pfn_to_page(pfn), &val);
> >> +}
> >> +
> >> +static int snp_make_page_private(struct kvm_vcpu *vcpu, gpa_t gpa, kvm_pfn_t pfn, int level)
> >> +{
> >> + struct kvm_sev_info *sev = &to_kvm_svm(vcpu->kvm)->sev_info;
> >> + struct rmpupdate val;
> >> + struct rmpentry *e;
> >> + int rmp_level;
> >> +
> >> + e = snp_lookup_page_in_rmptable(pfn_to_page(pfn), &rmp_level);
> >> + if (!e)
> >> + return -EINVAL;
> >> +
> >> + /* Log if the entry is validated */
> >> + if (rmpentry_validated(e))
> >> + pr_err_ratelimited("Asked to make a pre-validated gpa %llx private\n", gpa);
> >> +
> >> + memset(&val, 0, sizeof(val));
> >> + val.gpa = gpa;
> >> + val.asid = sev->asid;
> >> + val.pagesize = X86_TO_RMP_PG_LEVEL(level);
> > Same comment as above.
>
> See my above response.
>
>
> >
> >> + val.assigned = true;
> >> +
> >> + return rmpupdate(pfn_to_page(pfn), &val);
> >> +}
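
For completeness, I'd expect the two helpers above to be driven per entry
from the page state change loop in 33/37, roughly like the sketch below
(the SNP_PAGE_STATE_* operation values are assumed from the GHCB spec, and
the rc/op/gpa/pfn/level locals from that patch, so treat this as
illustrative only):

	/* Sketch: dispatch on the guest-requested operation for each entry */
	switch (op) {
	case SNP_PAGE_STATE_PRIVATE:
		rc = snp_make_page_private(vcpu, gpa, pfn, level);
		break;
	case SNP_PAGE_STATE_SHARED:
		rc = snp_make_page_shared(vcpu, gpa, pfn, level);
		break;
	default:
		rc = -EINVAL;
	}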