Message-ID: <53010b89-ae47-0065-9238-0ab065b70a44@amd.com>
Date: Wed, 12 Oct 2022 15:15:15 -0500
From: "Kalra, Ashish" <ashish.kalra@....com>
To: Peter Gonda <pgonda@...gle.com>, Ashish Kalra <ashkalra@....com>
Cc: Tom Lendacky <thomas.lendacky@....com>,
Alper Gun <alpergun@...gle.com>,
the arch/x86 maintainers <x86@...nel.org>,
LKML <linux-kernel@...r.kernel.org>,
kvm list <kvm@...r.kernel.org>, linux-coco@...ts.linux.dev,
Linux Memory Management List <linux-mm@...ck.org>,
Linux Crypto Mailing List <linux-crypto@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Joerg Roedel <jroedel@...e.de>,
"H. Peter Anvin" <hpa@...or.com>, Ard Biesheuvel <ardb@...nel.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Sean Christopherson <seanjc@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Jim Mattson <jmattson@...gle.com>,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Sergio Lopez <slp@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
David Rientjes <rientjes@...gle.com>,
Dov Murik <dovmurik@...ux.ibm.com>,
Tobin Feldman-Fitzthum <tobin@....com>,
Borislav Petkov <bp@...en8.de>,
Michael Roth <michael.roth@....com>,
Vlastimil Babka <vbabka@...e.cz>,
"Kirill A . Shutemov" <kirill@...temov.name>,
Andi Kleen <ak@...ux.intel.com>,
Tony Luck <tony.luck@...el.com>, Marc Orr <marcorr@...gle.com>,
Sathyanarayanan Kuppuswamy
<sathyanarayanan.kuppuswamy@...ux.intel.com>,
"Dr. David Alan Gilbert" <dgilbert@...hat.com>, jarkko@...nel.org
Subject: Re: [PATCH Part2 v6 37/49] KVM: SVM: Add support to handle MSR based
Page State Change VMGEXIT
On 9/26/2022 10:19 AM, Peter Gonda wrote:
> On Mon, Sep 19, 2022 at 5:47 PM Ashish Kalra <ashkalra@....com> wrote:
>>
>>
>> On 9/19/22 22:18, Tom Lendacky wrote:
>>> On 9/19/22 17:02, Alper Gun wrote:
>>>> On Mon, Sep 19, 2022 at 2:38 PM Tom Lendacky
>>>> <thomas.lendacky@....com> wrote:
>>>>>
>>>>> On 9/19/22 12:53, Alper Gun wrote:
>>>>>> On Fri, Aug 19, 2022 at 9:54 AM Peter Gonda <pgonda@...gle.com> wrote:
>>>>>>>
>>>>>>>> +
>>>>>>>> +static int __snp_handle_page_state_change(struct kvm_vcpu *vcpu, enum psc_op op, gpa_t gpa,
>>>>>>>> +                                          int level)
>>>>>>>> +{
>>>>>>>> +        struct kvm_sev_info *sev = &to_kvm_svm(vcpu->kvm)->sev_info;
>>>>>>>> +        struct kvm *kvm = vcpu->kvm;
>>>>>>>> +        int rc, npt_level;
>>>>>>>> +        kvm_pfn_t pfn;
>>>>>>>> +        gpa_t gpa_end;
>>>>>>>> +
>>>>>>>> +        gpa_end = gpa + page_level_size(level);
>>>>>>>> +
>>>>>>>> +        while (gpa < gpa_end) {
>>>>>>>> +                /*
>>>>>>>> +                 * If the gpa is not present in the NPT then build the NPT.
>>>>>>>> +                 */
>>>>>>>> +                rc = snp_check_and_build_npt(vcpu, gpa, level);
>>>>>>>> +                if (rc)
>>>>>>>> +                        return -EINVAL;
>>>>>>>> +
>>>>>>>> +                if (op == SNP_PAGE_STATE_PRIVATE) {
>>>>>>>> +                        hva_t hva;
>>>>>>>> +
>>>>>>>> +                        if (snp_gpa_to_hva(kvm, gpa, &hva))
>>>>>>>> +                                return -EINVAL;
>>>>>>>> +
>>>>>>>> +                        /*
>>>>>>>> +                         * Verify that the hva range is registered. This enforcement is
>>>>>>>> +                         * required to avoid the cases where a page is marked private
>>>>>>>> +                         * in the RMP table but never gets cleaned up during the VM
>>>>>>>> +                         * termination path.
>>>>>>>> +                         */
>>>>>>>> +                        mutex_lock(&kvm->lock);
>>>>>>>> +                        rc = is_hva_registered(kvm, hva, page_level_size(level));
>>>>>>>> +                        mutex_unlock(&kvm->lock);
>>>>>>>> +                        if (!rc)
>>>>>>>> +                                return -EINVAL;
>>>>>>>> +
>>>>>>>> +                        /*
>>>>>>>> +                         * Mark the userspace range unmergeable before adding the pages
>>>>>>>> +                         * to the RMP table.
>>>>>>>> +                         */
>>>>>>>> +                        mmap_write_lock(kvm->mm);
>>>>>>>> +                        rc = snp_mark_unmergable(kvm, hva, page_level_size(level));
>>>>>>>> +                        mmap_write_unlock(kvm->mm);
>>>>>>>> +                        if (rc)
>>>>>>>> +                                return -EINVAL;
>>>>>>>> +                }
>>>>>>>> +
>>>>>>>> +                write_lock(&kvm->mmu_lock);
>>>>>>>> +
>>>>>>>> +                rc = kvm_mmu_get_tdp_walk(vcpu, gpa, &pfn, &npt_level);
>>>>>>>> +                if (!rc) {
>>>>>>>> +                        /*
>>>>>>>> +                         * This may happen if another vCPU unmapped the page
>>>>>>>> +                         * before we acquire the lock. Retry the PSC.
>>>>>>>> +                         */
>>>>>>>> +                        write_unlock(&kvm->mmu_lock);
>>>>>>>> +                        return 0;
>>>>>>>> +                }
>>>>>>>
>>>>>>> I think we want to return -EAGAIN or similar if we want the caller to
>>>>>>> retry, right? I think returning 0 here hides the error.
>>>>>>>
>>>>>>
>>>>>> The problem here is that the caller (the Linux guest kernel) doesn't
>>>>>> retry if a PSC fails. The current implementation in the guest kernel
>>>>>> is that if a page state change request fails, it terminates the VM
>>>>>> with the GHCB_TERM_PSC reason code.
>>>>>> Returning 0 here is not a good option because it will fail the PSC
>>>>>> silently and will probably cause a nested RMP fault later. Returning
>>>>>
>>>>> Returning 0 here is ok because the PSC current index into the PSC
>>>>> structure will not be updated and the guest will then retry (see the
>>>>> loop
>>>>> in vmgexit_psc() in arch/x86/kernel/sev.c).
>>>>>
>>>>> Thanks,
>>>>> Tom
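
For reference, the guest-side loop Tom is pointing at looks roughly like
this (a paraphrased sketch of vmgexit_psc() in arch/x86/kernel/sev.c, not
the exact upstream code); the guest keeps re-issuing the VMGEXIT only
while cur_entry has not been advanced past end_entry:

        /* Sketch: paraphrase of the guest PSC loop, not verbatim code. */
        while (data->hdr.cur_entry <= data->hdr.end_entry) {
                ghcb_set_sw_scratch(ghcb, (u64)__pa(data));

                /* On success the hypervisor advances data->hdr.cur_entry. */
                ret = sev_es_ghcb_hv_call(ghcb, true, &ctxt, SVM_VMGEXIT_PSC, 0, 0);
                if (ret || ghcb->save.sw_exit_info_2)
                        break;  /* hard error: the guest gives up */

                /*
                 * If cur_entry was not advanced, the next iteration simply
                 * re-issues the VMGEXIT for the same entry; that is the
                 * implicit retry Tom describes.
                 */
        }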
>>>>
>>>> But the host code updates the index: because rc is 0, it doesn't leave
>>>> the loop, and the guest will think the request was successful.
>>>> rc = __snp_handle_page_state_change(vcpu, op, gpa, level);
>>>> if (rc)
>>>>         goto out;
>>>>
>>>> Also, a page state change request made via the MSR protocol is not
>>>> retried; the guest terminates the VM if the MSR request fails.
>>>
>>> Ah, right. I see what you mean. It should probably return -EAGAIN
>>> instead of 0, and then the if (rc) check should be modified to look
>>> specifically for -EAGAIN and goto out after setting rc to 0.
>>>
>>> But that does leave the MSR protocol open to the problem that you
>>> mention, so, yes, retry logic in snp_handle_page_state_change() for
>>> -EAGAIN seems reasonable.
>>>
>>> Thanks,
>>> Tom
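
Spelling out the change Tom describes for the VMGEXIT path, the check in
snp_handle_page_state_change() could look something like this (just a
sketch; leaving cur_entry untouched is what makes the guest re-issue the
entry):

        rc = __snp_handle_page_state_change(vcpu, op, gpa, level);
        if (rc == -EAGAIN) {
                /*
                 * Stop here without advancing cur_entry so that the
                 * guest retries this entry on its next VMGEXIT.
                 */
                rc = 0;
                goto out;
        }
        if (rc)
                goto out;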
>>
>> I believe it makes more sense to add the retry logic within
>> __snp_handle_page_state_change() itself, as that will make it work for
>> both the GHCB MSR protocol and the GHCB VMGEXIT requests.
>
> You are suggesting we just retry kvm_mmu_get_tdp_walk() inside of
> __snp_handle_page_state_change()? That should work, but how many times
> do we retry? If we return -EAGAIN or an error, we can leave that up to
> the caller.
>
Ok, we return -EAGAIN here and then let the caller in
snp_handle_page_state_change(), or sev_handle_vmgexit_msr_protocol() in
the case of the GHCB MSR protocol, do the retries.
But the question still remains: how many retry attempts should the
caller make?
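
For discussion, a bounded retry in the callers might look like the sketch
below; SNP_PSC_MAX_RETRY is a made-up name, and picking the bound is
exactly the open question.

#define SNP_PSC_MAX_RETRY       3       /* hypothetical bound */

        int retry;

        for (retry = 0; retry < SNP_PSC_MAX_RETRY; retry++) {
                rc = __snp_handle_page_state_change(vcpu, op, gpa, level);
                if (rc != -EAGAIN)
                        break;
                /*
                 * -EAGAIN means another vCPU unmapped the page after
                 * snp_check_and_build_npt(); retrying rebuilds the
                 * NPT mapping before the next TDP walk.
                 */
        }
        if (rc)
                goto out;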
Thanks,
Ashish
>>>
>>>>
>>>>>
>>>>>> an error also terminates the guest immediately with the current guest
>>>>>> implementation. I think the best approach here is to add retry logic
>>>>>> to this function. Retrying without returning an error should work,
>>>>>> because snp_check_and_build_npt will be called again and the second
>>>>>> attempt should succeed.
>>>>>>
>>>>>>>> +
>>>>>>>> +                /*
>>>>>>>> +                 * Adjust the level so that we don't go higher than the backing
>>>>>>>> +                 * page level.
>>>>>>>> +                 */
>>>>>>>> +                level = min_t(size_t, level, npt_level);
>>>>>>>> +
>>>>>>>> +                trace_kvm_snp_psc(vcpu->vcpu_id, pfn, gpa, op, level);
>>>>>>>> +
>>>>>>>> +                switch (op) {
>>>>>>>> +                case SNP_PAGE_STATE_SHARED:
>>>>>>>> +                        rc = snp_make_page_shared(kvm, gpa, pfn, level);
>>>>>>>> +                        break;
>>>>>>>> +                case SNP_PAGE_STATE_PRIVATE:
>>>>>>>> +                        rc = rmp_make_private(pfn, gpa, level, sev->asid, false);
>>>>>>>> +                        break;
>>>>>>>> +                default:
>>>>>>>> +                        rc = -EINVAL;
>>>>>>>> +                        break;
>>>>>>>> +                }
>>>>>>>> +
>>>>>>>> +                write_unlock(&kvm->mmu_lock);
>>>>>>>> +
>>>>>>>> +                if (rc) {
>>>>>>>> +                        pr_err_ratelimited("Error op %d gpa %llx pfn %llx level %d rc %d\n",
>>>>>>>> +                                           op, gpa, pfn, level, rc);
>>>>>>>> +                        return rc;
>>>>>>>> +                }
>>>>>>>> +
>>>>>>>> +                gpa = gpa + page_level_size(level);
>>>>>>>> +        }
>>>>>>>> +
>>>>>>>> +        return 0;
>>>>>>>> +}
>>>>>>>> +