Message-ID: <68ea014c-51bc-6ed4-a77e-dd7ce1a09aaf@amd.com>
Date: Tue, 20 Jul 2021 09:37:32 -0500
From: Brijesh Singh <brijesh.singh@....com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: brijesh.singh@....com, x86@...nel.org,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
linux-efi@...r.kernel.org, platform-driver-x86@...r.kernel.org,
linux-coco@...ts.linux.dev, linux-mm@...ck.org,
linux-crypto@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Joerg Roedel <jroedel@...e.de>,
Tom Lendacky <thomas.lendacky@....com>,
"H. Peter Anvin" <hpa@...or.com>, Ard Biesheuvel <ardb@...nel.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Sergio Lopez <slp@...hat.com>, Peter Gonda <pgonda@...gle.com>,
Peter Zijlstra <peterz@...radead.org>,
Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
David Rientjes <rientjes@...gle.com>,
Dov Murik <dovmurik@...ux.ibm.com>,
Tobin Feldman-Fitzthum <tobin@....com>,
Borislav Petkov <bp@...en8.de>,
Michael Roth <michael.roth@....com>,
Vlastimil Babka <vbabka@...e.cz>, tony.luck@...el.com,
npmccallum@...hat.com, brijesh.ksingh@...il.com
Subject: Re: [PATCH Part2 RFC v4 38/40] KVM: SVM: Provide support for
SNP_GUEST_REQUEST NAE event
On 7/19/21 5:50 PM, Sean Christopherson wrote:
...
>
> IIUC, this snippet in the spec means KVM can't restrict what requests are made
> by the guests. If so, that makes it difficult to detect/ratelimit a misbehaving
> guest, and also limits our options if there are firmware issues (hopefully there
> aren't). E.g. ratelimiting a guest after KVM has explicitly requested it to
> migrate is not exactly desirable.
>
The guest message page contains a message header followed by the
encrypted payload. So, technically, KVM can peek into the message header
to determine the message request type, and if needed we can ratelimit
based on that type.

The current series does not support migration etc., so I decided to
ratelimit unconditionally.
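For reference, the peek would look roughly like the below (untested; it
assumes the request page is shared with the hypervisor as the spec
requires, and it borrows the snp_guest_msg_hdr layout from the guest
driver patches, where only the payload is encrypted):

static int snp_peek_guest_msg_type(struct kvm *kvm, gpa_t req_gpa,
				   u8 *msg_type)
{
	struct snp_guest_msg_hdr hdr;

	/* The message header itself is plaintext. */
	if (kvm_read_guest(kvm, req_gpa, &hdr, sizeof(hdr)))
		return -EFAULT;

	*msg_type = hdr.msg_type;

	return 0;
}

A per-type ratelimit could then be an array of ratelimit_state indexed
by msg_type.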
...
>
>> Now that KVM supports all the VMGEXIT NAEs required for the base SEV-SNP
>> feature, set the hypervisor feature to advertise it.
>
> It would be helpful if this changelog listed the Guest Requests that are required
> for "base" SNP, e.g. to provide some insight as to why we care about guest
> requests.
>
Sure, I'll add more.
>> static int snp_bind_asid(struct kvm *kvm, int *error)
>> @@ -1618,6 +1631,12 @@ static int snp_launch_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
>> if (rc)
>> goto e_free_context;
>>
>> + /* Used for rate limiting SNP guest message request, use the default settings */
>> + ratelimit_default_init(&sev->snp_guest_msg_rs);
>
> Is this exposed to userspace in any way? This feels very much like a knob that
> needs to be configurable per-VM.
>
It's not exposed to userspace, and I am not sure userspace cares about
this knob.
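That said, if userspace does end up wanting control, one option is a
new SEV command that overrides the default, along these lines (the
command name and struct below are made up purely to illustrate the
plumbing):

/* Hypothetical uAPI -- the name and fields are illustrative only. */
struct kvm_snp_guest_msg_ratelimit {
	__u32 interval_ms;	/* ratelimit window in milliseconds */
	__u32 burst;		/* requests allowed per window */
};

static int snp_set_guest_msg_ratelimit(struct kvm *kvm,
				       struct kvm_sev_cmd *argp)
{
	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
	struct kvm_snp_guest_msg_ratelimit params;

	if (copy_from_user(&params, (void __user *)(uintptr_t)argp->data,
			   sizeof(params)))
		return -EFAULT;

	ratelimit_state_init(&sev->snp_guest_msg_rs,
			     msecs_to_jiffies(params.interval_ms),
			     params.burst);
	return 0;
}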
> Also, what are the estimated latencies of a guest request? If the worst case
> latency is >200ms, a default ratelimit frequency of 5hz isn't going to do a whole
> lot.
>
The latency will depend on what else is going on in the system at the
time the request reaches the hypervisor. Access to the PSP is
serialized, so other PSP commands executing in parallel will add to the
latency.
...
>> +
>> + if (!__ratelimit(&sev->snp_guest_msg_rs)) {
>> + pr_info_ratelimited("svm: too many guest message requests\n");
>> + rc = -EAGAIN;
>
> What guarantee do we have that the guest actually understands -EAGAIN? Ditto
> for -EINVAL returned by snp_build_guest_buf(). AFAICT, our options are to return
> one of the error codes defined in "Table 95. Status Codes for SNP_GUEST_REQUEST"
> of the firmware ABI, kill the guest, or ratelimit the guest without returning
> control to the guest.
>
Yes, let me look into passing back one of the status codes defined in
the spec.
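Roughly, the failure path would then become something like this, where
SNP_GUEST_REQ_THROTTLED is just a placeholder until I map it to an
actual status code from Table 95 (and if nothing there fits, the
fallback is probably to kill the guest):

	if (!__ratelimit(&sev->snp_guest_msg_rs)) {
		pr_info_ratelimited("svm: too many guest message requests\n");
		/*
		 * Placeholder: return a firmware-defined status code in
		 * SW_EXITINFO2 instead of a -errno the guest may not
		 * understand.
		 */
		rc = SNP_GUEST_REQ_THROTTLED;
		goto e_fail;
	}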
>> + goto e_fail;
>> + }
>> +
>> + rc = snp_build_guest_buf(svm, &data, req_gpa, resp_gpa);
>> + if (rc)
>> + goto e_fail;
>> +
>> + sev = &to_kvm_svm(kvm)->sev_info;
>> +
>> + mutex_lock(&kvm->lock);
>
> Question on the VMPCK sequences. The firmware ABI says:
>
> Each guest has four VMPCKs ... Each message contains a sequence number per
> VMPCK. The sequence number is incremented with each message sent. Messages
> sent by the guest to the firmware and by the firmware to the guest must be
> delivered in order. If not, the firmware will reject subsequent messages ...
>
> Does that mean there are four independent sequences, i.e. four streams the guest
> can use "concurrently", or does it mean the overall freshness/integrity check is
> composed from four VMPCK sequences, all of which must be correct for the message
> to be valid?
>
There are four independent sequence counters, and in theory the guest
can use them concurrently, but access to the PSP must be serialized.
Currently, the guest driver uses the VMPCK0 key to communicate with the
PSP.
> If it's the latter, then a traditional mutex isn't really necessary because the
> guest must implement its own serialization, e.g. its own mutex or whatever, to
> ensure there is at most one request in-flight at any given time.
The guest driver uses its own serialization to ensure that there is
*exactly* one request in-flight.

The mutex used here is to protect KVM's internal firmware response
buffer.
> And on the KVM side it means KVM can simply reject requests if there
> is already an in-flight request. It might also give us more/better
> options for ratelimiting?
>
I don't think we should be running into this scenario unless there is a
bug in the guest kernel. The guest kernel support and the CCP driver
both ensure that requests to the PSP are serialized.
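For reference, the CCP driver funnels every PSP command through a
single mutex; from memory of drivers/crypto/ccp/sev-dev.c it is roughly:

static DEFINE_MUTEX(sev_cmd_mutex);

static int sev_do_cmd(int cmd, void *data, int *psp_ret)
{
	int rc;

	mutex_lock(&sev_cmd_mutex);
	rc = __sev_do_cmd_locked(cmd, data, psp_ret);
	mutex_unlock(&sev_cmd_mutex);

	return rc;
}

so even concurrent guest requests end up hitting the PSP one at a time.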
In normal operation we may see one or two guest requests for the entire
guest lifetime. I am thinking the first request may be for the
attestation report and the second may be to derive keys, etc. It may
change slightly when we add the migration command; I have not looked
into that in great detail yet.
thanks