Message-ID: <20200402222457.GA659325@vbusired-dt>
Date: Thu, 2 Apr 2020 17:24:57 -0500
From: Venu Busireddy <venu.busireddy@...cle.com>
To: Ashish Kalra <Ashish.Kalra@....com>
Cc: pbonzini@...hat.com, tglx@...utronix.de, mingo@...hat.com,
hpa@...or.com, joro@...tes.org, bp@...e.de,
thomas.lendacky@....com, x86@...nel.org, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, rientjes@...gle.com,
srutherford@...gle.com, luto@...nel.org, brijesh.singh@....com
Subject: Re: [PATCH v6 06/14] KVM: SVM: Add KVM_SEV_RECEIVE_FINISH command
On 2020-03-30 06:21:36 +0000, Ashish Kalra wrote:
> From: Brijesh Singh <Brijesh.Singh@....com>
>
> The command finalizes the guest receiving process and makes the SEV guest
> ready for execution.
>
> Cc: Thomas Gleixner <tglx@...utronix.de>
> Cc: Ingo Molnar <mingo@...hat.com>
> Cc: "H. Peter Anvin" <hpa@...or.com>
> Cc: Paolo Bonzini <pbonzini@...hat.com>
> Cc: "Radim Krčmář" <rkrcmar@...hat.com>
> Cc: Joerg Roedel <joro@...tes.org>
> Cc: Borislav Petkov <bp@...e.de>
> Cc: Tom Lendacky <thomas.lendacky@....com>
> Cc: x86@...nel.org
> Cc: kvm@...r.kernel.org
> Cc: linux-kernel@...r.kernel.org
> Signed-off-by: Brijesh Singh <brijesh.singh@....com>
> Signed-off-by: Ashish Kalra <ashish.kalra@....com>
> ---
> .../virt/kvm/amd-memory-encryption.rst | 8 +++++++
> arch/x86/kvm/svm.c | 23 +++++++++++++++++++
> 2 files changed, 31 insertions(+)
>
> diff --git a/Documentation/virt/kvm/amd-memory-encryption.rst b/Documentation/virt/kvm/amd-memory-encryption.rst
> index 554aa33a99cc..93cd95d9a6c0 100644
> --- a/Documentation/virt/kvm/amd-memory-encryption.rst
> +++ b/Documentation/virt/kvm/amd-memory-encryption.rst
> @@ -375,6 +375,14 @@ Returns: 0 on success, -negative on error
> __u32 trans_len;
> };
>
> +15. KVM_SEV_RECEIVE_FINISH
> +--------------------------
> +
> +After completion of the migration flow, the KVM_SEV_RECEIVE_FINISH command can be
> +issued by the hypervisor to make the guest ready for execution.
> +
> +Returns: 0 on success, -negative on error
> +
> References
> ==========
>
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 5fc5355536d7..7c2721e18b06 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -7573,6 +7573,26 @@ static int sev_receive_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
> return ret;
> }
>
> +static int sev_receive_finish(struct kvm *kvm, struct kvm_sev_cmd *argp)
> +{
> + struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> + struct sev_data_receive_finish *data;
> + int ret;
> +
> + if (!sev_guest(kvm))
> + return -ENOTTY;
I noticed this in earlier patches too. Is -ENOTTY the best return value
here? Wouldn't one of -ENXIO, -ENODEV, or -EINVAL be a better choice? What
is the rationale for using -ENOTTY?
> +
> + data = kzalloc(sizeof(*data), GFP_KERNEL);
> + if (!data)
> + return -ENOMEM;
> +
> + data->handle = sev->handle;
> + ret = sev_issue_cmd(kvm, SEV_CMD_RECEIVE_FINISH, data, &argp->error);
> +
> + kfree(data);
> + return ret;
> +}
> +
> static int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
> {
> struct kvm_sev_cmd sev_cmd;
> @@ -7632,6 +7652,9 @@ static int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
> case KVM_SEV_RECEIVE_UPDATE_DATA:
> r = sev_receive_update_data(kvm, &sev_cmd);
> break;
> + case KVM_SEV_RECEIVE_FINISH:
> + r = sev_receive_finish(kvm, &sev_cmd);
> + break;
> default:
> r = -EINVAL;
> goto out;
> --
> 2.17.1
>
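For what it's worth, below is roughly how I would expect a VMM on the
migration target to drive this from userspace, going by the documentation
hunk above. It is only a sketch: it assumes the KVM_SEV_RECEIVE_FINISH
command id added by this series is visible in the installed uapi headers,
and the fd plumbing and error handling are illustrative rather than taken
from any existing VMM.

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Sketch: finalize the receive flow on the migration target.  Issued on
 * the VM fd after the last KVM_SEV_RECEIVE_UPDATE_DATA call succeeds.
 */
static int vmm_sev_receive_finish(int vm_fd, int sev_fd)
{
	struct kvm_sev_cmd cmd;

	memset(&cmd, 0, sizeof(cmd));
	cmd.id = KVM_SEV_RECEIVE_FINISH;  /* command id added by this series */
	cmd.data = 0;                     /* RECEIVE_FINISH carries no payload */
	cmd.sev_fd = sev_fd;              /* /dev/sev fd, as for the other SEV commands */

	if (ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd) < 0) {
		fprintf(stderr, "KVM_SEV_RECEIVE_FINISH failed, fw_error=%u\n",
			cmd.error);
		return -1;
	}
	return 0;
}

On success the guest is ready for execution again, so the VMM can resume
the vCPUs right after this returns 0.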