Message-ID: <YQMpChJVo13/Njnc@google.com>
Date:   Thu, 29 Jul 2021 22:17:46 +0000
From:   Sean Christopherson <seanjc@...gle.com>
To:     Peter Gonda <pgonda@...gle.com>
Cc:     kvm@...r.kernel.org, Lars Bull <larsbull@...gle.com>,
        Brijesh Singh <brijesh.singh@....com>,
        Marc Orr <marcorr@...gle.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        David Rientjes <rientjes@...gle.com>,
        "Dr . David Alan Gilbert" <dgilbert@...hat.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        Joerg Roedel <joro@...tes.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        "H. Peter Anvin" <hpa@...or.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/3 V3] KVM, SEV: Add support for SEV intra host migration

On Mon, Jul 26, 2021, Peter Gonda wrote:
> To avoid exposing this internal state to userspace and prevent other
> processes from importing state they shouldn't have access to, the send
> returns a token to userspace that is handed off to the target VM. The
> target passes in this token to receive the sent state. The token is only
> valid for one-time use. Functionality on the source becomes limited
> after send has been performed. If the source is destroyed before the
> target has received, the token becomes invalid.

...

> +11. KVM_SEV_INTRA_HOST_RECEIVE
> +-------------------------------------
> +
> +The KVM_SEV_INTRA_HOST_RECEIVE command is used to transfer staged SEV
> +info to a target VM from some source VM. SEV on the target VM should be active
> +when receive is performed, but not yet launched and without any pinned memory.
> +The launch commands should be skipped after receive because they should have
> +already been performed on the source.
> +
> +Parameters (in/out): struct kvm_sev_intra_host_receive
> +
> +Returns: 0 on success, -negative on error
> +
> +::
> +
> +    struct kvm_sev_intra_host_receive {
> +        __u64 info_token;    /* token referencing the staged info */

Sorry to belatedly throw a wrench in things, but why use a token approach?  This
is only intended for migrating between two userspace VMMs using the same KVM
module, which can access both the source and target KVM instances (VMs/guests).
Rather than indirectly communicate through a token, why not communicate directly?
Same idea as svm_vm_copy_asid_from().
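
For reference, this is roughly how a VMM hands KVM a source VM fd today through
the capability that backs svm_vm_copy_asid_from(); KVM_ENABLE_CAP and
KVM_CAP_VM_COPY_ENC_CONTEXT_FROM are the existing ABI, the helper name is made
up:

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Sketch: point the destination VM at the source VM's SEV context. */
static int copy_enc_context_from(int dst_vm_fd, int src_vm_fd)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_VM_COPY_ENC_CONTEXT_FROM,
		.args[0] = src_vm_fd,	/* KVM resolves the fd to the source VM */
	};

	return ioctl(dst_vm_fd, KVM_ENABLE_CAP, &cap);
}

A migration ioctl() could take the same fd-based shape, no token required.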

The locking needs special consideration, e.g. attempting to take kvm->lock on
both the source and dest could deadlock if userspace is malicious and
double-migrates, but I think a flag and global spinlock to indicate that migration
is in progress would suffice.

Locking aside, this would reduce the ABI to a single ioctl(), should avoid most
if not all temporary memory allocations, and would obviate the need for patch 1
since there's no limbo state, i.e. the encrypted regions are either owned by the
source or the dest.

I think the following would work?  Another thought would be to make the helpers
and "lock for multi-lock" flag arch-agnostic, e.g. the logic below works iff
this is the only path that takes two kvm->locks simultaneously.
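
For the sketch below to stand alone, assume declarations along these lines (the
lock and the flag are new; names are placeholders):

/* Guards migration_in_progress across all SEV VMs on the host. */
static DEFINE_SPINLOCK(sev_migration_lock);

struct kvm_sev_info {
	...
	bool migration_in_progress;	/* set while VM is a migration endpoint */
};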

static int svm_sev_lock_for_migration(struct kvm *kvm)
{
	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
	int ret = 0;

	/*
	 * Bail if this VM is already involved in a migration to avoid deadlock
	 * between two VMs trying to migrate to/from each other.
	 */
	spin_lock(&sev_migration_lock);
	if (sev->migration_in_progress)
		ret = -EINVAL;
	else
		sev->migration_in_progress = true;
	spin_unlock(&sev_migration_lock);

	if (!ret)
		mutex_lock(&kvm->lock);

	return ret;
}

static void svm_unlock_after_migration(struct kvm *kvm)
{
	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;

	mutex_unlock(&kvm->lock);
	WRITE_ONCE(sev->migration_in_progress, false);
}

int svm_sev_migrate_from(struct kvm *kvm, unsigned int source_fd)
{
	struct file *source_kvm_file;
	struct kvm *source_kvm;
	int ret;

	ret = svm_sev_lock_for_migration(kvm);
	if (ret)
		return ret;

	if (!sev_guest(kvm)) {
		ret = -EINVAL;
		goto out_unlock;
	}

	source_kvm_file = fget(source_fd);
	if (!file_is_kvm(source_kvm_file)) {
		ret = -EBADF;
		goto out_fput;
	}

	source_kvm = source_kvm_file->private_data;
	ret = svm_sev_lock_for_migration(source_kvm);
	if (ret)
		goto out_fput;

	if (!sev_guest(source_kvm)) {
		ret = -EINVAL;
		goto out_source;
	}

	<migration magic>

out_source:
	svm_unlock_after_migration(source_kvm);
out_fput:
	if (source_kvm_file)
		fput(source_kvm_file);
out_unlock:
	svm_unlock_after_migration(kvm);
	return ret;
}
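
If this is plumbed the same way as svm_vm_copy_asid_from(), the x86 wiring would
presumably mirror the existing KVM_CAP_VM_COPY_ENC_CONTEXT_FROM handling in
kvm_vm_ioctl_enable_cap(); the capability and hook names below are hypothetical,
just to show the shape:

	/* In kvm_vm_ioctl_enable_cap() -- sketch only, names are made up. */
	case KVM_CAP_VM_MIGRATE_ENC_CONTEXT_FROM:
		r = -EINVAL;
		if (kvm_x86_ops.vm_migrate_enc_context_from)
			r = kvm_x86_ops.vm_migrate_enc_context_from(kvm, cap->args[0]);
		break;

Userspace would then pass the source VM's fd in args[0] via KVM_ENABLE_CAP on
the destination VM, same as the copy-ASID flow.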
