Message-ID: <63a25b80-9dcf-7948-e2c3-88b8f9064736@linux.vnet.ibm.com>
Date:   Thu, 26 Aug 2021 10:33:39 +0200
From:   Janis Schoetterl-Glausch <scgl@...ux.vnet.ibm.com>
To:     Claudio Imbrenda <imbrenda@...ux.ibm.com>, kvm@...r.kernel.org
Cc:     cohuck@...hat.com, borntraeger@...ibm.com, frankja@...ux.ibm.com,
        thuth@...hat.com, pasic@...ux.ibm.com, david@...hat.com,
        linux-s390@...r.kernel.org, linux-kernel@...r.kernel.org,
        Ulrich.Weigand@...ibm.com
Subject: Re: [PATCH v4 13/14] KVM: s390: pv: lazy destroy for reboot

Am 18.08.21 um 15:26 schrieb Claudio Imbrenda:
> Until now, destroying a protected guest was an entirely synchronous
> operation that could potentially take a very long time, depending on
> the size of the guest, due to the time needed to clean up the
> protected pages in its address space.
> 
> This patch implements a lazy destroy mechanism that allows a protected
> guest to reboot significantly faster than before.
> 
> This is achieved by clearing the pages of the old guest in the
> background. In case of reboot, the new guest will be able to run in
> the same address space almost immediately.
> 
> The old protected guest is then destroyed only when all of its memory
> has been destroyed or otherwise made non-protected.
> 
> Signed-off-by: Claudio Imbrenda <imbrenda@...ux.ibm.com>
> ---
>  arch/s390/kvm/kvm-s390.c |   6 +-
>  arch/s390/kvm/kvm-s390.h |   2 +-
>  arch/s390/kvm/pv.c       | 132 ++++++++++++++++++++++++++++++++++++++-
>  3 files changed, 134 insertions(+), 6 deletions(-)
> 
[...]
> 
> +static int kvm_s390_pv_destroy_vm_thread(void *priv)
> +{
> +	struct deferred_priv *p = priv;
> +	u16 rc, rrc;
> +	int r;
> +
> +	/* Clear all the pages as long as we are not the only user of the mm */
> +	s390_uv_destroy_range(p->mm, 1, 0, TASK_SIZE_MAX);
> +	/*
> +	 * If we were the last user of the mm, synchronously free (and clear
> +	 * if needed) all pages.
> +	 * Otherwise simply decrease the reference counter; in this case we
> +	 * have already cleared all pages.
> +	 */
> +	mmput(p->mm);
> +
> +	r = uv_cmd_nodata(p->handle, UVC_CMD_DESTROY_SEC_CONF, &rc, &rrc);
> +	WARN_ONCE(r, "protvirt destroy vm failed rc %x rrc %x", rc, rrc);
> +	if (r) {
> +		mmdrop(p->mm);

The comment about the intentional leak makes more sense here, no?
Also, something like
		goto out_dont_free;
so that p itself is still freed (a sketch of the resulting error path
is below the quoted hunk).
> +		return r;
> +	}
> +	atomic_dec(&p->mm->context.is_protected);
> +	mmdrop(p->mm);
> +
> +	/*
> +	 * Intentional leak in case the destroy secure VM call fails. The
> +	 * call should never fail if the hardware is not broken.
> +	 */
> +	free_pages(p->stor_base, get_order(uv_info.guest_base_stor_len));
> +	free_pages(p->old_table, CRST_ALLOC_ORDER);
> +	vfree(p->stor_var);
out_dont_free:
> +	kfree(p);
> +	return 0;
> +}
> +
[...]

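To make that concrete, the tail of kvm_s390_pv_destroy_vm_thread() could
then look roughly like this (untested sketch, identifiers taken from the
quoted patch, with the intentional-leak comment moved to the error path):

	r = uv_cmd_nodata(p->handle, UVC_CMD_DESTROY_SEC_CONF, &rc, &rrc);
	WARN_ONCE(r, "protvirt destroy vm failed rc %x rrc %x", rc, rrc);
	if (r) {
		/*
		 * Intentionally leak stor_base, old_table and stor_var in
		 * case the destroy secure VM call fails. The call should
		 * never fail if the hardware is not broken.
		 */
		mmdrop(p->mm);
		goto out_dont_free;
	}
	atomic_dec(&p->mm->context.is_protected);
	mmdrop(p->mm);

	free_pages(p->stor_base, get_order(uv_info.guest_base_stor_len));
	free_pages(p->old_table, CRST_ALLOC_ORDER);
	vfree(p->stor_var);
out_dont_free:
	kfree(p);
	return r;

Returning r at the end would keep the error code from the failed UVC on
the leak path and still return 0 on success.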