Message-ID: <0939aac3-9427-ed04-17e4-3c1e4195d509@linux.ibm.com>
Date: Mon, 7 Feb 2022 09:56:39 +0100
From: Janosch Frank <frankja@...ux.ibm.com>
To: Claudio Imbrenda <imbrenda@...ux.ibm.com>, kvm@...r.kernel.org
Cc: borntraeger@...ibm.com, thuth@...hat.com, pasic@...ux.ibm.com,
david@...hat.com, linux-s390@...r.kernel.org,
linux-kernel@...r.kernel.org, scgl@...ux.ibm.com
Subject: Re: [PATCH v7 01/17] KVM: s390: pv: leak the topmost page table when
destroy fails
On 2/4/22 16:53, Claudio Imbrenda wrote:
> Each secure guest must have a unique ASCE (address space control
> element); we must avoid that new guests use the same page for their
> ASCE, to avoid errors.
>
> Since the ASCE mostly consists of the address of the topmost page table
> (plus some flags), we must not return that memory to the pool unless
> the ASCE is no longer in use.
>
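(Just restating for context, not an objection to the patch: the ASCE is
roughly the physical address of the topmost crst table ORed with control
bits, e.g. what gmap_alloc() does, quoting from memory:

        gmap->asce = atype | _ASCE_TABLE_LENGTH | _ASCE_USER_BITS | __pa(table);

so handing that page back to the allocator and having it picked up as the
topmost table of another secure guest would produce a colliding ASCE.)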
> Only a successful Destroy Secure Configuration UVC will make the ASCE
> reusable again.
>
> If the Destroy Configuration UVC fails, the ASCE cannot be reused for a
> secure guest (either for the ASCE or for other memory areas). To avoid
> a collision, it must not be used again. This is a permanent error and
> the page becomes in practice unusable, so we set it aside and leak it.
> On failure we already leak other memory that belongs to the ultravisor
> (i.e. the variable and base storage for a guest) and not leaking the
> topmost page table was an oversight.
>
> This error (and thus the leakage) should not happen unless the hardware
> is broken or KVM has some unknown serious bug.
>
> Signed-off-by: Claudio Imbrenda <imbrenda@...ux.ibm.com>
> Fixes: 29b40f105ec8d55 ("KVM: s390: protvirt: Add initial vm and cpu lifecycle handling")
> ---
> arch/s390/include/asm/gmap.h | 2 ++
> arch/s390/kvm/pv.c | 9 +++--
> arch/s390/mm/gmap.c | 69 ++++++++++++++++++++++++++++++++++++
> 3 files changed, 77 insertions(+), 3 deletions(-)
>
> diff --git a/arch/s390/include/asm/gmap.h b/arch/s390/include/asm/gmap.h
> index 40264f60b0da..746e18bf8984 100644
> --- a/arch/s390/include/asm/gmap.h
> +++ b/arch/s390/include/asm/gmap.h
> @@ -148,4 +148,6 @@ void gmap_sync_dirty_log_pmd(struct gmap *gmap, unsigned long dirty_bitmap[4],
> unsigned long gaddr, unsigned long vmaddr);
> int gmap_mark_unmergeable(void);
> void s390_reset_acc(struct mm_struct *mm);
> +void s390_remove_old_asce(struct gmap *gmap);
> +int s390_replace_asce(struct gmap *gmap);
> #endif /* _ASM_S390_GMAP_H */
> diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c
> index 7f7c0d6af2ce..3c59ef763dde 100644
> --- a/arch/s390/kvm/pv.c
> +++ b/arch/s390/kvm/pv.c
> @@ -166,10 +166,13 @@ int kvm_s390_pv_deinit_vm(struct kvm *kvm, u16 *rc, u16 *rrc)
> atomic_set(&kvm->mm->context.is_protected, 0);
> KVM_UV_EVENT(kvm, 3, "PROTVIRT DESTROY VM: rc %x rrc %x", *rc, *rrc);
> WARN_ONCE(cc, "protvirt destroy vm failed rc %x rrc %x", *rc, *rrc);
> - /* Inteded memory leak on "impossible" error */
> - if (!cc)
> + /* Intended memory leak on "impossible" error */
> + if (!cc) {
> kvm_s390_pv_dealloc_vm(kvm);
> - return cc ? -EIO : 0;
> + return 0;
> + }
> + s390_replace_asce(kvm->arch.gmap);
> + return -EIO;
> }
>
> int kvm_s390_pv_init_vm(struct kvm *kvm, u16 *rc, u16 *rrc)
> diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
> index dfee0ebb2fac..ce6cac4463f2 100644
> --- a/arch/s390/mm/gmap.c
> +++ b/arch/s390/mm/gmap.c
> @@ -2714,3 +2714,72 @@ void s390_reset_acc(struct mm_struct *mm)
> mmput(mm);
> }
> EXPORT_SYMBOL_GPL(s390_reset_acc);
> +
> +/**
> + * s390_remove_old_asce - Remove the topmost level of page tables from the
> + * list of page tables of the gmap.
> + * @gmap: the gmap whose table is to be removed
> + *
> + * This means that it will not be freed when the VM is torn down, and needs
> + * to be handled separately by the caller, unless a leak is intended.
> + */
> +void s390_remove_old_asce(struct gmap *gmap)
> +{
> + struct page *old;
> +
> + old = virt_to_page(gmap->table);
> + spin_lock(&gmap->guest_table_lock);
> + list_del(&old->lru);
> + /*
> + * in case the ASCE needs to be "removed" multiple times, for example
> + * if the VM is rebooted into secure mode several times
> + * concurrently.
> + */
> + INIT_LIST_HEAD(&old->lru);
> + spin_unlock(&gmap->guest_table_lock);
The patch itself looks fine to me, but there's one oddity which made me
look twice:
You're not overwriting gmap->table here so you can use it in the
function below. I guess that's intentional so it can still be used as a
reference until we switch over to the new ASCE page?
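As far as I can tell the hunk below depends on that, roughly:

        s390_remove_old_asce(gmap);     /* old table off crst_list, gmap->table untouched */
        page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
        ...
        table = page_to_virt(page);
        memcpy(table, gmap->table, 1UL << (CRST_ALLOC_ORDER + PAGE_SHIFT));
        ...
        WRITE_ONCE(gmap->table, table); /* switch-over happens only here */

If that's the intention, maybe spell it out in the kerneldoc of
s390_remove_old_asce(), i.e. that gmap->table is deliberately left pointing
at the old table.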
> +}
> +EXPORT_SYMBOL_GPL(s390_remove_old_asce);
> +
> +/**
> + * s390_replace_asce - Try to replace the current ASCE of a gmap with
> + * another equivalent one.
> + * @gmap: the gmap
> + *
> + * If the allocation of the new top level page table fails, the ASCE is not
> + * replaced.
> + * In any case, the old ASCE is always removed from the list. Therefore the
> + * caller has to make sure to save a pointer to it beforehand, unless a
> + * leak is intended.
> + */
> +int s390_replace_asce(struct gmap *gmap)
> +{
> + unsigned long asce;
> + struct page *page;
> + void *table;
> +
> + s390_remove_old_asce(gmap);
> +
> + page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
> + if (!page)
> + return -ENOMEM;
> + table = page_to_virt(page);
> + memcpy(table, gmap->table, 1UL << (CRST_ALLOC_ORDER + PAGE_SHIFT));
> +
> + /*
> + * The caller has to deal with the old ASCE, but here we make sure
> + * the new one is properly added to the list of page tables, so that
> + * it will be freed when the VM is torn down.
> + */
> + spin_lock(&gmap->guest_table_lock);
> + list_add(&page->lru, &gmap->crst_list);
> + spin_unlock(&gmap->guest_table_lock);
> +
> + asce = (gmap->asce & ~PAGE_MASK) | __pa(table);
Please add a comment:
Set the new table origin while preserving ASCE control bits like table
type and length.
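Something like this, just to illustrate where I'd put it (untested):

        /*
         * Set the new table origin while preserving the existing ASCE
         * control bits (table type and length).
         */
        asce = (gmap->asce & ~PAGE_MASK) | __pa(table);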
> + WRITE_ONCE(gmap->asce, asce);
> + WRITE_ONCE(gmap->mm->context.gmap_asce, asce);
> + WRITE_ONCE(gmap->table, table);
> +
> + return 0;
> +}
> +EXPORT_SYMBOL_GPL(s390_replace_asce);
>