Message-ID: <20210908205027.4f595c6e@p-imbrenda>
Date: Wed, 8 Sep 2021 20:50:27 +0200
From: Claudio Imbrenda <imbrenda@...ux.ibm.com>
To: Christian Borntraeger <borntraeger@...ibm.com>
Cc: kvm@...r.kernel.org, cohuck@...hat.com, frankja@...ux.ibm.com,
thuth@...hat.com, pasic@...ux.ibm.com, david@...hat.com,
linux-s390@...r.kernel.org, linux-kernel@...r.kernel.org,
Ulrich.Weigand@...ibm.com
Subject: Re: [PATCH v4 02/14] KVM: s390: pv: avoid double free of sida page
On Tue, 31 Aug 2021 15:55:07 +0200
Christian Borntraeger <borntraeger@...ibm.com> wrote:
> On 18.08.21 15:26, Claudio Imbrenda wrote:
> > If kvm_s390_pv_destroy_cpu is called more than once, we risk calling
> > free_page on a random page, since the sidad field is aliased with the
> > gbea, which is not guaranteed to be zero.
> >
> > The solution is to simply return successfully immediately if the vCPU
> > was already non secure.
> >
> > Signed-off-by: Claudio Imbrenda <imbrenda@...ux.ibm.com>
> > Fixes: 19e1227768863a1469797c13ef8fea1af7beac2c ("KVM: S390: protvirt: Introduce instruction data area bounce buffer")
>
> Patch looks good. Do we have any potential case where we call this twice? In other words,
> do we need the Fixes tag with the code as of today or not?
I think so.

If QEMU calls KVM_PV_DISABLE and it fails, some vCPUs might have been
made non-secure, but the VM itself still counts as secure. QEMU can
then call KVM_PV_DISABLE again, which will try to convert all vCPUs to
non-secure again, triggering this bug (a rough userspace-side sketch of
this sequence is at the end of this mail).

This scenario will not happen in practice, unless the hardware is
broken.
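To make the aliasing concrete, here is a minimal standalone sketch in
plain C (not the kernel code; the struct and field names only mirror
the SIE block fields involved) of why a second destroy call without the
early return ends up freeing whatever gbea happens to contain:

#include <stdio.h>
#include <stdint.h>

struct sie_block_sketch {
	union {
		uint64_t gbea;	/* guest breaking-event address (non-secure use) */
		uint64_t sidad;	/* sida origin (secure execution use) */
	};
	uint64_t pv_handle_cpu;	/* 0 once the vCPU is non-secure again */
};

static void destroy_cpu_sketch(struct sie_block_sketch *sb)
{
	if (!sb->pv_handle_cpu) {
		/* The fix: vCPU is already non-secure, nothing to free. */
		printf("already non-secure, returning 0\n");
		return;
	}
	/* First (legitimate) destroy: free the sida page... */
	printf("freeing sida page at 0x%llx\n", (unsigned long long)sb->sidad);
	/* ...then the storage is reused as gbea; sidad is gone now. */
	sb->pv_handle_cpu = 0;
	sb->gbea = 1;
}

int main(void)
{
	struct sie_block_sketch sb = { .sidad = 0x12340000ULL, .pv_handle_cpu = 1 };

	destroy_cpu_sketch(&sb);
	/*
	 * Without the early-return check above, this second call would
	 * "free" whatever gbea happens to hold, i.e. a random page.
	 */
	destroy_cpu_sketch(&sb);
	return 0;
}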
> > ---
> > arch/s390/kvm/pv.c | 19 +++++++++----------
> > 1 file changed, 9 insertions(+), 10 deletions(-)
> >
> > diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c
> > index c8841f476e91..0a854115100b 100644
> > --- a/arch/s390/kvm/pv.c
> > +++ b/arch/s390/kvm/pv.c
> > @@ -16,18 +16,17 @@
> >
> >  int kvm_s390_pv_destroy_cpu(struct kvm_vcpu *vcpu, u16 *rc, u16 *rrc)
> >  {
> > -	int cc = 0;
> > +	int cc;
> >
> > -	if (kvm_s390_pv_cpu_get_handle(vcpu)) {
> > -		cc = uv_cmd_nodata(kvm_s390_pv_cpu_get_handle(vcpu),
> > -				   UVC_CMD_DESTROY_SEC_CPU, rc, rrc);
> > +	if (!kvm_s390_pv_cpu_get_handle(vcpu))
> > +		return 0;
> > +
> > +	cc = uv_cmd_nodata(kvm_s390_pv_cpu_get_handle(vcpu), UVC_CMD_DESTROY_SEC_CPU, rc, rrc);
> > +
> > +	KVM_UV_EVENT(vcpu->kvm, 3, "PROTVIRT DESTROY VCPU %d: rc %x rrc %x",
> > +		     vcpu->vcpu_id, *rc, *rrc);
> > +	WARN_ONCE(cc, "protvirt destroy cpu failed rc %x rrc %x", *rc, *rrc);
> >
> > -	KVM_UV_EVENT(vcpu->kvm, 3,
> > -		     "PROTVIRT DESTROY VCPU %d: rc %x rrc %x",
> > -		     vcpu->vcpu_id, *rc, *rrc);
> > -	WARN_ONCE(cc, "protvirt destroy cpu failed rc %x rrc %x",
> > -		  *rc, *rrc);
> > -	}
> >  	/* Intended memory leak for something that should never happen. */
> >  	if (!cc)
> >  		free_pages(vcpu->arch.pv.stor_base,
> >
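For completeness, a rough userspace-side sketch of the retry sequence
described above (this is not how QEMU actually structures its code; it
only shows the two KVM_PV_DISABLE calls that would reach
kvm_s390_pv_destroy_cpu() twice for some vCPUs):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int pv_disable(int vm_fd)
{
	struct kvm_pv_cmd cmd;

	memset(&cmd, 0, sizeof(cmd));
	cmd.cmd = KVM_PV_DISABLE;
	/* rc/rrc are filled in by the kernel on return */
	return ioctl(vm_fd, KVM_S390_PV_COMMAND, &cmd);
}

int disable_with_retry(int vm_fd)
{
	if (pv_disable(vm_fd) == 0)
		return 0;
	/*
	 * First attempt failed partway: some vCPUs are already non-secure,
	 * but the VM still counts as secure.  Retrying goes through the
	 * vCPU destroy path again, which is what would hit the double free
	 * without this patch.
	 */
	return pv_disable(vm_fd);
}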