Message-ID: <ZCK96ohvWRY12zZ3@li-a450e7cc-27df-11b2-a85c-b5a9ac31e8ef.ibm.com>
Date: Tue, 28 Mar 2023 15:44:02 +0530
From: Kautuk Consul <kconsul@...ux.vnet.ibm.com>
To: Michael Ellerman <mpe@...erman.id.au>
Cc: Nicholas Piggin <npiggin@...il.com>,
Christophe Leroy <christophe.leroy@...roup.eu>,
Fabiano Rosas <farosas@...ux.ibm.com>,
linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] arch/powerpc/kvm: kvmppc_core_vcpu_create_hv: check for kzalloc failure
On 2023-03-28 20:44:48, Michael Ellerman wrote:
> Kautuk Consul <kconsul@...ux.vnet.ibm.com> writes:
> > kvmppc_vcore_create() might not be able to allocate memory through
> > kzalloc. In that case the kvm->arch.online_vcores shouldn't be
> > incremented.
>
> I agree that looks wrong.
>
> Have you tried to test what goes wrong if it fails? It looks like it
> will break the LPCR update, which likely will cause the guest to crash
> horribly.
I'm not sure about the LPCR update, but both with and without the patch
qemu exits and the KVM context is torn down cleanly.
>
> You could use CONFIG_FAIL_SLAB and fail-nth etc. to fail just one
> allocation for a guest. Or probably easier to just hack the code to fail
> the 4th time it's called using a static counter.
I am using live debugging: I set the r3 return value to 0x0 after the
call to kzalloc to simulate the allocation failure.
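For reference, I guess the static-counter hack you mention would look
roughly like this (just a sketch, debug-only and not for merging; the
counter name and the hardcoded 4 are illustrative):

	/* Debug-only sketch: skip the 4th vcore allocation entirely so
	 * it looks like kzalloc failed, exercising the error path. */
	static int vcore_create_calls;

	if (++vcore_create_calls == 4)
		vcore = NULL;		/* simulate allocation failure */
	else
		vcore = kvmppc_vcore_create(kvm,
				id & ~(kvm->arch.smt_mode - 1));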
>
> Doesn't really matter but could be interesting.
Both with and without this patch qemu quits with:
qemu-system-ppc64: kvm_init_vcpu: kvm_get_vcpu failed (0): Cannot allocate memory
That's because qemu shuts down whenever any vcpu fails to be allocated.
>
> > Add a check for kzalloc failure and return with -ENOMEM from
> > kvmppc_core_vcpu_create_hv().
> >
> > Signed-off-by: Kautuk Consul <kconsul@...ux.vnet.ibm.com>
> > ---
> >  arch/powerpc/kvm/book3s_hv.c | 10 +++++++---
> >  1 file changed, 7 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> > index 6ba68dd6190b..e29ee755c920 100644
> > --- a/arch/powerpc/kvm/book3s_hv.c
> > +++ b/arch/powerpc/kvm/book3s_hv.c
> > @@ -2968,13 +2968,17 @@ static int kvmppc_core_vcpu_create_hv(struct kvm_vcpu *vcpu)
> >  			pr_devel("KVM: collision on id %u", id);
> >  			vcore = NULL;
> >  		} else if (!vcore) {
> > +			vcore = kvmppc_vcore_create(kvm,
> > +					id & ~(kvm->arch.smt_mode - 1));
>
> That line doesn't need to be wrapped, we allow 90 columns.
>
> > +			if (unlikely(!vcore)) {
> > +				mutex_unlock(&kvm->lock);
> > +				return -ENOMEM;
> > +			}
>
> Rather than introducing a new return point here, I think it would be
> preferable to use the existing !vcore case below.
>
> >  			/*
> >  			 * Take mmu_setup_lock for mutual exclusion
> >  			 * with kvmppc_update_lpcr().
> >  			 */
> > -			err = -ENOMEM;
> > -			vcore = kvmppc_vcore_create(kvm,
> > -					id & ~(kvm->arch.smt_mode - 1));
>
> So leave that as is (maybe move the comment down).
>
> And wrap the below in:
>
> +			if (vcore) {
>
> >  			mutex_lock(&kvm->arch.mmu_setup_lock);
> >  			kvm->arch.vcores[core] = vcore;
> >  			kvm->arch.online_vcores++;
>
> 			mutex_unlock(&kvm->arch.mmu_setup_lock);
> +			}
> 		}
> 	}
>
> Meaning the vcore == NULL case will fall through to here and return via
> this existing path:
>
> 	mutex_unlock(&kvm->lock);
>
> 	if (!vcore)
> 		return err;
>
>
> cheers
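If I read your suggestion right, the end result would be roughly the
following (a sketch against the current tree, with the call unwrapped
since 90 columns are allowed, and the comment moved down next to the
lock it describes):

		} else if (!vcore) {
			err = -ENOMEM;
			vcore = kvmppc_vcore_create(kvm, id & ~(kvm->arch.smt_mode - 1));
			if (vcore) {
				/*
				 * Take mmu_setup_lock for mutual exclusion
				 * with kvmppc_update_lpcr().
				 */
				mutex_lock(&kvm->arch.mmu_setup_lock);
				kvm->arch.vcores[core] = vcore;
				kvm->arch.online_vcores++;
				mutex_unlock(&kvm->arch.mmu_setup_lock);
			}
		}
	}

so the vcore == NULL case falls through to the existing
"if (!vcore) return err;" exit path with no new return point.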