Message-ID: <YXg7qymDdWZGm3Hm@sashalap>
Date: Tue, 26 Oct 2021 13:32:27 -0400
From: Sasha Levin <sashal@...nel.org>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: linux-kernel@...r.kernel.org, stable@...r.kernel.org,
tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
dave.hansen@...ux.intel.com, x86@...nel.org, kvm@...r.kernel.org
Subject: Re: [PATCH MANUALSEL 5.14 1/5] KVM: X86: fix lazy allocation of rmaps

On Tue, Oct 26, 2021 at 06:14:34PM +0200, Paolo Bonzini wrote:
>On 25/10/21 22:38, Sasha Levin wrote:
>>From: Paolo Bonzini <pbonzini@...hat.com>
>>
>>[ Upstream commit fa13843d1565d4c5b3aeb9be3343b313416bef46 ]
>>
>>If allocation of rmaps fails, but some of the pointers have already been
>>written, those pointers can be cleaned up when the memslot is freed, or even
>>reused later for another attempt at allocating the rmaps. Therefore there is
>>no need to WARN, as done for example in memslot_rmap_alloc, but the allocation
>>*must* be skipped lest KVM overwrite the previous pointer and indeed leak
>>memory (see the sketch after the quoted patch below).
>>
>>Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
>>Signed-off-by: Sasha Levin <sashal@...nel.org>
>>---
>> arch/x86/kvm/x86.c | 3 ++-
>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>
>>diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>>index 4b0e866e9f08..60d9aa0ab389 100644
>>--- a/arch/x86/kvm/x86.c
>>+++ b/arch/x86/kvm/x86.c
>>@@ -11341,7 +11341,8 @@ static int memslot_rmap_alloc(struct kvm_memory_slot *slot,
>> 		int lpages = gfn_to_index(slot->base_gfn + npages - 1,
>> 					  slot->base_gfn, level) + 1;
>>
>>-		WARN_ON(slot->arch.rmap[i]);
>>+		if (slot->arch.rmap[i])
>>+			continue;
>>
>> 		slot->arch.rmap[i] = kvcalloc(lpages, sz, GFP_KERNEL_ACCOUNT);
>> 		if (!slot->arch.rmap[i]) {
>>
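A minimal userspace sketch of the loop being patched, to illustrate the point
of the change (calloc stands in for kvcalloc, the slot/level bookkeeping is
simplified, and the names are illustrative rather than the kernel's): after a
partial failure, a later attempt has to skip levels that already have an rmap,
because overwriting the pointer would leak the earlier allocation.

#include <stdio.h>
#include <stdlib.h>

#define NR_LEVELS 3			/* stand-in for the rmap levels */

struct fake_slot {
	void *rmap[NR_LEVELS];		/* stand-in for slot->arch.rmap[] */
};

/* Model of the loop after the fix: keep any rmap left over from a
 * previous, partially failed attempt instead of overwriting (and thus
 * leaking) it. */
static int rmap_alloc_model(struct fake_slot *slot, size_t lpages)
{
	for (int i = 0; i < NR_LEVELS; i++) {
		if (slot->rmap[i])
			continue;	/* reuse the earlier allocation */

		slot->rmap[i] = calloc(lpages, sizeof(void *));
		if (!slot->rmap[i])
			return -1;	/* caller may retry later */
	}
	return 0;
}

static void rmap_free_model(struct fake_slot *slot)
{
	for (int i = 0; i < NR_LEVELS; i++) {
		free(slot->rmap[i]);
		slot->rmap[i] = NULL;
	}
}

int main(void)
{
	struct fake_slot slot = { 0 };

	/* Pretend a previous attempt already populated level 0. */
	slot.rmap[0] = calloc(16, sizeof(void *));

	/* A second attempt must not overwrite slot.rmap[0]. */
	if (rmap_alloc_model(&slot, 16) == 0)
		printf("all levels allocated, level 0 reused\n");

	rmap_free_model(&slot);
	return 0;
}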
>
>NACK
>
>There is no lazy allocation of rmaps in 5.14, and any failure to
>allocate goes straight to memslot_rmap_free followed by return
>-ENOMEM. So the WARN_ON is justified there.
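A rough sketch of the 5.14-era flow described above, building on the
struct fake_slot, rmap_free_model and headers from the earlier sketch (again
illustrative userspace C, not the actual memslot_rmap_alloc code): allocation
happens once per memslot, and any failure tears everything down and returns
-ENOMEM, so a leftover pointer really would be a bug and warning about it is
the right check.

#include <errno.h>	/* plus the definitions from the sketch above */

/* Non-lazy model: the rmaps are allocated exactly once when the memslot
 * is set up, and any failure frees everything and returns -ENOMEM.  No
 * pointer can survive into a later attempt, so a non-NULL entry here is
 * a genuine bug. */
static int rmap_alloc_once_model(struct fake_slot *slot, size_t lpages)
{
	for (int i = 0; i < NR_LEVELS; i++) {
		/* WARN_ON(slot->arch.rmap[i]) in the real code */
		slot->rmap[i] = calloc(lpages, sizeof(void *));
		if (!slot->rmap[i]) {
			rmap_free_model(slot);	/* tear down partial state */
			return -ENOMEM;
		}
	}
	return 0;
}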

I'll queue up the ones you've acked, thanks :)
--
Thanks,
Sasha