Message-ID: <20200320221708.GF127076@xz-x1>
Date: Fri, 20 Mar 2020 18:17:08 -0400
From: Peter Xu <peterx@...hat.com>
To: Sean Christopherson <sean.j.christopherson@...el.com>
Cc: Christian Borntraeger <borntraeger@...ibm.com>,
Janosch Frank <frankja@...ux.ibm.com>,
Paolo Bonzini <pbonzini@...hat.com>,
David Hildenbrand <david@...hat.com>,
Cornelia Huck <cohuck@...hat.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, Qian Cai <cai@....pw>
Subject: Re: [PATCH 1/7] KVM: Fix out of range accesses to memslots
On Fri, Mar 20, 2020 at 01:55:40PM -0700, Sean Christopherson wrote:
> Reset the LRU slot if it becomes invalid when deleting a memslot to fix
> an out-of-bounds/use-after-free access when searching through memslots.
>
> Explicitly check for there being no used slots in search_memslots(), and
> in the caller of s390's approximation variant.
>
> Fixes: 36947254e5f9 ("KVM: Dynamically size memslot array based on number of used slots")
> Reported-by: Qian Cai <cai@....pw>
> Cc: Peter Xu <peterx@...hat.com>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@...el.com>
> ---
> arch/s390/kvm/kvm-s390.c | 3 +++
> include/linux/kvm_host.h | 3 +++
> virt/kvm/kvm_main.c | 3 +++
> 3 files changed, 9 insertions(+)
>
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index 807ed6d722dd..cb15fdda1fee 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -2002,6 +2002,9 @@ static int kvm_s390_get_cmma(struct kvm *kvm, struct kvm_s390_cmma_log *args,
> struct kvm_memslots *slots = kvm_memslots(kvm);
> struct kvm_memory_slot *ms;
>
> + if (unlikely(!slots->used_slots))
> + return 0;
> +
> cur_gfn = kvm_s390_next_dirty_cmma(slots, args->start_gfn);
> ms = gfn_to_memslot(kvm, cur_gfn);
> args->count = 0;
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 35bc52e187a2..b19dee4ed7d9 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1032,6 +1032,9 @@ search_memslots(struct kvm_memslots *slots, gfn_t gfn)
> int slot = atomic_read(&slots->lru_slot);
> struct kvm_memory_slot *memslots = slots->memslots;
>
> + if (unlikely(!slots->used_slots))
> + return NULL;
> +
> if (gfn >= memslots[slot].base_gfn &&
> gfn < memslots[slot].base_gfn + memslots[slot].npages)
> return &memslots[slot];
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 28eae681859f..f744bc603c53 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -882,6 +882,9 @@ static inline void kvm_memslot_delete(struct kvm_memslots *slots,
>
> slots->used_slots--;
>
> + if (atomic_read(&slots->lru_slot) >= slots->used_slots)
> + atomic_set(&slots->lru_slot, 0);

Nit: could we drop the atomic ops?  The "slots" copy is still only used
by the current thread before the RCU assignment, so IIRC plain accesses
are fine (and the RCU assignment should provide the memory barrier when
needed anyway).

I had assumed resetting lru_slot to zero when duplicating the slots
would be good enough...  However, if we want a finer-grained reset,
maybe it would be even better to also reset when the cached slot is one
of the invalidated ones (since we have that information), e.g.:

	if (slots->lru_slot >= slots->id_to_index[memslot->id])
		slots->lru_slot = 0;
Thanks,
> +
> for (i = slots->id_to_index[memslot->id]; i < slots->used_slots; i++) {
> mslots[i] = mslots[i + 1];
> slots->id_to_index[mslots[i].id] = i;
> --
> 2.24.1
>
--
Peter Xu