lists.openwall.net | Open Source and information security mailing list archives
Date: Fri, 28 Jan 2022 22:36:42 +0100
From: "Maciej S. Szmigiero" <mail@...iej.szmigiero.name>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: Sean Christopherson <seanjc@...gle.com>,
	Vitaly Kuznetsov <vkuznets@...hat.com>,
	Wanpeng Li <wanpengli@...cent.com>,
	Jim Mattson <jmattson@...gle.com>,
	Joerg Roedel <joro@...tes.org>,
	Michal Hocko <mhocko@...e.com>,
	kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: [PATCH] KVM: x86: Fix rmap allocation for very large memslots

From: "Maciej S. Szmigiero" <maciej.szmigiero@...cle.com>

Commit 7661809d493b ("mm: don't allow oversized kvmalloc() calls") forbade
using kvmalloc() for allocations larger than INT_MAX (2 GiB).

Unfortunately, adding a memslot exceeding 1 TiB in size makes the rmap code
attempt an allocation exceeding this limit.
Besides failing, such an allocation also triggers the WARN_ON_ONCE() added
by the aforementioned commit.

Since we probably still want to use the kernel slab for small rmap
allocations, let's redirect only such oversized allocations to vmalloc.

A possible alternative would be to add some kind of a __GFP_LARGE flag to
skip the INT_MAX check behind kvmalloc(); however, this would impact the
common kernel memory allocation code, not just KVM.

Fixes: a7c3e901a4 ("mm: introduce kv[mz]alloc helpers")
Cc: stable@...r.kernel.org
Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@...cle.com>
---
 arch/x86/kvm/x86.c | 26 +++++++++++++++++++-------
 1 file changed, 19 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8033eca6f3a1..c64bac8614c7 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11806,24 +11806,36 @@ void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
 
 int memslot_rmap_alloc(struct kvm_memory_slot *slot, unsigned long npages)
 {
-	const int sz = sizeof(*slot->arch.rmap[0]);
+	const size_t sz = sizeof(*slot->arch.rmap[0]);
 	int i;
 
 	for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) {
 		int level = i + 1;
-		int lpages = __kvm_mmu_slot_lpages(slot, npages, level);
+		size_t lpages = __kvm_mmu_slot_lpages(slot, npages, level);
+		size_t rmap_size;
 
 		if (slot->arch.rmap[i])
 			continue;
 
-		slot->arch.rmap[i] = kvcalloc(lpages, sz, GFP_KERNEL_ACCOUNT);
-		if (!slot->arch.rmap[i]) {
-			memslot_rmap_free(slot);
-			return -ENOMEM;
-		}
+		if (unlikely(check_mul_overflow(lpages, sz, &rmap_size)))
+			goto ret_fail;
+
+		/* kvzalloc() only allows sizes up to INT_MAX */
+		if (unlikely(rmap_size > INT_MAX))
+			slot->arch.rmap[i] = __vmalloc(rmap_size,
+						       GFP_KERNEL_ACCOUNT | __GFP_ZERO);
+		else
+			slot->arch.rmap[i] = kvzalloc(rmap_size, GFP_KERNEL_ACCOUNT);
+
+		if (!slot->arch.rmap[i])
+			goto ret_fail;
 	}
 
 	return 0;
+
+ret_fail:
+	memslot_rmap_free(slot);
+	return -ENOMEM;
 }
 
 static int kvm_alloc_memslot_metadata(struct kvm *kvm,