Message-Id: <20200109235620.6536-3-sean.j.christopherson@intel.com>
Date: Thu, 9 Jan 2020 15:56:19 -0800
From: Sean Christopherson <sean.j.christopherson@...el.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Jim Mattson <jmattson@...gle.com>,
Andrew Honig <ahonig@...gle.com>,
Barret Rhoden <brho@...gle.com>
Subject: [PATCH 2/3] KVM: Clean up __kvm_gfn_to_hva_cache_init() and its callers

Barret reported a (technically benign) bug where nr_pages_avail can be
accessed without being initialized if gfn_to_hva_many() fails.

  virt/kvm/kvm_main.c:2193:13: warning: 'nr_pages_avail' may be
  used uninitialized in this function [-Wmaybe-uninitialized]

Rather than simply squashing the warning by initializing nr_pages_avail,
fix the underlying issues by reworking __kvm_gfn_to_hva_cache_init() to
return immediately instead of continuing on. Now that all callers check
the result and/or bail immediately on a bad hva, there's no need to
explicitly nullify the memslot on error.

Reported-by: Barret Rhoden <brho@...gle.com>
Fixes: f1b9dd5eb86c ("kvm: Disallow wraparound in kvm_gfn_to_hva_cache_init")
Cc: Jim Mattson <jmattson@...gle.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@...el.com>
---
virt/kvm/kvm_main.c | 21 ++++++++++++---------
1 file changed, 12 insertions(+), 9 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index c02c073b9d43..e8d8fc1d72b4 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2155,33 +2155,36 @@ static int __kvm_gfn_to_hva_cache_init(struct kvm_memslots *slots,
gfn_t end_gfn = (gpa + len - 1) >> PAGE_SHIFT;
gfn_t nr_pages_needed = end_gfn - start_gfn + 1;
gfn_t nr_pages_avail;
- int r = start_gfn <= end_gfn ? 0 : -EINVAL;
- ghc->gpa = gpa;
+ /* Update ghc->generation before performing any error checks. */
ghc->generation = slots->generation;
- ghc->len = len;
- ghc->hva = KVM_HVA_ERR_BAD;
+
+ if (start_gfn > end_gfn) {
+ ghc->hva = KVM_HVA_ERR_BAD;
+ return -EINVAL;
+ }
/*
* If the requested region crosses two memslots, we still
* verify that the entire region is valid here.
*/
- while (!r && start_gfn <= end_gfn) {
+ for ( ; start_gfn <= end_gfn; start_gfn += nr_pages_avail) {
ghc->memslot = __gfn_to_memslot(slots, start_gfn);
ghc->hva = gfn_to_hva_many(ghc->memslot, start_gfn,
&nr_pages_avail);
if (kvm_is_error_hva(ghc->hva))
- r = -EFAULT;
- start_gfn += nr_pages_avail;
+ return -EFAULT;
}
/* Use the slow path for cross page reads and writes. */
- if (!r && nr_pages_needed == 1)
+ if (nr_pages_needed == 1)
ghc->hva += offset;
else
ghc->memslot = NULL;
- return r;
+ ghc->gpa = gpa;
+ ghc->len = len;
+ return 0;
}
int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
--
2.24.1