Message-ID: <20150408092611.GA2164@potion.brq.redhat.com>
Date: Wed, 8 Apr 2015 11:26:12 +0200
From: Radim Krčmář <rkrcmar@...hat.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH] KVM: dirty all pages in kvm_write_guest_cached()
2015-04-08 10:49+0200, Paolo Bonzini:
> On 07/04/2015 22:34, Radim Krčmář wrote:
> > We dirtied only one page because writes originally couldn't span more.
> > Use improved syntax for '>> PAGE_SHIFT' while at it.
> >
> > Fixes: 8f964525a121 ("KVM: Allow cross page reads and writes from cached translations.")
> > Signed-off-by: Radim Krčmář <rkrcmar@...hat.com>
>
> Cross-page reads and writes should never get here; they have
> ghc->memslot set to NULL and go through the slow path in kvm_write_guest.
Only cross-memslot writes have a NULL memslot.
> What am I missing?
kvm_gfn_to_hva_cache_init() queries how many pages remain in the
memslot and compares that with the number of pages the write needs.
If the write fits within the memslot, it goes through the cached fast
path rather than kvm_write_guest, regardless of how many pages it spans.
The relevant code path in kvm_gfn_to_hva_cache_init():
  gfn_t nr_pages_needed = end_gfn - start_gfn + 1;
  ghc->memslot = gfn_to_memslot(kvm, start_gfn);
  ghc->hva = gfn_to_hva_many(ghc->memslot, start_gfn, &nr_pages_avail);
  /* a multi-page write keeps the cache as long as the memslot has room */
  if (!kvm_is_error_hva(ghc->hva) && nr_pages_avail >= nr_pages_needed)
          ghc->hva += offset;
  return 0;
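
To illustrate why this matters for dirty logging: a write that crosses a
page boundary but stays inside one memslot keeps ghc->memslot non-NULL and
takes the cached fast path, so every page it touches has to be marked
dirty, not only the first. A minimal sketch of that idea (not the actual
patch; the helper name is made up, and the exported mark_page_dirty() is
used instead of the static mark_page_dirty_in_slot()):

  /*
   * Hypothetical helper, only to illustrate the fix: mark every gfn
   * covered by [gpa, gpa + len) as dirty after a cached write.
   */
  static void mark_cached_write_dirty(struct kvm *kvm, gpa_t gpa,
                                      unsigned long len)
  {
          gfn_t gfn = gpa >> PAGE_SHIFT;
          gfn_t end_gfn = (gpa + len - 1) >> PAGE_SHIFT;

          /* walk every page the write touched, not just the first */
          for (; gfn <= end_gfn; gfn++)
                  mark_page_dirty(kvm, gfn);
  }

kvm_write_guest_cached() would do something like this after a successful
__copy_to_user(), instead of dirtying only the first page.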