Message-ID: <4FDF1AFE.4000607@redhat.com>
Date: Mon, 18 Jun 2012 15:11:42 +0300
From: Avi Kivity <avi@...hat.com>
To: Takuya Yoshikawa <yoshikawa.takuya@....ntt.co.jp>
CC: mtosatti@...hat.com, agraf@...e.de, paulus@...ba.org,
aarcange@...hat.com, kvm@...r.kernel.org, kvm-ppc@...r.kernel.org,
linux-kernel@...r.kernel.org, takuya.yoshikawa@...il.com
Subject: Re: [PATCH 3/4] KVM: MMU: Make kvm_handle_hva() handle range of addresses
On 06/15/2012 02:32 PM, Takuya Yoshikawa wrote:
> When guest's memory is backed by THP pages, MMU notifier needs to call
> kvm_unmap_hva(), which in turn leads to kvm_handle_hva(), in a loop to
> invalidate a range of pages which constitute one huge page:
>
> for each guest page
> for each memslot
> if page is in memslot
> unmap using rmap
>
> This means that although every page in that range is expected to be found
> in the same memslot, we are forced to check unrelated memslots many times.
> The more memslots the guest has, the worse the situation becomes.
>
> This patch, together with the following patch, solves this problem by
> introducing kvm_handle_hva_range() which makes the loop look like this:
>
> for each memslot
> for each guest page in memslot
> unmap using rmap
>
> In this new processing, the actual work is converted to a loop over the
> rmap array, which is much more cache friendly than before.
Moreover, if the pages are in no slot (munmap of some non-guest memory),
then we're iterating over all those pages for no purpose.
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index ba57b3b..3629f9b 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -1185,10 +1185,13 @@ static int kvm_set_pte_rmapp(struct kvm *kvm, unsigned long *rmapp,
> return 0;
> }
>
> -static int kvm_handle_hva(struct kvm *kvm, unsigned long hva,
> - unsigned long data,
> - int (*handler)(struct kvm *kvm, unsigned long *rmapp,
> - unsigned long data))
> +static int kvm_handle_hva_range(struct kvm *kvm,
> + unsigned long start_hva,
> + unsigned long end_hva,
> + unsigned long data,
> + int (*handler)(struct kvm *kvm,
> + unsigned long *rmapp,
> + unsigned long data))
> {
> int j;
> int ret;
> @@ -1199,10 +1202,13 @@ static int kvm_handle_hva(struct kvm *kvm, unsigned long hva,
> slots = kvm_memslots(kvm);
>
> kvm_for_each_memslot(memslot, slots) {
> - gfn_t gfn = hva_to_gfn(hva, memslot);
> + gfn_t gfn = hva_to_gfn(start_hva, memslot);
> + gfn_t end_gfn = hva_to_gfn(end_hva, memslot);
These will return meaningless results for hvas outside the memslot, which
you then use in the min/max below, no?
> +
> + gfn = max(gfn, memslot->base_gfn);
> + end_gfn = min(end_gfn, memslot->base_gfn + memslot->npages);
>
> - if (gfn >= memslot->base_gfn &&
> - gfn < memslot->base_gfn + memslot->npages) {
> + for (; gfn < end_gfn; gfn++) {
> ret = 0;
>
> for (j = PT_PAGE_TABLE_LEVEL;
> @@ -1212,7 +1218,9 @@ static int kvm_handle_hva(struct kvm *kvm, unsigned long hva,
> rmapp = __gfn_to_rmap(gfn, j, memslot);
> ret |= handler(kvm, rmapp, data);
Potential for improvement: don't do 512 iterations on the same large page.
Something like

    if ((gfn ^ prev_gfn) & mask(level))
        ret |= handler(...)

with clever selection of the first prev_gfn so it always mismatches (~gfn
maybe).
--
error compiling committee.c: too many arguments to function