Message-ID: <20130110174956.GC25050@amt.cnet>
Date: Thu, 10 Jan 2013 15:49:56 -0200
From: Marcelo Tosatti <mtosatti@...hat.com>
To: Takuya Yoshikawa <yoshikawa_takuya_b1@....ntt.co.jp>
Cc: gleb@...hat.com, kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/7 -v2] KVM: Alleviate mmu_lock hold time when we start
dirty logging
On Tue, Jan 08, 2013 at 07:42:38PM +0900, Takuya Yoshikawa wrote:
> Changelog v1->v2:
> The condition in patch 1 was changed like this:
> npages && (mem->flags & KVM_MEM_LOG_DIRTY_PAGES)
>
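For readers skimming the archive: if I read patch 1 correctly, the check
above gates the write protection done in kvm_arch_commit_memory_region(),
roughly like this (a paraphrase for illustration, not a verbatim quote of
the patch):

	/*
	 * Only write protect the slot when dirty logging is being
	 * enabled and the slot actually has pages; slot deletion and
	 * slots without KVM_MEM_LOG_DIRTY_PAGES skip the expensive
	 * write protection entirely.
	 */
	if (npages && (mem->flags & KVM_MEM_LOG_DIRTY_PAGES))
		kvm_mmu_slot_remove_write_access(kvm, mem->slot);
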
> This patch set makes kvm_mmu_slot_remove_write_access() rmap based and
> adds conditional rescheduling to it.
>
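Sketching, for context, roughly what the rmap-based version with
conditional rescheduling looks like after the series (my paraphrase of
patches 3 and 7, so details may differ from the actual code):

	void kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot)
	{
		struct kvm_memory_slot *memslot;
		gfn_t last_gfn;
		int i;

		memslot = id_to_memslot(kvm->memslots, slot);
		last_gfn = memslot->base_gfn + memslot->npages - 1;

		spin_lock(&kvm->mmu_lock);

		/* Walk the slot's rmap array for every page size. */
		for (i = PT_PAGE_TABLE_LEVEL;
		     i < PT_PAGE_TABLE_LEVEL + KVM_NR_PAGE_SIZES; ++i) {
			unsigned long *rmapp;
			unsigned long last_index, index;

			rmapp = memslot->arch.rmap[i - PT_PAGE_TABLE_LEVEL];
			last_index = gfn_to_index(last_gfn, memslot->base_gfn, i);

			for (index = 0; index <= last_index; ++index, ++rmapp) {
				if (*rmapp)
					__rmap_write_protect(kvm, rmapp, false);

				/*
				 * Flush TLBs before yielding the lock so no
				 * vCPU keeps a stale writable translation for
				 * an spte we already write protected.
				 */
				if (need_resched() ||
				    spin_needbreak(&kvm->mmu_lock)) {
					kvm_flush_remote_tlbs(kvm);
					cond_resched_lock(&kvm->mmu_lock);
				}
			}
		}

		kvm_flush_remote_tlbs(kvm);
		spin_unlock(&kvm->mmu_lock);
	}

Dropping and retaking mmu_lock via cond_resched_lock() only when
need_resched() or spin_needbreak() fires keeps the common case cheap
while bounding the hold time for huge slots.
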
> The motivation for this change is of course to reduce the mmu_lock hold
> time when we start dirty logging for a large memory slot. You may not
> see the problem if you give the guest only 8GB or less of memory with
> THP enabled on the host -- this series is for the worst case.
>
> Takuya Yoshikawa (7):
> KVM: Write protect the updated slot only when dirty logging is enabled
> KVM: MMU: Remove unused parameter level from __rmap_write_protect()
> KVM: MMU: Make kvm_mmu_slot_remove_write_access() rmap based
> KVM: Remove unused slot_bitmap from kvm_mmu_page
> KVM: Make kvm_mmu_change_mmu_pages() take mmu_lock by itself
> KVM: Make kvm_mmu_slot_remove_write_access() take mmu_lock by itself
> KVM: MMU: Conditionally reschedule when kvm_mmu_slot_remove_write_access() takes a long time
>
> Documentation/virtual/kvm/mmu.txt | 7 ----
> arch/x86/include/asm/kvm_host.h | 5 ---
> arch/x86/kvm/mmu.c | 56 +++++++++++++++++++-----------------
> arch/x86/kvm/x86.c | 12 ++++---
> virt/kvm/kvm_main.c | 1 -
> 5 files changed, 37 insertions(+), 44 deletions(-)
>
> --
> 1.7.5.4
Reviewed-by: Marcelo Tosatti <mtosatti@...hat.com>