Message-ID: <20181220144345.GB19579@flask>
Date: Thu, 20 Dec 2018 15:43:45 +0100
From: Radim Krčmář <rkrcmar@...hat.com>
To: Wanpeng Li <kernellwp@...il.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
Paolo Bonzini <pbonzini@...hat.com>
Subject: Re: [PATCH] KVM: MMU: Introduce single thread to zap collapsible
sptes
2018-12-06 15:58+0800, Wanpeng Li:
> From: Wanpeng Li <wanpengli@...cent.com>
>
> Last year, engineers from Huawei reported that a call to memory_global_dirty_log_start/stop()
> takes 13s for a guest with 4TB of memory, freezing the guest for too long and adding
> unacceptable migration downtime. [1] [2]
>
> Guangrong pointed out:
>
> | collapsible_sptes zaps 4k mappings to make memory-read happy, it is not
> | required by the semantics of KVM_SET_USER_MEMORY_REGION and it is not
> | urgent for vCPU's running, it could be done in a separate thread and use
> | lock-break technology.
>
> [1] https://lists.gnu.org/archive/html/qemu-devel/2017-04/msg05249.html
> [2] https://www.mail-archive.com/qemu-devel@nongnu.org/msg449994.html
>
> Guests with several TB of memory are common now that NVDIMM is deployed in cloud environments.
> This patch uses a worker thread to zap collapsible sptes, lazily collapsing small sptes
> into large sptes during the rollback after a live migration fails.
>
> Cc: Paolo Bonzini <pbonzini@...hat.com>
> Cc: Radim Krčmář <rkrcmar@...hat.com>
> Signed-off-by: Wanpeng Li <wanpengli@...cent.com>
> ---
> @@ -5679,14 +5679,41 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
>          return need_tlb_flush;
>  }
> 
> +void zap_collapsible_sptes_fn(struct work_struct *work)
> +{
> +        struct kvm_memory_slot *memslot;
> +        struct kvm_memslots *slots;
> +        struct delayed_work *dwork = to_delayed_work(work);
> +        struct kvm_arch *ka = container_of(dwork, struct kvm_arch,
> +                                kvm_mmu_zap_collapsible_sptes_work);
> +        struct kvm *kvm = container_of(ka, struct kvm, arch);
> +        int i;
> +
> +        mutex_lock(&kvm->slots_lock);
> +        for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
> +                spin_lock(&kvm->mmu_lock);
> +                slots = __kvm_memslots(kvm, i);
> +                kvm_for_each_memslot(memslot, slots) {
> +                        slot_handle_leaf(kvm, (struct kvm_memory_slot *)memslot,
> +                                kvm_mmu_zap_collapsible_spte, true);
> +                        if (need_resched() || spin_needbreak(&kvm->mmu_lock))
> +                                cond_resched_lock(&kvm->mmu_lock);

I think we shouldn't zap all memslots when kvm_mmu_zap_collapsible_sptes
only wanted to zap a specific one.

Please add a list of memslots to be zapped; delete from the list here
and add to it in kvm_mmu_zap_collapsible_sptes(), as in the sketch below.
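
Something along these lines (a completely untested sketch; zap_list and
zap_list_lock in struct kvm_arch, and zap_link in struct kvm_memory_slot,
are made-up names for fields the patch would have to add, with zap_link
set up by INIT_LIST_HEAD() when the memslot is created):

        /* in the worker, instead of walking every memslot */
        spin_lock(&kvm->arch.zap_list_lock);
        while (!list_empty(&kvm->arch.zap_list)) {
                memslot = list_first_entry(&kvm->arch.zap_list,
                                           struct kvm_memory_slot, zap_link);
                list_del_init(&memslot->zap_link);
                spin_unlock(&kvm->arch.zap_list_lock);

                spin_lock(&kvm->mmu_lock);
                slot_handle_leaf(kvm, memslot,
                                 kvm_mmu_zap_collapsible_spte, true);
                spin_unlock(&kvm->mmu_lock);

                spin_lock(&kvm->arch.zap_list_lock);
        }
        spin_unlock(&kvm->arch.zap_list_lock);

Dropping zap_list_lock before taking mmu_lock keeps the lock ordering
simple and lets new memslots be queued while the worker is zapping.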
> +                }
> +                spin_unlock(&kvm->mmu_lock);
> +        }
> +        kvm->arch.zap_in_progress = false;
> +        mutex_unlock(&kvm->slots_lock);
> +}
> +
> +#define KVM_MMU_ZAP_DELAYED (60 * HZ)
>  void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
>                                     const struct kvm_memory_slot *memslot)
>  {
> -        /* FIXME: const-ify all uses of struct kvm_memory_slot. */
> -        spin_lock(&kvm->mmu_lock);
> -        slot_handle_leaf(kvm, (struct kvm_memory_slot *)memslot,
> -                         kvm_mmu_zap_collapsible_spte, true);
> -        spin_unlock(&kvm->mmu_lock);
> +        if (!kvm->arch.zap_in_progress) {

The list can also serve in place of zap_in_progress -- if there are any
elements in it, then there is no need to schedule the work again.
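
E.g. (again untested, reusing the hypothetical zap_list/zap_link names
from the sketch above; the cast is needed until the const-ify FIXME is
resolved):

        bool need_schedule;

        spin_lock(&kvm->arch.zap_list_lock);
        need_schedule = list_empty(&kvm->arch.zap_list);
        /* only queue the memslot if it is not pending already */
        if (list_empty(&memslot->zap_link))
                list_add_tail((struct list_head *)&memslot->zap_link,
                              &kvm->arch.zap_list);
        spin_unlock(&kvm->arch.zap_list_lock);

        if (need_schedule)
                schedule_delayed_work(&kvm->arch.kvm_mmu_zap_collapsible_sptes_work,
                                      KVM_MMU_ZAP_DELAYED);
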
Thanks.