Message-ID: <ZNnbdlKb6Y4L4vMx@yzhao56-desk.sh.intel.com>
Date: Mon, 14 Aug 2023 15:44:54 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: Sean Christopherson <seanjc@...gle.com>,
bibo mao <maobibo@...ngson.cn>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>, <kvm@...r.kernel.org>,
<pbonzini@...hat.com>, <mike.kravetz@...cle.com>,
<apopple@...dia.com>, <jgg@...dia.com>, <rppt@...nel.org>,
<akpm@...ux-foundation.org>, <kevin.tian@...el.com>,
<david@...hat.com>
Subject: Re: [RFC PATCH v2 5/5] KVM: Unmap pages only when it's indeed
protected for NUMA migration
On Mon, Aug 14, 2023 at 02:52:07PM +0800, Yan Zhao wrote:
> I wonder if we could reduce the frequency of the rescheduling check in
> tdp_mmu_iter_cond_resched() when the zap range is wide, e.g.
>
> 	if (iter->next_last_level_gfn ==
> 	    iter->yielded_gfn + KVM_PAGES_PER_HPAGE(PG_LEVEL_2M))
> 		return false;
Correction:
@@ -712,7 +713,8 @@ static inline bool __must_check tdp_mmu_iter_cond_resched(struct kvm *kvm,
 	WARN_ON(iter->yielded);
 
 	/* Ensure forward progress has been made before yielding. */
-	if (iter->next_last_level_gfn == iter->yielded_gfn)
+	if (iter->next_last_level_gfn >= iter->yielded_gfn &&
+	    iter->next_last_level_gfn < iter->yielded_gfn + KVM_PAGES_PER_HPAGE(PG_LEVEL_2M))
 		return false;
 
 	if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
This can greatly reduce the number of kvm_flush_remote_tlbs() calls within a single kvm_unmap_gfn_range() in the KVM x86 TDP MMU.
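
To make the effect concrete, here is a minimal user-space sketch (not kernel
code; struct iter_sim, may_yield() and PAGES_PER_2M_HPAGE are hypothetical
stand-ins for the tdp_iter fields and KVM_PAGES_PER_HPAGE(PG_LEVEL_2M)) that
counts how many yield opportunities the proposed check leaves across a wide
range:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the kernel definitions. */
#define SIM_PAGE_SHIFT		12
#define SIM_PMD_SHIFT		21
#define PAGES_PER_2M_HPAGE	(1UL << (SIM_PMD_SHIFT - SIM_PAGE_SHIFT)) /* 512 */

struct iter_sim {
	unsigned long next_last_level_gfn;	/* next GFN the walk will visit */
	unsigned long yielded_gfn;		/* GFN recorded at the last yield */
};

/*
 * Same shape as the proposed check: refuse to yield until the walk has
 * advanced at least one 2MB page's worth of GFNs past the last yield point.
 */
static bool may_yield(const struct iter_sim *iter)
{
	if (iter->next_last_level_gfn >= iter->yielded_gfn &&
	    iter->next_last_level_gfn <
	    iter->yielded_gfn + PAGES_PER_2M_HPAGE)
		return false;
	return true;
}

int main(void)
{
	struct iter_sim iter = { 0, 0 };
	unsigned long nr_gfns = 1UL << 20;	/* 4GB worth of 4KB GFNs */
	unsigned long yields = 0;

	for (; iter.next_last_level_gfn < nr_gfns; iter.next_last_level_gfn++) {
		if (may_yield(&iter)) {
			/*
			 * In the kernel this is where cond_resched() and the
			 * remote TLB flush could happen; here we only count
			 * the opportunities.
			 */
			iter.yielded_gfn = iter.next_last_level_gfn;
			yields++;
		}
	}
	/* With the 2MB threshold: ~2047 opportunities instead of ~1M - 1. */
	printf("yield opportunities over %lu GFNs: %lu\n", nr_gfns, yields);
	return 0;
}

Whether a yield actually happens still depends on need_resched() and
rwlock_needbreak(); the sketch only shows how much less often the check can
even fire.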