Message-ID: <ZNXq9M/WqjEkfi3x@yzhao56-desk.sh.intel.com>
Date:   Fri, 11 Aug 2023 16:01:56 +0800
From:   Yan Zhao <yan.y.zhao@...el.com>
To:     bibo mao <maobibo@...ngson.cn>
CC:     <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
        <kvm@...r.kernel.org>, <pbonzini@...hat.com>, <seanjc@...gle.com>,
        <mike.kravetz@...cle.com>, <apopple@...dia.com>, <jgg@...dia.com>,
        <rppt@...nel.org>, <akpm@...ux-foundation.org>,
        <kevin.tian@...el.com>, <david@...hat.com>
Subject: Re: [RFC PATCH v2 5/5] KVM: Unmap pages only when it's indeed
 protected for NUMA migration

On Fri, Aug 11, 2023 at 03:40:44PM +0800, bibo mao wrote:
> 
> 
> On 2023/8/11 11:45, Yan Zhao wrote:
> >>> +static void kvm_mmu_notifier_numa_protect(struct mmu_notifier *mn,
> >>> +					  struct mm_struct *mm,
> >>> +					  unsigned long start,
> >>> +					  unsigned long end)
> >>> +{
> >>> +	struct kvm *kvm = mmu_notifier_to_kvm(mn);
> >>> +
> >>> +	WARN_ON_ONCE(!READ_ONCE(kvm->mn_active_invalidate_count));
> >>> +	if (!READ_ONCE(kvm->mmu_invalidate_in_progress))
> >>> +		return;
> >>> +
> >>> +	kvm_handle_hva_range(mn, start, end, __pte(0), kvm_unmap_gfn_range);
> >>> +}
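
For context: the callback above presumably gets registered together with
KVM's existing mmu_notifier callbacks in an earlier patch of this series;
an illustrative sketch of that wiring (not the verbatim hunk):

	static const struct mmu_notifier_ops kvm_mmu_notifier_ops = {
		.invalidate_range_start	= kvm_mmu_notifier_invalidate_range_start,
		.invalidate_range_end	= kvm_mmu_notifier_invalidate_range_end,
		.clear_flush_young	= kvm_mmu_notifier_clear_flush_young,
		.numa_protect		= kvm_mmu_notifier_numa_protect,	/* new hook */
		.release		= kvm_mmu_notifier_release,
	};
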
> >> NUMA balancing will scan a wide memory range, and there will be a single
> > Though the scanned memory range is wide, .invalidate_range_start() is sent
> > for each 2M range.
> Yes, the range is huge-page sized when changing NUMA protection during NUMA scanning.
> 
> > 
> >> IPI notification with kvm_flush_remote_tlbs. With page-level notification,
> >> it may generate lots of remote TLB flush IPI notifications.
> > 
> > Hmm, for VMs with assigned devices, apparently, the remote TLB flush IPIs
> > will be reduced to 0 with this series.
> > 
> > For VMs without assigned devices or mdev devices, I was previously also
> > worried that there might be more IPIs.
> > But with the current test data, there are no additional remote TLB IPIs on average.
> > 
> > The reason is below:
> > 
> > Before this series, kvm_unmap_gfn_range() is called once for a 2M
> > range.
> > After this series, kvm_unmap_gfn_range() is called once if the 2M range is
> > mapped to a huge page in the primary MMU, and called at most 512 times
> > if it is mapped to 4K pages in the primary MMU.
> > 
> > 
> > Though kvm_unmap_gfn_range() is only called once before this series,
> > as the range is blockable, when there is contention, remote TLB IPIs
> > can be sent page by page at 4K granularity (in tdp_mmu_iter_cond_resched())
> I do not know much about x86. Does this always happen, or only when a
> reschedule is needed
Ah, sorry, I missed platforms other than x86.
Maybe there will be a big difference on other platforms.

> from the code? So there can be many TLB flush IPIs within a single function
Only when the MMU lock is contended. But that is not rare, because of contention in
the TDP page fault path.

> call of kvm_unmap_gfn_range.
> 
> > if the pages are mapped at 4K in the secondary MMU.
> > 
> > With this series, on the other hand, .numa_protect() sets the range to be
> > unblockable. So there could be fewer remote TLB IPIs when a 2M range is
> > mapped into small PTEs in the secondary MMU.
> > Besides, .numa_protect() is not sent for all pages in a given 2M range.
> No, .numa_protect() is not sent for all pages. It depends on the workload,
> i.e. whether the page is accessed by CPU threads on different nodes.
The .numa_protect() hook is called in patch 4 only when PROT_NONE is set on
the page.
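
For illustration only (this is not the actual patch 4 code): the primary-MMU
side is expected to raise the notification per PTE that is actually flipped
to PROT_NONE, so holes and already-protected pages produce no callback at
all. A hypothetical helper showing that filter:

	/*
	 * Hypothetical, for illustration: decide whether a PTE that NUMA
	 * balancing is about to protect should trigger .numa_protect().
	 */
	static bool numa_protect_should_notify(pte_t pte)
	{
		/* absent or already-PROT_NONE pages are skipped entirely */
		if (!pte_present(pte) || pte_protnone(pte))
			return false;

		return true;
	}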

> 
> > 
> > Below is my testing data on a VM without assigned devices:
> > The data is an average over 10 guest boot-ups.
> >                    
> >     data           | numa balancing caused  | numa balancing caused    
> >   on average       | #kvm_unmap_gfn_range() | #kvm_flush_remote_tlbs() 
> > -------------------|------------------------|--------------------------
> > before this series |         35             |     8625                 
> > after  this series |      10037             |     4610   
> Just to be cautious: before the series there are 8625/35 = 246 TLB flush IPI
> operations during one kvm_unmap_gfn_range() call. Is that x86-specific or generic?
Only on x86. Didn't test on other platforms.
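
The many flushes per call come from the yield path mentioned above
(tdp_mmu_iter_cond_resched()). A much-simplified sketch of that pattern, not
the actual arch/x86/kvm/mmu/tdp_mmu.c code, with zap_leaf_spte() as a
hypothetical stand-in for the real SPTE-clearing logic:

	static bool zap_range_sketch(struct kvm *kvm, struct kvm_mmu_page *root,
				     gfn_t start, gfn_t end)
	{
		struct tdp_iter iter;
		bool flush = false;

		for_each_tdp_pte(iter, root, start, end) {
			/*
			 * Yield point: taken only when mmu_lock is contended or a
			 * reschedule is pending.  Before dropping the lock, remote
			 * TLBs are flushed for whatever was zapped so far, so a
			 * contended walk over 512 4K SPTEs can flush many times.
			 */
			if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
				if (flush) {
					kvm_flush_remote_tlbs(kvm);	/* remote TLB IPIs */
					flush = false;
				}
				cond_resched_rwlock_write(&kvm->mmu_lock);
				continue;	/* the real code restarts the walk here */
			}

			flush |= zap_leaf_spte(kvm, &iter);	/* hypothetical helper */
		}

		return flush;	/* caller does one final flush if needed */
	}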

> 
> By the way, are the primary MMU and secondary MMU both using 4K small pages "on average"?
No. 4K and 2M combined in both.

