Message-ID: <97be4a95-de4f-96f9-1eca-142e9fee8ff6@huawei.com>
Date:   Sun, 28 Feb 2021 19:11:56 +0800
From:   "wangyanan (Y)" <wangyanan55@...wei.com>
To:     <kvmarm@...ts.cs.columbia.edu>,
        <linux-arm-kernel@...ts.infradead.org>, kvm <kvm@...r.kernel.org>,
        <linux-kernel@...r.kernel.org>
CC:     Marc Zyngier <maz@...nel.org>, Will Deacon <will@...nel.org>,
        Alexandru Elisei <alexandru.elisei@....com>,
        Catalin Marinas <catalin.marinas@....com>,
        <wanghaibin.wang@...wei.com>, <yuzenghui@...wei.com>
Subject: Re: [RFC PATCH 3/4] KVM: arm64: Install the block entry before
 unmapping the page mappings


On 2021/2/8 19:22, Yanan Wang wrote:
> When KVM needs to coalesce the normal page mappings into a block mapping,
> we currently invalidate the old table entry first, then invalidate the TLB,
> then unmap the page mappings, and finally install the block entry.
>
> Unmapping the numerous page mappings takes a long time, which means there
> is a long window during which the table entry is invalid. If other vCPUs
> access any guest page within the block range and find the table entry
> invalid, they all exit from the guest with an unnecessary translation
> fault, and KVM has to spend effort handling these faults, especially when
> performing CMOs by block range.
>
> So let's install the block entry first to ensure uninterrupted memory
> access by the other vCPUs, and then unmap the page mappings after the
> installation. This greatly shrinks the window during which the table entry
> is invalid and avoids most of the unnecessary translation faults.
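
To make the reordering concrete, here is a rough pseudo-C sketch of the two
orderings (illustration only; unmap_page_mappings() and install_block_entry()
are made-up placeholder names, not functions in arch/arm64/kvm/hyp/pgtable.c):

/* Old order: the table entry stays invalid for the whole unmap. */
kvm_set_invalid_pte(ptep);                     /* invalidate the old table entry     */
kvm_call_hyp(__kvm_tlb_flush_vmid, data->mmu); /* flush the whole stage-2 TLB        */
unmap_page_mappings(ptep);                     /* slow: free every sub-level mapping */
install_block_entry(ptep);                     /* only now is the entry valid again  */

/* New order: other vCPUs can translate as soon as the block entry is in. */
data->follow = kvm_pte_follow(*ptep);          /* remember the old sub-table            */
install_block_entry(ptep);                     /* invalidate, flush TLB, set block PTE  */
unmap_page_mappings(data->follow);             /* free the old sub-tree later           */
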
BTW: here is the benefit of this patch alone, for reference (testing based
on patch 1).
This patch aims to speed up the reconstruction of block mappings (especially
1G blocks) after they have been split, and the following test results show
the significant improvement.
Selftest:
https://lore.kernel.org/lkml/20210208090841.333724-1-wangyanan55@huawei.com/

---

hardware platform: HiSilicon Kunpeng 920 server (FWB not supported)
host kernel: Linux mainline v5.11-rc6 (with the series at
https://lore.kernel.org/r/20210114121350.123684-4-wangyanan55@huawei.com
applied)

test: multiple vCPUs concurrently access 20G of memory; the numbers below
are the execution time of KVM reconstituting the block mappings after
dirty logging.

cmdline: ./kvm_page_table_test -m 4 -t 2 -g 1G -s 20G -v 20
            (20 vCPUs, 20G memory, 1G HUGETLB block mappings)
Before patch: KVM_ADJUST_MAPPINGS: 2.881s 2.883s 2.885s 2.879s 2.882s
After  patch: KVM_ADJUST_MAPPINGS: 0.310s 0.301s 0.312s 0.299s 0.306s  
*average 89% improvement*

cmdline: ./kvm_page_table_test -m 4 -t 2 -g 1G -s 20G -v 40
            (40 vCPUs, 20G memory, 1G HUGETLB block mappings)
Before patch: KVM_ADJUST_MAPPINGS: 2.954s 2.955s 2.949s 2.951s 2.953s
After  patch: KVM_ADJUST_MAPPINGS: 0.381s 0.366s 0.381s 0.380s 0.378s  
*average 87% improvement*

cmdline: ./kvm_page_table_test -m 4 -t 2 -g 1G -s 20G -v 60
            (60 vCPUs, 20G memory, 1G HUGETLB block mappings)
Before patch: KVM_ADJUST_MAPPINGS: 3.118s 3.112s 3.130s 3.128s 3.119s
After  patch: KVM_ADJUST_MAPPINGS: 0.524s 0.534s 0.536s 0.525s 0.539s  
*average 83% improvement*

---

Thanks,

Yanan
>
> Signed-off-by: Yanan Wang <wangyanan55@...wei.com>
> ---
>   arch/arm64/kvm/hyp/pgtable.c | 26 ++++++++++++--------------
>   1 file changed, 12 insertions(+), 14 deletions(-)
>
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index 78a560446f80..308c36b9cd21 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -434,6 +434,7 @@ struct stage2_map_data {
>   	kvm_pte_t			attr;
>   
>   	kvm_pte_t			*anchor;
> +	kvm_pte_t			*follow;
>   
>   	struct kvm_s2_mmu		*mmu;
>   	struct kvm_mmu_memory_cache	*memcache;
> @@ -553,15 +554,14 @@ static int stage2_map_walk_table_pre(u64 addr, u64 end, u32 level,
>   	if (!kvm_block_mapping_supported(addr, end, data->phys, level))
>   		return 0;
>   
> -	kvm_set_invalid_pte(ptep);
> -
>   	/*
> -	 * Invalidate the whole stage-2, as we may have numerous leaf
> -	 * entries below us which would otherwise need invalidating
> -	 * individually.
> +	 * If we need to coalesce existing table entries into a block here,
> +	 * then install the block entry first and the sub-level page mappings
> +	 * will be unmapped later.
>   	 */
> -	kvm_call_hyp(__kvm_tlb_flush_vmid, data->mmu);
>   	data->anchor = ptep;
> +	data->follow = kvm_pte_follow(*ptep);
> +	stage2_coalesce_tables_into_block(addr, level, ptep, data);
>   	return 0;
>   }
>   
> @@ -614,20 +614,18 @@ static int stage2_map_walk_table_post(u64 addr, u64 end, u32 level,
>   				      kvm_pte_t *ptep,
>   				      struct stage2_map_data *data)
>   {
> -	int ret = 0;
> -
>   	if (!data->anchor)
>   		return 0;
>   
> -	free_page((unsigned long)kvm_pte_follow(*ptep));
> -	put_page(virt_to_page(ptep));
> -
> -	if (data->anchor == ptep) {
> +	if (data->anchor != ptep) {
> +		free_page((unsigned long)kvm_pte_follow(*ptep));
> +		put_page(virt_to_page(ptep));
> +	} else {
> +		free_page((unsigned long)data->follow);
>   		data->anchor = NULL;
> -		ret = stage2_map_walk_leaf(addr, end, level, ptep, data);
>   	}
>   
> -	return ret;
> +	return 0;
>   }
>   
>   /*
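
For reference, the pre-walk hunk above calls stage2_coalesce_tables_into_block(),
whose body is added elsewhere in the patch and is not visible in this excerpt.
Below is a rough sketch of what such a helper would need to do, inferred from
the commit message and from the code it replaces; the actual helper in the
series may differ.

/*
 * Sketch only (inferred, not the patch's code): build the block PTE for
 * this level and install it over the old table entry, so that other vCPUs
 * can translate again immediately. The stale sub-tree saved in data->follow
 * is freed later in stage2_map_walk_table_post().
 */
static void stage2_coalesce_tables_into_block(u64 addr, u32 level,
					      kvm_pte_t *ptep,
					      struct stage2_map_data *data)
{
	u64 granule = kvm_granule_size(level);
	kvm_pte_t new = kvm_init_valid_leaf_pte(data->phys, data->attr, level);

	/* Break-before-make: invalidate the old table entry first ... */
	kvm_set_invalid_pte(ptep);

	/*
	 * ... and invalidate the whole stage-2, as there may be numerous
	 * leaf entries below which would otherwise need invalidating
	 * individually.
	 */
	kvm_call_hyp(__kvm_tlb_flush_vmid, data->mmu);

	/* Install the block entry; vCPUs can translate through it again. */
	smp_store_release(ptep, new);

	/* Assumed bookkeeping: advance phys past the block just mapped. */
	data->phys += granule;
}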
