Message-ID: <20210308163454.GA26561@willie-the-truck>
Date:   Mon, 8 Mar 2021 16:34:55 +0000
From:   Will Deacon <will@...nel.org>
To:     Yanan Wang <wangyanan55@...wei.com>
Cc:     Marc Zyngier <maz@...nel.org>,
        Catalin Marinas <catalin.marinas@....com>,
        James Morse <james.morse@....com>,
        Julien Thierry <julien.thierry.kdev@...il.com>,
        Suzuki K Poulose <suzuki.poulose@....com>,
        kvmarm@...ts.cs.columbia.edu, linux-arm-kernel@...ts.infradead.org,
        kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        wanghaibin.wang@...wei.com, yuzenghui@...wei.com
Subject: Re: [PATCH 2/2] KVM: arm64: Skip the cache flush when coalescing
 tables into a block

On Mon, Jan 25, 2021 at 10:10:44PM +0800, Yanan Wang wrote:
> After dirty-logging is stopped for a VM configured with huge mappings,
> KVM will restore the table mappings back to block mappings. Since we are
> only replacing the existing page tables with a block entry and the
> cacheability has not changed, the cache maintenance operations can be
> skipped.
> 
> Signed-off-by: Yanan Wang <wangyanan55@...wei.com>
> ---
>  arch/arm64/kvm/mmu.c | 12 +++++++++---
>  1 file changed, 9 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 8e8549ea1d70..37b427dcbc4f 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -744,7 +744,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  {
>  	int ret = 0;
>  	bool write_fault, writable, force_pte = false;
> -	bool exec_fault;
> +	bool exec_fault, adjust_hugepage;
>  	bool device = false;
>  	unsigned long mmu_seq;
>  	struct kvm *kvm = vcpu->kvm;
> @@ -872,12 +872,18 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  		mark_page_dirty(kvm, gfn);
>  	}
>  
> -	if (fault_status != FSC_PERM && !device)
> +	/*
> +	 * No cache maintenance operations are needed if we are only
> +	 * replacing the existing table mappings with a block mapping.
> +	 */
> +	adjust_hugepage = fault_granule < vma_pagesize ? true : false;

nit: you don't need the '? true : false' part
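
i.e. the comparison already yields a bool, so the assignment can just be:

	adjust_hugepage = fault_granule < vma_pagesize;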

That said, your previous patch checks for 'fault_granule > vma_pagesize',
so I'm not sure the local variable helps all that much here; in my opinion
it obscures the size checks. It would be more straightforward
if we could structure the logic as:

	if (fault_granule < vma_pagesize) {

	} else if (fault_granule > vma_pagesize) {

	} else {

	}

With some comments describing what we can infer about the memcache and cache
maintenance requirements for each case.
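
Something like the following, say -- purely illustrative, with the per-case
comments being my guesses at what we can infer, and the actual maintenance
and memcache calls elided:

	if (fault_granule < vma_pagesize) {
		/*
		 * Existing table entries are being coalesced into a
		 * block. The cacheability is unchanged, so no CMOs
		 * should be needed, per this patch.
		 */
	} else if (fault_granule > vma_pagesize) {
		/*
		 * New table levels will be installed below the fault
		 * granule, so this is presumably where the memcache
		 * requirements from the previous patch come in.
		 */
	} else {
		/*
		 * Mapping at the same granule as the fault, so the
		 * usual CMOs apply (i.e. when fault_status != FSC_PERM
		 * and !device, as in the hunk above).
		 */
	}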

Will
