Message-ID: <27718d41-32cb-4976-b50e-e9237da7aedf@arm.com>
Date: Mon, 8 Apr 2024 09:43:44 +0100
From: Ryan Roberts <ryan.roberts@....com>
To: Gavin Shan <gshan@...hat.com>, linux-arm-kernel@...ts.infradead.org,
 linux-kernel@...r.kernel.org
Cc: catalin.marinas@....com, will@...nel.org, akpm@...ux-foundation.org,
 maz@...nel.org, oliver.upton@...ux.dev, apopple@...dia.com,
 rananta@...gle.com, mark.rutland@....com, v-songbaohua@...o.com,
 yangyicong@...ilicon.com, shahuang@...hat.com, yihyu@...hat.com,
 shan.gavin@...il.com
Subject: Re: [PATCH v3 3/3] arm64: tlb: Allow range operation for
 MAX_TLBI_RANGE_PAGES

On 05/04/2024 04:58, Gavin Shan wrote:
> MAX_TLBI_RANGE_PAGES pages are covered by SCALE#3 and NUM#31, which is
> now supported. Allow the TLBI RANGE operation when the number of pages
> is equal to MAX_TLBI_RANGE_PAGES in __flush_tlb_range_nosync().
> 
> Suggested-by: Marc Zyngier <maz@...nel.org>
> Signed-off-by: Gavin Shan <gshan@...hat.com>

Reviewed-by: Ryan Roberts <ryan.roberts@....com>

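For reference, MAX_TLBI_RANGE_PAGES is the SCALE#3/NUM#31 upper bound of the
range encoding. A quick standalone sketch of the arithmetic (illustration
only, not kernel code; it assumes the (num + 1) << (5 * scale + 1) shift used
by __TLBI_RANGE_PAGES() and 4K pages):

/* Standalone illustration of the TLBI range-size encoding (assumed). */
#include <stdio.h>

static unsigned long tlbi_range_pages(unsigned long num, unsigned long scale)
{
	return (num + 1) << (5 * scale + 1);
}

int main(void)
{
	/* SCALE#3, NUM#31 is the largest encodable range. */
	unsigned long max_pages = tlbi_range_pages(31, 3);

	/* With 4K pages, pages / 256 gives the size in MiB. */
	printf("MAX_TLBI_RANGE_PAGES = %lu pages (%lu MiB with 4K pages)\n",
	       max_pages, max_pages / 256);
	return 0;
}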
> ---
>  arch/arm64/include/asm/tlbflush.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> index 243d71f7bc1f..95fbc8c05607 100644
> --- a/arch/arm64/include/asm/tlbflush.h
> +++ b/arch/arm64/include/asm/tlbflush.h
> @@ -446,11 +446,11 @@ static inline void __flush_tlb_range_nosync(struct vm_area_struct *vma,
>  	 * When not uses TLB range ops, we can handle up to
>  	 * (MAX_DVM_OPS - 1) pages;
>  	 * When uses TLB range ops, we can handle up to
> -	 * (MAX_TLBI_RANGE_PAGES - 1) pages.
> +	 * MAX_TLBI_RANGE_PAGES pages.
>  	 */
>  	if ((!system_supports_tlb_range() &&
>  	     (end - start) >= (MAX_DVM_OPS * stride)) ||
> -	    pages >= MAX_TLBI_RANGE_PAGES) {
> +	    pages > MAX_TLBI_RANGE_PAGES) {

As a further enhancement, I wonder if it might be better to test:

	pages * 4 / MAX_TLBI_RANGE_PAGES > MAX_DVM_OPS

Then add an extra loop over __flush_tlb_range_op(), like KVM does.

The math expresses that at most 4 tlbi range instructions are needed for
MAX_TLBI_RANGE_PAGES pages (1 per scale), so we only need to fall back to
flushing the whole mm if the range could generate more than MAX_DVM_OPS ops.
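Roughly, something like this (a standalone sketch of the idea only; the
constants and the chunking loop are my own assumptions, not the real kernel
helpers):

/*
 * Standalone sketch, not kernel code.  Each MAX_TLBI_RANGE_PAGES chunk
 * needs at most 4 range TLBIs (one per scale), so only fall back to
 * flushing the whole mm when the total could exceed MAX_DVM_OPS;
 * otherwise loop over the range in MAX_TLBI_RANGE_PAGES-sized chunks.
 */
#include <stdio.h>

#define MAX_TLBI_RANGE_PAGES	(32UL << 16)	/* SCALE#3, NUM#31 */
#define MAX_DVM_OPS		512UL		/* assumed value */

static void flush_range_chunked(unsigned long pages)
{
	if (pages * 4 / MAX_TLBI_RANGE_PAGES > MAX_DVM_OPS) {
		printf("fall back: flush_tlb_mm()\n");
		return;
	}

	while (pages) {
		unsigned long chunk = pages < MAX_TLBI_RANGE_PAGES ?
				      pages : MAX_TLBI_RANGE_PAGES;

		/* stand-in for __flush_tlb_range_op() on this chunk */
		printf("range flush of %lu pages\n", chunk);
		pages -= chunk;
	}
}

int main(void)
{
	/* three full-size range flushes plus one small one */
	flush_range_chunked(3 * MAX_TLBI_RANGE_PAGES + 100);
	return 0;
}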

>  		flush_tlb_mm(vma->vm_mm);
>  		return;
>  	}

