Message-ID: <ZuRyv8Q4iRDabq1-@arm.com>
Date: Fri, 13 Sep 2024 18:13:35 +0100
From: Catalin Marinas <catalin.marinas@....com>
To: Yang Shi <yang@...amperecomputing.com>
Cc: will@...nel.org, muchun.song@...ux.dev, david@...hat.com,
	akpm@...ux-foundation.org, linux-arm-kernel@...ts.infradead.org,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [v4 PATCH 1/2] hugetlb: arm64: add mte support

On Thu, Sep 12, 2024 at 01:41:28PM -0700, Yang Shi wrote:
> diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
> index a7bb20055ce0..c8687ccc2633 100644
> --- a/arch/arm64/mm/copypage.c
> +++ b/arch/arm64/mm/copypage.c
> @@ -18,17 +18,41 @@ void copy_highpage(struct page *to, struct page *from)
>  {
>  	void *kto = page_address(to);
>  	void *kfrom = page_address(from);
> +	struct folio *src = page_folio(from);
> +	struct folio *dst = page_folio(to);
> +	unsigned int i, nr_pages;
>  
>  	copy_page(kto, kfrom);
>  
>  	if (kasan_hw_tags_enabled())
>  		page_kasan_tag_reset(to);
>  
> -	if (system_supports_mte() && page_mte_tagged(from)) {
> -		/* It's a new page, shouldn't have been tagged yet */
> -		WARN_ON_ONCE(!try_page_mte_tagging(to));
> -		mte_copy_page_tags(kto, kfrom);
> -		set_page_mte_tagged(to);
> +	if (system_supports_mte()) {
> +		if (folio_test_hugetlb(src) &&
> +		    folio_test_hugetlb_mte_tagged(src)) {
> +			if (!try_folio_hugetlb_mte_tagging(dst))
> +				return;
> +
> +			/*
> +			 * Populate tags for all subpages.
> +			 *
> +			 * Don't assume the first page is head page since
> +			 * huge page copy may start from any subpage.
> +			 */
> +			nr_pages = folio_nr_pages(src);
> +			for (i = 0; i < nr_pages; i++) {
> +				kfrom = page_address(folio_page(src, i));
> +				kto = page_address(folio_page(dst, i));
> +				mte_copy_page_tags(kto, kfrom);
> +			}
> +			folio_set_hugetlb_mte_tagged(dst);
> +		} else if (page_mte_tagged(from)) {
> +			/* It's a new page, shouldn't have been tagged yet */
> +			WARN_ON_ONCE(!try_page_mte_tagging(to));
> +
> +			mte_copy_page_tags(kto, kfrom);
> +			set_page_mte_tagged(to);
> +		}
>  	}
>  }

A nitpick here: I don't like that much indentation, so just do an early
return if !system_supports_mte() in this function.
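
I.e. something along these lines (untested, just to show the shape,
reusing the names from your patch):

	void copy_highpage(struct page *to, struct page *from)
	{
		...
		copy_page(kto, kfrom);

		if (kasan_hw_tags_enabled())
			page_kasan_tag_reset(to);

		if (!system_supports_mte())
			return;

		if (folio_test_hugetlb(src) &&
		    folio_test_hugetlb_mte_tagged(src)) {
			...
		} else if (page_mte_tagged(from)) {
			...
		}
	}

That drops a level of indentation from both branches.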

Otherwise the patch looks fine to me. I agree with David's point on an
earlier version of this patch that the naming of these functions isn't
great. So, as per David's suggestion (at least for the first two):

folio_test_hugetlb_mte_tagged()
folio_set_hugetlb_mte_tagged()
folio_try_hugetlb_mte_tagging()

As for "try" vs "test_and_set_.*_lock", the original name was picked to
mimic spin_trylock() since this function waits/spins. It's not great,
but the alternative naming is closer to test_and_set_bit_lock(), which
has different behaviour: it only sets a bit with acquire semantics, with
no waiting/spinning.

-- 
Catalin
