Message-ID: <20241120115855.582867a7@canb.auug.org.au>
Date: Wed, 20 Nov 2024 11:58:55 +1100
From: Stephen Rothwell <sfr@...b.auug.org.au>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Catalin Marinas <catalin.marinas@....com>, Will Deacon
<will@...nel.org>, Linux Kernel Mailing List
<linux-kernel@...r.kernel.org>, Linux Next Mailing List
<linux-next@...r.kernel.org>, "Mike Rapoport (Microsoft)"
<rppt@...nel.org>, Steven Price <steven.price@....com>, Suzuki K Poulose
<suzuki.poulose@....com>
Subject: Re: linux-next: manual merge of the arm64 tree with the mm tree

Hi all,

On Thu, 24 Oct 2024 10:37:09 +1100 Stephen Rothwell <sfr@...b.auug.org.au> wrote:
>
> Today's linux-next merge of the arm64 tree got a conflict in:
>
> arch/arm64/mm/pageattr.c
>
> between commit:
>
> 040ee4186d6c ("arch: introduce set_direct_map_valid_noflush()")
>
> from the mm-unstable branch of the mm tree and commit:
>
> 42be24a4178f ("arm64: Enable memory encrypt for Realms")
>
> from the arm64 tree.
>
> I fixed it up (see below) and can carry the fix as necessary. This
> is now fixed as far as linux-next is concerned, but any non-trivial
> conflicts should be mentioned to your upstream maintainer when your tree
> is submitted for merging. You may also want to consider cooperating
> with the maintainer of the conflicting tree to minimise any particularly
> complex conflicts.
>
> --
> Cheers,
> Stephen Rothwell
>
> diff --cc arch/arm64/mm/pageattr.c
> index 01225900293a,6ae6ae806454..000000000000
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@@ -192,16 -202,86 +202,96 @@@ int set_direct_map_default_noflush(stru
> PAGE_SIZE, change_page_range, &data);
> }
>
> +int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
> +{
> + unsigned long addr = (unsigned long)page_address(page);
> +
> + if (!can_set_direct_map())
> + return 0;
> +
> + return set_memory_valid(addr, nr, valid);
> +}
> +
> + static int __set_memory_enc_dec(unsigned long addr,
> + int numpages,
> + bool encrypt)
> + {
> + unsigned long set_prot = 0, clear_prot = 0;
> + phys_addr_t start, end;
> + int ret;
> +
> + if (!is_realm_world())
> + return 0;
> +
> + if (!__is_lm_address(addr))
> + return -EINVAL;
> +
> + start = __virt_to_phys(addr);
> + end = start + numpages * PAGE_SIZE;
> +
> + if (encrypt)
> + clear_prot = PROT_NS_SHARED;
> + else
> + set_prot = PROT_NS_SHARED;
> +
> + /*
> + * Break the mapping before we make any changes to avoid stale TLB
> + * entries or Synchronous External Aborts caused by RIPAS_EMPTY
> + */
> + ret = __change_memory_common(addr, PAGE_SIZE * numpages,
> + __pgprot(set_prot),
> + __pgprot(clear_prot | PTE_VALID));
> +
> + if (ret)
> + return ret;
> +
> + if (encrypt)
> + ret = rsi_set_memory_range_protected(start, end);
> + else
> + ret = rsi_set_memory_range_shared(start, end);
> +
> + if (ret)
> + return ret;
> +
> + return __change_memory_common(addr, PAGE_SIZE * numpages,
> + __pgprot(PTE_VALID),
> + __pgprot(0));
> + }
> +
> + static int realm_set_memory_encrypted(unsigned long addr, int numpages)
> + {
> + int ret = __set_memory_enc_dec(addr, numpages, true);
> +
> + /*
> + * If the request to change state fails, then the only sensible cause
> + * of action for the caller is to leak the memory
> + */
> + WARN(ret, "Failed to encrypt memory, %d pages will be leaked",
> + numpages);
> +
> + return ret;
> + }
> +
> + static int realm_set_memory_decrypted(unsigned long addr, int numpages)
> + {
> + int ret = __set_memory_enc_dec(addr, numpages, false);
> +
> + WARN(ret, "Failed to decrypt memory, %d pages will be leaked",
> + numpages);
> +
> + return ret;
> + }
> +
> + static const struct arm64_mem_crypt_ops realm_crypt_ops = {
> + .encrypt = realm_set_memory_encrypted,
> + .decrypt = realm_set_memory_decrypted,
> + };
> +
> + int realm_register_memory_enc_ops(void)
> + {
> + return arm64_mem_crypt_ops_register(&realm_crypt_ops);
> + }
> +
> #ifdef CONFIG_DEBUG_PAGEALLOC
> void __kernel_map_pages(struct page *page, int numpages, int enable)
> {
This is now a conflict between the mm-stable tree and Linus' tree.
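
For anyone wanting to see how the mm side of the conflict is meant to be used, here is a rough sketch of a caller of the new set_direct_map_valid_noflush() interface from 040ee4186d6c. The function and the way the pages are obtained are made up purely for illustration; only set_direct_map_valid_noflush(), page_address() and flush_tlb_kernel_range() are existing kernel interfaces:

/* Illustrative only -- not part of either tree's changes. */
#include <linux/mm.h>		/* page_address(), PAGE_SIZE */
#include <linux/set_memory.h>	/* set_direct_map_valid_noflush() */
#include <asm/tlbflush.h>	/* flush_tlb_kernel_range() */

static int example_hide_from_linear_map(struct page *page, unsigned int nr)
{
	unsigned long start = (unsigned long)page_address(page);
	int ret;

	/* Invalidate the linear-map entries covering these pages. */
	ret = set_direct_map_valid_noflush(page, nr, false);
	if (ret)
		return ret;

	/* The _noflush variant leaves TLB maintenance to the caller. */
	flush_tlb_kernel_range(start, start + nr * PAGE_SIZE);
	return 0;
}
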
--
Cheers,
Stephen Rothwell