Message-ID: <Zmc3euO2YGh-g9Th@arm.com>
Date: Mon, 10 Jun 2024 18:27:22 +0100
From: Catalin Marinas <catalin.marinas@....com>
To: Steven Price <steven.price@....com>
Cc: kvm@...r.kernel.org, kvmarm@...ts.linux.dev,
	Suzuki K Poulose <suzuki.poulose@....com>,
	Marc Zyngier <maz@...nel.org>, Will Deacon <will@...nel.org>,
	James Morse <james.morse@....com>,
	Oliver Upton <oliver.upton@...ux.dev>,
	Zenghui Yu <yuzenghui@...wei.com>,
	linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
	Joey Gouly <joey.gouly@....com>,
	Alexandru Elisei <alexandru.elisei@....com>,
	Christoffer Dall <christoffer.dall@....com>,
	Fuad Tabba <tabba@...gle.com>, linux-coco@...ts.linux.dev,
	Ganapatrao Kulkarni <gankulkarni@...amperecomputing.com>
Subject: Re: [PATCH v3 09/14] arm64: Enable memory encrypt for Realms

On Wed, Jun 05, 2024 at 10:30:01AM +0100, Steven Price wrote:
> +static int __set_memory_encrypted(unsigned long addr,
> +				  int numpages,
> +				  bool encrypt)
> +{
> +	unsigned long set_prot = 0, clear_prot = 0;
> +	phys_addr_t start, end;
> +	int ret;
> +
> +	if (!is_realm_world())
> +		return 0;
> +
> +	if (!__is_lm_address(addr))
> +		return -EINVAL;
> +
> +	start = __virt_to_phys(addr);
> +	end = start + numpages * PAGE_SIZE;
> +
> +	/*
> +	 * Break the mapping before we make any changes to avoid stale TLB
> +	 * entries or Synchronous External Aborts caused by RIPAS_EMPTY
> +	 */
> +	ret = __change_memory_common(addr, PAGE_SIZE * numpages,
> +				     __pgprot(0),
> +				     __pgprot(PTE_VALID));
> +
> +	if (encrypt) {
> +		clear_prot = PROT_NS_SHARED;
> +		ret = rsi_set_memory_range_protected(start, end);
> +	} else {
> +		set_prot = PROT_NS_SHARED;
> +		ret = rsi_set_memory_range_shared(start, end);
> +	}
> +
> +	if (ret)
> +		return ret;
> +
> +	set_prot |= PTE_VALID;
> +
> +	return __change_memory_common(addr, PAGE_SIZE * numpages,
> +				      __pgprot(set_prot),
> +				      __pgprot(clear_prot));
> +}

This works: it does break-before-make and also rejects vmalloc() ranges
(for the time being).

One particular aspect I don't like is doing the TLBI twice. It's
sufficient to do it when the pte is first made invalid. We could infer
this in __change_memory_common() when set_mask has PTE_VALID; the call
sites are restricted to this file, so just add a comment. An alternative
would be to add a bool flush argument to this function.
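
Something like the below, as a rough and untested sketch of the first
option (it assumes the current __change_memory_common() in
arch/arm64/mm/pageattr.c with its change_page_range() helper; only the
flush condition is new):

static int __change_memory_common(unsigned long start, unsigned long size,
				  pgprot_t set_mask, pgprot_t clear_mask)
{
	struct page_change_data data = {
		.set_mask = set_mask,
		.clear_mask = clear_mask,
	};
	int ret;

	ret = apply_to_page_range(&init_mm, start, size,
				  change_page_range, &data);

	/*
	 * Skip the TLBI when a pte is being made valid again: the earlier
	 * break step (clearing PTE_VALID) already flushed the stale
	 * entries, so a second flush buys nothing. Callers in this file
	 * must have broken the mapping first.
	 */
	if (!(pgprot_val(set_mask) & PTE_VALID))
		flush_tlb_kernel_range(start, start + size);

	return ret;
}

(The alternative with an explicit bool flush argument would move that
decision to the callers instead.)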

-- 
Catalin
