Message-ID: <20240108130757.xryzva4fkmgeekhu@box.shutemov.name>
Date: Mon, 8 Jan 2024 16:07:57 +0300
From: kirill.shutemov@...ux.intel.com
To: mhklinux@...look.com
Cc: tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
	dave.hansen@...ux.intel.com, x86@...nel.org, hpa@...or.com,
	haiyangz@...rosoft.com, wei.liu@...nel.org, decui@...rosoft.com,
	luto@...nel.org, peterz@...radead.org, akpm@...ux-foundation.org,
	urezki@...il.com, hch@...radead.org, lstoakes@...il.com,
	thomas.lendacky@....com, ardb@...nel.org, jroedel@...e.de,
	seanjc@...gle.com, rick.p.edgecombe@...el.com,
	sathyanarayanan.kuppuswamy@...ux.intel.com,
	linux-kernel@...r.kernel.org, linux-coco@...ts.linux.dev,
	linux-hyperv@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v3 1/3] x86/hyperv: Use slow_virt_to_phys() in page
 transition hypervisor callback

On Fri, Jan 05, 2024 at 10:30:23AM -0800, mhkelley58@...il.com wrote:
> From: Michael Kelley <mhklinux@...look.com>
> 
> In preparation for temporarily marking pages not present during a
> transition between encrypted and decrypted, use slow_virt_to_phys()
> in the hypervisor callback. As long as the PFN is correct,
> slow_virt_to_phys() works even if the leaf PTE is not present.
> The existing functions that depend on vmalloc_to_page() all
> require that the leaf PTE be marked present, so they don't work.
> 
> Update the comments for slow_virt_to_phys() to note this broader usage
> and the requirement to work even if the PTE is not marked present.
> 
> Signed-off-by: Michael Kelley <mhklinux@...look.com>
> ---
>  arch/x86/hyperv/ivm.c        |  9 ++++++++-
>  arch/x86/mm/pat/set_memory.c | 13 +++++++++----
>  2 files changed, 17 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c
> index 02e55237d919..8ba18635e338 100644
> --- a/arch/x86/hyperv/ivm.c
> +++ b/arch/x86/hyperv/ivm.c
> @@ -524,7 +524,14 @@ static bool hv_vtom_set_host_visibility(unsigned long kbuffer, int pagecount, bo
>  		return false;
>  
>  	for (i = 0, pfn = 0; i < pagecount; i++) {
> -		pfn_array[pfn] = virt_to_hvpfn((void *)kbuffer + i * HV_HYP_PAGE_SIZE);
> +		/*
> +		 * Use slow_virt_to_phys() because the PRESENT bit has been
> +		 * temporarily cleared in the PTEs.  slow_virt_to_phys() works
> +		 * without the PRESENT bit while virt_to_hvpfn() or similar
> +		 * does not.
> +		 */
> +		pfn_array[pfn] = slow_virt_to_phys((void *)kbuffer +
> +					i * HV_HYP_PAGE_SIZE) >> HV_HYP_PAGE_SHIFT;

I think you can make it much more readable by introducing a few variables:

		virt = (void *)kbuffer + i * HV_HYP_PAGE_SIZE;
		phys = slow_virt_to_phys(virt);
		pfn_array[pfn] = phys >> HV_HYP_PAGE_SHIFT;
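For illustration, the whole batching loop might read as below. This is a standalone sketch, not the kernel code: slow_virt_to_phys() is mocked as an identity mapping, HV_MAX_MODIFY_GPA_REP_COUNT is shrunk, and the hypercall is replaced by a counter; the variable names follow the suggestion above.

```c
#include <stdint.h>

#define HV_HYP_PAGE_SHIFT 12
#define HV_HYP_PAGE_SIZE (1UL << HV_HYP_PAGE_SHIFT)
#define HV_MAX_MODIFY_GPA_REP_COUNT 4	/* small value for the sketch */

/* Mock: identity-map virtual to physical, just for illustration. */
static uint64_t slow_virt_to_phys(void *addr)
{
	return (uint64_t)(uintptr_t)addr;
}

/*
 * Fill pfn_array in batches of HV_MAX_MODIFY_GPA_REP_COUNT, flushing a
 * partial batch on the last page, as hv_vtom_set_host_visibility() does.
 * Returns the number of PFNs filled; *batches counts the flushes that a
 * real implementation would turn into hypercalls.
 */
static int fill_pfn_batches(unsigned long kbuffer, int pagecount,
			    uint64_t *pfn_array, int *batches)
{
	int i, pfn, filled = 0;

	for (i = 0, pfn = 0; i < pagecount; i++) {
		void *virt = (char *)kbuffer + i * HV_HYP_PAGE_SIZE;
		uint64_t phys = slow_virt_to_phys(virt);

		pfn_array[pfn] = phys >> HV_HYP_PAGE_SHIFT;
		pfn++;

		if (pfn == HV_MAX_MODIFY_GPA_REP_COUNT || i == pagecount - 1) {
			filled += pfn;	/* a real hypercall would go here */
			(*batches)++;
			pfn = 0;	/* reuse the array for the next batch */
		}
	}
	return filled;
}
```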

>  		pfn++;
>  
>  		if (pfn == HV_MAX_MODIFY_GPA_REP_COUNT || i == pagecount - 1) {
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index bda9f129835e..8e19796e7ce5 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -755,10 +755,15 @@ pmd_t *lookup_pmd_address(unsigned long address)
>   * areas on 32-bit NUMA systems.  The percpu areas can
>   * end up in this kind of memory, for instance.
>   *
> - * This could be optimized, but it is only intended to be
> - * used at initialization time, and keeping it
> - * unoptimized should increase the testing coverage for
> - * the more obscure platforms.
> + * It is also used in callbacks for CoCo VM page transitions between private
> + * and shared because it works when the PRESENT bit is not set in the leaf
> + * PTE. In such cases, the state of the PTEs, including the PFN, is otherwise
> + * known to be valid, so the returned physical address is correct. The similar
> + * function vmalloc_to_pfn() can't be used because it requires the PRESENT bit.
> + *
> + * This could be optimized, but it is only used in paths that are not perf
> + * sensitive, and keeping it unoptimized should increase the testing coverage
> + * for the more obscure platforms.
>   */
>  phys_addr_t slow_virt_to_phys(void *__virt_addr)
>  {
> -- 
> 2.25.1
> 
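As background for why the approach in the patch works at all: on x86, clearing the present bit (bit 0) of a PTE leaves the PFN field (bits 12 and up) intact, so a software page-table walk can still recover the physical address. The simplified standalone sketch below contrasts the two behaviors; the helper names and the reduced PTE layout are invented for illustration, not kernel APIs.

```c
#include <stdint.h>

#define PTE_PRESENT	(1ULL << 0)
#define PTE_PFN_SHIFT	12

/*
 * Extract the PFN without requiring the present bit, the way a slow
 * software walk like slow_virt_to_phys() can. (A real x86 PTE also
 * masks the PFN field to bits 12..51; omitted here for brevity.)
 */
static uint64_t pte_pfn_any(uint64_t pte)
{
	return pte >> PTE_PFN_SHIFT;
}

/*
 * Extract the PFN only if the PTE is present, the way helpers built on
 * vmalloc_to_page() effectively behave. Returns -1 on a non-present PTE.
 */
static int pte_pfn_present(uint64_t pte, uint64_t *pfn)
{
	if (!(pte & PTE_PRESENT))
		return -1;
	*pfn = pte >> PTE_PFN_SHIFT;
	return 0;
}
```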

-- 
  Kiryl Shutsemau / Kirill A. Shutemov