Message-ID: <aIDzBOmjzveLjhmk@google.com>
Date: Wed, 23 Jul 2025 07:34:44 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Nikolay Borisov <nik.borisov@...e.com>
Cc: Binbin Wu <binbin.wu@...ux.intel.com>, Jianxiong Gao <jxgao@...gle.com>,
"Borislav Petkov (AMD)" <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>,
Dionna Glaze <dionnaglaze@...gle.com>, "H. Peter Anvin" <hpa@...or.com>, jgross@...e.com,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>, pbonzini@...hat.com,
Peter Gonda <pgonda@...gle.com>, Thomas Gleixner <tglx@...utronix.de>,
Tom Lendacky <thomas.lendacky@....com>, Vitaly Kuznetsov <vkuznets@...hat.com>, x86@...nel.org,
Rick Edgecombe <rick.p.edgecombe@...el.com>
Subject: Re: [PATCH 0/2] x86/kvm: Force legacy PCI hole as WB under SNP/TDX
On Mon, Jul 14, 2025, Nikolay Borisov wrote:
> On 14.07.25 г. 12:06 ч., Binbin Wu wrote:
> > On 7/10/2025 12:54 AM, Jianxiong Gao wrote:
> > > I tested this patch on top of commit 8e690b817e38, however we are
> > > still experiencing the same failure.
> > >
> > I didn't reproduce the issue with QEMU.
> > After comparing how QEMU builds the ACPI tables for the HPET and the
> > TPM:
> >
> > - For HPET, the HPET range is added as Operation Region:
> >     aml_append(dev,
> >                aml_operation_region("HPTM", AML_SYSTEM_MEMORY,
> >                                     aml_int(HPET_BASE),
> >                                     HPET_LEN));
> >
> > - For TPM, the range is added as 32-Bit Fixed Memory Range:
> >     if (TPM_IS_TIS_ISA(tpm_find())) {
> >         aml_append(crs, aml_memory32_fixed(TPM_TIS_ADDR_BASE,
> >                                            TPM_TIS_ADDR_SIZE, AML_READ_WRITE));
> >     }
> >
> > So, in KVM, the code path for TPM is different from the trace for HPET in
> > the patch https://lore.kernel.org/kvm/20250201005048.657470-3-seanjc@google.com/:
> > HPET triggers the acpi_os_map_iomem() code path, but TPM doesn't.
Argh, I was looking at the wrong TPM resource when poking through QEMU. I peeked
at TPM_PPI_ADDR_BASE, which gets an AML_SYSTEM_MEMORY entry, not TPM_TIS_ADDR_BASE.
*sigh*
Note, the HPET is also enumerated as a fixed resource:
    crs = aml_resource_template();
    aml_append(crs, aml_memory32_fixed(HPET_BASE, HPET_LEN, AML_READ_ONLY));
    aml_append(dev, aml_name_decl("_CRS", crs));
If I comment out the AML_SYSTEM_MEMORY entry for HPET, the kernel's auto-mapping
does NOT kick in (the kernel complains about required resources being missing,
but that's expected). So I'm pretty sure it's the _lack_ of an AML_SYSTEM_MEMORY
entry for TPM TIS in QEMU's ACPI tables that makes everything happy.
I can't for the life of me suss out exactly what Google's ACPI tables will look
like. I'll follow up internally to try and get an answer on that front.
In the meantime, can someone who has reproduced the real issue get backtraces to
confirm or disprove that acpi_os_map_iomem() is trying to map the TPM TIS range
as WB? E.g. with something like so:
diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
index 2e7923844afe..6c3c40909ef9 100644
--- a/arch/x86/mm/pat/memtype.c
+++ b/arch/x86/mm/pat/memtype.c
@@ -528,6 +528,9 @@ int memtype_reserve(u64 start, u64 end, enum page_cache_mode req_type,
 
 	start = sanitize_phys(start);
 
+	WARN(start == 0xFED40000,
+	     "Mapping TPM TIS with req_type = %u\n", req_type);
+
 	/*
 	 * The end address passed into this function is exclusive, but
 	 * sanitize_phys() expects an inclusive address.
---