Message-ID: <4bd31b91-1f6a-4081-9ad8-e5fae29d0dd7@apertussolutions.com>
Date: Thu, 22 Feb 2024 20:57:18 -0500
From: "Daniel P. Smith" <dpsmith@...rtussolutions.com>
To: Jarkko Sakkinen <jarkko@...nel.org>,
Lino Sanfilippo <l.sanfilippo@...bus.com>,
Alexander Steffen <Alexander.Steffen@...ineon.com>,
Jason Gunthorpe <jgg@...pe.ca>, Sasha Levin <sashal@...nel.org>,
linux-integrity@...r.kernel.org, linux-kernel@...r.kernel.org
Cc: Ross Philipson <ross.philipson@...cle.com>,
Kanth Ghatraju <kanth.ghatraju@...cle.com>, Peter Huewe <peterhuewe@....de>
Subject: Re: [PATCH 1/3] tpm: protect against locality counter underflow
On 2/20/24 17:31, Jarkko Sakkinen wrote:
> On Tue Feb 20, 2024 at 10:26 PM UTC, Jarkko Sakkinen wrote:
>> On Tue Feb 20, 2024 at 8:54 PM UTC, Lino Sanfilippo wrote:
>>>     for (i = 0; i <= MAX_LOCALITY; i++)
>>>             __tpm_tis_relinquish_locality(priv, i);
>>
>> I'm pretty unfamiliar with Intel TXT so asking a dummy question:
>> if Intel TXT uses locality 2 I suppose we should not try to
>> relinquish it, or?
>>
>> AFAIK, we don't have a symbol called MAX_LOCALITY.
>
> OK it was called TPM_MAX_LOCALITY :-) I had the patch set applied
> in one branch but looked up with wrong symbol name.
>
> So I reformalize my question to two parts:
>
> 1. Why does TXT leave locality 2 open in the first place? I did
> not see explanation. Isn't this a bug in TXT?
It does so because that is what the TCG D-RTM specification requires.
Per Section 5.3.4.10 of the TCG D-RTM specification[1], the first
requirement is: "The DLME SHALL receive control with access to
Locality 2."
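As a concrete illustration (not from the patch at hand; the register
offset macro and flag values follow the PC Client PTP spec[2] and
mirror tpm_tis_core.h), the DLME could confirm on entry that the
chipset really did leave Locality 2 open by reading that locality's
access register:

    /* Illustrative sketch only; tis_base is assumed to be an
     * ioremap() of the TIS MMIO range at 0xfed40000. */
    #include <linux/io.h>
    #include <linux/types.h>

    #define TPM_ACCESS(l)               (0x0000 | ((l) << 12))
    #define TPM_ACCESS_VALID            0x80
    #define TPM_ACCESS_ACTIVE_LOCALITY  0x20

    static bool locality2_active(void __iomem *tis_base)
    {
            u8 access = ioread8(tis_base + TPM_ACCESS(2));

            /* Both bits set => D-RTM handoff honored, Locality 2 open */
            return (access & (TPM_ACCESS_VALID |
                              TPM_ACCESS_ACTIVE_LOCALITY)) ==
                   (TPM_ACCESS_VALID | TPM_ACCESS_ACTIVE_LOCALITY);
    }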
> 2. Because localities are not too useful these days given TPM2's
> policy mechanism I cannot recall out of top of my head can
> you have two localities open at same time. So what kind of
> conflict happens when you try to open locality 0 and have
> locality 2 open?
I would disagree, and would call your attention to the TCG's
definition/motivation for localities in Section 3.2 of the Client PTP
specification[2]:
"“Locality” is an assertion to the TPM that a command’s source is
associated with a particular component. Locality can be thought of as a
hardware-based authorization. The TPM is not actually aware of the
nature of the relationship between the locality and the component. The
ability to reset and extend notwithstanding, it is important to note
that, from a PCR “usage” perspective, there is no hierarchical
relationship between different localities. The TPM simply enforces
locality restrictions on TPM assets (such as PCR or SEALed blobs)."
As stated, the TPM itself is not aware of this mapping to components;
enforcement is left to the platform:
"The protection and separation of the localities (and therefore the
association with the associated components) is entirely the
responsibility of the platform components. Platform components,
including the OS, may provide the separation of localities using
protection mechanisms such as virtual memory or paging."
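The TIS/PTP MMIO layout is what makes that practical: each locality
gets its own 4 KiB register page, so the platform can expose exactly
one locality to a component with ordinary page protections. A rough
sketch of the address math (base and stride per the PTP spec[2];
tpm_tis_locality_addr() is just an illustrative helper, not a kernel
function):

    /* Each locality's registers occupy their own 4 KiB page starting
     * at the TIS base, so locality N can be mapped (or not) for a
     * given component purely with paging. */
    #define TPM_TIS_BASE            0xfed40000UL
    #define TPM_TIS_LOCALITY_STRIDE 0x1000UL /* one page per locality */

    static unsigned long tpm_tis_locality_addr(int locality)
    {
            return TPM_TIS_BASE + locality * TPM_TIS_LOCALITY_STRIDE;
    }
    /* e.g. locality 0 -> 0xfed40000, locality 2 -> 0xfed42000 */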
The x86 manufacturers opted to adopt the D-RTM specification, which
defines the components as follows:
Locality 4: Usually associated with the CPU executing microcode. This is
used to establish the Dynamic RTM.
Locality 3: Auxiliary components. Use of this is optional and, if used,
it is implementation dependent.
Locality 2: Dynamically Launched OS (Dynamic OS) “runtime” environment.
Locality 1: An environment for use by the Dynamic OS.
Locality 0: The Static RTM, its chain of trust and its environment.
And the means to protect and separate those localities are encoded in
the x86 chipset, i.e. a D-RTM Event must be used to access any of the
D-RTM Localities (Locality 1 - Locality 4).
For Intel, Locality 4 can only be accessed when a dedicated signal
between the CPU and the chipset is raised, thus only allowing the CPU to
utilize Locality 4. The CPU will then close Locality 4, authenticate and
give control to the ACM with access to Locality 3. When the ACM is
complete, it will instruct the chipset to lock Locality 3 and give
control to the DLME (MLE in Intel parlance) with Locality 2 open. It is
up to the DLME, the Linux kernel in this case, to decide how to assign
components to Localities 1 and 2.
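In other words, the DLME starts with Locality 2 already active and has
to account for that. A hedged sketch of the relinquish step (constant
names mirror tpm_tis_core.h, reusing the TPM_ACCESS() macro and flag
from the earlier sketch; this is illustrative, not the actual patch
under discussion):

    /* Sketch: drop any locality the launch may have left active by
     * writing the active-locality bit back to each locality's access
     * register.  TPM_MAX_LOCALITY is 4 in tpm_tis_core.h. */
    #define TPM_MAX_LOCALITY  4

    static void relinquish_all_localities(void __iomem *tis_base)
    {
            int i;

            for (i = 0; i <= TPM_MAX_LOCALITY; i++)
                    iowrite8(TPM_ACCESS_ACTIVE_LOCALITY,
                             tis_base + TPM_ACCESS(i));
    }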
As for proposals for the Linux kernel itself to utilize localities, the
only one I was aware of was dropped because the authors could not open
the higher localities.
I would also highlight that the D-RTM implementation guide for Arm
allows for a hardware D-RTM event, with which the vendor may choose to
implement hardware/CPU-enforced access to TPM localities. Thus, the
ability to support localities will also become a requirement for
certain Arm CPUs.
[1]
https://trustedcomputinggroup.org/wp-content/uploads/TCG_D-RTM_Architecture_v1-0_Published_06172013.pdf
[2]
https://trustedcomputinggroup.org/wp-content/uploads/PC-Client-Specific-Platform-TPM-Profile-for-TPM-2p0-v1p05p_r14_pub.pdf