Message-Id: <CZCQZ5FTCCB5.GIN1NU7G5S0@suppilovahvero>
Date: Fri, 23 Feb 2024 22:40:14 +0200
From: "Jarkko Sakkinen" <jarkko@...nel.org>
To: "Daniel P. Smith" <dpsmith@...rtussolutions.com>, "James Bottomley"
<James.Bottomley@...senPartnership.com>, "Lino Sanfilippo"
<l.sanfilippo@...bus.com>, "Alexander Steffen"
<Alexander.Steffen@...ineon.com>, "Jason Gunthorpe" <jgg@...pe.ca>, "Sasha
Levin" <sashal@...nel.org>, <linux-integrity@...r.kernel.org>,
<linux-kernel@...r.kernel.org>
Cc: "Ross Philipson" <ross.philipson@...cle.com>, "Kanth Ghatraju"
<kanth.ghatraju@...cle.com>, "Peter Huewe" <peterhuewe@....de>
Subject: Re: [PATCH 1/3] tpm: protect against locality counter underflow
On Fri Feb 23, 2024 at 3:57 AM EET, Daniel P. Smith wrote:
> On 2/21/24 14:43, Jarkko Sakkinen wrote:
> > On Wed Feb 21, 2024 at 12:37 PM UTC, James Bottomley wrote:
> >> On Tue, 2024-02-20 at 22:31 +0000, Jarkko Sakkinen wrote:
> >>>
> >>> 2. Because localities are not too useful these days given TPM2's
> >>> policy mechanism
> >>
> >> Localitites are useful to the TPM2 policy mechanism. When we get key
> >> policy in the kernel it will give us a way to create TPM wrapped keys
> >> that can only be unwrapped in the kernel if we run the kernel in a
> >> different locality from userspace (I already have demo patches doing
> >> this).
> >
> > Let's keep this discussion in scope, please.
> >
> > Removing useless code around registers that might have some actually
> > useful use later is not the wrong thing to do. It is better to look at
> > things from a clean slate when the time comes.
> >
> >>> I cannot recall off the top of my head whether
> >>> you can have two localities open at the same time.
> >>
> >> I think there's a misunderstanding about what localities are: they're
> >> effectively an additional platform supplied tag to a command. Each
> >> command can therefore have one and only one locality. The TPM doesn't
> >
> > Actually this was not unclear at all. I even read the chapters from
> > Ariel Segall's book yesterday as a refresher.
> >
> > I was merely asking how the hardware reacts if TPM_ACCESS_X is not
> > properly cleared and you set TPM_ACCESS_Y where Y < X, as the bug
> > report is pretty open ended and not very clear about the steps leading
> > to the unwanted results.
> >
> > With a quick check of [1] I could not spot the conflict reaction, but
> > it is probably there.
>
> The expected behavior is explained in the Informative Comment of section
> 6.5.2.4 of the Client PTP spec[1]:
>
> "The purpose of this register is to allow the processes operating at the
> various localities to share the TPM. The basic notion is that any
> locality can request access to the TPM by setting the
> TPM_ACCESS_x.requestUse field using its assigned TPM_ACCESS_x register
> address. If there is no currently set locality, the TPM sets current
> locality to the requesting one and allows operations only from that
> locality. If the TPM is currently at another locality, the TPM keeps the
> request pending until the currently executing locality frees the TPM.
Right.
I'd think it would make sense to document the basic dance like this as
part of kdoc for request_locality:
* Setting TPM_ACCESS_x.requestUse:
* 1. No locality reserved => set locality.
* 2. Locality reserved => set pending.
I.e. easy reminder with enough granularity.
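Roughly, as a sketch of the request side (purely illustrative, not the
actual tpm_tis_core code; tpm_read8()/tpm_write8() are made-up MMIO
accessors and TPM_ACCESS() follows the usual 4 KiB-per-locality TIS
layout):

#include <stdbool.h>
#include <stdint.h>

/* Per the TIS/PTP spec, locality l's registers start at l * 0x1000. */
#define TPM_ACCESS(l)			(0x0000 + ((l) << 12))
#define TPM_ACCESS_VALID		0x80	/* tpmRegValidSts */
#define TPM_ACCESS_ACTIVE_LOCALITY	0x20	/* activeLocality */
#define TPM_ACCESS_REQUEST_USE		0x02	/* requestUse */

/* Placeholder accessors, assumed to touch the TIS MMIO window. */
uint8_t tpm_read8(uint32_t reg);
void tpm_write8(uint32_t reg, uint8_t val);

static bool locality_active(int l)
{
	uint8_t access = tpm_read8(TPM_ACCESS(l));

	return (access & (TPM_ACCESS_VALID | TPM_ACCESS_ACTIVE_LOCALITY)) ==
	       (TPM_ACCESS_VALID | TPM_ACCESS_ACTIVE_LOCALITY);
}

static bool request_locality_sketch(int l)
{
	if (locality_active(l))
		return true;

	/* 1. No locality reserved => the TPM grants this one immediately.
	 * 2. Another locality reserved => the request stays pending until
	 *    the active locality relinquishes.
	 */
	tpm_write8(TPM_ACCESS(l), TPM_ACCESS_REQUEST_USE);

	/* A real driver would poll this with a timeout, not just once. */
	return locality_active(l);
}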
> Software relinquishes the TPM’s locality by writing a 1 to the
> TPM_ACCESS_x.activeLocality field. Upon release, the TPM honors the
> highest locality request pending. If there is no pending request, the
> TPM enters the “free” state."
And this for relinquish_locality:
* Setting TPM_ACCESS_x.activeLocality:
* 1. No locality pending => free.
* 2. Localities pending => reserve for highest.
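And the matching relinquish side, reusing the made-up helpers from the
sketch above:

static void relinquish_locality_sketch(int l)
{
	/* Writing 1 to TPM_ACCESS_x.activeLocality releases the reservation;
	 * the TPM then grants the highest pending locality, or goes free.
	 */
	tpm_write8(TPM_ACCESS(l), TPM_ACCESS_ACTIVE_LOCALITY);
}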
> >> submission). I think the locality request/relinquish was modelled
> >> after some other HW, but I don't know what.
> >
> > My wild guess: the first implementation was made when TPMs became
> > available and there was no analytical thinking other than getting
> > something that runs :-)
>
> Actually, no, that is not how it was done. IIRC, localities were designed
> in conjunction with D-RTM when Intel and MS started the LeGrande effort
> back in 2000. It was then generalized for the TPM 1.1b specification. My
OK, thanks for this bit of information! I did not know this.
> first introduction to LeGrande/TXT wasn't until 2005 as part of an early
> access program. So most of my historical understanding is from
> discussions I luckily got to have with one of the architects and a few
> of the original TCG committee members.
Thanks a lot for sharing this.
>
> [1]
> https://trustedcomputinggroup.org/wp-content/uploads/PC-Client-Specific-Platform-TPM-Profile-for-TPM-2p0-v1p05p_r14_pub.pdf
>
> v/r,
> dps
BR, Jarkko