Message-ID: <aM0A1hceUC-RJdo8@kernel.org>
Date: Fri, 19 Sep 2025 10:05:58 +0300
From: Jarkko Sakkinen <jarkko@...nel.org>
To: "Serge E. Hallyn" <serge@...lyn.com>
Cc: linux-integrity@...r.kernel.org,
Frédéric Jouen <fjouen@...lsq.com>,
Peter Huewe <peterhuewe@....de>, Jason Gunthorpe <jgg@...pe.ca>,
James Bottomley <James.Bottomley@...senpartnership.com>,
Mimi Zohar <zohar@...ux.ibm.com>,
David Howells <dhowells@...hat.com>,
Paul Moore <paul@...l-moore.com>, James Morris <jmorris@...ei.org>,
open list <linux-kernel@...r.kernel.org>,
"open list:KEYS-TRUSTED" <keyrings@...r.kernel.org>,
"open list:SECURITY SUBSYSTEM" <linux-security-module@...r.kernel.org>
Subject: Re: [PATCH v2] tpm: use a map for tpm2_calc_ordinal_duration()
On Thu, Sep 18, 2025 at 10:49:28PM -0500, Serge E. Hallyn wrote:
> On Thu, Sep 18, 2025 at 10:30:18PM +0300, Jarkko Sakkinen wrote:
> > The current shenanigans for duration calculation introduce too much
> > complexity for a trivial problem, and further, the code is hard to
> > patch and maintain.
> >
> > Address these issues with a flat look-up table, which is easy to
> > understand and patch. If leaf-driver-specific patching is required in
> > the future, it is easy enough to make a copy of this table during
> > driver initialization and add the chip parameter back.
> >
> > 'chip->duration' is retained for TPM 1.x.
> >
> > As the first entry for this new behavior, address the TCG spec update
> > mentioned in this issue:
> >
> > https://github.com/raspberrypi/linux/issues/7054
> >
> > Therefore, for TPM_SelfTest the duration is set to 3000 ms.
> >
> > This is not categorized as a bug, given that this was introduced to
> > the spec after the feature was originally implemented.
> >
> > Cc: Frédéric Jouen <fjouen@...lsq.com>
> > Signed-off-by: Jarkko Sakkinen <jarkko@...nel.org>
>
> fwiw (which shouldn't be much) looks good to me, but two questions,
> one here and one below.
>
> First, it looks like in the existing code it is possible for a tpm2
> chip to set its own timeouts and then set the TPM_CHIP_FLAG_HAVE_TIMEOUTS
> flag to avoid using the defaults, but I don't see anything using that
> in-tree. Is it possible that there are out of tree drivers that will be
> sabotaged here? Or am I misunderstanding that completely?
Good questions; let me give a bit of context on the pre-existing art and
this change.
This complexity dates back to 2014, when I originally developed TPM2
support and the only available testing platform was an early Intel PTT
with a flaky version of TPM2 support (e.g., no localities).
Since then, we haven't had any per-leaf-driver divergence.
Further, I think this type of layout is actually a better fit if we
ever need quirks for command durations for a particular device, as we
can then migrate to "copy and patch" semantics, i.e., keep a copy of
this map in the chip structure.
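To make that concrete, the layout boils down to something like this (a
rough sketch only, with made-up struct and helper names;
TPM2_CC_SELF_TEST and TPM2_DURATION_DEFAULT are existing constants from
include/linux/tpm.h, and durations are in milliseconds):

struct tpm2_duration_entry {
	u32 ordinal;			/* TPM2 command code (TPM2_CC_*) */
	unsigned long duration_ms;	/* maximum command duration */
};

static const struct tpm2_duration_entry tpm2_duration_map[] = {
	/* Raised to 3000 ms per the TCG spec update referenced above: */
	{ TPM2_CC_SELF_TEST, 3000 },
	/* ... entries only for ordinals needing a non-default duration ... */
};

static unsigned long tpm2_ordinal_duration_ms(u32 ordinal)
{
	unsigned int i;

	for (i = 0; i < ARRAY_SIZE(tpm2_duration_map); i++)
		if (tpm2_duration_map[i].ordinal == ordinal)
			return tpm2_duration_map[i].duration_ms;

	/* Fall back to the generic default for unlisted ordinals. */
	return TPM2_DURATION_DEFAULT;
}

If a device quirk is ever needed, the "copy and patch" step would just
kmemdup() this table into the chip structure at probe time and fix up
the affected entries before look-up.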
As for out-of-tree drivers, that's the unfortunate reality of being out
of tree :-) This will definitely add some extra work when backporting
fixes, though not overwhelmingly much.
BR, Jarkko