Message-ID: <711d659f-3f57-48e4-b5b3-efbc2fe236c8@infineon.com>
Date: Wed, 17 Jan 2024 09:44:37 +0100
From: Alexander Steffen <Alexander.Steffen@...ineon.com>
To: "Daniel P. Smith" <dpsmith@...rtussolutions.com>, Jason Gunthorpe
<jgg@...pe.ca>, Jarkko Sakkinen <jarkko@...nel.org>, Lino Sanfilippo
<l.sanfilippo@...bus.com>, Sasha Levin <sashal@...nel.org>,
<linux-integrity@...r.kernel.org>, <linux-kernel@...r.kernel.org>
CC: Ross Philipson <ross.philipson@...cle.com>, Kanth Ghatraju
<kanth.ghatraju@...cle.com>, Peter Huewe <peterhuewe@....de>
Subject: Re: [PATCH] tpm: make locality handling resilient
On 15.01.2024 02:15, Daniel P. Smith wrote:
> Commit 933bfc5ad213 introduced a locality counter to control when a
> locality request was actually sent to the TPM. This counter hard-enforced
> the assumption that the TPM had no active locality at the time of driver
> initialization. In reality that may not always be the case, and because
> the commit also decremented the counter unconditionally, it created the
> conditions for an integer underflow of the counter. The underflow was
> triggered by the first pair of request/relinquish calls made in
> tpm_tis_init_core, and all subsequent request/relinquish calls would have
> the counter flipping between the underflow value and 0. As a result, all
> calls to request/relinquish appeared to succeed, but they did not. The
> end result is that the locality that was active when the driver loaded
> would always remain active, including after driver shutdown. This creates
> a significant issue when using Intel TXT with Locality 2 active at boot.
> After the GETSEC[SEXIT] instruction is called, the PCH closes access to
> the Locality 2 MMIO address space, leaving the TPM locked in Locality 2
> with no means to relinquish the locality until system reset.
>
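For reference, this is the failure mode as I understand it (a simplified
sketch, not the driver code itself; it assumes the counter is an unsigned
int, as locality_count is in struct tpm_tis_data):

  unsigned int locality_count = 0;

  /* A failed request leaves the counter at 0, but relinquish
   * still decrements unconditionally: */
  locality_count--;  /* wraps around to UINT_MAX */

  /* The next request sees a nonzero counter, so nothing is sent
   * to the TPM, yet the counter is still incremented: */
  locality_count++;  /* wraps back to 0 */

Every request/relinquish pair then just toggles the counter between
UINT_MAX and 0 while appearing to succeed.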
> The commit seeks to address this situation through three changes.
Could you split this up into multiple patches then, so that they can be
discussed separately?
> The first is to walk the localities during initialization and close any
> open localities to ensure the TPM is in the assumed state. The second is
> to put guards around the counter and the requested locality to ensure
> they remain within valid values. The last is to make the locality request
> functions consistent in their return values: they either return the
> requested locality on success or a negative error code.
>
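Just to restate the convention described above as code (a sketch, not a
hunk from this patch):

  rc = tpm_tis_request_locality(chip, 0);
  if (rc < 0)
          return rc;  /* e.g. -EINVAL or -EBUSY */
  /* rc is the granted locality, 0..TPM_MAX_LOCALITY */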
> Signed-off-by: Daniel P. Smith <dpsmith@...rtussolutions.com>
> Signed-off-by: Ross Philipson <ross.philipson@...cle.com>
> Reported-by: Kanth Ghatraju <kanth.ghatraju@...cle.com>
> Fixes: 933bfc5ad213 ("tpm, tpm_tis: Implement usage counter for locality")
> ---
> drivers/char/tpm/tpm-chip.c | 2 +-
> drivers/char/tpm/tpm_tis_core.c | 20 +++++++++++++++-----
> include/linux/tpm.h | 2 ++
> 3 files changed, 18 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
> index 42b1062e33cd..e7293f85335a 100644
> --- a/drivers/char/tpm/tpm-chip.c
> +++ b/drivers/char/tpm/tpm-chip.c
> @@ -49,7 +49,7 @@ static int tpm_request_locality(struct tpm_chip *chip)
> return rc;
>
> chip->locality = rc;
> - return 0;
> + return chip->locality;
> }
>
> static void tpm_relinquish_locality(struct tpm_chip *chip)
> diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
> index 1b350412d8a6..c8b9b0b199dc 100644
> --- a/drivers/char/tpm/tpm_tis_core.c
> +++ b/drivers/char/tpm/tpm_tis_core.c
> @@ -180,7 +180,8 @@ static int tpm_tis_relinquish_locality(struct tpm_chip *chip, int l)
> struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
>
> mutex_lock(&priv->locality_count_mutex);
> - priv->locality_count--;
> + if (priv->locality_count > 0)
> + priv->locality_count--;
> if (priv->locality_count == 0)
> __tpm_tis_relinquish_locality(priv, l);
> mutex_unlock(&priv->locality_count_mutex);
> @@ -226,18 +227,21 @@ static int __tpm_tis_request_locality(struct tpm_chip *chip, int l)
> tpm_msleep(TPM_TIMEOUT);
> } while (time_before(jiffies, stop));
> }
> - return -1;
> + return -EBUSY;
> }
>
> static int tpm_tis_request_locality(struct tpm_chip *chip, int l)
> {
> struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
> - int ret = 0;
> + int ret = -EIO;
> +
> + if (l > TPM_MAX_LOCALITY)
> + return -EINVAL;
>
> mutex_lock(&priv->locality_count_mutex);
> if (priv->locality_count == 0)
> ret = __tpm_tis_request_locality(chip, l);
> - if (!ret)
> + if (ret >= 0)
> priv->locality_count++;
> mutex_unlock(&priv->locality_count_mutex);
> return ret;
This line seems to be the most important change, as it fixes the
locality_count handling for localities != 0. It might already be
sufficient to fix your original problem. I'm not sure all the other
changes are really necessary.
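To spell out why: __tpm_tis_request_locality() returns the locality number
on success, so with the old check the counter was only ever incremented
for locality 0 (illustrative snippet, assuming the request succeeds):

  ret = __tpm_tis_request_locality(chip, 2);  /* returns 2 on success */
  if (!ret)  /* false for any locality != 0 */
          priv->locality_count++;  /* never reached */

The unconditional decrement in tpm_tis_relinquish_locality() then
underflows the counter on the matching relinquish call.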
> @@ -1108,7 +1112,7 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
> u32 intmask;
> u32 clkrun_val;
> u8 rid;
> - int rc, probe;
> + int rc, probe, locality;
> struct tpm_chip *chip;
>
> chip = tpmm_chip_alloc(dev, &tpm_tis);
> @@ -1169,6 +1173,12 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
> goto out_err;
> }
>
> + /* It is not safe to assume localities are closed on startup */
> + for (locality = 0; locality <= TPM_MAX_LOCALITY; locality++) {
> + if (check_locality(chip, locality))
> + tpm_tis_relinquish_locality(chip, locality);
> + }
> +
> /* Take control of the TPM's interrupt hardware and shut it off */
> rc = tpm_tis_read32(priv, TPM_INT_ENABLE(priv->locality), &intmask);
> if (rc < 0)
> diff --git a/include/linux/tpm.h b/include/linux/tpm.h
> index 4ee9d13749ad..f2651281f02e 100644
> --- a/include/linux/tpm.h
> +++ b/include/linux/tpm.h
> @@ -116,6 +116,8 @@ struct tpm_chip_seqops {
> const struct seq_operations *seqops;
> };
>
> +#define TPM_MAX_LOCALITY 4
> +
> struct tpm_chip {
> struct device dev;
> struct device devs;
> --
> 2.30.2
>