Message-Id: <CYJ02TX197BU.2MENZV8N4N87O@seitikki>
Date: Fri, 19 Jan 2024 21:28:14 +0000
From: "Jarkko Sakkinen" <jarkko@...nel.org>
To: "Daniel P. Smith" <dpsmith@...rtussolutions.com>, "Jason Gunthorpe"
<jgg@...pe.ca>, "Lino Sanfilippo" <l.sanfilippo@...bus.com>, "Sasha Levin"
<sashal@...nel.org>, <linux-integrity@...r.kernel.org>,
<linux-kernel@...r.kernel.org>
Cc: "Ross Philipson" <ross.philipson@...cle.com>, "Kanth Ghatraju"
<kanth.ghatraju@...cle.com>, "Peter Huewe" <peterhuewe@....de>
Subject: Re: [PATCH] tpm: make locality handling resilient
On Mon Jan 15, 2024 at 1:15 AM UTC, Daniel P. Smith wrote:
> Commit 933bfc5ad213 introduced a locality counter to control when a
> locality request is actually sent to the TPM. The counter hard-codes
> the assumption that the TPM has no active locality at driver
> initialization. That is not always the case, and because the commit
> decrements the counter unconditionally, it creates the conditions for
> an integer underflow of the counter. The underflow is triggered by the
> first request/relinquish pair made in tpm_tis_init_core(), after which
> the counter flips between the underflow value and 0 on every
> subsequent request/relinquish call. As a result, every
> request/relinquish call appears to succeed even though it does not,
> and the locality that was active when the driver loaded stays active,
> including after driver shutdown. This is a significant problem when
> using Intel TXT with Locality 2 active at boot: once the GETSEC[SEXIT]
> instruction has been executed, the PCH closes access to the Locality 2
> MMIO address space, leaving the TPM locked in Locality 2 with no way
> to relinquish the locality until system reset.
>
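For context, the wrap-around is easy to demonstrate outside the driver
with a minimal userspace sketch (assuming the counter is unsigned, as
locality_count is in struct tpm_tis_data):

	#include <stdio.h>

	int main(void)
	{
		unsigned int locality_count = 0;  /* state at driver init */

		/* relinquish without a prior request */
		locality_count--;
		printf("%u\n", locality_count);   /* 4294967295 */

		/* the next request "succeeds" and flips back */
		locality_count++;
		printf("%u\n", locality_count);   /* 0, TPM never released */

		return 0;
	}
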
> This commit addresses the situation with three changes. First, walk
> the localities during initialization and close any open locality, so
> that the TPM starts out in the assumed state. Second, put guards
> around the counter and the requested locality to keep them within
> valid ranges. Third, make the locality request functions consistent in
> their return values: on success they return the requested locality,
> otherwise a negative error code.
>
> Signed-off-by: Daniel P. Smith <dpsmith@...rtussolutions.com>
> Signed-off-by: Ross Philipson <ross.philipson@...cle.com>
> Reported-by: Kanth Ghatraju <kanth.ghatraju@...cle.com>
> Fixes: 933bfc5ad213 ("tpm, tpm_tis: Implement usage counter for locality")
> ---
> drivers/char/tpm/tpm-chip.c | 2 +-
> drivers/char/tpm/tpm_tis_core.c | 20 +++++++++++++++-----
> include/linux/tpm.h | 2 ++
> 3 files changed, 18 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
> index 42b1062e33cd..e7293f85335a 100644
> --- a/drivers/char/tpm/tpm-chip.c
> +++ b/drivers/char/tpm/tpm-chip.c
> @@ -49,7 +49,7 @@ static int tpm_request_locality(struct tpm_chip *chip)
> return rc;
>
> chip->locality = rc;
> - return 0;
> + return chip->locality;
> }
>
> static void tpm_relinquish_locality(struct tpm_chip *chip)
> diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
> index 1b350412d8a6..c8b9b0b199dc 100644
> --- a/drivers/char/tpm/tpm_tis_core.c
> +++ b/drivers/char/tpm/tpm_tis_core.c
> @@ -180,7 +180,8 @@ static int tpm_tis_relinquish_locality(struct tpm_chip *chip, int l)
> struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
>
> mutex_lock(&priv->locality_count_mutex);
> - priv->locality_count--;
> + if (priv->locality_count > 0)
> + priv->locality_count--;
> if (priv->locality_count == 0)
> __tpm_tis_relinquish_locality(priv, l);
> mutex_unlock(&priv->locality_count_mutex);
> @@ -226,18 +227,21 @@ static int __tpm_tis_request_locality(struct tpm_chip *chip, int l)
> tpm_msleep(TPM_TIMEOUT);
> } while (time_before(jiffies, stop));
> }
> - return -1;
> + return -EBUSY;
> }
>
> static int tpm_tis_request_locality(struct tpm_chip *chip, int l)
> {
> struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
> - int ret = 0;
> + int ret = -EIO;
> +
> + if (l > TPM_MAX_LOCALITY)
> + return -EINVAL;
>
> mutex_lock(&priv->locality_count_mutex);
> if (priv->locality_count == 0)
> ret = __tpm_tis_request_locality(chip, l);
> - if (!ret)
> + if (ret >= 0)
> priv->locality_count++;
> mutex_unlock(&priv->locality_count_mutex);
> return ret;
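With this the function follows the usual "non-negative value on
success, negative errno on failure" convention. For reference, a
(hypothetical) caller would then do something like:

	int rc;

	rc = tpm_tis_request_locality(chip, 0);
	if (rc < 0)
		return rc;  /* e.g. -EINVAL or -EBUSY */

	/* rc is the granted locality, 0..TPM_MAX_LOCALITY */
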
> @@ -1108,7 +1112,7 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
> u32 intmask;
> u32 clkrun_val;
> u8 rid;
> - int rc, probe;
> + int rc, probe, locality;
s/locality/i/
We generally don't use long names for loop indices.
> struct tpm_chip *chip;
>
> chip = tpmm_chip_alloc(dev, &tpm_tis);
> @@ -1169,6 +1173,12 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
> goto out_err;
> }
>
> + /* It is not safe to assume localities are closed on startup */
This is a somewhat useless comment.
E.g. this would be way more useful:
/*
* Intel TXT starts with locality 2 active. Therefore,
* localities cannot be assumed to be closed on startup.
*/
> + for (locality = 0; locality <= TPM_MAX_LOCALITY; locality++) {
> + if (check_locality(chip, locality))
> + tpm_tis_relinquish_locality(chip, locality);
> + }
> +
> /* Take control of the TPM's interrupt hardware and shut it off */
> rc = tpm_tis_read32(priv, TPM_INT_ENABLE(priv->locality), &intmask);
> if (rc < 0)
> diff --git a/include/linux/tpm.h b/include/linux/tpm.h
> index 4ee9d13749ad..f2651281f02e 100644
> --- a/include/linux/tpm.h
> +++ b/include/linux/tpm.h
> @@ -116,6 +116,8 @@ struct tpm_chip_seqops {
> const struct seq_operations *seqops;
> };
>
> +#define TPM_MAX_LOCALITY 4
Not documented.
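E.g. something along these lines would do (the locality range comes
from the TCG PC Client Platform TPM Profile specification):

/*
 * The TCG PC Client Platform TPM Profile (PTP) specification defines
 * five localities, numbered 0 through 4.
 */
#define TPM_MAX_LOCALITY	4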
> +
> struct tpm_chip {
> struct device dev;
> struct device devs;
Thanks for the fix.
BR, Jarkko