Message-ID: <20190812224242.GA3865@tazik.netherland>
Date: Tue, 13 Aug 2019 01:42:42 +0300
From: Vanya Lazeev <ivan.lazeev@...il.com>
To: Jason Gunthorpe <jgg@...pe.ca>
Cc: Jarkko Sakkinen <jarkko.sakkinen@...ux.intel.com>,
Peter Huewe <peterhuewe@....de>, Arnd Bergmann <arnd@...db.de>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
linux-integrity@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] Fix fTPM on AMD Zen+ CPUs
On Mon, Aug 12, 2019 at 10:10:03AM -0300, Jason Gunthorpe wrote:
> On Sun, Aug 11, 2019 at 09:52:18PM +0300, ivan.lazeev@...il.com wrote:
> > From: Vanya Lazeev <ivan.lazeev@...il.com>
> >
> > The patch is an attempt to make fTPM on AMD Zen CPUs work.
> > Bug link: https://bugzilla.kernel.org/show_bug.cgi?id=195657
> >
> > The problem seems to be that tpm_crb driver doesn't expect tpm command
> > and response memory regions to belong to different ACPI resources.
> >
> > Tested on Asrock ITX motherboard with Ryzen 2600X CPU.
> > However, I don't have any other hardware to test the changes on and no
> > expertise to be sure that other TPMs won't break as a result.
> > Hopefully, the patch will be useful.
> >
> > Changes from v1:
> > - use list_for_each_safe
> >
> > Signed-off-by: Vanya Lazeev <ivan.lazeev@...il.com>
> > ---
> > drivers/char/tpm/tpm_crb.c | 146 ++++++++++++++++++++++++++++---------
> > 1 file changed, 110 insertions(+), 36 deletions(-)
> >
> > diff --git a/drivers/char/tpm/tpm_crb.c b/drivers/char/tpm/tpm_crb.c
> > index e59f1f91d..b0e797464 100644
> > --- a/drivers/char/tpm/tpm_crb.c
> > +++ b/drivers/char/tpm/tpm_crb.c
> > @@ -91,7 +91,6 @@ enum crb_status {
> > struct crb_priv {
> > u32 sm;
> > const char *hid;
> > - void __iomem *iobase;
> > struct crb_regs_head __iomem *regs_h;
> > struct crb_regs_tail __iomem *regs_t;
> > u8 __iomem *cmd;
> > @@ -108,6 +107,13 @@ struct tpm2_crb_smc {
> > u32 smc_func_id;
> > };
> >
> > +struct crb_resource {
> > + struct resource io_res;
> > + void __iomem *iobase;
> > +
> > + struct list_head link;
> > +};
> > +
> > static bool crb_wait_for_reg_32(u32 __iomem *reg, u32 mask, u32 value,
> > unsigned long timeout)
> > {
> > @@ -432,23 +438,69 @@ static const struct tpm_class_ops tpm_crb = {
> > .req_complete_val = CRB_DRV_STS_COMPLETE,
> > };
> >
> > +static void crb_free_resource_list(struct list_head *resources)
> > +{
> > + struct list_head *position, *tmp;
> > +
> > + list_for_each_safe(position, tmp, resources)
> > + kfree(list_entry(position, struct crb_resource, link));
> > +}
> > +
> > +/**
> > + * Checks if resource @io_res contains range with the specified @start and @size
> > + * completely or, when @strict is false, at least its beginning.
> > + * Non-strict match is needed to work around broken BIOSes that return
> > + * inconsistent values from ACPI regions vs registers.
> > + */
> > +static inline bool crb_resource_contains(const struct resource *io_res,
> > + u64 start, u32 size, bool strict)
> > +{
> > + return start >= io_res->start &&
> > + (start + size - 1 <= io_res->end ||
> > + (!strict && start <= io_res->end));
> > +}
> > +
> > +static struct crb_resource *crb_containing_resource(
> > + const struct list_head *resources,
> > + u64 start, u32 size, bool strict)
> > +{
> > + struct list_head *pos;
> > +
> > + list_for_each(pos, resources) {
> > + struct crb_resource *cres;
> > +
> > + cres = list_entry(pos, struct crb_resource, link);
> > + if (crb_resource_contains(&cres->io_res, start, size, strict))
> > + return cres;
> > + }
> > +
> > + return NULL;
> > +}
>
> This flow seems very strange, why isn't this part of crb_map_res?
>
fTPM on Zen+ not only needs multiple mappings, it can also report range
sizes that are inconsistent with the ACPI values (as it does for me and
for mikajhe from the bug thread), so the result of crb_containing_resource
is also used to fix up those inconsistencies via crb_fixup_cmd_size.
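
For reference, the clamp done by crb_fixup_cmd_size is roughly the
following (paraphrased, so treat it as a sketch rather than a verbatim
copy of the driver): the size reported by the registers is kept only if
the containing ACPI resource covers it completely, otherwise it is cut
down to what that resource can hold:

	static u32 crb_fixup_cmd_size(struct device *dev, struct resource *io_res,
				      u64 start, u64 size)
	{
		/* The ACPI resource covers the whole [start, start + size) range. */
		if (io_res->end - start + 1 >= size)
			return size;

		dev_err(dev, FW_BUG "ACPI region does not cover the entire command/response buffer\n");

		/* Trust the ACPI resource over the registers and clamp the size. */
		return io_res->end - start + 1;
	}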
> > static int crb_check_resource(struct acpi_resource *ares, void *data)
> > {
> > - struct resource *io_res = data;
> > + struct list_head *list = data;
> > + struct crb_resource *cres;
> > struct resource_win win;
> > struct resource *res = &(win.res);
> >
> > if (acpi_dev_resource_memory(ares, res) ||
> > acpi_dev_resource_address_space(ares, &win)) {
> > - *io_res = *res;
> > - io_res->name = NULL;
> > + cres = kzalloc(sizeof(*cres), GFP_KERNEL);
> > + if (!cres)
> > + return -ENOMEM;
> > +
> > + cres->io_res = *res;
> > + cres->io_res.name = NULL;
> > +
> > + list_add_tail(&cres->link, list);
>
> And why is this allocating memory inside the acpi table walker? It
> seems to me like the memory should be allocated once the mapping is
> made.
>
Yes, this looks bad. Letting the walker build the list and then using
it afterwards is probably a better idea.
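
One way to avoid the allocation inside the walker (a sketch only, and
assuming the acpi_device variable is called "device" as in crb_acpi_add)
would be to let the ACPI core own the list via acpi_dev_get_resources()
and do any allocation or mapping only after the walk:

	LIST_HEAD(acpi_resources);
	struct resource_entry *entry;
	int ret;

	/* The ACPI core allocates and fills the resource_entry list. */
	ret = acpi_dev_get_resources(device, &acpi_resources, NULL, NULL);
	if (ret < 0)
		return ret;

	resource_list_for_each_entry(entry, &acpi_resources) {
		/* Pick out the memory regions relevant to CRB here and
		 * allocate/map only for those, instead of kzalloc()ing
		 * inside the walker callback. */
	}

	acpi_dev_free_resource_list(&acpi_resources);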
> Maybe all the mappings should be created from the ACPI ranges right
> away?
>
I don't know whether it's a good idea to just map them all instead of
mapping only the relevant ones. Maybe it is safe; here I need advice
from a more knowledgeable person.
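
If mapping everything up front does turn out to be safe, it could look
roughly like this (untested sketch, reusing the crb_resource list from
this patch and assuming the list head is called "resources"):

	struct crb_resource *cres;

	/* Map every region found by the ACPI walk before using the regs. */
	list_for_each_entry(cres, &resources, link) {
		cres->iobase = devm_ioremap_resource(dev, &cres->io_res);
		if (IS_ERR(cres->iobase))
			return PTR_ERR(cres->iobase);
	}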
> > @@ -460,10 +512,16 @@ static void __iomem *crb_map_res(struct device *dev, struct crb_priv *priv,
> > if (start != new_res.start)
> > return (void __iomem *) ERR_PTR(-EINVAL);
> >
> > - if (!resource_contains(io_res, &new_res))
> > + if (!cres)
> > return devm_ioremap_resource(dev, &new_res);
> >
> > - return priv->iobase + (new_res.start - io_res->start);
> > + if (!cres->iobase) {
> > + cres->iobase = devm_ioremap_resource(dev, &cres->io_res);
> > + if (IS_ERR(cres->iobase))
> > + return cres->iobase;
> > + }
>
> It sounds like the real bug is that the crb_map_res only considers a
> single active mapping, while these AMD chips need more than one?
>
Yes, this seems to be the issue.