Message-ID: <ZMv2WJX6b2UBBelt@nvidia.com>
Date: Thu, 3 Aug 2023 15:47:52 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: Michael Shavit <mshavit@...gle.com>
Cc: iommu@...ts.linux.dev, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, will@...nel.org,
robin.murphy@....com, nicolinc@...dia.com, jean-philippe@...aro.org
Subject: Re: [PATCH v4 6/8] iommu/arm-smmu-v3: Move CD table to
arm_smmu_master
On Fri, Aug 04, 2023 at 01:56:12AM +0800, Michael Shavit wrote:
> This patch introduces a subtle bug.
>
> Previously, the arm-smmu-v3 driver could get away with skipping the
> clearing of the CD entry on detach, since the table belonged to the
> domain and wouldn't be re-written on re-attach. When we switch to the
> master-owned table model, the CDTE in the master's table can be
> written with CDs from different domains. When the CD gets switched to
> a new one without first being cleared, arm_smmu_write_ctx_desc will
> misinterpret the call as an ASID update instead of an entirely new
> CD.

I'm not surprised; I think arm_smmu_write_ctx_desc is a little too
clever for its own good..
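
From memory, the current logic picks the "ASID update" case based on
nothing more than the valid bit of whatever is already sitting in the
table, roughly like this (paraphrasing, not the exact driver code):

        val = le64_to_cpu(cdptr[0]);
        cd_live = !!(val & CTXDESC_CD_0_V);

        if (!cd) {                      /* remove */
                val = 0;
        } else if (cd == &quiet_cd) {   /* quiesce */
                val |= CTXDESC_CD_0_TCR_EPD0;
        } else if (cd_live) {           /* assumed to be an ASID update! */
                val &= ~CTXDESC_CD_0_ASID;
                val |= FIELD_PREP(CTXDESC_CD_0_ASID, cd->asid);
        } else {                        /* install a new CD */
                ...
        }

A stale-but-valid entry left behind by the previous domain falls into
the cd_live case, which is exactly the mis-interpretation described
above.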

I would have written it by computing the full target CD entry,
extracted directly from the domain.

Something like:

struct cd_entry {
        __le64 val[4];
};

/* cd is &domain->cd, &quiet_cd, or NULL to clear the entry */
static void arm_smmu_get_domain_cd_value(struct arm_smmu_ctx_desc *cd,
                                         struct arm_smmu_master *master,
                                         bool quiet, struct cd_entry *entry)
{
        struct arm_smmu_ctx_desc_cfg *cd_table = &master->cd_table;
        u64 val0;

        /* No CD (detached) means the entry gets cleared */
        if (!cd) {
                memset(entry, 0, sizeof(*entry));
                return;
        }

        val0 = cd->tcr |
#ifdef __BIG_ENDIAN
               CTXDESC_CD_0_ENDI |
#endif
               CTXDESC_CD_0_R | CTXDESC_CD_0_A |
               (cd->mm ? 0 : CTXDESC_CD_0_ASET) | CTXDESC_CD_0_AA64 |
               FIELD_PREP(CTXDESC_CD_0_ASID, cd->asid) | CTXDESC_CD_0_V;
        if (cd_table->stall_enabled)
                val0 |= CTXDESC_CD_0_S;
        if (quiet)
                val0 |= CTXDESC_CD_0_TCR_EPD0;

        entry->val[0] = cpu_to_le64(val0);
        entry->val[1] = cpu_to_le64(cd->ttbr & CTXDESC_CD_1_TTB0_MASK);
        entry->val[2] = 0;
        entry->val[3] = cpu_to_le64(cd->mair);
}

int arm_smmu_write_ctx_desc(struct arm_smmu_master *master, int ssid,
                            struct arm_smmu_ctx_desc *cd)
{
        struct arm_smmu_ctx_desc_cfg *cd_table = &master->cd_table;
        struct cd_entry *cur_cd;
        struct cd_entry new_cd;

        if (WARN_ON(ssid >= (1 << cd_table->max_cds_bits)))
                return -E2BIG;

        /* cur_cd points at the live entry in the master's table */
        cur_cd = (struct cd_entry *)arm_smmu_get_cd_ptr(master, ssid);
        if (!cur_cd)
                return -ENOMEM;

        /* Compute the full target entry; cd may be NULL or &quiet_cd */
        arm_smmu_get_domain_cd_value(cd, master, cd == &quiet_cd, &new_cd);

        /*
         * The SMMU accesses 64-bit values atomically. See IHI0070Ca 3.21.3
         * "Configuration structures and configuration invalidation completion"
         *
         * The size of single-copy atomic reads made by the SMMU is
         * IMPLEMENTATION DEFINED but must be at least 64 bits. Any single
         * field within an aligned 64-bit span of a structure can be altered
         * without first making the structure invalid.
         */

        /*
         * Changing only dword 0 is common enough that we give it a fast path.
         */
        if (cur_cd->val[1] != new_cd.val[1] ||
            cur_cd->val[2] != new_cd.val[2] ||
            cur_cd->val[3] != new_cd.val[3]) {
                /* Make it invalid so we can update all 4 values */
                if (le64_to_cpu(cur_cd->val[0]) & CTXDESC_CD_0_V) {
                        if (le64_to_cpu(new_cd.val[0]) & CTXDESC_CD_0_V)
                                WRITE_ONCE(cur_cd->val[0], 0);
                        else
                                WRITE_ONCE(cur_cd->val[0], new_cd.val[0]);
                        arm_smmu_sync_cd(master, ssid, true);
                }

                cur_cd->val[1] = new_cd.val[1];
                cur_cd->val[2] = new_cd.val[2];
                cur_cd->val[3] = new_cd.val[3];

                /*
                 * The CD entry may be live, and the SMMU might read dwords of
                 * this CD in any order. Ensure that it observes valid values
                 * before reading V=1.
                 */
                if (le64_to_cpu(new_cd.val[0]) & CTXDESC_CD_0_V)
                        arm_smmu_sync_cd(master, ssid, true);
        }

        if (cur_cd->val[0] == new_cd.val[0])
                return 0;

        WRITE_ONCE(cur_cd->val[0], new_cd.val[0]);
        arm_smmu_sync_cd(master, ssid, true);
        return 0;
}
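
With something like the above, callers just hand over whatever CD
should now be in the slot, NULL to clear it, e.g. (hypothetical call
sites, not from the patch):

        /* attach */
        arm_smmu_write_ctx_desc(master, ssid, &smmu_domain->cd);

        /* detach: no separate "clear before reuse" step to forget */
        arm_smmu_write_ctx_desc(master, ssid, NULL);

and the update logic works out hitless vs. break-before-make
sequencing purely by diffing the current entry against the target.
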
Jason