Message-ID: <CAKHBV24e4YAB8J7MP=vuVarn5cVSWrB-NsjO-obH5CZECk0xNg@mail.gmail.com>
Date: Wed, 2 Aug 2023 19:19:12 +0800
From: Michael Shavit <mshavit@...gle.com>
To: Jason Gunthorpe <jgg@...dia.com>
Cc: iommu@...ts.linux.dev, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, robin.murphy@....com,
will@...nel.org, jean-philippe@...aro.org, nicolinc@...dia.com
Subject: Re: [PATCH v3 6/8] iommu/arm-smmu-v3: Move CD table to arm_smmu_master
On Wed, Aug 2, 2023 at 7:53 AM Jason Gunthorpe <jgg@...dia.com> wrote:
>
> On Wed, Aug 02, 2023 at 02:35:23AM +0800, Michael Shavit wrote:
> > @@ -2465,6 +2440,22 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
> > if (smmu_domain->stage != ARM_SMMU_DOMAIN_BYPASS)
> > master->ats_enabled = arm_smmu_ats_supported(master);
> >
> > + if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
> > + if (!master->cd_table.cdtab) {
> > + ret = arm_smmu_alloc_cd_tables(master);
> > + if (ret) {
>
> Again, I didn't look very closely at your locking, but what lock is
> being held to protect the read of master->cd_table.cdtab ?
The cd_table is only written to (via write_ctx_desc) when something
attaches or detaches (SVA is a bit of a special case, but it handles its
locking internally and blocks all non-SVA attach/detach calls while
enabled). The cd_table itself is allocated on first attach and freed on
release.
Doesn't the iommu framework already guarantee, via the group lock, that
attach_dev (and release_device) can't run concurrently for a given
master? I can add an internal lock if relying on the group lock is
not OK.
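For concreteness, this is roughly what that internal lock could look
like. Sketch only, not tested against the series; the cd_table_lock
field and the arm_smmu_prepare_cd_tables() helper name are made up for
illustration:

	/* Hypothetical per-master lock serializing CD table allocation,
	 * instead of relying on the iommu group lock.
	 */
	struct arm_smmu_master {
		/* ... existing fields ... */
		struct mutex	cd_table_lock;	/* protects cd_table.cdtab */
	};

	static int arm_smmu_prepare_cd_tables(struct arm_smmu_master *master)
	{
		int ret = 0;

		mutex_lock(&master->cd_table_lock);
		if (!master->cd_table.cdtab)
			ret = arm_smmu_alloc_cd_tables(master);
		mutex_unlock(&master->cd_table_lock);

		return ret;
	}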
> > + master->domain = NULL;
> > + goto out_unlock;
>
> This is only the domain lock:
> mutex_unlock(&smmu_domain->init_mutex);
>
> Which is no longer sufficient.
Hmmm, yeah, that unlock is misleading here. Let me move it further up so
that the lock more clearly surrounds only the section it protects.
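Roughly the shape I have in mind (sketch only, against this hunk; the CD
table error path then returns directly instead of jumping to a label
that only drops init_mutex):

	mutex_lock(&smmu_domain->init_mutex);
	/* ... the existing domain initialisation that init_mutex
	 * actually guards ...
	 */
	mutex_unlock(&smmu_domain->init_mutex);
	if (ret)
		return ret;

	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1 &&
	    !master->cd_table.cdtab) {
		ret = arm_smmu_alloc_cd_tables(master);
		if (ret) {
			master->domain = NULL;
			return ret;
		}
	}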