Message-ID: <ae271e08-f390-4ce7-914c-63668a46bc4b@intel.com>
Date: Wed, 3 Jan 2024 09:33:48 +0800
From: Yi Liu <yi.l.liu@...el.com>
To: Jason Gunthorpe <jgg@...dia.com>
CC: <joro@...tes.org>, <alex.williamson@...hat.com>, <kevin.tian@...el.com>,
<robin.murphy@....com>, <baolu.lu@...ux.intel.com>, <cohuck@...hat.com>,
<eric.auger@...hat.com>, <nicolinc@...dia.com>, <kvm@...r.kernel.org>,
<mjrosato@...ux.ibm.com>, <chao.p.peng@...ux.intel.com>,
<yi.y.sun@...ux.intel.com>, <peterx@...hat.com>, <jasowang@...hat.com>,
<shameerali.kolothum.thodi@...wei.com>, <lulu@...hat.com>,
<suravee.suthikulpanit@....com>, <iommu@...ts.linux.dev>,
<linux-kernel@...r.kernel.org>, <linux-kselftest@...r.kernel.org>,
<zhenzhong.duan@...el.com>, <joao.m.martins@...cle.com>,
<xin.zeng@...el.com>, <yan.y.zhao@...el.com>, <j.granados@...sung.com>
Subject: Re: [PATCH v10 10/10] iommu/vt-d: Add iotlb flush for nested domain
On 2024/1/3 02:44, Jason Gunthorpe wrote:
> On Tue, Jan 02, 2024 at 06:38:34AM -0800, Yi Liu wrote:
>
>> +static void intel_nested_flush_cache(struct dmar_domain *domain, u64 addr,
>> +				     unsigned long npages, bool ih, u32 *error)
>> +{
>> +	struct iommu_domain_info *info;
>> +	unsigned long i;
>> +	unsigned mask;
>> +	u32 fault;
>> +
>> +	xa_for_each(&domain->iommu_array, i, info)
>> +		qi_flush_piotlb(info->iommu,
>> +				domain_id_iommu(domain, info->iommu),
>> +				IOMMU_NO_PASID, addr, npages, ih, NULL);
>
> This locking on the xarray is messed up throughout the driver. There
> could be a concurrent detach at this point which will free info and
> UAF this.
Hmmm, xa_for_each() takes and releases the RCU read lock internally, and
according to domain_detach_iommu(), info is freed only after xa_erase(). For
an existing info stored in the xarray, xa_erase() should not return until the
RCU read lock has been released, should it? Any ideas, @Baolu?
void domain_detach_iommu(struct dmar_domain *domain, struct intel_iommu *iommu)
{
	struct iommu_domain_info *info;

	spin_lock(&iommu->lock);
	info = xa_load(&domain->iommu_array, iommu->seq_id);
	if (--info->refcnt == 0) {
		clear_bit(info->did, iommu->domain_ids);
		xa_erase(&domain->iommu_array, iommu->seq_id);
		domain->nid = NUMA_NO_NODE;
		domain_update_iommu_cap(domain);
		kfree(info);
	}
	spin_unlock(&iommu->lock);
}
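
To spell out the window being discussed: xa_for_each() is built on
xa_find()/xa_find_after(), each of which takes and drops the RCU read lock
internally, so the loop body runs without RCU protection; and the kfree()
above is immediate rather than deferred to a grace period anyway. A
hypothetical interleaving, for illustration only (not an observed trace):

/*
 *   CPU0: intel_nested_flush_cache()      CPU1: domain_detach_iommu()
 *   ---------------------------------     ---------------------------
 *   xa_for_each(&domain->iommu_array,
 *               i, info)
 *     xa_find() returns info; the RCU
 *     read lock is already dropped
 *                                         spin_lock(&iommu->lock);
 *                                         xa_erase(&domain->iommu_array, ...);
 *                                         kfree(info);   <-- info freed
 *                                         spin_unlock(&iommu->lock);
 *   qi_flush_piotlb(info->iommu, ...);    <-- use-after-free
 */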
> This seems to be systemic issue, so I'm going to ignore it here, but
> please make a series to fix it completely.
Yeah, this code follows the same pattern as the other places that reference
iommu_array. If there is a real problem here, we may need to check with Baolu
and Kevin.
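
One possible shape for such a series, purely as a sketch: serialize the walk
against detach with a domain-side lock. The iommu_array_lock field below is
made up for illustration (it would also have to be taken in
domain_detach_iommu(), and the error-reporting path of the original patch is
omitted for brevity):

static void intel_nested_flush_cache(struct dmar_domain *domain, u64 addr,
				     unsigned long npages, bool ih)
{
	struct iommu_domain_info *info;
	unsigned long i;

	/* Hypothetical lock; keeps info alive across the whole walk. */
	spin_lock(&domain->iommu_array_lock);
	xa_for_each(&domain->iommu_array, i, info)
		qi_flush_piotlb(info->iommu,
				domain_id_iommu(domain, info->iommu),
				IOMMU_NO_PASID, addr, npages, ih, NULL);
	spin_unlock(&domain->iommu_array_lock);
}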
> xarray is probably a bad data structure to manage attachment, a linked
> list is going to use less memory in most cases and you need a mutex
> lock anyhow.
Below is the commit that introduced iommu_array (a sketch of the list-based
alternative follows it):
commit ba949f4cd4c39c587e9b722ac7eb7f7e8a42dace
Author: Lu Baolu <baolu.lu@...ux.intel.com>
Date:   Tue Jul 12 08:09:05 2022 +0800

    iommu/vt-d: Refactor iommu information of each domain

    When a DMA domain is attached to a device, it needs to allocate a domain
    ID from its IOMMU. Currently, the domain ID information is stored in two
    static arrays embedded in the domain structure. This can lead to memory
    waste when the driver is running on a small platform.

    This optimizes these static arrays by replacing them with an xarray and
    consuming memory on demand.

    Signed-off-by: Lu Baolu <baolu.lu@...ux.intel.com>
    Reviewed-by: Kevin Tian <kevin.tian@...el.com>
    Reviewed-by: Steve Wahl <steve.wahl@....com>
    Link: https://lore.kernel.org/r/20220702015610.2849494-4-baolu.lu@linux.intel.com
    Signed-off-by: Joerg Roedel <jroedel@...e.de>
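
And here is a rough sketch of the list-based tracking Jason suggests, for
comparison. The link/iommu_list/iommu_list_lock names are hypothetical, not
existing fields; the first three fields mirror today's iommu_domain_info:

struct iommu_domain_info {
	struct intel_iommu	*iommu;
	unsigned int		refcnt;
	u16			did;
	struct list_head	link;	/* entry in domain->iommu_list */
};

/*
 * In struct dmar_domain (hypothetical fields):
 *	struct list_head	iommu_list;
 *	struct mutex		iommu_list_lock;
 */

static struct iommu_domain_info *
domain_lookup_info(struct dmar_domain *domain, struct intel_iommu *iommu)
{
	struct iommu_domain_info *info;

	lockdep_assert_held(&domain->iommu_list_lock);
	list_for_each_entry(info, &domain->iommu_list, link)
		if (info->iommu == iommu)
			return info;
	return NULL;
}

With the mutex held for the flush walk as well as attach/detach, the
use-after-free window goes away, and for the common one- or two-IOMMU case a
list head per entry should use less memory than the xarray nodes.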
--
Regards,
Yi Liu