Message-ID: <20180711073932.GA15615@xz-mi>
Date: Wed, 11 Jul 2018 15:39:32 +0800
From: Peter Xu <peterx@...hat.com>
To: Lu Baolu <baolu.lu@...ux.intel.com>
Cc: Joerg Roedel <joro@...tes.org>,
David Woodhouse <dwmw2@...radead.org>, ashok.raj@...el.com,
sanjay.k.kumar@...el.com, iommu@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org, yi.y.sun@...el.com,
jacob.jun.pan@...el.com
Subject: Re: [PATCH v4 6/9] iommu/vt-d: Per PCI device pasid table interfaces
On Wed, Jul 11, 2018 at 03:26:21PM +0800, Lu Baolu wrote:
[...]
> >> +int intel_pasid_alloc_table(struct device *dev)
> >> +{
> >> + struct device_domain_info *info;
> >> + struct pasid_table *pasid_table;
> >> + struct pasid_table_opaque data;
> >> + struct page *pages;
> >> + size_t size, count;
> >> + int ret, order;
> >> +
> >> + info = dev->archdata.iommu;
> >> + if (WARN_ON(!info || !dev_is_pci(dev) ||
> >> + !info->pasid_supported || info->pasid_table))
> >> + return -EINVAL;
> >> +
> >> + /* DMA alias device already has a pasid table, use it: */
> >> + data.pasid_table = &pasid_table;
> >> + ret = pci_for_each_dma_alias(to_pci_dev(dev),
> >> + &get_alias_pasid_table, &data);
> >> + if (ret)
> >> + goto attach_out;
> >> +
> >> + pasid_table = kzalloc(sizeof(*pasid_table), GFP_ATOMIC);
> > Do we need to take some lock here (e.g., the pasid lock)? Otherwise,
> > if two devices (that share the same DMA alias) call
> > intel_pasid_alloc_table() concurrently, couldn't we end up creating
> > one table for each device, while AFAIU they should share a single
> > pasid table?
>
> The only place this function is called from is a single-threaded context
> (protected by the device_domain_lock spinlock with local interrupts disabled).
>
> So we don't need an extra lock here. But anyway, I should put a comment
> here.
Yeah, that would be nice too! Or add a comment for both of the
functions:
/* Must be called with device_domain_lock held */
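Something along these lines, purely as an untested sketch (the
lockdep_assert_held() line is just an extra idea on top of the comment,
and assumes device_domain_lock is visible from this file):

	/* Must be called with device_domain_lock held */
	int intel_pasid_alloc_table(struct device *dev)
	{
		struct device_domain_info *info;
		...
		/* catch callers that forget the lock (checked under lockdep only) */
		lockdep_assert_held(&device_domain_lock);

		info = dev->archdata.iommu;
		...
	}

That way the serialization requirement is both documented and checkable,
without introducing another lock.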
Regards,
--
Peter Xu