Message-ID: <cb52ad1f-822f-4da6-9753-691454565b26@intel.com>
Date: Fri, 19 Jan 2024 18:15:30 +0800
From: Yi Liu <yi.l.liu@...el.com>
To: Jason Gunthorpe <jgg@...dia.com>
CC: "Tian, Kevin" <kevin.tian@...el.com>, "joro@...tes.org" <joro@...tes.org>,
"alex.williamson@...hat.com" <alex.williamson@...hat.com>,
"robin.murphy@....com" <robin.murphy@....com>, "baolu.lu@...ux.intel.com"
<baolu.lu@...ux.intel.com>, "cohuck@...hat.com" <cohuck@...hat.com>,
"eric.auger@...hat.com" <eric.auger@...hat.com>, "nicolinc@...dia.com"
<nicolinc@...dia.com>, "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"mjrosato@...ux.ibm.com" <mjrosato@...ux.ibm.com>,
"chao.p.peng@...ux.intel.com" <chao.p.peng@...ux.intel.com>,
"yi.y.sun@...ux.intel.com" <yi.y.sun@...ux.intel.com>, "peterx@...hat.com"
<peterx@...hat.com>, "jasowang@...hat.com" <jasowang@...hat.com>,
"shameerali.kolothum.thodi@...wei.com"
<shameerali.kolothum.thodi@...wei.com>, "lulu@...hat.com" <lulu@...hat.com>,
"suravee.suthikulpanit@....com" <suravee.suthikulpanit@....com>,
"iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-kselftest@...r.kernel.org" <linux-kselftest@...r.kernel.org>, "Duan,
Zhenzhong" <zhenzhong.duan@...el.com>, "joao.m.martins@...cle.com"
<joao.m.martins@...cle.com>, "Zeng, Xin" <xin.zeng@...el.com>, "Zhao, Yan Y"
<yan.y.zhao@...el.com>
Subject: Re: [PATCH 3/8] iommufd: Support attach/replace hwpt per pasid
On 2024/1/18 21:38, Jason Gunthorpe wrote:
> On Thu, Jan 18, 2024 at 05:28:01PM +0800, Yi Liu wrote:
>> On 2024/1/17 20:56, Jason Gunthorpe wrote:
>>> On Wed, Jan 17, 2024 at 04:24:24PM +0800, Yi Liu wrote:
>>>> The above indeed makes more sense if there can be concurrent attach/replace/detach
>>>> on a single pasid. I just have one doubt: should we add a lock to protect the
>>>> whole attach/replace/detach paths? In the attach/replace path [1] [2], the
>>>> xarray entry is verified first, and then updated after returning from the
>>>> iommu attach/replace API. It is not easy to protect the xarray operations
>>>> with only xa_lock, as the detach path can acquire xa_lock right after the
>>>> attach/replace path checks the xarray. To avoid this race, we may need a
>>>> mutex to protect the whole attach/replace/detach path. Or maybe the
>>>> attach/replace path should mark the corresponding entry with a special state
>>>> that blocks the other paths, like detach, until the attach/replace path
>>>> stores the final hwpt in the xarray. Is there such a state in xarray?
>>>
>>> If the caller is not allowed to make concurrent attaches/detaches to
>>> the same pasid then you can document that in a comment,
>>
>> Yes, I can document it. Otherwise, we may need a per-pasid mutex to allow
>> concurrent attaches/detaches.
>>
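To make that option concrete, below is a rough sketch of a mutex-serialized
attach path. Note that pasid_mutex is a hypothetical per-device field, not
something in this series; it is shown only to illustrate serializing the whole
attach path against a concurrent detach of the same pasid.

/*
 * Hypothetical sketch only: pasid_mutex is an assumed per-device field.
 * Holding it keeps the xa_load() check and the final xa_store() atomic
 * with respect to a concurrent detach on the same pasid.
 */
mutex_lock(&idev->pasid_mutex);
old_hwpt = xa_load(&idev->pasid_hwpts, pasid);
if (old_hwpt) {
	/* Attach does not allow overwrite */
	mutex_unlock(&idev->pasid_mutex);
	return old_hwpt == hwpt ? NULL : ERR_PTR(-EINVAL);
}

rc = iommu_attach_device_pasid(hwpt->domain, idev->dev, pasid);
if (rc) {
	mutex_unlock(&idev->pasid_mutex);
	return ERR_PTR(rc);
}

refcount_inc(&hwpt->obj.users);
xa_store(&idev->pasid_hwpts, pasid, hwpt, GFP_KERNEL);
mutex_unlock(&idev->pasid_mutex);
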
>>> but it is
>>> still better to use xarray in a self-consistent way.
>>
>> Sure, I'll try. At least in the detach path, the xarray usage should be what
>> you suggested in the prior email. Currently, the logic in the attach path is
>> as below. Perhaps I can skip the check on old_hwpt, since
>> iommu_attach_device_pasid() should fail if there is already a domain
>> attached on the pasid. Then the xarray usage would be more consistent. What
>> do you think?
>>
>> old_hwpt = xa_load(&idev->pasid_hwpts, pasid);
>> if (old_hwpt) {
>> 	/* Attach does not allow overwrite */
>> 	if (old_hwpt == hwpt)
>> 		return NULL;
>> 	else
>> 		return ERR_PTR(-EINVAL);
>> }
>>
>> rc = iommu_attach_device_pasid(hwpt->domain, idev->dev, pasid);
>> if (rc)
>> 	return ERR_PTR(rc);
>>
>> refcount_inc(&hwpt->obj.users);
>> xa_store(&idev->pasid_hwpts, pasid, hwpt, GFP_KERNEL);
>
> Use xa_cmpxchg()
Sure, I'll switch to xa_cmpxchg().
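
Just to double-check my understanding, below is a minimal sketch of how the
attach path could look with xa_cmpxchg(), keeping the same error conventions
as the snippet above. Storing hwpt before the iommu attach is only to
illustrate claiming the slot atomically; the exact refcount and ordering
details would still need care.

/*
 * Sketch only: atomically claim the pasid slot. xa_cmpxchg() returns the
 * previous entry, so any non-NULL result means a hwpt is already attached
 * to this pasid.
 */
old_hwpt = xa_cmpxchg(&idev->pasid_hwpts, pasid, NULL, hwpt, GFP_KERNEL);
if (xa_is_err(old_hwpt))
	return ERR_PTR(xa_err(old_hwpt));
if (old_hwpt) {
	/* Attach does not allow overwrite */
	return old_hwpt == hwpt ? NULL : ERR_PTR(-EINVAL);
}

rc = iommu_attach_device_pasid(hwpt->domain, idev->dev, pasid);
if (rc) {
	/* Drop the claimed slot if the attach failed */
	xa_erase(&idev->pasid_hwpts, pasid);
	return ERR_PTR(rc);
}

refcount_inc(&hwpt->obj.users);

With this, a concurrent attach on the same pasid loses the cmpxchg instead of
racing in the window between the xa_load() check and the xa_store().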
--
Regards,
Yi Liu