Message-ID: <Y9q8llC0JVokHLf7@Asurada-Nvidia>
Date: Wed, 1 Feb 2023 11:25:10 -0800
From: Nicolin Chen <nicolinc@...dia.com>
To: Jason Gunthorpe <jgg@...dia.com>
CC: <kevin.tian@...el.com>, <yi.l.liu@...el.com>,
<iommu@...ts.linux.dev>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 1/3] iommufd: Add devices_users to track the
hw_pagetable usage by device
On Wed, Feb 01, 2023 at 02:37:42PM -0400, Jason Gunthorpe wrote:
> On Wed, Feb 01, 2023 at 09:46:23AM -0800, Nicolin Chen wrote:
> > > So the issue is with replace you need to have the domain populated
> > > before we can call replace but you can't populate the domain until it
> > > is bound because of the above issue? That seems unsolvable without
> > > fixing up the driver.
> >
> > Not really. A REPLACE ioctl is effectively just an ATTACH if the
> > device has only been BIND-ed so far. So the SMMU driver will
> > initialize ("finalise") the domain during the replace() call, and
> > then iopt_table_add_domain() can be done.
> >
> > So, not a blocker here.
>
> Well, yes, there sort of is because the whole flow becomes nonsensical
> - we are supposed to have the iommu_domain populated by the time we do
> replace. Otherwise replace is extra-pointless..
The "finalise" is one of the very first lines of the attach_dev()
callback function in SMMU driver, though it might still undesirably
fail the replace().
https://github.com/nicolinc/iommufd/commit/5ae54f360495aae35b5967d1eb00149912145639
Btw, the commit above is a draft I made to move iopt_table_add_domain().
I think we can include it with the nesting series.
Later, once we can pass the dev pointer to the ->domain_alloc op using
Robin's change, the iopt_table_add_domain() call can be done within
hwpt_alloc(), prior to any attach()/replace().
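
To illustrate, a rough sketch of that (not the in-tree code;
iommu_domain_alloc_dev() is a made-up placeholder for whatever final
shape Robin's dev-aware allocation interface takes, and the error
handling is simplified):

    struct iommufd_hw_pagetable *
    iommufd_hw_pagetable_alloc(struct iommufd_ctx *ictx,
                               struct iommufd_ioas *ioas,
                               struct iommufd_device *idev)
    {
        struct iommufd_hw_pagetable *hwpt;
        int rc;

        hwpt = kzalloc(sizeof(*hwpt), GFP_KERNEL);
        if (!hwpt)
            return ERR_PTR(-ENOMEM);

        /* Made-up helper standing in for a dev-aware ->domain_alloc */
        hwpt->domain = iommu_domain_alloc_dev(idev->dev);
        if (!hwpt->domain) {
            rc = -ENOMEM;
            goto out_free;
        }

        /* Populate the domain here, before any attach()/replace() */
        rc = iopt_table_add_domain(&ioas->iopt, hwpt->domain);
        if (rc)
            goto out_free_domain;

        hwpt->ioas = ioas;
        return hwpt;

    out_free_domain:
        iommu_domain_free(hwpt->domain);
    out_free:
        kfree(hwpt);
        return ERR_PTR(rc);
    }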
> > > Is there another issue?
> >
> > Oh. I think we mixed up the topics here. These three patches were
> > not meant to unblock anything, but to pave the way for the replace
> > series and the nesting series with regard to the device locking issue:
> >
> >     if (cur_hwpt != hwpt)
> >         mutex_lock(&cur_hwpt->devices_lock);
> >     mutex_lock(&hwpt->devices_lock);
> >     ...
> >     if (iommufd_hw_pagetable_has_group()) { // touching the device list
> >         ...
> >         iommu_group_replace_domain();
> >         ...
> >     }
> >     if (cur_hwpt && hwpt)
> >         list_del(&idev->devices_item);
> >     list_add(&idev->devices_item, &hwpt->devices);
> >     ...
> >     mutex_unlock(&hwpt->devices_lock);
> >     if (cur_hwpt != hwpt)
> >         mutex_unlock(&cur_hwpt->devices_lock);
>
> What is the issue? That isn't quite right, but the basic bit is fine.
>
> If you want to do replace then you have to hold both devices_locks, and
> you write that super ugly thing like this:
>
> lock_both:
>     if (hwpt_a < hwpt_b) {
>         mutex_lock(&hwpt_a->devices_lock);
>         mutex_lock_nested(&hwpt_b->devices_lock, SINGLE_DEPTH_NESTING);
>     } else if (hwpt_a > hwpt_b) {
>         mutex_lock(&hwpt_b->devices_lock);
>         mutex_lock_nested(&hwpt_a->devices_lock, SINGLE_DEPTH_NESTING);
>     } else {
>         mutex_lock(&hwpt_a->devices_lock);
>     }
>
> And then it is trivial, yes?
Yeah. That's your previous remark.
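
For completeness, the matching pair could be wrapped up roughly like
this (sketch only; the helper names are made up, and
SINGLE_DEPTH_NESTING is the usual lockdep subclass for the second lock):

    /* Sketch: take both devices_lock in a stable (address) order */
    static void iommufd_hwpt_lock_both(struct iommufd_hw_pagetable *hwpt_a,
                                       struct iommufd_hw_pagetable *hwpt_b)
    {
        if (hwpt_a < hwpt_b) {
            mutex_lock(&hwpt_a->devices_lock);
            mutex_lock_nested(&hwpt_b->devices_lock, SINGLE_DEPTH_NESTING);
        } else if (hwpt_a > hwpt_b) {
            mutex_lock(&hwpt_b->devices_lock);
            mutex_lock_nested(&hwpt_a->devices_lock, SINGLE_DEPTH_NESTING);
        } else {
            mutex_lock(&hwpt_a->devices_lock);
        }
    }

    /* Unlock order does not matter; just skip the duplicate when a == b */
    static void iommufd_hwpt_unlock_both(struct iommufd_hw_pagetable *hwpt_a,
                                         struct iommufd_hw_pagetable *hwpt_b)
    {
        mutex_unlock(&hwpt_a->devices_lock);
        if (hwpt_a != hwpt_b)
            mutex_unlock(&hwpt_b->devices_lock);
    }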
> Using the group_lock in the iommu core is the right way to fix
> this.. Maybe someday we can do that.
>
> (also document that replace causes all the devices in the group to
> change iommu_domains at once)
Yes. There's a discussion in PATCH-3 of this series. I drafted a
patch changing iommu_attach/detach_dev():
https://github.com/nicolinc/iommufd/commit/124f7804ef38d50490b606fd56c1e27ce551a839
Baolu had a similar patch series a year ago. So we might continue
that effort in parallel, and eventually drop the device list/lock.
> > I just gave this another thought. Since patch-2 of this series moves
> > the ioas->mutex, it already serializes the attach/detach routines.
> > And I see that all the places touching idev->devices_item and
> > hwpt->devices are protected by the ioas->mutex. So, perhaps we can
> > simply remove the devices_lock?
>
> The two hwpts are not required to have the same ioas, so this doesn't
> really help..
Hmm... in that case, we would have to hold the two ioas->mutex locks in
addition to the two devices_locks?
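
If so, something along these lines, ordering the two ioas->mutex locks
the same way (by address) before taking the two devices_locks (again
just a sketch):

    /* Sketch: ioas mutexes first, in address order, then the devices_locks */
    if (ioas_a == ioas_b) {
        mutex_lock(&ioas_a->mutex);
    } else if (ioas_a < ioas_b) {
        mutex_lock(&ioas_a->mutex);
        mutex_lock_nested(&ioas_b->mutex, SINGLE_DEPTH_NESTING);
    } else {
        mutex_lock(&ioas_b->mutex);
        mutex_lock_nested(&ioas_a->mutex, SINGLE_DEPTH_NESTING);
    }
    /* ... then lock both hwpt->devices_lock as above, do the replace,
     * and unlock everything in the reverse order.
     */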
Thanks
Nic