Message-ID: <Z4qmGLQ1oB+aS9h1@nvidia.com>
Date: Fri, 17 Jan 2025 10:48:56 -0800
From: Nicolin Chen <nicolinc@...dia.com>
To: Jason Gunthorpe <jgg@...dia.com>
CC: "Tian, Kevin" <kevin.tian@...el.com>, "joro@...tes.org" <joro@...tes.org>,
"will@...nel.org" <will@...nel.org>, "robin.murphy@....com"
<robin.murphy@....com>, "baolu.lu@...ux.intel.com"
<baolu.lu@...ux.intel.com>, "iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH rc v3] iommufd/fault: Use a separate spinlock to protect
fault->deliver list
On Fri, Jan 17, 2025 at 10:38:56AM -0400, Jason Gunthorpe wrote:
> On Fri, Jan 17, 2025 at 06:20:15AM +0000, Tian, Kevin wrote:
> > > From: Nicolin Chen <nicolinc@...dia.com>
> > > Sent: Friday, January 17, 2025 10:05 AM
> > >
> > > mutex_lock(&fault->mutex);
> >
> > Nit. The scope of above can be reduced too, by guarding only the
> > lines for fault->response.
>
> Hmm, I think you have found a flaw unfortunately..
>
> iommufd_auto_response_faults() is called async to all of this if a
> device is removed. It should clean out that device from all the fault
> machinery.
>
> With the new locking we don't hold the mutex across the list
> manipulation in read so there is a window where a fault can be on the
> stack in iommufd_fault_fops_read() but not in the fault->response or
> the deliver list.
>
> Thus it will be missed during cleanup.
>
> I think because of the cleanup we have to continue to hold the mutex
> across all of fops_read and this patch is just adding an additional
> spinlock around the deliver list to isolate it from the
> copy_to_user().
>
> Is that right Nicolin?

Yes, I missed that too...

A group can be taken off the deliver list in fops_read() before
iommufd_auto_response_faults() takes the mutex; the subsequent
xa_alloc() then adds that fetched group to the response xarray, where
it stays until iommufd_fault_destroy() flushes everything away.
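
To make the window concrete (illustrative sketch only, not the actual
fault.c code; the spinlock field name is an assumption based on the
patch subject):

        /* inside fops_read(), v3 shape -- sketch */
        spin_lock(&fault->lock);
        group = list_first_entry_or_null(&fault->deliver,
                                         struct iopf_group, node);
        if (group)
                list_del(&group->node);
        spin_unlock(&fault->lock);

        /*
         * Window: fault->mutex is not held across this, and the group
         * now lives only on this stack -- it is on neither
         * fault->deliver nor fault->response. If the device is removed
         * here, iommufd_auto_response_faults() finds nothing to clean
         * up for it.
         */

        rc = xa_alloc(&fault->response, &group->cookie, group,
                      xa_limit_32b, GFP_KERNEL);
        /* ...and the group then stays in the xarray until
         * iommufd_fault_destroy(). */
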
It might not be an actual bug in the existing flow (?), but it
doesn't seem worth touching the mutex in this patch.

Let me send a v4 changing that mutex back.
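Roughly the read path would keep the shape you describe (again an
illustrative sketch, not the actual code: fault->lock is the same
assumed spinlock name as above, and iommufd_fault_copy_group() is a
hypothetical stand-in for composing the fault messages, copying them
to userspace and moving the group into fault->response on success):

        static ssize_t iommufd_fault_fops_read(struct file *filep,
                                               char __user *buf,
                                               size_t count, loff_t *ppos)
        {
                size_t fault_size = sizeof(struct iommu_hwpt_pgfault);
                struct iommufd_fault *fault = filep->private_data;
                struct iopf_group *group;
                ssize_t copied;
                size_t done = 0;
                int rc = 0;

                if (*ppos || count % fault_size)
                        return -ESPIPE;

                /* Held across the whole read so a concurrent
                 * iommufd_auto_response_faults() cannot miss a group. */
                mutex_lock(&fault->mutex);
                while (done + fault_size <= count) {
                        /* Only the deliver list needs the spinlock... */
                        spin_lock(&fault->lock);
                        group = list_first_entry_or_null(&fault->deliver,
                                                         struct iopf_group,
                                                         node);
                        if (group)
                                list_del(&group->node);
                        spin_unlock(&fault->lock);
                        if (!group)
                                break;

                        /* ...so the sleeping copy_to_user() runs outside
                         * it, but still under the mutex. On failure put
                         * the group back for a later read. */
                        copied = iommufd_fault_copy_group(fault, group,
                                                          buf + done,
                                                          count - done);
                        if (copied <= 0) {
                                spin_lock(&fault->lock);
                                list_add(&group->node, &fault->deliver);
                                spin_unlock(&fault->lock);
                                rc = copied;
                                break;
                        }
                        done += copied;
                }
                mutex_unlock(&fault->mutex);

                return done == 0 ? rc : done;
        }
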
Thanks
Nicolin