Message-ID: <315fe1c5-2685-6ee3-2aa4-35a27233127b@nvidia.com>
Date: Thu, 17 Jun 2021 11:21:39 +0530
From: Ashish Mhetre <amhetre@...dia.com>
To: Robin Murphy <robin.murphy@....com>, Will Deacon <will@...nel.org>,
<vdumpa@...dia.com>
CC: <iommu@...ts.linux-foundation.org>, <linux-kernel@...r.kernel.org>,
<linux-arm-kernel@...ts.infradead.org>
Subject: Re: [PATCH 1/2] iommu: Fix race condition during default domain
allocation
On 6/11/2021 6:19 PM, Robin Murphy wrote:
>
> On 2021-06-11 11:45, Will Deacon wrote:
>> On Thu, Jun 10, 2021 at 09:46:53AM +0530, Ashish Mhetre wrote:
>>> Domain is getting created more than once during asynchronous multiple
>>> display heads(devices) probe. All the display heads share same SID and
>>> are expected to be in same domain. As iommu_alloc_default_domain() call
>>> is not protected, the group->default_domain and group->domain are ending
>>> up with different domains and leading to subsequent IOMMU faults.
>>> Fix this by protecting iommu_alloc_default_domain() call with
>>> group->mutex.
>>
>> Can you provide some more information about exactly what the h/w
>> configuration is, and the callstack which exhibits the race, please?
>
> It'll be basically the same as the issue reported long ago with PCI
> groups in the absence of ACS not being constructed correctly. Triggering
> the iommu_probe_device() replay in of_iommu_configure() off the back of
> driver probe is way too late and allows calls to happen in the wrong
> order, or indeed race in parallel as here. Fixing that is still on my
> radar, but will not be simple, and will probably go hand-in-hand with
> phasing out the bus ops (for the multiple-driver-coexistence problem).
>
For IOMMU group creation, the call flow during the race is:
Display device 1:
iommu_probe_device -> iommu_group_get_for_dev -> arm_smmu_device_group
Display device 2:
iommu_probe_device -> iommu_group_get_for_dev -> arm_smmu_device_group
This ends up creating two groups for two display devices that share the
same SID.
Ideally, for the 2nd display device, the iommu_group_get() call from
iommu_group_get_for_dev() should return the same group as for the 1st
display device, but because of the race it ends up with two groups.
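To make the interleaving concrete, here is a minimal userspace sketch
of the same unprotected check-then-allocate pattern (plain pthreads,
not kernel code; 'group' and simulate_probe() are illustrative
stand-ins for the real iommu_group_get_for_dev() path):

/*
 * Hedged sketch: two threads race through an unsynchronized
 * check-then-allocate, mirroring how two device probes can each
 * create an iommu_group for the same SID. Last write wins and the
 * other allocation is lost.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static void *group;     /* stands in for the shared per-SID group */

static void *simulate_probe(void *arg)
{
        if (!group)                 /* both threads may pass this check... */
                group = malloc(1);  /* ...and each allocate a "group"     */
        printf("device %ld sees group %p\n", (long)arg, group);
        return NULL;
}

int main(void)
{
        pthread_t t1, t2;

        pthread_create(&t1, NULL, simulate_probe, (void *)1);
        pthread_create(&t2, NULL, simulate_probe, (void *)2);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
}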
For the default domain, the call flow during the race is:
Display device 1:
iommu_probe_device -> iommu_alloc_default_domain -> arm_smmu_domain_alloc
Display device 2:
iommu_probe_device -> iommu_alloc_default_domain -> arm_smmu_domain_alloc
Here too, by the time the 2nd device is probed, the domain should
already be allocated and the 'if (group->default_domain)' check in
iommu_alloc_default_domain() should be true for the 2nd device.
Because it isn't, IOVA accesses from the 2nd device result in context
faults.
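An illustrative interleaving (a timeline only, not actual kernel code)
that defeats the check:

device 1                              device 2
--------                              --------
if (!group->default_domain) /* NULL */
                                      if (!group->default_domain) /* still NULL */
allocate domain d1
                                      allocate domain d2
group->default_domain = d1
                                      group->default_domain = d2

With both allocations racing like this, group->default_domain and
group->domain can end up pointing at different domains, which is what
leads to the context faults.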
>>> Signed-off-by: Ashish Mhetre <amhetre@...dia.com>
>>> ---
>>> drivers/iommu/iommu.c | 2 ++
>>> 1 file changed, 2 insertions(+)
>>>
>>> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
>>> index 808ab70..2700500 100644
>>> --- a/drivers/iommu/iommu.c
>>> +++ b/drivers/iommu/iommu.c
>>> @@ -273,7 +273,9 @@ int iommu_probe_device(struct device *dev)
>>> * support default domains, so the return value is not yet
>>> * checked.
>>> */
>>> + mutex_lock(&group->mutex);
>>> iommu_alloc_default_domain(group, dev);
>>> + mutex_unlock(&group->mutex);
>>
>> It feels wrong to serialise this for everybody just to cater for systems
>> with aliasing SIDs between devices.
>
> If two or more devices are racing at this point then they're already
> going to be serialised by at least iommu_group_add_device(), so I doubt
> there would be much impact - only the first device through here will
> hold the mutex for any appreciable length of time. Every other path
> which modifies group->domain does so with the mutex held (note the
> "expected" default domain allocation flow in bus_iommu_probe() in
> particular), so not holding it here does seem like a straightforward
> oversight.
>
> Robin.
Serialization will only happen for devices sharing the same group. Only
the first device in the group will hold the mutex until the domain is
created. The rest of the devices will just check for an existing domain
in iommu_alloc_default_domain(), return, and release the mutex.
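As a hedged userspace sketch of why the serialization cost is small
with the patch applied (pthreads again; alloc_default_domain() here is
an illustrative stand-in, not the kernel function): the lock makes the
check-then-allocate atomic, so only the first caller allocates and
later callers see the existing domain and return quickly.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t group_mutex = PTHREAD_MUTEX_INITIALIZER;
static void *default_domain;   /* stands in for group->default_domain */

static void alloc_default_domain(long dev)
{
        pthread_mutex_lock(&group_mutex);
        if (!default_domain)             /* only the first device allocates */
                default_domain = malloc(1);
        pthread_mutex_unlock(&group_mutex);
        printf("device %ld: domain %p\n", dev, default_domain);
}

static void *probe(void *arg)
{
        alloc_default_domain((long)arg);
        return NULL;
}

int main(void)
{
        pthread_t t1, t2;

        pthread_create(&t1, NULL, probe, (void *)1);
        pthread_create(&t2, NULL, probe, (void *)2);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
}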