Message-ID: <7a3dad54-6236-17d0-e859-be316d888a62@arm.com>
Date:   Thu, 19 Jan 2023 20:12:13 +0000
From:   Robin Murphy <robin.murphy@....com>
To:     Jason Gunthorpe <jgg@...dia.com>
Cc:     joro@...tes.org, will@...nel.org, hch@....de,
        iommu@...ts.linux.dev, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 4/8] iommu: Switch __iommu_domain_alloc() to device ops

On 19/01/2023 7:26 pm, Jason Gunthorpe wrote:
> On Thu, Jan 19, 2023 at 07:18:22PM +0000, Robin Murphy wrote:
> 
>> -static struct iommu_domain *__iommu_domain_alloc(struct bus_type *bus,
>> +static struct iommu_domain *__iommu_domain_alloc(struct device *dev,
>>   						 unsigned type)
>>   {
>> -	const struct iommu_ops *ops = bus ? bus->iommu_ops : NULL;
>> +	const struct iommu_ops *ops = dev_iommu_ops(dev);
>>   	struct iommu_domain *domain;
>>   
>> -	if (!ops)
>> -		return NULL;
>> -
>>   	domain = ops->domain_alloc(type);
>>   	if (!domain)
>>   		return NULL;
>> @@ -1970,9 +1968,28 @@ static struct iommu_domain *__iommu_domain_alloc(struct bus_type *bus,
>>   	return domain;
>>   }
>>   
>> +static int __iommu_domain_alloc_dev(struct device *dev, void *data)
>> +{
>> +	struct device **alloc_dev = data;
>> +
>> +	if (!device_iommu_mapped(dev))
>> +		return 0;
> 
> Is 0 the right thing? see below

Yes, the idea here is to always double-check the whole bus.
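(A rough sketch of the bus_for_each_dev() contract this relies on - 
not part of the patch: the walk keeps going while the callback 
returns 0, and stops and propagates the first nonzero return, so 
returning 0 for unmapped devices skips them without ending the walk 
early.)

	struct device *alloc_dev = NULL;
	int ret;

	/*
	 * bus_for_each_dev() invokes the callback for every device on
	 * the bus in turn; a nonzero return stops the iteration and is
	 * passed back to the caller. Returning 0 above is what lets
	 * the cross-check visit the entire bus.
	 */
	ret = bus_for_each_dev(bus, NULL, &alloc_dev,
			       __iommu_domain_alloc_dev);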

>> +
>> +	WARN_ONCE(*alloc_dev && dev_iommu_ops(dev) != dev_iommu_ops(*alloc_dev),
>> +		"Multiple IOMMU drivers present, which the public IOMMU API can't fully support yet. This may not work as expected, sorry!\n");
> 
> if (WARN_ONCE(..))
>     return -EINVAL
> 
> So that iommu_domain_alloc fails?

The current behaviour is that if you have multiple different IOMMUs 
present, then only one driver succeeds in registering, effectively at 
random depending on probe order. To get predictable behaviour where 
iommu_domain_alloc() (and indeed the whole IOMMU API) works for a 
specific device, you have to manage your kernel config or modules to 
only load the driver for the correct IOMMU.

After patch #4, we allow all the drivers to register, but the net 
effect on the public API is still the same: it only works successfully 
for one driver, effectively at random. The same workaround also still 
applies: don't load the other drivers, or at least load them in an 
appropriate order relative to the client drivers. On those grounds it 
seems a fair compromise until we can sort out iommu_domain_alloc() 
itself. As far as I'm aware there are still no immediate real-world 
users for this - upstream support for Rockchip RK3588 is still in very 
early days, and a long way off being complete enough for users to get 
interested in trying to play with the Arm SMMUs in there (leading to 
disappointment that VFIO won't work since they're non-coherent...).

>> +	*alloc_dev = dev;
>> +	return 0;
>> +}
>> +
>>   struct iommu_domain *iommu_domain_alloc(struct bus_type *bus)
>>   {
>> -	return __iommu_domain_alloc(bus, IOMMU_DOMAIN_UNMANAGED);
>> +	struct device *dev = NULL;
>> +
>> +	if (bus_for_each_dev(bus, NULL, &dev, __iommu_domain_alloc_dev))
>> +		return NULL;
> 
> eg shouldn't iommu_domain_alloc() return NULL if any devices are
> !device_iommu_mapped ?

No, that would even break the normal single-driver case, because it's 
always been the case that not all devices on e.g. the platform bus are 
iommu_mapped. That's largely why bus ops are a rubbish abstraction.
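(For reference, device_iommu_mapped() is just a group check in the 
current code, so a device that never probed behind an IOMMU is 
cleanly skipped by the callback above:)

	/* include/linux/iommu.h */
	static inline bool device_iommu_mapped(struct device *dev)
	{
		return (dev->iommu_group != NULL);
	}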

Even with multiple drivers, we can still allocate a domain here which 
will work fine with *some* devices, and safely fail to work with others, 
so I don't see that we'd gain much from being unnecessarily restrictive.
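(As a thought experiment, not something this series does: one way to 
make the mismatch explicit later would be to record the allocating 
driver's ops in the domain and reject foreign devices at attach time. 
The "owner" field below is purely an illustrative assumption.)

	static int __iommu_attach_device(struct iommu_domain *domain,
					 struct device *dev)
	{
		/* Hypothetical: domain->owner recorded from
		 * dev_iommu_ops() at allocation time. */
		if (dev_iommu_ops(dev) != domain->owner)
			return -EINVAL;	/* domain from another driver */

		return domain->ops->attach_dev(domain, dev);
	}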

Thanks,
Robin.
