Message-ID: <e11d8138-f704-2f5e-c0b1-70b367a33d5d@linux.intel.com>
Date: Thu, 16 Apr 2020 15:40:38 +0800
From: Lu Baolu <baolu.lu@...ux.intel.com>
To: Christoph Hellwig <hch@....de>
Cc: baolu.lu@...ux.intel.com, Joerg Roedel <joro@...tes.org>,
ashok.raj@...el.com, jacob.jun.pan@...ux.intel.com,
kevin.tian@...el.com,
Sai Praneeth Prakhya <sai.praneeth.prakhya@...el.com>,
iommu@...ts.linux-foundation.org, linux-kernel@...r.kernel.org,
Daniel Drake <drake@...lessm.com>,
Derrick Jonathan <jonathan.derrick@...el.com>,
Jerry Snitselaar <jsnitsel@...hat.com>,
Robin Murphy <robin.murphy@....com>
Subject: Re: [PATCH v3 1/3] iommu/vt-d: Allow 32bit devices to use DMA domain
Hi Christoph,
On 2020/4/16 15:01, Christoph Hellwig wrote:
> On Thu, Apr 16, 2020 at 02:23:52PM +0800, Lu Baolu wrote:
>> Currently, if a 32bit device initially uses an identity domain,
>> the Intel IOMMU driver will forcibly convert it to a DMA domain
>> if its address capability is not sufficient for the whole system
>> memory. The motivation was to avoid the overhead of possible
>> bounce buffering.
>>
>> Unfortunately, this improvement has led to many problems. For
>> example, some 32bit devices are required to use an identity
>> domain; forcing them to use a DMA domain causes them to stop
>> working. On the other hand, VMD sub-devices share a domain, but
>> each sub-device might have a different address capability.
>> Blindly forcing one VMD sub-device to use a DMA domain will
>> impact the operation of the other sub-devices without any
>> notification. Furthermore, PCI aliased devices (a PCI bridge
>> and all devices beneath it, VMD devices, and various devices
>> quirked with pci_add_dma_alias()) must use the same domain.
>> Forcing one device to switch to a DMA domain at runtime will
>> cause in-flight DMA for the other devices to abort or to target
>> other memory, which might cause undefined system behavior.
>
> This commit log doesn't actually explain what you are changing, and
> as far as I can tell it just removes the code to change the domain
> at run time, which seems to not actually match the subject or
> description.

This removes the domain switching in iommu_need_mapping(). The other
place where a private domain is used is intel_iommu_add_device(), and
Joerg's patch set has already removed that. So with the domain
switching in iommu_need_mapping() removed, the private domain helpers
can be removed as well; otherwise the compiler will complain that some
functions are defined but not used.
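
For reference, the run-time switch being removed looks roughly like
the sketch below. This is a simplified illustration, not the literal
driver code: identity_mapping(), dmar_remove_one_dev_info() and
get_private_domain_for_dev() are internal helpers in intel-iommu.c,
and iommu_request_dma_domain_for_dev() is the generic API that was
used for the conversion.

	/*
	 * Simplified sketch of the run-time switch removed from
	 * iommu_need_mapping(); not the literal driver code.
	 */
	static bool iommu_need_mapping(struct device *dev)
	{
		if (identity_mapping(dev)) {
			u64 dma_mask = min_t(u64, *dev->dma_mask,
					     dev->coherent_dma_mask);

			/* Identity mapping is fine if the device can
			 * reach every physical address in the system. */
			if (dma_mask >= dma_get_required_mask(dev))
				return false;

			/*
			 * Otherwise the device was silently converted
			 * to a DMA domain here. This is the switch
			 * being removed: it is unsafe for devices that
			 * share the domain (PCI aliases, VMD
			 * sub-devices) and may have DMA in flight.
			 */
			if (iommu_request_dma_domain_for_dev(dev)) {
				dmar_remove_one_dev_info(dev);
				get_private_domain_for_dev(dev);
			}
		}
		return true;
	}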
> I'd need to look at the final code, but it seems like
> this will still cause bounce buffering instead of using dynamic
> mapping, which still seems like an awful idea.
Yes. If the user chooses the identity domain as the default via the
kernel command line, the identity domain will be applied to all
devices. For devices with limited addressing capability, bounce
buffering will be used whenever they access memory beyond their
addressable range. As far as I can see, this won't cause any kernel
regression.
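
To illustrate (a hedged sketch using only the generic DMA API; the
example function and the 32bit mask are made up), a driver that maps a
buffer its device cannot reach goes transparently through the swiotlb
bounce path instead of failing:

	#include <linux/dma-mapping.h>

	/* Hypothetical example: a device that can only address the
	 * low 4 GiB, attached to the identity (passthrough) domain. */
	static int example_map(struct device *dev, void *buf, size_t size)
	{
		dma_addr_t handle;

		if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)))
			return -EIO;

		/*
		 * With an identity domain there is no IOVA remapping,
		 * so if 'buf' lives above 4 GiB the DMA API bounces it
		 * through a swiotlb buffer below the mask.
		 */
		handle = dma_map_single(dev, buf, size, DMA_TO_DEVICE);
		if (dma_mapping_error(dev, handle))
			return -ENOMEM;

		/* ... device performs DMA ... */

		dma_unmap_single(dev, handle, size, DMA_TO_DEVICE);
		return 0;
	}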
Switching domains at runtime with drivers loaded will cause the real
problems described in the commit message. That's the reason why I am
proposing to remove it. If we want to keep it, we have to make sure
that switching the domain for one device does not impact the other
devices sharing the same domain. Furthermore, it's better to
implement it in the generic layer so that device driver behavior
stays consistent across all architectures.
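
A generic-layer implementation would at minimum need a check like the
hypothetical sketch below before touching a group's default domain.
iommu_group_for_each_dev() is the existing generic API; the safety
policy itself (no bound drivers, hence no in-flight DMA) is made up
for illustration:

	#include <linux/iommu.h>
	#include <linux/device.h>

	static int dev_has_driver(struct device *dev, void *data)
	{
		return dev->driver ? -EBUSY : 0;
	}

	/* Hypothetical: only allow a default-domain change when no
	 * device sharing the group has a driver bound, so none of
	 * them can have DMA in flight. */
	static bool group_safe_to_switch(struct device *dev)
	{
		struct iommu_group *group = iommu_group_get(dev);
		int ret;

		if (!group)
			return false;

		ret = iommu_group_for_each_dev(group, NULL, dev_has_driver);
		iommu_group_put(group);

		return ret == 0;
	}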
>
> Also from a purely stylistic perspective a lot of the lines seem
> very short and do not use the whole 73 characters allowed.
>
Yes. I will reflow the text to use the full allowed line width.
Best regards,
baolu