Message-ID: <f0163adb-f9f9-5c7b-2bf8-2e0f182ffe49@linux.intel.com>
Date:   Tue, 13 Aug 2019 15:38:48 +0800
From:   Lu Baolu <baolu.lu@...ux.intel.com>
To:     Christoph Hellwig <hch@....de>
Cc:     baolu.lu@...ux.intel.com, David Woodhouse <dwmw2@...radead.org>,
        Joerg Roedel <joro@...tes.org>, ashok.raj@...el.com,
        jacob.jun.pan@...el.com, kevin.tian@...el.com,
        Robin Murphy <robin.murphy@....com>,
        iommu@...ts.linux-foundation.org, linux-kernel@...r.kernel.org,
        Jacob Pan <jacob.jun.pan@...ux.intel.com>
Subject: Re: [PATCH 2/3] iommu/vt-d: Apply per-device dma_ops

Hi again,

On 8/7/19 11:06 AM, Lu Baolu wrote:
> Hi Christoph,
> 
> On 8/6/19 2:43 PM, Christoph Hellwig wrote:
>> Hi Lu,
>>
>> I really do like the switch to the per-device dma_map_ops, but:
>>
>> On Thu, Aug 01, 2019 at 02:01:55PM +0800, Lu Baolu wrote:
>>> The current Intel IOMMU driver sets the system-level dma_ops. This
>>> implementation has at least the following drawbacks: 1) every DMA
>>> API call goes through the IOMMU driver even when the device is
>>> using an identity-mapped domain; 2) if the user requests an
>>> identity-mapped domain (i.e. bypasses iommu translation), the
>>> driver may blindly fall back to a dma domain when the device
>>> cannot address all of system memory.
>>
>> This is very clearly a behavioral regression.  The intel-iommu driver
>> has always used the iommu mapping to provide decent support for
>> devices that do not have the full 64-bit addressing capability, and
>> changing this will make a lot of existing setups go slower.
>>
> 
> I agree with you that we should keep this capability and avoid a
> possible performance regression on some setups. But instead of
> hard-coding it in the iommu driver, I would prefer a more scalable
> approach.
> 
> For example, the concept of a per-group default domain type [1] seems
> to be a good choice. The kernel could be statically configured to
> default to either "pass through" or "translate everything". The
> per-group default domain type API could then be used by a privileged
> user to tune individual groups for better performance, either by
> 1) bypassing iommu translation for trusted super-speed devices, or
> 2) applying iommu translation so that a device can reach system
> memory beyond its addressing capability (without the need for a
> bounce buffer).
> 
> [1] https://www.spinics.net/lists/iommu/msg37113.html
> 
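
To make that idea concrete, below is a minimal user-space sketch of a
build-time default domain type plus a per-group override that a
privileged user could set. It is hypothetical: iommu_group_stub,
DEFAULT_DOMAIN_TYPE and group_default_domain() are made-up names, not
the proposed kernel API; see [1] for the actual proposal.

/* Build-time default plus a per-group override; all names here are
 * illustrative stand-ins, not the proposed kernel interface. */
#include <stdio.h>

enum domain_type { DOMAIN_IDENTITY, DOMAIN_DMA };

/* build-time policy, e.g. chosen by a Kconfig option */
#define DEFAULT_DOMAIN_TYPE DOMAIN_DMA

struct iommu_group_stub {
	int id;
	int has_override;            /* set through a privileged API */
	enum domain_type override;
};

static enum domain_type group_default_domain(const struct iommu_group_stub *g)
{
	return g->has_override ? g->override : DEFAULT_DOMAIN_TYPE;
}

int main(void)
{
	/* trusted super-speed device: bypass translation for speed */
	struct iommu_group_stub nic  = { 1, 1, DOMAIN_IDENTITY };
	/* 32-bit device: translate so it can reach all of memory */
	struct iommu_group_stub sdio = { 2, 0, DOMAIN_DMA };

	printf("group %d -> %s\n", nic.id,
	       group_default_domain(&nic) == DOMAIN_IDENTITY ?
	       "identity" : "dma");
	printf("group %d -> %s\n", sdio.id,
	       group_default_domain(&sdio) == DOMAIN_IDENTITY ?
	       "identity" : "dma");
	return 0;
}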

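For the per-device dma_ops point in the quoted commit message, here is
a simplified, self-contained sketch of why a per-device ops pointer
lets identity-mapped devices skip the IOMMU path entirely while only
translated devices pay for it. Again hypothetical: stub_dev and
stub_dma_ops merely stand in for struct device and struct dma_map_ops,
and the logic is not the actual driver code.

/*
 * With a single system-wide ops table every map call funnels through
 * the IOMMU driver; with a per-device ops pointer, identity-mapped
 * devices take the direct path and only translated devices do IOMMU
 * work.  All names below are illustrative stand-ins.
 */
#include <stdio.h>

struct stub_dma_ops {
	unsigned long (*map_page)(void *dev, void *page);
};

struct stub_dev {
	const char *name;
	int identity_mapped;                /* 1: bypass translation */
	const struct stub_dma_ops *dma_ops; /* per-device dispatch   */
};

static unsigned long direct_map(void *dev, void *page)
{
	/* identity mapping: DMA address == physical address */
	(void)dev;
	return (unsigned long)page;
}

static unsigned long iommu_map(void *dev, void *page)
{
	/* stand-in for building an IOVA -> phys translation */
	(void)page;
	printf("%s: mapped through IOMMU\n", ((struct stub_dev *)dev)->name);
	return 0x1000;
}

static const struct stub_dma_ops direct_ops = { .map_page = direct_map };
static const struct stub_dma_ops iommu_ops  = { .map_page = iommu_map  };

/* probe-time decision, made once instead of on every map call */
static void setup_dma_ops(struct stub_dev *dev)
{
	dev->dma_ops = dev->identity_mapped ? &direct_ops : &iommu_ops;
}

int main(void)
{
	struct stub_dev nic = { "nic", 1, NULL };  /* identity-mapped   */
	struct stub_dev gfx = { "gfx", 0, NULL };  /* needs translation */
	char page[4096];

	setup_dma_ops(&nic);
	setup_dma_ops(&gfx);

	/* the hot path only indirects through the chosen ops */
	nic.dma_ops->map_page(&nic, page);
	gfx.dma_ops->map_page(&gfx, page);
	return 0;
}
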
The code that this patch removes also looks buggy: the check and
replacement of the domain happens on each DMA API call, but there is
no lock to serialize the callers.
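
A minimal user-space sketch of that pattern (hypothetical; dummy_dev,
dma_map_call and the domain structures are illustrative, not the
intel-iommu code) shows how two concurrent callers can both observe
the identity domain and both install a new DMA domain:

/*
 * Each "DMA API call" checks the device's current domain and replaces
 * it without holding any lock, so two concurrent callers can both see
 * the identity domain and both install a new DMA domain.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

enum domain_type { IDENTITY, DMA };

struct domain {
	enum domain_type type;
};

struct dummy_dev {
	struct domain *domain;	/* no lock protects this pointer */
};

static struct dummy_dev dev;
static int replacements;

static void *dma_map_call(void *arg)
{
	(void)arg;
	/* unserialized check ... */
	if (dev.domain->type == IDENTITY) {
		struct domain *dma = malloc(sizeof(*dma));

		if (!dma)
			return NULL;
		dma->type = DMA;
		/* ... and replace: racy if two callers get here */
		dev.domain = dma;
		__sync_fetch_and_add(&replacements, 1);
	}
	return NULL;
}

int main(void)
{
	struct domain identity = { IDENTITY };
	pthread_t a, b;

	dev.domain = &identity;
	pthread_create(&a, NULL, dma_map_call, NULL);
	pthread_create(&b, NULL, dma_map_call, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	/* may print 2: both callers replaced the domain, one is leaked */
	printf("domain replaced %d time(s)\n", replacements);
	return 0;
}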

Best regards,
Lu Baolu
