Message-ID: <92b240f5-596e-87a9-863a-b18475042cce@arm.com>
Date:   Thu, 31 May 2018 15:25:20 +0100
From:   Robin Murphy <robin.murphy@....com>
To:     Hanjun Guo <guohanjun@...wei.com>,
        Zhen Lei <thunder.leizhen@...wei.com>,
        Will Deacon <will.deacon@....com>,
        Matthias Brugger <matthias.bgg@...il.com>,
        Rob Clark <robdclark@...il.com>,
        Joerg Roedel <joro@...tes.org>,
        linux-mediatek <linux-mediatek@...ts.infradead.org>,
        linux-arm-msm <linux-arm-msm@...r.kernel.org>,
        linux-arm-kernel <linux-arm-kernel@...ts.infradead.org>,
        iommu <iommu@...ts.linux-foundation.org>,
        linux-kernel <linux-kernel@...r.kernel.org>
Cc:     Libin <huawei.libin@...wei.com>,
        Guozhu Li <liguozhu@...ilicon.com>,
        Xinwei Hu <huxinwei@...wei.com>
Subject: Re: [PATCH 0/7] add non-strict mode support for arm-smmu-v3

On 31/05/18 14:49, Hanjun Guo wrote:
> Hi Robin,
> 
> On 2018/5/31 19:24, Robin Murphy wrote:
>> On 31/05/18 08:42, Zhen Lei wrote:
>>> In general, an IOMMU unmap operation follows the steps below:
>>> 1. remove the mapping from the page table for the specified IOVA range
>>> 2. execute a TLBI command to invalidate the mapping cached in the TLB
>>> 3. wait for the above TLBI operation to finish
>>> 4. free the IOVA resource
>>> 5. free the physical memory resource
>>>
>>> This can become a problem when unmaps are very frequent: the
>>> combination of the TLBI and the wait operation consumes a lot of time.
>>> A feasible method is to defer the TLBI and IOVA-free operations; once
>>> a certain number have accumulated, or a specified time has elapsed,
>>> execute a single TLBI_ALL command to clean up the TLB, then free the
>>> backed-up IOVAs. We call this non-strict mode.
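>>>
>>> To make this concrete, below is a minimal user-space C model of the
>>> deferred flush (the queue size, names and helpers are illustrative
>>> only, not taken from the actual patches):
>>>
>>> /*
>>>  * Simplified model of the deferred ("non-strict") flush scheme:
>>>  * unmaps clear the page table immediately but queue the IOVA, and
>>>  * one TLBI_ALL covers the whole batch.
>>>  */
>>> #include <stdio.h>
>>>
>>> #define FQ_SIZE 16	/* flush after this many deferred unmaps */
>>>
>>> struct fq_entry {
>>> 	unsigned long iova;
>>> 	unsigned long pages;
>>> };
>>>
>>> static struct fq_entry fq[FQ_SIZE];
>>> static unsigned int fq_count;
>>>
>>> static void fq_flush(void)
>>> {
>>> 	/* stand-in for the single TLBI_ALL + sync issued per batch */
>>> 	printf("TLBI_ALL covers %u deferred unmaps\n", fq_count);
>>> 	for (unsigned int i = 0; i < fq_count; i++) {
>>> 		/* only now is it safe to recycle the IOVA */
>>> 		printf("  free IOVA 0x%lx (%lu pages)\n",
>>> 		       fq[i].iova, fq[i].pages);
>>> 	}
>>> 	fq_count = 0;
>>> }
>>>
>>> /* non-strict unmap: clear the page table, then defer the TLBI */
>>> static void unmap_nonstrict(unsigned long iova, unsigned long pages)
>>> {
>>> 	/* step 1 (page-table teardown) elided in this model */
>>> 	fq[fq_count].iova = iova;
>>> 	fq[fq_count].pages = pages;
>>> 	if (++fq_count == FQ_SIZE)	/* a timer would also trigger this */
>>> 		fq_flush();
>>> }
>>>
>>> int main(void)
>>> {
>>> 	for (unsigned long i = 0; i < 40; i++)
>>> 		unmap_nonstrict(0x10000 + i * 0x1000, 1);
>>> 	fq_flush();	/* drain the remainder */
>>> 	return 0;
>>> }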
>>>
>>> But it must be noted that, although the mapping has already been
>>> removed from the page table, it may still exist in the TLB, and the
>>> freed physical memory may be reused by others. An attacker can
>>> therefore keep accessing memory through the just-freed IOVA to obtain
>>> sensitive data or corrupt memory. For this reason VFIO should always
>>> use strict mode.
>>>
>>> One might consider deferring the physical memory free as well, which
>>> would preserve strict-mode semantics. But in the map_sg cases the
>>> memory allocation is not controlled by the IOMMU APIs, so this is not
>>> enforceable.
>>>
>>> Fortunately, Intel and AMD have already implemented non-strict mode,
>>> placing the queue_iova() operation in the common file dma-iommu.c, and
>>> my work is based on it. The difference is that the arm-smmu-v3 driver
>>> calls the common IOMMU APIs to unmap, while the Intel and AMD IOMMU
>>> drivers do not.
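>>>
>>> For reference, the hook points into that common flush queue look
>>> roughly as follows (init_iova_flush_queue() and queue_iova() are the
>>> real v4.17 API; the callback names and bodies are illustrative
>>> placeholders, not the actual driver code):
>>>
>>> #include <linux/iova.h>
>>>
>>> /* issue one domain-wide invalidation, e.g. TLBI_NH_ALL + sync */
>>> static void my_flush_all(struct iova_domain *iovad)
>>> {
>>> }
>>>
>>> /* optional per-entry cleanup once the invalidation has completed */
>>> static void my_entry_dtor(unsigned long data)
>>> {
>>> }
>>>
>>> static int my_domain_init(struct iova_domain *iovad)
>>> {
>>> 	/* arm the deferred-flush machinery for this domain */
>>> 	return init_iova_flush_queue(iovad, my_flush_all, my_entry_dtor);
>>> }
>>>
>>> /* on unmap, defer instead of invalidating synchronously */
>>> static void my_unmap_done(struct iova_domain *iovad,
>>> 			  unsigned long pfn, unsigned long pages)
>>> {
>>> 	queue_iova(iovad, pfn, pages, 0);
>>> }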
>>>
>>> Below is the performance data for strict vs. non-strict mode on an
>>> NVMe device:
>>> Random read  IOPS: 146K (strict) vs 573K (non-strict)
>>> Random write IOPS: 143K (strict) vs 513K (non-strict)
>>
>> What hardware is this on? If it's SMMUv3 without MSIs (e.g. D05), then you'll still be using the rubbish globally-blocking sync implementation. If that is the case, I'd be very interested to see how much there is to gain from just improving that - I've had a patch kicking around for a while[1] (also on a rebased branch at [2]), but don't have the means for serious performance testing.
> 
> The hardware is the new D06, whose SMMU supports MSIs,

Cool! Now that we've got rid of most of the locks and profiling is 
fairly useful, are you able to get an idea of how the overhead in the 
normal case is distributed between arm_smmu_cmdq_insert_cmd() and 
__arm_smmu_sync_poll_msi()? We're always trying to improve our 
understanding of where command-queue-related overheads turn out to be in 
practice, and there's still potentially room to do nicer things than 
TLBI_NH_ALL ;)

Robin.

> it's not D05 :)
> 
> Thanks
> Hanjun
> 
