Message-ID: <62b801e8-66b6-0af7-b0c9-195823bf9f62@opensource.wdc.com>
Date:   Mon, 11 Jul 2022 19:40:22 +0900
From:   Damien Le Moal <damien.lemoal@...nsource.wdc.com>
To:     John Garry <john.garry@...wei.com>,
        "Martin K. Petersen" <martin.petersen@...cle.com>,
        Christoph Hellwig <hch@....de>
Cc:     joro@...tes.org, will@...nel.org, jejb@...ux.ibm.com,
        m.szyprowski@...sung.com, robin.murphy@....com,
        linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-ide@...r.kernel.org, iommu@...ts.linux-foundation.org,
        iommu@...ts.linux.dev, linux-scsi@...r.kernel.org,
        linuxarm@...wei.com
Subject: Re: [PATCH v5 0/5] DMA mapping changes for SCSI core

On 7/11/22 16:36, John Garry wrote:
> On 11/07/2022 00:08, Damien Le Moal wrote:
>>> Ah, I think that I misunderstood Damien's question. I thought he was
>>> asking why not keep shost max_sectors at dma_max_mapping_size() and then
>>> init each sdev request queue max hw sectors at dma_opt_mapping_size().
>> I was suggesting the reverse :) Keep the device hard limit
>> (max_hw_sectors) at the max DMA mapping size and set the soft limit
>> (max_sectors) to the optimal DMA mapping size.
> 
> Sure, but as I mentioned below, I only see a small % of requests whose 
> mapping size exceeds max_sectors, but those still cause a big performance 
> hit. That is why I want to set the hard limit to the optimal DMA 
> mapping size.
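For reference, a minimal sketch of what capping the host limit could look
like, assuming the dma_opt_mapping_size() helper proposed in this series;
the exact hook point in scsi_add_host_with_dma() is illustrative:

	/* Sketch: clamp the SCSI host's max_sectors to the optimal DMA
	 * mapping size of the device actually doing the DMA. Context
	 * (shost, dma_dev) is as in scsi_add_host_with_dma().
	 */
	if (dma_dev->dma_mask) {
		shost->max_sectors = min_t(unsigned int, shost->max_sectors,
				dma_opt_mapping_size(dma_dev) >> SECTOR_SHIFT);
	}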

How can you possibly end up with requests larger than max_sectors? BIO
splitting is done using this limit, right? Or is it that request merging
is allowed up to max_hw_sectors even if the resulting request size
exceeds max_sectors?
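Both limits are visible from userspace via the queue sysfs attributes,
which is a quick way to check what a given disk ended up with. A small
standalone check (the device name sda is an assumption):

	/* Print the soft (max_sectors_kb) and hard (max_hw_sectors_kb)
	 * request size limits for one block device via sysfs.
	 */
	#include <stdio.h>

	static void show(const char *attr)
	{
		char path[128], buf[32];
		FILE *f;

		snprintf(path, sizeof(path), "/sys/block/sda/queue/%s", attr);
		f = fopen(path, "r");
		if (f && fgets(buf, sizeof(buf), f))
			printf("%s: %s", attr, buf);
		if (f)
			fclose(f);
	}

	int main(void)
	{
		show("max_sectors_kb");		/* soft limit, in KB */
		show("max_hw_sectors_kb");	/* hard limit, in KB */
		return 0;
	}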

> 
> Indeed, the IOMMU IOVA caching limit is already the same as the default 
> max_sectors for the disks in my system - 128KB for a 4K page size.
> 
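That 128KB figure matches the IOVA range cache bound in
drivers/iommu/iova.c, as exposed by the iova_rcache_range() helper this
series adds (shown here as a sketch of the proposed code):

	/* drivers/iommu/iova.c: the rcaches hold IOVA ranges of
	 * 2^0 .. 2^(IOVA_RANGE_CACHE_MAX_SIZE - 1) pages.
	 */
	#define IOVA_RANGE_CACHE_MAX_SIZE 6	/* log of max cached IOVA range size (in pages) */

	unsigned long iova_rcache_range(void)
	{
		return PAGE_SIZE << (IOVA_RANGE_CACHE_MAX_SIZE - 1);
	}

	/* With 4KB pages: 4KB << 5 = 32 pages = 128KB, i.e. the limit
	 * quoted above.
	 */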
>>
>>> But it seems that you want to know why we do not have the request queue
>>> max sectors at dma_opt_mapping_size(). The answer is related to the
>>> meaning of dma_opt_mapping_size(): if we get any mappings which exceed
>>> this size then they can take a big DMA mapping performance hit. So I set
>>> max hw sectors at this 'opt' mapping size to ensure that we get no
>>> mappings which exceed it. Indeed, I think max sectors is 128KB today for
>>> my host, which would be the same as the dma_opt_mapping_size() value
>>> with an IOMMU enabled. And I find that only a small % of requests exceed
>>> this 128KB size, but they still have a big performance impact.
>>>
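For context, the helper being discussed, roughly as proposed in this
series (kernel/dma/mapping.c); the IOMMU DMA ops wire .opt_mapping_size
up to iova_rcache_range(), so with an IOMMU enabled this returns 128KB
on a 4K-page system:

	/* Sketch of the proposed helper: the optimal mapping size is the
	 * smaller of the DMA ops' preferred size and the hard maximum.
	 */
	size_t dma_opt_mapping_size(struct device *dev)
	{
		const struct dma_map_ops *ops = get_dma_ops(dev);
		size_t size = SIZE_MAX;

		if (ops && ops->opt_mapping_size)
			size = ops->opt_mapping_size();

		return min(dma_max_mapping_size(dev), size);
	}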
> 
> Thanks,
> John


-- 
Damien Le Moal
Western Digital Research
