Date:   Fri, 19 Mar 2021 18:02:02 +0000
From:   John Garry <john.garry@...wei.com>
To:     Robin Murphy <robin.murphy@....com>, <joro@...tes.org>,
        <will@...nel.org>, <jejb@...ux.ibm.com>,
        <martin.petersen@...cle.com>, <hch@....de>,
        <m.szyprowski@...sung.com>
CC:     <iommu@...ts.linux-foundation.org>, <linux-kernel@...r.kernel.org>,
        <linux-scsi@...r.kernel.org>, <linuxarm@...wei.com>
Subject: Re: [PATCH 5/6] dma-mapping/iommu: Add dma_set_max_opt_size()

On 19/03/2021 17:00, Robin Murphy wrote:
> On 2021-03-19 13:25, John Garry wrote:
>> Add a function to allow setting the max size for which we want to
>> optimise DMA mappings.
> 
> It seems neat in theory - particularly for packet-based interfaces that 
> might have a known fixed size of data unit that they're working on at 
> any given time - but aren't there going to be many cases where the 
> driver has no idea because it depends on whatever size(s) of request 
> userspace happens to throw at it? Even if it does know the absolute 
> maximum size of thing it could ever transfer, that could be 
> impractically large in areas like video/AI/etc., so it could still be 
> hard to make a reasonable decision.

So if you consider the SCSI stack, which is my interest: we know the max 
segment size and the max number of segments per request, so we should know 
the theoretical upper limit of the actual IOVA length we can get.
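
As an illustration, here is a minimal sketch of how a SCSI host driver could 
feed that upper limit to the proposed helper. This is a sketch only: the 
dma_set_max_opt_size() prototype is assumed from the patch title (taking a 
struct device * and a size_t) and may differ from the actual series; the 
Scsi_Host fields and dma_get_max_seg_size() are the existing SCSI and 
DMA-mapping ones.

#include <linux/dma-mapping.h>
#include <scsi/scsi_host.h>

/* Hypothetical prototype, assumed from the patch title rather than copied
 * from the series.
 */
void dma_set_max_opt_size(struct device *dev, size_t size);

static void example_set_opt_mapping_size(struct Scsi_Host *shost,
                                         struct device *dma_dev)
{
        /* Worst case a single request can map: max sg entries times the
         * max segment size.
         */
        size_t max_opt = (size_t)shost->sg_tablesize *
                         dma_get_max_seg_size(dma_dev);

        dma_set_max_opt_size(dma_dev, max_opt);
}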

Indeed, from my experiment on my SCSI host, the max IOVA length is found to 
be 507904, which is PAGE_SIZE * 124 (that being the max number of sg entries 
there). Incidentally, that means we want an IOVA_RANGE_CACHE_MAX_SIZE of 8, 
not 6.
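
A quick back-of-the-envelope check of those numbers (userspace sketch, not 
kernel code, assuming the rcache indexes allocations by order_base_2() of 
their size in pages, as drivers/iommu/iova.c does):

#include <stdio.h>

/* ceil(log2(n)), mirroring the kernel's order_base_2() */
static unsigned int order_base_2(unsigned long n)
{
        unsigned int order = 0;

        while ((1UL << order) < n)
                order++;
        return order;
}

int main(void)
{
        const unsigned long page_size = 4096;   /* PAGE_SIZE on this host */
        const unsigned long max_sg_ents = 124;  /* max sg entries observed */
        unsigned int order = order_base_2(max_sg_ents);

        printf("max IOVA len = %lu\n", page_size * max_sg_ents); /* 507904 */
        /* caching requires order < IOVA_RANGE_CACHE_MAX_SIZE */
        printf("order_base_2(%lu pages) = %u, so the cache max must be %u\n",
               max_sg_ents, order, order + 1);                   /* 7 -> 8 */
        return 0;
}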

> 
> Being largely workload-dependent is why I still think this should be a 
> command-line or sysfs tuneable - we could set the default based on how 
> much total memory is available, but ultimately it's the end user who 
> knows what the workload is going to be and what they care about 
> optimising for.

If that hardware is only found in servers, then the extra memory cost would 
be trivial, so setting it to the max would be the standard approach.
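
For reference, the kind of command-line tuneable suggested above could be as 
simple as a module/boot parameter; a hypothetical sketch only (the parameter 
name and default are invented here and not part of this series):

#include <linux/module.h>
#include <linux/moduleparam.h>

/* Hypothetical knob: largest mapping size (in pages) the IOVA caches are
 * tuned for. The default matches today's 2^5-page rcache limit.
 */
static unsigned int iova_opt_max_pages = 32;
module_param(iova_opt_max_pages, uint, 0444);
MODULE_PARM_DESC(iova_opt_max_pages,
                 "Largest mapping size in pages to optimise IOVA caching for");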

> 
> Another thought (which I'm almost reluctant to share) is that I would 
> *love* to try implementing a self-tuning strategy that can detect high 
> contention on particular allocation sizes and adjust the caches on the 
> fly, but I can easily imagine that having enough inherent overhead to 
> end up being an impractical (but fun) waste of time.
> 

For now, I just want to recover the performance lost recently :)

Thanks,
John
