Message-ID: <1043830a-02af-bf8a-6089-0eb7a84a8e85@huawei.com>
Date: Tue, 17 May 2022 14:50:24 +0100
From: John Garry <john.garry@...wei.com>
To: Robin Murphy <robin.murphy@....com>, <joro@...tes.org>,
<will@...nel.org>, <hch@....de>, <m.szyprowski@...sung.com>
CC: <chenxiang66@...ilicon.com>, <thunder.leizhen@...wei.com>,
<iommu@...ts.linux-foundation.org>, <linux-kernel@...r.kernel.org>,
<liyihang6@...ilicon.com>
Subject: Re: [RFC PATCH] dma-iommu: Add iommu_dma_max_mapping_size()
On 17/05/2022 13:02, Robin Murphy wrote:
>>>
>>> Indeed, sorry but NAK for this being nonsense. As I've said at least
>>> once before, if the unnecessary SAC address allocation attempt slows
>>> down your workload, make it not do that in the first place. If you
>>> don't like the existing command-line parameter then fine, there are
>>> plenty of
>>> other options, it just needs to be done in a way that doesn't break
>>> x86 systems with dodgy firmware, as my first attempt turned out to.
>>
>> Sorry, but I am not interested in this. It was discussed in Jan last
>> year without any viable solution.
>
> Er, OK, if you're not interested in solving that problem I don't see why
> you'd bring it up, but hey ho. *I* still think it's important, so I
> guess I'll revive my old patch with a CONFIG_X86 bodge and have another
> go at some point.
Let me rephrase: I would be happy to help fix that problem if we really
can get it fixed. However, for my problem, the priority is to get the
SCSI driver to stop requesting uncached IOVAs in the first place.
>
>> Anyway we still have the long-term IOVA aging issue, and requesting
>> non-cached IOVAs is involved in that. So I would rather keep the SCSI
>> driver requesting cached IOVAs all the time.
>>
>> I did try to do it the other way around - configuring the IOVA caching
>> range according to the driver's requirement, but that got nowhere.
Note that this is still not a final solution as it's not always viable
to ask a user to unbind + bind the driver.
>
> FWIW I thought that all looked OK, it just kept getting drowned out by
> more critical things in my inbox so I hoped someone else might comment.
> If it turns out that I've become the de-facto IOVA maintainer in
> everyone else's minds now and they're all waiting for my word then fair
> enough, I just need to know and reset my expectations accordingly. Joerg?
It would be great to see an improvement here...
>
>>> Furthermore, if a particular SCSI driver doesn't benefit from
>>> mappings larger than 256KB, then that driver is also free to limit
>>> its own mapping size. There are other folks out there with use-cases
>>> for mapping *gigabytes* at once; you don't get to cripple the API and
>>> say that that's suddenly not allowed just because it happens to make
>>> your thing go faster, that's absurd.
>>
>> I'd say less catastrophically slow, not faster.
>>
>> So how do we inform the SCSI driver of this caching limit then, so
>> that it may limit the SGL length?
>
> Driver-specific mechanism; block-layer-specific mechanism; redefine this
> whole API to something like dma_opt_mapping_size(), as a limit above
> which mappings might become less efficient or start to fail (callback to
> my thoughts on [1] as well, I suppose); many options.
ok, fine.
> Just not imposing
> a ridiculously low *maximum* on everyone wherein mapping calls "should
> not be larger than the returned value" when that's clearly bollocks.
I agree that this change violates the documented semantics, as the
documentation clearly implies a hard limit.
However, FWIW, from looking at the users of dma_max_mapping_size(), they
seem to use it in a similar way to the SCSI/block layer core, i.e. to
limit the max SGL total length per IO command, and not as a method to
learn the max DMA consistent mapping size for ring buffers, etc.
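For example, the SCSI midlayer clamps the per-command transfer size in
scsi_add_host_with_dma() roughly like this (paraphrased from memory, so
treat as approximate):

        if (dma_dev->dma_mask) {
                shost->max_sectors = min_t(unsigned int, shost->max_sectors,
                                dma_max_mapping_size(dma_dev) >> SECTOR_SHIFT);
        }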
Anyway, I'll look at dma_opt_mapping_size(), but I am not sure how keen
Christoph will be on it... it seems strange to introduce that API due to
a peculiarity of the IOVA allocator.
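As a rough sketch of how that could look - dma_opt_mapping_size() does
not exist yet, and foo_host_clamp_sectors() is just a made-up example -
a host driver could clamp its SGL length like so:

        #include <linux/blkdev.h>       /* SECTOR_SHIFT */
        #include <linux/dma-mapping.h>
        #include <linux/minmax.h>
        #include <scsi/scsi_host.h>

        static void foo_host_clamp_sectors(struct Scsi_Host *shost,
                                           struct device *dev)
        {
                /*
                 * Proposed API, not in mainline: largest mapping size for
                 * which IOVA allocations still stay within the rcache range.
                 */
                size_t opt = dma_opt_mapping_size(dev);

                if (opt)
                        shost->max_sectors = min_t(unsigned int,
                                                   shost->max_sectors,
                                                   opt >> SECTOR_SHIFT);
        }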
>
>
> [1]
> https://lore.kernel.org/linux-iommu/20220510142109.777738-1-ltykernel@gmail.com/
Thanks,
John