Message-ID: <d4f2f99c-b8bb-4ed9-8d91-ed0f5b418425@arm.com>
Date: Fri, 1 Mar 2024 18:04:10 +0000
From: Robin Murphy <robin.murphy@....com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: Xuan Zhuo <xuanzhuo@...ux.alibaba.com>, linux-kernel@...r.kernel.org,
Joerg Roedel <joro@...tes.org>, Will Deacon <will@...nel.org>,
Christoph Hellwig <hch@....de>, Marek Szyprowski <m.szyprowski@...sung.com>,
iommu@...ts.linux.dev, Zelin Deng <zelin.deng@...ux.alibaba.com>
Subject: Re: [RFC] dma-mapping: introduce dma_can_skip_unmap()

On 2024-03-01 1:41 pm, Michael S. Tsirkin wrote:
> On Fri, Mar 01, 2024 at 12:42:39PM +0000, Robin Murphy wrote:
>> On 2024-03-01 11:50 am, Michael S. Tsirkin wrote:
>>> On Fri, Mar 01, 2024 at 11:38:25AM +0000, Robin Murphy wrote:
>>>> Not only is this idea not viable, the entire premise seems flawed - the
>>>> reasons for virtio needing to use the DMA API at all are highly likely to be
>>>> the same reasons for it needing to use the DMA API *properly* anyway.
>>>
>>> The idea has nothing to do with virtio per se
>>
>> Sure, I can see that, but if virtio is presented as the justification for
>> doing this then it's the justification I'm going to look at first. And the
>> fact is that it *does* seem to have particular significance, since having up
>> to 19 DMA addresses involved in a single transfer is very much an outlier
>> compared to typical hardware drivers.
>
> That's a valid comment. Xuan Zhuo, do other drivers do this too -
> could you check please?
>
>> Furthermore the fact that DMA API
>> support was retrofitted to the established virtio design means I would
>> always expect it to run up against more challenges than a hardware driver
>> designed around the expectation that DMA buffers have DMA addresses.
>
>
> So it seems virtio can't drive any DMA changes, then - it's forever tainted?
> Seems unfair - we retrofitted it years ago, and enough refactoring has
> happened since then.

No, I'm not saying we couldn't still do things to help virtio if and
when it does prove reasonable to do so; just that if anything it's
*because* that retrofit is mature and fairly well polished by now that
any remaining issues like this one are going to be found in the most
awkward corners and thus unlikely to generalise.

FWIW, in my experience it seems more common for network drivers to
actually have the opposite problem, where knowing the DMA address of a
buffer is easy, but keeping track of the corresponding CPU address can
be more of a pain.
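
As a purely made-up illustration (not lifted from any particular driver),
an RX ring often ends up shaped something like the sketch below: the DMA
address already lives in the hardware-visible descriptor, so it's the
CPU-side state that needs its own shadow array just so unmap/free can
find it again:

struct rx_desc {                        /* hardware-visible descriptor */
        __le64 dma_addr;                /* device reads the DMA address here */
        __le32 len;
        __le32 flags;
};

struct rx_buf_info {                    /* CPU-only shadow state */
        struct page *page;              /* needed again at unmap/free time */
        unsigned int offset;
};

struct rx_ring {
        struct rx_desc *desc;           /* coherent DMA memory */
        struct rx_buf_info *buf_info;   /* kcalloc()ed shadow array */
        unsigned int count;
};
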
>>> - we are likely not the
>>> only driver that wastes a lot of memory (hot in cache, too) keeping DMA
>>> addresses around for the sole purpose of calling DMA unmap. On a bunch
>>> of systems unmap is always a nop and we could save some memory if there
>>> was a way to find out. What is proposed is an API extension allowing
>>> that for anyone - not just virtio.
>>
>> And the point I'm making is that that "always" is a big assumption, and in
>> fact for the situations where it is robustly true we already have the
>> DEFINE_DMA_UNMAP_{ADDR,LEN} mechanism.
>> I'd consider it rare for DMA
>> addresses to be stored in isolation, as opposed to being part of some kind
>> of buffer descriptor (or indeed struct scatterlist, for an obvious example)
>> that a driver or subsystem still has to keep track of anyway, so in general
>> I believe the scope for saving decidedly small amounts of memory at runtime
>> is also considerably less than you might be imagining.
>>
>> Thanks,
>> Robin.
>
>
> Yes. DEFINE_DMA_UNMAP_* exists, but that's only compile-time.
> And I think the fact we have that mechanism is a hint that
> enough configurations could benefit from a runtime
> mechanism, too.
>
> E.g. since you mentioned scatterlist, it has a bunch of ifdefs
> in place.

But what could that benefit be in general? It's not like we can change
structure layouts on a per-DMA-mapping-call basis to save
already-allocated memory... :/
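
For reference, the existing compile-time mechanism looks roughly like the
sketch below (the struct and function names are invented for illustration):
when CONFIG_NEED_DMA_MAP_STATE is not set, the two DEFINE_DMA_UNMAP_*
fields occupy no space at all, which is exactly why the decision is made
per kernel build rather than per dma_map_*() call - by the time a mapping
call could report anything, the structure layout is already fixed.

#include <linux/dma-mapping.h>

struct foo_tx_buf {
        void *vaddr;
        DEFINE_DMA_UNMAP_ADDR(dma);     /* compiles away when not needed */
        DEFINE_DMA_UNMAP_LEN(len);
};

static int foo_map(struct device *dev, struct foo_tx_buf *buf, size_t size)
{
        dma_addr_t addr = dma_map_single(dev, buf->vaddr, size,
                                         DMA_TO_DEVICE);

        if (dma_mapping_error(dev, addr))
                return -ENOMEM;

        /* no-ops on configs where the fields don't exist */
        dma_unmap_addr_set(buf, dma, addr);
        dma_unmap_len_set(buf, len, size);
        return 0;
}

static void foo_unmap(struct device *dev, struct foo_tx_buf *buf)
{
        dma_unmap_single(dev, dma_unmap_addr(buf, dma),
                         dma_unmap_len(buf, len), DMA_TO_DEVICE);
}
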
Thanks,
Robin.
>
> Of course
> - finding more examples would be beneficial to help maintainers
> do the cost/benefit analysis
> - a robust implementation is needed
>
>