Message-ID: <ZSaLoaenhsEG4/IP@matsya>
Date: Wed, 11 Oct 2023 17:18:49 +0530
From: Vinod Koul <vkoul@...nel.org>
To: Kelvin.Cao@...rochip.com
Cc: dmaengine@...r.kernel.org, George.Ge@...rochip.com,
linux-kernel@...r.kernel.org, logang@...tatee.com,
christophe.jaillet@...adoo.fr, hch@...radead.org
Subject: Re: [PATCH v6 1/1] dmaengine: switchtec-dma: Introduce Switchtec DMA
engine PCI driver
On 10-10-23, 21:23, Kelvin.Cao@...rochip.com wrote:
> On Mon, 2023-10-09 at 11:08 +0530, Vinod Koul wrote:
> > > u64 size_to_transfer;
> >
> > Why can't the client driver write to the doorbell? Is there anything
> > that prevents us from doing so?
>
> I think the potential challenge with having the client driver ring the
> doorbell is that the client driver (host RC) is a different requester
> in the PCIe hierarchy than the DMA EP, in which case PCIe ordering
> needs to be considered.
>
> Since PCIe ensures that reads don't pass writes, we can insert a read
> DMA operation with the DMA_PREP_FENCE flag between the two DMA writes
> (one for the data transfer and one for the notification) to guarantee
> ordering for the same requester, the DMA EP. I'm not sure the RC could
> ensure the same ordering if the client driver issues an MMIO write to
> the doorbell after the data DMA and read DMA complete, such that the
> consumer is guaranteed the transferred data is actually in memory when
> the doorbell is triggered by the client's MMIO write. I guess it's
> still doable with an MMIO write, but some special consideration is
> needed.
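For reference, a minimal sketch of the fenced-read scheme you describe,
using only the generic dmaengine client API (the channel, the buffer
addresses and the small read-back/notification locations are illustrative
assumptions, not the actual switchtec-dma descriptor layout):

	#include <linux/dmaengine.h>

	static int demo_write_then_notify(struct dma_chan *chan,
					  dma_addr_t data_dst, dma_addr_t data_src,
					  size_t data_len,
					  dma_addr_t flush_dst, dma_addr_t flush_src,
					  dma_addr_t db_dst, dma_addr_t db_src)
	{
		struct dma_async_tx_descriptor *tx;

		/* 1) Data transfer into the consumer's buffer. */
		tx = dmaengine_prep_dma_memcpy(chan, data_dst, data_src,
					       data_len, 0);
		if (!tx)
			return -ENOMEM;
		dmaengine_submit(tx);

		/*
		 * 2) Small read-back with DMA_PREP_FENCE: reads from the same
		 * requester do not pass its earlier writes, so completion of
		 * this descriptor implies the data write has reached memory.
		 */
		tx = dmaengine_prep_dma_memcpy(chan, flush_dst, flush_src,
					       sizeof(u32), DMA_PREP_FENCE);
		if (!tx)
			return -ENOMEM;
		dmaengine_submit(tx);

		/* 3) Notification write ("doorbell by DMA"). */
		tx = dmaengine_prep_dma_memcpy(chan, db_dst, db_src,
					       sizeof(u32), DMA_PREP_INTERRUPT);
		if (!tx)
			return -ENOMEM;
		dmaengine_submit(tx);

		dma_async_issue_pending(chan);
		return 0;
	}
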
Given that it is a single value, the overhead of queueing a new DMA
transaction would be higher than an MMIO write! I think the MMIO write
should be preferred.
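In that case the client side would reduce to something like the sketch
below, once it has waited for the data descriptor's completion callback;
the ioremap()'ed doorbell pointer and the value written are hypothetical,
just to show the shape of it:

	/* Ring the EP's doorbell directly from the client via MMIO. */
	static void demo_ring_doorbell_mmio(void __iomem *db_base, u32 db_val)
	{
		writel(db_val, db_base);
	}
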
--
~Vinod