Message-ID: <20220927104831.bovlzl74osb4t5d3@mobilestation>
Date: Tue, 27 Sep 2022 13:48:31 +0300
From: Serge Semin <fancer.lancer@...il.com>
To: Robin Murphy <robin.murphy@....com>
Cc: Serge Semin <Sergey.Semin@...kalelectronics.ru>,
Gustavo Pimentel <gustavo.pimentel@...opsys.com>,
Vinod Koul <vkoul@...nel.org>, Rob Herring <robh@...nel.org>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
Jingoo Han <jingoohan1@...il.com>, Frank Li <Frank.Li@....com>,
Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>,
Alexey Malahov <Alexey.Malahov@...kalelectronics.ru>,
Pavel Parkhomenko <Pavel.Parkhomenko@...kalelectronics.ru>,
Krzysztof Wilczyński <kw@...ux.com>,
linux-pci@...r.kernel.org, dmaengine@...r.kernel.org,
linux-kernel@...r.kernel.org,
"iommu@...ts.linux.dev" <iommu@...ts.linux.dev>
Subject: Re: [PATCH RESEND v5 22/24] dmaengine: dw-edma: Bypass dma-ranges
mapping for the local setup
On Mon, Sep 26, 2022 at 03:08:01PM +0100, Robin Murphy wrote:
> On 2022-09-12 02:24, Serge Semin wrote:
> > On Wed, Aug 31, 2022 at 10:17:30AM +0100, Robin Murphy wrote:
> > > On 2022-08-22 19:53, Serge Semin wrote:
> > > > DW eDMA doesn't perform any translation of the traffic generated on the
> > > > CPU/Application side. It just generates read/write AXI-bus requests with
> > > > the specified addresses. But if the dma-ranges DT-property is specified
> > > > for a platform device node, Linux will use it to map the CPU memory
> > > > regions into the DMAable bus ranges. This isn't what we want for the
> > > > eDMA embedded into the locally accessed DW PCIe Root Port and End-point.
> > > > In order to work around that, let's set the chan_dma_dev flag for each
> > > > DW eDMA channel, thus forcing the client drivers to get a custom
> > > > dma-ranges-less parental device for the mappings.
> > > >
> > > > Note it will only work for the client drivers using the
> > > > dmaengine_get_dma_device() method to get the parental DMA device.
> > >
> >
> > > No, this is nonsense. If the DMA engine is on the host side of the bridge
> > > then it should not have anything to do with the PCI device at all, it should
> > > be associated with the platform device,
> >
> > Well, the DMA-engine is embedded into the PCIe Root Port, it is associated
> > with the platform device it's embedded in, and it doesn't have
> > anything to do with any particular PCI device.
> >
> > > and thus any range mapping on the bridge itself would be irrelevant anyway.
> >
> > Really? I find it to be otherwise. Please see how the "dma-ranges"
> > property is parsed and applied during the device-specific memory ranges
> > mapping when it's specified for a PCIe Root Port.
>
> Sigh, that's a bug. Now I see where the confusion is coming from.
Finally we are on the same page.) I didn't think it was a bug
though. I described some details of the problem in another thread
earlier today:
Link: https://lore.kernel.org/linux-pci/20220926205333.qlhb5ojmx4sktzt5@mobilestation/
(See my note regarding the "dma-ranges" usage, which I accidentally
addressed to William instead of you.)
>
> Annoyingly it's basically the exact thing I called out in 951d48855d86 when
> making dma-ranges work for non-OF PCI devices in the first place, but
> apparently neither I nor anyone else thought of this particular edge case at
> the time. Sorry about that. I'll have a look at how best to fix it.
You are right. The PCI-specific dma-ranges semantics haven't been well
thought through in the first place. The child devices should have had a
dedicated way to set up their own memory ranges mapping.
Just a thought: as a possible solution, with the dma-ranges property
being dedicated to the child devices, we could introduce a new "space
code" for the dma-ranges property - a flag which would mark an entry as
describing the memory range of the bridge/host-controller itself. If the
dma-ranges property doesn't have an entry with such a code, the bridge
mapping could be considered direct (in accordance with the parental
dma-ranges properties). The IOMMU part would still apply to the whole
PCIe-related hierarchy - the bridge itself and the peripheral devices.
A rough illustration of the idea follows.
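Purely hypothetical sketch of what I mean (neither the flag nor the
helper exist upstream, and the bit position is picked arbitrarily just
to illustrate the idea):
< /* Hypothetical flag in the first (phys.hi) cell of a PCIe dma-ranges
<  * entry, marking the entry as describing the bridge itself rather
<  * than the child PCI devices. Bit chosen arbitrarily for illustration.
<  */
< #define OF_PCI_DMA_RANGE_BRIDGE		BIT(28)
<
< static bool of_pci_dma_range_is_for_bridge(const __be32 *range)
< {
< 	u32 hi = be32_to_cpu(range[0]);
<
< 	return !!(hi & OF_PCI_DMA_RANGE_BRIDGE);
< }
The parsing code could then skip such entries when building the
dma_range_map of the child PCI devices and only apply them to the
bridge device itself.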
>
> Everything else still stands, though. If you can't use the original platform
> device for DMA API calls, at least configure the child device properly by
> calling of_dma_configure() with the parent's DT node in the expected manner
> (and manually remove its dma_range_map if you need an immediate workaround).
Do you mean something like this?
< struct dma_chan *dchan = ...;
< struct dw_edma_chan *chan = ...;
< struct device *parent = chan->dw->chip->dev;
< int ret;
<
< if (dev_of_node(parent)) {
< 	struct device_node *node = dev_of_node(parent);
<
< 	ret = of_dma_configure(&dchan->dev->device, node, true);
< } else if (has_acpi_companion(parent)) {
< 	struct acpi_device *adev = to_acpi_device_node(parent->fwnode);
<
< 	ret = acpi_dma_configure(&dchan->dev->device, acpi_get_dma_attr(adev));
< } else {
< 	ret = -EINVAL;
< }
<
< if (ret)
< 	return ret;
<
< /* Drop the detected dma-ranges mapping since it isn't applicable for
<  * the PCIe RP/EP bridge itself but to the peripheral devices only.
<  */
< dchan->dev->device.dma_range_map = NULL;
< dchan->dev->chan_dma_dev = true;
<
< return 0;
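For reference, here is what I expect the client side to then look like
(just a sketch; buf/len are placeholders):
< /* With chan_dma_dev set, dmaengine_get_dma_device() returns
<  * &dchan->dev->device (with no dma-ranges map) instead of the
<  * provider platform device. buf/len are placeholders here.
<  */
< struct device *dma_dev = dmaengine_get_dma_device(dchan);
< dma_addr_t addr;
<
< addr = dma_map_single(dma_dev, buf, len, DMA_TO_DEVICE);
< if (dma_mapping_error(dma_dev, addr))
< 	return -ENOMEM;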
What about the DMA-mask? Will it be ok if I copy it from the parental device?
Like this:
< dma_coerce_mask_and_coherent(&dchan->dev->device, dma_get_mask(parent));
Judging by the of_dma_configure_id() method implementation, the mask
upper bound is calculated based on the dma-ranges entries. Since the
DT-property isn't applicable to the PCIe host platform device itself,
the upper bound it implies will most likely be invalid for the bridge
too.
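If copying the mask is acceptable, it would slot into the snippet above
roughly like this (a sketch, assuming the parental device's mask is a
valid bound for the eDMA AXI master too):
< /* Assumption: the eDMA issues its AXI requests on the same
<  * interconnect as the parental device, so just inherit its mask
<  * for the channel device.
<  */
< ret = dma_coerce_mask_and_coherent(&dchan->dev->device,
< 				   dma_get_mask(parent));
< if (ret)
< 	return ret;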
Regards,
-Sergey
>
> Thanks,
> Robin.