Message-ID: <20180322081926.GC29444@lst.de>
Date: Thu, 22 Mar 2018 09:19:26 +0100
From: Christoph Hellwig <hch@....de>
To: Nipun Gupta <nipun.gupta@....com>
Cc: robin.murphy@....com, hch@....de, linux@...linux.org.uk,
gregkh@...uxfoundation.org, m.szyprowski@...sung.com,
bhelgaas@...gle.com, zajec5@...il.com, andy.gross@...aro.org,
david.brown@...aro.org, dan.j.williams@...el.com,
vinod.koul@...el.com, thierry.reding@...il.com, robh+dt@...nel.org,
frowand.list@...il.com, jarkko.sakkinen@...ux.intel.com,
rafael.j.wysocki@...el.com, dmitry.torokhov@...il.com,
johan@...nel.org, msuchanek@...e.de, linux-kernel@...r.kernel.org,
iommu@...ts.linux-foundation.org, linux-wireless@...r.kernel.org,
linux-arm-msm@...r.kernel.org, linux-soc@...r.kernel.org,
dmaengine@...r.kernel.org, dri-devel@...ts.freedesktop.org,
linux-tegra@...r.kernel.org, devicetree@...r.kernel.org,
linux-pci@...r.kernel.org, bharat.bhushan@....com,
leoyang.li@....com
Subject: Re: [PATCH v2 2/2] drivers: remove force dma flag from buses
> --- a/drivers/dma/qcom/hidma_mgmt.c
> +++ b/drivers/dma/qcom/hidma_mgmt.c
> @@ -398,7 +398,7 @@ static int __init hidma_mgmt_of_populate_channels(struct device_node *np)
> }
> of_node_get(child);
> new_pdev->dev.of_node = child;
> - of_dma_configure(&new_pdev->dev, child);
> + of_dma_configure(&new_pdev->dev, child, true);
> /*
> * It is assumed that calling of_msi_configure is safe on
> * platforms with or without MSI support.
Where did we mark this bus as force_dma before?
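For reference, here is the caller-side pattern as I read it in v2 (just a
sketch; foo_populate_child() is a made-up helper, not from the patch):

	#include <linux/of.h>
	#include <linux/of_device.h>
	#include <linux/platform_device.h>

	/*
	 * Hypothetical helper showing the new calling convention: the
	 * force-DMA decision now travels with the call instead of
	 * living in dev->bus->force_dma.
	 */
	static int foo_populate_child(struct platform_device *new_pdev,
				      struct device_node *child)
	{
		of_node_get(child);
		new_pdev->dev.of_node = child;

		/*
		 * As I understand it, true means "go ahead with the
		 * default DMA setup even when the node has no usable
		 * dma-ranges", which is what the old per-bus force_dma
		 * flag used to request.
		 */
		of_dma_configure(&new_pdev->dev, child, true);

		return 0;
	}

The nice part of the new form is that the policy is visible at the call
site; the downside is that every caller now has to get it right, hence
my questions here.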
> diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
> index 9a4f4246..895c83e 100644
> --- a/drivers/of/of_reserved_mem.c
> +++ b/drivers/of/of_reserved_mem.c
> @@ -353,7 +353,7 @@ int of_reserved_mem_device_init_by_idx(struct device *dev,
> /* ensure that dma_ops is set for virtual devices
> * using reserved memory
> */
> - of_dma_configure(dev, np);
> + of_dma_configure(dev, np, true);
Did all the callers of this one really force DMA before? I have a hard time
untangling the call stacks, unfortunately.
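To spell out the case I'm worried about, a hypothetical consumer on this
path would look something like this (bar_probe() and the driver around it
are made up for illustration):

	#include <linux/of.h>
	#include <linux/of_reserved_mem.h>
	#include <linux/platform_device.h>

	/*
	 * Hypothetical driver with a "memory-region" phandle.  With
	 * this patch, of_reserved_mem_device_init_by_idx() calls
	 * of_dma_configure(dev, np, true), i.e. DMA ops get set up
	 * even when the node has no dma-ranges, whether or not the
	 * device's bus previously set force_dma.
	 */
	static int bar_probe(struct platform_device *pdev)
	{
		int ret;

		/* idx 0 == first entry of the "memory-region" property */
		ret = of_reserved_mem_device_init_by_idx(&pdev->dev,
							 pdev->dev.of_node, 0);
		if (ret && ret != -ENODEV)
			return ret;

		/* ... rest of probe ... */
		return 0;
	}

If every driver reaching this function sits on a bus that used to set
force_dma, then passing true preserves behaviour; otherwise this hunk
silently widens it.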