Message-ID: <5773E503.3070404@codeaurora.org>
Date: Wed, 29 Jun 2016 10:10:59 -0500
From: Timur Tabi <timur@...eaurora.org>
To: Arnd Bergmann <arnd@...db.de>
Cc: netdev@...r.kernel.org, devicetree@...r.kernel.org,
linux-arm-msm@...r.kernel.org, sdharia@...eaurora.org,
shankerd@...eaurora.org, vikrams@...eaurora.org,
cov@...eaurora.org, gavidov@...eaurora.org, robh+dt@...nel.org,
andrew@...n.ch, bjorn.andersson@...aro.org, mlangsdo@...hat.com,
jcm@...hat.com, agross@...eaurora.org, davem@...emloft.net,
f.fainelli@...il.com, catalin.marinas@....com
Subject: Re: [PATCH] [v6] net: emac: emac gigabit ethernet controller driver
Arnd Bergmann wrote:
> That's also not how it works: each device starts out with a 32-bit mask,
> because that's what historically all PCI devices can do. If a device
> is 64-bit DMA capable, it can extend the mask by passing DMA_BIT_MASK(64)
> (or whatever it can support), and the platform code checks if that's
> possible.
So if it's not possible, then dma_set_mask returns an error, and the
driver should try a smaller mask? Doesn't that mean that every driver
for a 64-bit device should do this:
	for (i = 64; i >= 32; i--) {
		ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(i));
		if (!ret)
			break;
	}
	if (ret)
		return ret;
Sure, this is overkill, but it seems to me that the driver does not
really know what mask is actually valid, so it has to find the largest
mask that works.
--
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the
Code Aurora Forum, hosted by The Linux Foundation.