Message-ID: <4DC02622.90000@intel.com>
Date: Tue, 03 May 2011 08:58:26 -0700
From: Dan Williams <dan.j.williams@...el.com>
To: 康剑斌 <kjbmail@...il.com>
CC: "Koul, Vinod" <vinod.koul@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: Can the I/OAT DMA engine access PCI MMIO space
On 5/2/2011 11:31 PM, 康剑斌 wrote:
>
>>> Yes, I used 'ioremap_nocache' to map the I/O memory, and I can use
>>> memcpy to copy data to this region. async_tx should be correctly
>>> configured, as I can use async_memcpy to copy data between different
>>> system memory addresses.
>> Then you should be using memcpy_toio() and friends
>>
> Do you mean that once I have mapped the MMIO region, I can't use I/OAT
> DMA transfers to it any more?
> I can use memcpy to copy the data, but it consumes a lot of CPU because
> PCI access is too slow.
> If I could use I/OAT DMA and the async_tx API to do the job, the
> performance should be improved.
> Thanks
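
(As an aside, the memcpy_toio() suggestion quoted above is the plain
CPU-driven path. A minimal sketch, where MY_BAR and MY_OFFSET are
illustrative placeholders:

	void __iomem *regs = pci_ioremap_bar(pdev, MY_BAR);

	if (!regs)
		return -ENOMEM;
	/* CPU copies each word into the uncached MMIO mapping */
	memcpy_toio(regs + MY_OFFSET, buf, len);
	iounmap(regs);

As you note above, this still ties up the CPU for the duration of the
slow PCI writes.)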
The async_tx API only supports memory-to-memory transfers. To write to
MMIO space with ioatdma you would need a custom method, like the
dma-slave support in other drivers, to program the descriptors with the
physical MMIO bus address.
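
Concretely, "program the descriptors with the physical MMIO bus
address" means handing the engine a dma_addr_t that points at the BAR
rather than at RAM. A rough, untested sketch of that shape using the
generic dmaengine prep interface (MY_BAR is a placeholder, the
phys == bus assumption only holds on platforms like x86, and ioatdma
may still need driver changes to accept this):

	#include <linux/dmaengine.h>
	#include <linux/dma-mapping.h>
	#include <linux/pci.h>

	static int dma_to_mmio(struct pci_dev *pdev, void *buf, size_t len)
	{
		struct dma_async_tx_descriptor *tx;
		struct dma_chan *chan;
		dma_addr_t src, dst;
		dma_cookie_t cookie;
		dma_cap_mask_t mask;

		dma_cap_zero(mask);
		dma_cap_set(DMA_MEMCPY, mask);
		chan = dma_request_channel(mask, NULL, NULL);
		if (!chan)
			return -ENODEV;

		src = dma_map_single(chan->device->dev, buf, len,
				     DMA_TO_DEVICE);
		/* destination is the BAR's bus address, not system memory */
		dst = pci_resource_start(pdev, MY_BAR);

		tx = chan->device->device_prep_dma_memcpy(chan, dst, src,
							  len, DMA_CTRL_ACK);
		if (!tx)
			goto out;

		cookie = tx->tx_submit(tx);
		dma_async_issue_pending(chan);
		/* crude poll; a real driver would use a completion callback */
		while (dma_async_is_tx_complete(chan, cookie, NULL, NULL) ==
		       DMA_IN_PROGRESS)
			cpu_relax();
	out:
		dma_unmap_single(chan->device->dev, src, len, DMA_TO_DEVICE);
		dma_release_channel(chan);
		return tx ? 0 : -EIO;
	}

The dma-slave drivers mentioned above do essentially this, with a
fixed device-side address baked into the channel configuration.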
--
Dan