Date: Thu, 05 May 2011 08:11:14 -0700
From: Dan Williams <dan.j.williams@...el.com>
To: 康剑斌 <kjbmail@...il.com>
CC: "Koul, Vinod" <vinod.koul@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Jiang, Dave" <dave.jiang@...el.com>
Subject: Re: Can I/OAT DMA engine access PCI MMIO space
[ adding Dave ]
On 5/5/2011 1:45 AM, 康剑斌 wrote:
> Thanks.
> I read the PCI BAR address directly and program it into descriptors,
> and ioatdma works.
> The problem is that when a PCI transfer fails (using an NTB connected
> to another system, when that system powers down), ioatdma causes a
> kernel oops:
>
> BUG_ON(is_ioat_bug(chanerr));
> in drivers/dma/ioat/dma_v3.c, line 365
>
> It seems that the HW reports an 'IOAT_CHANERR_DEST_ADDR_ERR', and the
> driver can't recover from this situation.
Ah ok, this is expected with the current upstream ioatdma driver. The
driver assumes that all transfers are mem-to-mem (ASYNC_TX_DMA or
NET_DMA) and that a destination address error is a fatal error (similar
to a kernel page fault).
With NTB, where failures are expected, the driver would need to be
modified to expect the error, recover from it, and report it to the
application.
> What does dma-slave mean? Is it like the DMA_SLAVE flag that exists in
> other DMA drivers?
Yes, DMA_SLAVE is the generic framework for associating a DMA offload
device with an MMIO peripheral.
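For a sense of what the framework usage looks like from a client driver
(not something that works on I/OAT today, since ioatdma does not
implement DMA_SLAVE), a sketch against the generic dmaengine API:

```c
#include <linux/dmaengine.h>

/* Request a slave-capable channel and point its destination at a
 * fixed peripheral (MMIO) address. Sketch only; error handling and
 * the bus width are illustrative. */
static struct dma_chan *get_mmio_chan(dma_addr_t mmio_dst)
{
    dma_cap_mask_t mask;
    struct dma_chan *chan;
    struct dma_slave_config cfg = {
        .direction = DMA_TO_DEVICE,
        .dst_addr = mmio_dst,   /* fixed peripheral address */
        .dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
    };

    dma_cap_zero(mask);
    dma_cap_set(DMA_SLAVE, mask);
    chan = dma_request_channel(mask, NULL, NULL);
    if (!chan)
        return NULL;
    if (dmaengine_slave_config(chan, &cfg)) {
        dma_release_channel(chan);
        return NULL;
    }
    return chan;
}
```

Transfers are then built with device_prep_slave_sg() rather than the
mem-to-mem memcpy preparation that ioatdma exposes.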
--
Dan
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/