Message-ID: <20080218142239.43809da7@dhcp-252-066.norway.atmel.com>
Date: Mon, 18 Feb 2008 14:22:39 +0100
From: Haavard Skinnemoen <hskinnemoen@...el.com>
To: "Dan Williams" <dan.j.williams@...el.com>
Cc: linux-kernel@...r.kernel.org,
"Shannon Nelson" <shannon.nelson@...el.com>,
"David Brownell" <david-b@...bell.net>, kernel@...32linux.org,
"Francis Moreau" <francis.moro@...il.com>,
"Paul Mundt" <lethal@...ux-sh.org>,
"Vladimir A. Barinov" <vbarinov@...mvista.com>,
"Pierre Ossman" <drzeus-list@...eus.cx>
Subject: Re: [RFC v3 4/7] dmaengine: Add slave DMA interface
On Sat, 16 Feb 2008 13:06:54 -0700
"Dan Williams" <dan.j.williams@...el.com> wrote:
> I like the direction of the patch, i.e. splitting out separate
> functionality into separate structs. However, I do not want to break
> the model of clients sourcing the operations and drivers sinking them
> which dma_slave_descriptor appears to do. How about adding a
> scatterlist pointer and an 'unmap_type' to the common descriptor?
> Where unmap_type selects between page, single, sg, or no-unmap.
> Drivers already know the length and direction.
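
If I understand you correctly, that would look something along these
lines (the names here are just my guesses, not anything from your
patches):

enum dma_unmap_type {
        DMA_UNMAP_NONE,
        DMA_UNMAP_SINGLE,
        DMA_UNMAP_PAGE,
        DMA_UNMAP_SG,
};

struct dma_async_tx_descriptor {
        /* ...existing fields... */
        struct scatterlist *unmap_sg;
        enum dma_unmap_type unmap_type;
};
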
But there are currently no operations available for submitting
scatterlists as a single descriptor -- the client iterates over the
scatterlist and submits one descriptor for each entry. So there's no
way you can associate a scatterlist with a single descriptor. You can
perhaps attach it to the last one, but that may get you into trouble if
the transfer is terminated early for some reason.
I don't think the pure source/sink model is very realistic -- clients
can't just submit DMA transfers and then forget about them. They must
at the very least check the status and take appropriate action if there
was an error. They must also notify the driver that the descriptor can
be reused (although I guess if a client doesn't care about the result
it can do this immediately after submission, but I really think clients
_should_ care about the result.)
And since they need to do some sort of cleanup anyway, they might as
well unmap the buffers (or call some dma_memcpy_finish() type of
function that does it for them.)
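
To make that a bit more concrete, such a helper could look roughly like
this (just a sketch, reusing the made-up dma_memcpy_finish() name from
above and assuming single mappings and the existing is_tx_complete
hook):

#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>

/* Sketch: check the result of a buf-to-buf copy and unmap both buffers */
static enum dma_status dma_memcpy_finish(struct dma_chan *chan,
                dma_cookie_t cookie, dma_addr_t dma_src, dma_addr_t dma_dest,
                size_t len)
{
        dma_cookie_t last, used;
        enum dma_status status;

        status = dma_async_is_tx_complete(chan, cookie, &last, &used);

        dma_unmap_single(chan->device->dev, dma_src, len, DMA_TO_DEVICE);
        dma_unmap_single(chan->device->dev, dma_dest, len, DMA_FROM_DEVICE);

        return status;
}
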
The clients certainly know the length and direction too, but they don't
necessarily know the physical address since the mapping is done by a
"middle layer". I guess that's the main problem with the model I'm
proposing.
How about we add a kind of "address cookie" struct like this (feel free
to suggest better names):

struct dma_buf_addr {
        void *cpu_addr;
        dma_addr_t dma_addr;
};

The client initializes the cpu_addr part and passes the struct to
dma_async_memcpy_buf_to_buf() or whatever, the middle layer sets the
dma_addr after mapping the buffer (or page), and the client passes the
same struct to dma_finish_memcpy_buf_to_buf(), which will then be able
to unmap both buffers appropriately.
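
From the client's point of view the whole thing would then look
something like this (just to illustrate the flow; the exact signatures
are of course up for discussion, and dma_finish_memcpy_buf_to_buf()
doesn't exist yet):

static enum dma_status copy_buf(struct dma_chan *chan, void *dst,
                void *src, size_t len)
{
        struct dma_buf_addr dest = { .cpu_addr = dst };
        struct dma_buf_addr source = { .cpu_addr = src };
        dma_cookie_t cookie;

        /* the middle layer maps both buffers and fills in ->dma_addr */
        cookie = dma_async_memcpy_buf_to_buf(chan, &dest, &source, len);

        /* ...wait for or poll completion as usual... */

        /*
         * the middle layer checks the status and unmaps using ->dma_addr,
         * so the client never needs to know the physical addresses
         */
        return dma_finish_memcpy_buf_to_buf(chan, cookie, &dest, &source, len);
}
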
This will also eliminate the hack in crypto/async_tx/async_xor.c and
make HIGHMEM64G work again.
Scatterlists currently don't have any middle-layer support, so we can
ignore them for now and let the client take the full responsibility of
mapping and unmapping the buffers. If we were to add some middle-layer
functions for dealing with scatterlists, I don't think they would need
any special treatment -- the dma address is kept in the struct
scatterlist array passed by the client, so it would work pretty much
the same way.
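
Just to make that concrete: after dma_map_sg() the client already has
everything it needs, something along the lines of the below, where
submit_one_desc() is just a placeholder for whatever the slave
interface ends up providing per entry:

#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>
#include <linux/scatterlist.h>

static void submit_sg_list(struct dma_chan *chan, struct device *dev,
                struct scatterlist *sgl, int sg_len)
{
        struct scatterlist *sg;
        int i, nents;

        nents = dma_map_sg(dev, sgl, sg_len, DMA_TO_DEVICE);

        /* the dma address and length now live right in each sg entry */
        for_each_sg(sgl, sg, nents, i)
                submit_one_desc(chan, sg_dma_address(sg), sg_dma_len(sg));
}

/* ...and once the whole transfer has completed, unmapping is one call */
static void finish_sg_list(struct device *dev, struct scatterlist *sgl,
                int sg_len)
{
        dma_unmap_sg(dev, sgl, sg_len, DMA_TO_DEVICE);
}
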
Alternatively, we could convert the whole API to use scatterlists. But
that's probably overdoing it.
Btw, this discussion is a bit off-topic for the patch in question, but
it's an issue that needs to be resolved.
Haavard