Message-ID: <CAJe_Zhd7j+30K4ho59XiBHtGpc4PagK-NcXmxGC1WSBudV5HHg@mail.gmail.com>
Date: Tue, 18 Oct 2011 20:24:45 +0530
From: Jassi Brar <jaswinder.singh@...aro.org>
To: "Bounine, Alexandre" <Alexandre.Bounine@....com>
Cc: "Williams, Dan J" <dan.j.williams@...el.com>,
Vinod Koul <vinod.koul@...el.com>,
Russell King <rmk@....linux.org.uk>,
Barry Song <21cnbao@...il.com>, linux-kernel@...r.kernel.org,
DL-SHA-WorkGroupLinux <workgroup.linux@....com>,
Dave Jiang <dave.jiang@...el.com>
Subject: Re: [PATCHv4] DMAEngine: Define interleaved transfer request api
On 18 October 2011 19:21, Bounine, Alexandre <Alexandre.Bounine@....com> wrote:
>> > - there is no advance knowledge of which target device may require
>> > DMA service. A device driver for the particular target device is
>> > expected to request DMA service if required.
>> >
>> IMHO 1 channel per real device is an acceptable 'overhead'.
>> Already many SoCs register dozens of channels but only a couple
>> of them are actually used.
>>
> Are 64K virtual channels per RIO port acceptable?
>
I said 1 channel per _real_ device after enumeration, assuming status quo
of no hotplug support. But you plan to implement hotplug soon, so that kills
this option.
Btw, is the RIO hotplug usage gonna look like USB or PCI?
>> >> > There is nothing that the absence of full 66-bit addressing
>> >> > blocks now. So far we are not aware of any implementations that
>> >> > use a 66-bit address.
>> >> >
>> >> Thanks for the important info.
>> >> If I were you, I would postpone enabling support for 66-bit
>> >> addressing, especially when it affects the dmaengine API so much.
>> >> Otherwise, you don't just code an unused feature, but also put
>> >> constraints on future development/upgrades of the API, possibly
>> >> sooner than the feature is actually needed.
>> >>
>> >> If we postpone 66-bit addressing to when it arrives, we can
>> >> 1) Attach destID to the virtual channel's identity
>> >> 2) Use device_prep_dma_memcpy so as to be able to change
>> >> target address for every transfer. Or use prep_slave, depending
>> >> upon nature of address at target endpoint.
>> >> 3) Use slave_config to set wr_type if it remains the same for
>> >> enough consecutive transfers to the same target (only you can
>> >> strike the balance between idealism and pragmatism).
>> >>
>> > With item #1 above being a separate topic, I may have a problem
>> > with #2 as well: dma_addr_t is sized for the local platform and is
>> > not guaranteed to be a 64-bit value (which may be required by a
>> > target).
>> > Agree with #3 (if #1 and #2 work).
>> >
>> Perhaps simply change dma_addr_t to u64 in dmaengine.h alone?
>>
> Adding an extra parameter to prep_slave_sg() looks much better to me.
> That tweak to prep_dma_memcpy() may be more unsafe than the
> extra param in prep_slave_sg().
>
To me, the idea of making an exception for RapidIO and adding a new API
looks even better. Anyway... whatever the maintainers decide.
> Plus, dma_memcpy does not fit RIO logically as well as dma_slave does.
> For RIO we have only one buffer in local memory (as slave).
> We just need to pass more info to the transfer prep routine.
>
From a client's POV, a slave transfer is just a 'variation' of memcpy
after the underlying channel has been configured appropriately. More so
when even some Mem->Mem DMACs support tweaking of the physical channel
configuration (the PL330 in Samsung's SoCs does, at least).
So in an ideal world, there could be one generic 'prepare' and an
optional 'slave_config' callback.
OTOH, I suspect I am overlooking something serious, since nobody has ever
tried doing it. But that is a topic for a different discussion.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/