Message-ID: <1319428149.9355.60.camel@vkoul-udesk3>
Date: Mon, 24 Oct 2011 09:19:09 +0530
From: Vinod Koul <vinod.koul@...el.com>
To: "Bounine, Alexandre" <Alexandre.Bounine@....com>
Cc: Jassi Brar <jaswinder.singh@...aro.org>,
Russell King <rmk@....linux.org.uk>,
"Williams, Dan J" <dan.j.williams@...el.com>,
Barry Song <21cnbao@...il.com>, linux-kernel@...r.kernel.org,
DL-SHA-WorkGroupLinux <workgroup.linux@....com>,
Dave Jiang <dave.jiang@...el.com>
Subject: RE: [PATCHv4] DMAEngine: Define interleaved transfer request api
On Tue, 2011-10-18 at 10:57 -0700, Bounine, Alexandre wrote:
Sorry for the delayed response; I have been quite busy with the upcoming
festival and other stuff :(
> > >
> > Thanks for the detailed explanation.
> >
> > RapidIO is a packet-switched interconnect with a parallel or serial
> > interface.
> > Among other things, a packet contains a 32-, 48- or 64-bit offset into the
> > remote endpoint's address space. So I don't get how any of the above
> > 6 points apply here.
> >
> > Though I agree it is peculiar for a networking technology to expose a
> > DMAEngine interface, I assume Alex, who knows RIO better than us, has
> > good reasons for it.
> To keep it simple, look at this as peer-to-peer networking with the HW
> ability to directly address the memory of the link partner.
>
> RapidIO supports messaging, which is closer to traditional networking and is
> supported by the RIONET driver (pretending to be Ethernet). But in some
> situations messaging cannot be used; in these cases, addressed memory
> read/write operations take place.
>
> I would like to give a simple example of a RIO-based system that may help
> in understanding our DMA requirements.
>
> Consider a platform with one host CPU and several DSP cards connected
> to it through a switched backplane (transparent for the purposes of this
> example).
>
> The host CPU has one or more RIO-capable DMA channels and runs device
> drivers for the connected DSP cards. Each device driver is required to load
> an individual program code into the corresponding DSP(s). Directly addressed
> writes make a lot of sense here.
>
> After the DSP code is loaded, the device drivers start the DSP program and
> may participate in data transfers between the DSP cards and the host CPU.
> Again, messaging-type transfers may add unnecessary overhead here compared
> to direct data reads/writes.
>
> The configuration of each DSP card may be different, but from the host's
> POV it is RIO spec compliant.
I think we all agree that this fits the dma_slave case :)
As for changing dmaengine addresses to u64: if we are treating this as
slave usage, then ideally we should not make assumptions about the
peripheral's address type, so we should only move the dma_slave_config
address fields to u64, if that helps the RIO case. Moving other usages
would be insane.
At this point we have two proposals:
a) make RIO an exceptional case and add RIO-specific stuff.
b) make dmaengine transparent and add an additional, subsystem-dependent
argument to the .device_prep_slave_sg() callback. Current DMACs and
those who don't need it will ignore it.
ATM I am leaning towards the latter, mainly to keep dmaengine away from
subsystem details.
--
~Vinod