Message-ID: <m2u1b68c6791005071937s5b2bbb60p6de395a6c06a963e@mail.gmail.com>
Date:	Sat, 8 May 2010 11:37:02 +0900
From:	jassi brar <jassisinghbrar@...il.com>
To:	Linus Walleij <linus.ml.walleij@...il.com>
Cc:	Russell King - ARM Linux <linux@....linux.org.uk>,
	Ben Dooks <ben-linux@...ff.org>,
	Dan Williams <dan.j.williams@...el.com>,
	linux-mmc@...r.kernel.org, linux-kernel@...r.kernel.org,
	linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH 0/7] DMAENGINE: fixes and PrimeCells

On Sat, May 8, 2010 at 1:10 AM, Linus Walleij
<linus.ml.walleij@...il.com> wrote:
> Surely circular linked buffers and other goodies can be retrofitted into the
> DMAengine without a complete redesign? I only see a new slave call
> to support that really, in addition to the existing sglist interface.
Well, before taking up the PL330 DMA API driver, the 'async' character of it
was the only concern I had in mind. That still is, but I came across a
few more peculiarities while implementing the driver.

a) Async: For lazy mem-to-mem transfers this may be OK, but for devices
   that employ DMA for extensive M2M transfers (say, dedicated
   multimedia-oriented devices) the 'async' nature might be a bottleneck.
   The same goes for M<=>D with a fast device and a shallow FIFO.
   There may be clients that don't want to do much upon DMA-done but do
   need the notification ASAP; by definition, this API forbids such
   expectations.
   IMHO, a DMA API should be as quick as possible - callbacks done in IRQ
   context. But since there may be clients that need to do sleepable stuff
   in callbacks, the API could do two callbacks - 'quick' in IRQ context
   and 'lazy' from a tasklet scheduled from the IRQ. Most clients would
   provide one or the other, while some may provide both callback
   functions.
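
   For reference, this is roughly what a client can do today with the
   single callback (just a sketch against the current prep/submit calls;
   in practice DMAC drivers run this callback from their tasklet, i.e.
   well after the transfer-done IRQ, which is the latency I am worried
   about):

#include <linux/dmaengine.h>
#include <linux/completion.h>

static void xfer_done(void *param)
{
	/* Typically invoked from the DMAC driver's tasklet, some time
	 * after the actual transfer-done interrupt. */
	complete(param);
}

static int queue_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
			  unsigned int sg_len, struct completion *done)
{
	struct dma_async_tx_descriptor *tx;

	tx = chan->device->device_prep_slave_sg(chan, sgl, sg_len,
						DMA_TO_DEVICE,
						DMA_PREP_INTERRUPT);
	if (!tx)
		return -ENOMEM;

	tx->callback = xfer_done;	/* the one and only notification */
	tx->callback_param = done;

	if (dma_submit_error(tx->tx_submit(tx)))
		return -EIO;

	dma_async_issue_pending(chan);
	return 0;
}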

b) There seems to be no clear way of reporting failed transfers. The
   device_tx_status call can return FAIL/SUCCESS, but it is open-ended and
   can be made without any time bound after tx_submit. It is not very
   optimal for DMAC drivers to keep descriptors of all failed transactions
   around until the channel is released.
   IMHO, providing status checking via two mechanisms - cookies and
   dma-done callbacks - is more a complication than a feature. Perhaps the
   DMA engine could provide a default callback, should the client not
   supply one, and track done/pending transfers for such requests?
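
   To illustrate the open-ended part: the client is free to poll the
   cookie like below at any point after tx_submit, or never at all, so
   the DMAC driver cannot tell when it may drop the state of a failed
   descriptor (sketch only; I am assuming DMA_SUCCESS/DMA_ERROR are what
   device_tx_status reports for done/failed transfers):

#include <linux/dmaengine.h>
#include <linux/jiffies.h>
#include <linux/delay.h>

static int poll_cookie(struct dma_chan *chan, dma_cookie_t cookie,
		       unsigned long timeout_ms)
{
	unsigned long deadline = jiffies + msecs_to_jiffies(timeout_ms);
	struct dma_tx_state state;
	enum dma_status status;

	do {
		/* Nothing ties this call to the completion IRQ; it may
		 * come seconds after tx_submit(), or not at all. */
		status = chan->device->device_tx_status(chan, cookie, &state);
		if (status == DMA_SUCCESS)
			return 0;
		if (status == DMA_ERROR)
			return -EIO;
		msleep(1);
	} while (time_before(jiffies, deadline));

	return -ETIMEDOUT;
}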

c) Conceptually, the channels are tightly coupled with the DMACs; there
   seems to be no way to schedule a channel among more than one DMAC at
   runtime, that is, when more than one DMAC supports the same
   channel/peripheral.
   For example, Samsung's S5Pxxxx has many channels available on more than
   one DMAC, but with this DMA API we have to statically assign channels
   to DMACs, which may result in a channel acquire request being rejected
   just because the DMAC we chose for it is already fully busy while
   another DMAC, which also supports the channel, is idling. Unless we
   treat the same peripheral as, say, I2STX_viaDMAC1 and I2STX_viaDMAC2
   and allocate double the resources for these "mutually exclusive"
   channels.
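
   The static assignment I mean looks like this with the current channel
   allocator - the filter function pins the peripheral to one controller
   up front (my_filter and the binding struct are made-up names, just for
   illustration):

#include <linux/dmaengine.h>

struct my_peri_binding {
	struct device *dmac_dev;	/* the one DMAC we bound this peripheral to */
	unsigned int peri_id;		/* request line on that DMAC */
};

static bool my_filter(struct dma_chan *chan, void *param)
{
	struct my_peri_binding *p = param;

	/* Only accept channels from the statically chosen controller. */
	return chan->device->dev == p->dmac_dev;
}

static struct dma_chan *acquire_i2s_tx(struct my_peri_binding *p)
{
	dma_cap_mask_t mask;

	dma_cap_zero(mask);
	dma_cap_set(DMA_SLAVE, mask);

	/* Returns NULL if the chosen DMAC has no free channel, even when
	 * a second DMAC wired to the same I2S TX line is idling. */
	return dma_request_channel(mask, my_filter, p);
}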

d) Something like a circular-linked request is highly desirable for one of
   the important DMA clients, i.e. audio.
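
   What I have in mind is roughly the following; prep_dma_cyclic is purely
   hypothetical, nothing like it exists in the API today. The client hands
   over one buffer split into periods, the hardware loops over it until
   the channel is torn down, and the client gets a callback per period -
   which is what ALSA's period accounting wants:

#include <linux/dmaengine.h>

/* Hypothetical prep call for a circular-linked transfer: loop over
 * buf_len bytes as buf_len/period_len periods, one interrupt (and one
 * callback) per period boundary, until the channel is terminated. */
struct dma_async_tx_descriptor *
prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf, size_t buf_len,
		size_t period_len, enum dma_data_direction dir);

static void period_done(void *param)
{
	/* e.g. kick snd_pcm_period_elapsed() for the substream */
}

static int start_playback(struct dma_chan *chan, dma_addr_t buf,
			  size_t buf_len, size_t period_len, void *substream)
{
	struct dma_async_tx_descriptor *tx;

	tx = prep_dma_cyclic(chan, buf, buf_len, period_len, DMA_TO_DEVICE);
	if (!tx)
		return -ENOMEM;

	tx->callback = period_done;
	tx->callback_param = substream;

	if (dma_submit_error(tx->tx_submit(tx)))
		return -EIO;

	dma_async_issue_pending(chan);
	return 0;
}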

e) There seems to be no scatter-gather support for mem-to-mem transfers.
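
   So a mem-to-mem client with scattered buffers has to issue one memcpy
   descriptor per contiguous chunk, something like the sketch below
   (assuming the src and dst lists have the same layout and are already
   DMA-mapped):

#include <linux/dmaengine.h>
#include <linux/scatterlist.h>

static int memcpy_sg_by_hand(struct dma_chan *chan, struct scatterlist *dst,
			     struct scatterlist *src, int nents)
{
	struct dma_async_tx_descriptor *tx;
	int i;

	/* One descriptor per chunk instead of a single sg-to-sg request. */
	for (i = 0; i < nents; i++, dst = sg_next(dst), src = sg_next(src)) {
		tx = chan->device->device_prep_dma_memcpy(chan,
				sg_dma_address(dst), sg_dma_address(src),
				sg_dma_len(src), 0);
		if (!tx)
			return -ENOMEM;
		if (dma_submit_error(tx->tx_submit(tx)))
			return -EIO;
	}

	dma_async_issue_pending(chan);
	return 0;
}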

Or are these just due to my cursory understanding of the DMA Engine core?

Of course, there are many good features of this API which any API
should provide.

regards.
--
