Message-ID: <48C13869.9040603@weinigel.se>
Date: Fri, 05 Sep 2008 15:47:21 +0200
From: Christer Weinigel <christer@...nigel.se>
To: Ben Dooks <ben-linux@...ff.org>
CC: linux-kernel@...r.kernel.org, Pierre Ossman <drzeus-list@...eus.cx>
Subject: Re: Proposed SDIO layer rework
Ben Dooks wrote:
>> Most of the CPU is probably spent doing PIO transfers to the SDIO
>> controller; if DMA starts working in the s3cmci driver, the CPU load
>> difference will be even larger.
>
> I'm not sure if I'll get the time to look at this before the new kernel
> is released... anyway, DMA may not be much of a win for smaller
> transfers, since the setup (the cache will need to be cleaned out or the
> transfer memory made unbuffered) and completion time will add another
> IRQ's worth of response time. This means small transfers are probably
> better off using PIO.
Yes. For the DMA-capable S3C SPI driver I wrote, I added a threshold:
for transfers smaller than a certain number of bytes I skip DMA and do a
polled/interrupt transfer instead. For short transfers at high clock
rates it's not even worth taking an interrupt per byte; it's better to
just busy-wait for each byte, since the interrupt overhead is larger
than the time between bytes.
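
To make the threshold idea concrete, here is a minimal sketch in C; the
struct, the function names and the 32-byte cutoff are made up for
illustration and are not taken from the actual SPI or s3cmci code:

/*
 * Minimal sketch of a DMA-vs-PIO threshold.  All names and the cutoff
 * value are illustrative only, not from the real driver.
 */
struct xfer {
	void		*buf;
	unsigned int	len;
};

#define DMA_MIN_BYTES	32	/* below this, DMA setup costs more than it saves */

static void start_pio(struct xfer *x)
{
	/* ... feed the FIFO by polling or per-word interrupts ... */
}

static void start_dma(struct xfer *x)
{
	/* ... clean/invalidate the cache, program the DMA channel, wait for the IRQ ... */
}

static void start_transfer(struct xfer *x)
{
	if (x->len < DMA_MIN_BYTES)
		start_pio(x);	/* short transfer: skip DMA entirely */
	else
		start_dma(x);	/* long transfer: DMA pays for its setup cost */
}

The right cutoff obviously depends on the clock rate and the cost of the
cache maintenance, so it needs to be measured rather than guessed.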
An SDIO command and its response are 48 bits each, so at 25 MHz the
whole exchange takes only about 4 us, and I think the interrupt overhead
is more than that. So if we really want to squeeze every last clock
cycle out of the SDIO driver, it may be better to busy-wait for the end
of simple CMD52s instead of using an interrupt to complete the transfer.
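As a rough sketch, the busy-wait could look something like the
following; the register offset and status bit are placeholders, not the
real s3cmci definitions:

/*
 * Rough sketch of busy-waiting for command/response completion instead
 * of taking an interrupt.  The register offset and bit below are
 * placeholders, not the real s3cmci register definitions.
 */
#include <linux/io.h>
#include <linux/errno.h>
#include <asm/processor.h>	/* cpu_relax() */

#define SDI_CMDSTAT		0x28		/* placeholder register offset */
#define SDI_CMDSTAT_RSPFIN	(1 << 9)	/* placeholder "response finished" bit */

static int wait_for_cmd_done(void __iomem *base)
{
	unsigned int loops = 1000;	/* far more iterations than a ~4 us exchange needs */

	while (loops--) {
		if (readl(base + SDI_CMDSTAT) & SDI_CMDSTAT_RSPFIN)
			return 0;	/* command sent and response received */
		cpu_relax();
	}
	return -ETIMEDOUT;	/* give up; caller can fall back to the IRQ path */
}

For data transfers or lower clock rates the interrupt path is of course
still the right thing; this only makes sense for the short CMD52 case.
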
I'll clean up my s3cmci patches and send them to you, but I can't
promise when I'll be done, so it'll probably have to wait for the next
kernel release.
/Christer