Message-ID: <20080905165005.2a4ac355@mjolnir.drzeus.cx>
Date: Fri, 5 Sep 2008 16:50:05 +0200
From: Pierre Ossman <drzeus-list@...eus.cx>
To: Christer Weinigel <christer@...nigel.se>
Cc: linux-kernel@...r.kernel.org, Ben Dooks <ben-linux@...ff.org>
Subject: Re: Proposed SDIO layer rework
On Fri, 05 Sep 2008 13:45:55 +0200
Christer Weinigel <christer@...nigel.se> wrote:
> First some measurements. I've connected the SDIO bus to an Agilent
> scope to get some pretty plots.
Why doesn't Santa bring me any of the nice toys everyone else seems to
be getting :(
> Since more and more high speed chips are being connected to embedded
> devices using SDIO, I believe that to get good SDIO performance, the SDIO
> layer has to be changed not to use a worker thread. This is
> unfortunately rather painful since it means that we have to add
> asynchronous variants of all the I/O operations, and the sdio_irq thread
> has to be totally redone, and the sdio IRQ enabling and disabling turned
> out to be a bit tricky.
>
> So, do you think that it is worth it to make such a major change to the
> SDIO subsystem? I believe so, but I guess I'm a bit biased. I can clean
> up the work I've done and make sure that everything is backwards
> compatible so that existing SDIO function drivers keep working (my
> current hack to the sdio_irq thread breaks existing drivers, and it is
> too ugly to live anyway so I don't even want to show it to you yet), and
> add a demo driver which shows how to use the asynchronous functions.
The latency improvement is indeed impressive, but I am not convinced it
is worth it. An asynchronous API is much more complex and difficult to
work with (not to mention harder to follow when reading existing
code), and SDIO is not even an asynchronous bus to begin with.
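Just to make that concrete (purely a hypothetical sketch on my part,
nothing like this exists in the tree): every synchronous accessor such
as sdio_readb() would need a callback-based twin, along the lines of:

    /* Existing synchronous accessor, from <linux/mmc/sdio_func.h>: */
    u8 sdio_readb(struct sdio_func *func, unsigned int addr, int *err_ret);

    /*
     * Hypothetical asynchronous twin -- made-up names, just to show
     * the shape of the thing.  The caller supplies a completion
     * callback and has to cope with it running in whatever context
     * the host driver finishes the request in.
     */
    struct sdio_async_req {
            struct sdio_func        *func;
            unsigned int            addr;
            u8                      data;     /* result of the read */
            int                     error;
            void                    (*done)(struct sdio_async_req *req);
            void                    *context; /* caller's state machine */
    };

    int sdio_readb_async(struct sdio_async_req *req);

Every function driver then has to grow a state machine and locking
around that callback, for a bus that only ever runs one command at a
time anyway.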
Also, as the latency is caused by CPU cycles, the problem will be
reduced as hardware improves.
But...
> But if you don't like this idea at all, I'll probably just keep it as a
> patch and not bother as much with backwards compatibility. This is a
> lot more work for me though, and I'd much prefer to get it into the
> official kernel.
I do like the idea of reducing latencies (and generally improving the
performance of the MMC stack). The primary reason I haven't done
anything myself is lack of stuff to test and proper instrumentation.
There are really two issues here, which aren't necessarily related:
actual interrupt latency and command completion latency.
The main culprit in your case is the command completion one. Perhaps
there is some other way of solving that inside the MMC core? E.g. we
could spin instead of sleeping while we wait for the request to
complete. Most drivers never use process context to handle a request
anyway. We would need to determine when the request is small enough
(all non-busy, non-data commands?) and whether the CPU is slow enough
for spinning to be a win. I also saw something about a new trigger
interface that could make this efficient.
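Roughly like this (just a sketch against the current
mmc_wait_for_req(); mmc_req_is_short() is made up and would have to
encode the "small enough request, slow enough CPU" test, and a real
version would want an upper bound on the spin):

    void mmc_wait_for_req(struct mmc_host *host, struct mmc_request *mrq)
    {
            DECLARE_COMPLETION_ONSTACK(complete);

            mrq->done_data = &complete;
            mrq->done = mmc_wait_done;   /* existing completion callback */

            mmc_start_request(host, mrq);

            if (mmc_req_is_short(mrq)) {
                    /*
                     * Short command: poll instead of scheduling away,
                     * so we pick up the response the moment the host
                     * driver calls mrq->done().
                     */
                    while (!completion_done(&complete))
                            cpu_relax();
            } else {
                    wait_for_completion(&complete);
            }
    }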
Does this sound like something worth exploring?
Rgds
--
-- Pierre Ossman
Linux kernel, MMC maintainer http://www.kernel.org
rdesktop, core developer http://www.rdesktop.org
WARNING: This correspondence is being monitored by the
Swedish government. Make sure your server uses encryption
for SMTP traffic and consider using PGP for end-to-end
encryption.