Message-ID: <20110205203600.GA31760@n2100.arm.linux.org.uk>
Date: Sat, 5 Feb 2011 20:36:00 +0000
From: Russell King - ARM Linux <linux@....linux.org.uk>
To: Per Forlin <per.forlin@...aro.org>,
Catalin Marinas <catalin.marinas@....com>
Cc: Chris Ball <cjb@...top.org>, linux-mmc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
dev@...ts.linaro.org
Subject: Re: [PATCH 0/5] mmc: add double buffering for mmc block requests
On Sat, Feb 05, 2011 at 05:02:55PM +0000, Russell King - ARM Linux wrote:
> On Wed, Jan 12, 2011 at 07:13:58PM +0100, Per Forlin wrote:
> > Add support to prepare one MMC request while another is active on
> > the host. This is done by making issue_rw_rq() asynchronous.
> > The increase in throughput is proportional to the time it takes to
> > prepare a request and how fast the memory is. The faster the MMC/SD is,
> > the more significant the prepare-request time becomes. Measurements on
> > U5500 and U8500 with eMMC show a significant performance gain for DMA
> > on MMC for large reads. In the PIO case there is some gain in
> > performance for large reads too. There seems to be little or no
> > performance gain for writes; I don't have a good explanation for this yet.
>
> It might be worth seeing what effect the following patch has. This
> moves the dsb out of the cache operations into a separate function,
> so we only do one dsb per DMA mapping/unmapping operation. That's
> particularly significant for the scatter-gather code.
>
> I don't remember the reason why this was dropped as a candidate for
> merging - could it be because the dsb needs to come before the outer
> cache maintenance? Adding Catalin for comment on that.
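In sketch form, the effect is the following, where
dma_cache_maint_nobarrier is a made-up name for a barrier-less
per-range maintenance op and sketch_dma_map_sg is an illustration of
the idea rather than the dropped patch itself (dsb() as provided by
the ARM barrier definitions):

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int sketch_dma_map_sg(struct device *dev, struct scatterlist *sg,
			     int nents, enum dma_data_direction dir)
{
	struct scatterlist *s;
	int i;

	/* per-segment cache maintenance with no trailing dsb */
	for_each_sg(sg, s, nents, i)
		dma_cache_maint_nobarrier(sg_virt(s), s->length, dir);

	/*
	 * One dsb for the whole scatterlist.  If the dsb has to precede
	 * the outer (L2) cache maintenance, as the question above
	 * suggests, the batched barrier would instead sit between the
	 * inner and outer maintenance passes rather than at the end.
	 */
	dsb();
	return nents;
}

With N segments that turns N dsbs into one, which is why the
scatter-gather path should benefit the most.
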
FWIW, trying this patch with MMC on OMAP4, I see no measurable
difference in either performance or CPU usage.