Message-ID: <20091217180802.GA1546@flint.arm.linux.org.uk>
Date: Thu, 17 Dec 2009 18:08:03 +0000
From: Russell King <rmk+lkml@....linux.org.uk>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Christoph Hellwig <hch@...radead.org>, tytso@....edu,
Kyle McMartin <kyle@...artin.ca>, linux-parisc@...r.kernel.org,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
James.Bottomley@...e.de, linux-arch@...r.kernel.org,
Jens Axboe <jens.axboe@...cle.com>
Subject: Re: [git patches] xfs and block fixes for virtually indexed arches
On Thu, Dec 17, 2009 at 09:42:15AM -0800, Linus Torvalds wrote:
> You both flush the virtual caches before
> the IO and invalidate after - when the real pattern should be that you
> flush it before a write, and invalidate it after a read.
That's not entirely true. If you have write-back caches which are not DMA
coherent, you need, as a minimum, to:
- on write, clean the cache to ensure that the page in memory is up to date
with the data held in the cache.
- on read, ensure that there are no potential write-backs before the read
commences, and invalidate at some point.
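To make that concrete, here's a minimal sketch of those two rules. Note that
cache_clean_range() and cache_invalidate_range() are made-up stand-ins for
whatever cache maintenance primitives the architecture actually provides,
not real kernel API names:

#include <stddef.h>

/* Hypothetical architecture primitives, declared only for the sketch. */
extern void cache_clean_range(void *start, void *end);      /* write back dirty lines */
extern void cache_invalidate_range(void *start, void *end); /* discard cached lines */

/* Make a buffer safe before handing it to the DMA engine. */
void dma_prepare(void *buf, size_t len, int device_reads_memory)
{
	char *end = (char *)buf + len;

	if (device_reads_memory) {
		/* DMA "write" to the device: memory must reflect the cache. */
		cache_clean_range(buf, end);
	} else {
		/*
		 * DMA "read" from the device: make sure no dirty line can be
		 * written back over the incoming data, and invalidate the
		 * stale lines.  The invalidate is done up front here; whether
		 * that placement is enough depends on the CPU, as discussed
		 * below.
		 */
		cache_clean_range(buf, end);
		cache_invalidate_range(buf, end);
	}
}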
The point at which you invalidate depends on whether the CPU speculatively
prefetches:
- If it doesn't, you can invalidate the cache before the read, thereby
destroying any potential writebacks, and the cache will remain
unallocated for that address range until explicitly accessed.
- If the CPU does prefetch speculatively, then you need to clean the cache
before the DMA starts, and you must invalidate after the DMA completes.
Invalidating after the DMA completes in the non-speculative case just
wastes performance, especially if you have to do so line by line over
a region.
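As a rough sketch of how the read side of the earlier example would be split
up depending on the CPU (again using the made-up cache_clean_range() and
cache_invalidate_range() helpers, plus an invented
cpu_prefetches_speculatively() capability test):

#include <stddef.h>

extern void cache_clean_range(void *start, void *end);
extern void cache_invalidate_range(void *start, void *end);
extern int cpu_prefetches_speculatively(void);	/* invented capability check */

/* Before a DMA transfer that will write into memory. */
void dma_read_prepare(void *buf, size_t len)
{
	char *end = (char *)buf + len;

	if (cpu_prefetches_speculatively()) {
		/* Push out dirty lines now; lines may be speculatively
		   re-allocated while the DMA is in progress. */
		cache_clean_range(buf, end);
	} else {
		/* Invalidating destroys any potential write-backs, and the
		   range stays unallocated until explicitly accessed again. */
		cache_invalidate_range(buf, end);
	}
}

/* After that DMA transfer has completed. */
void dma_read_finish(void *buf, size_t len)
{
	if (cpu_prefetches_speculatively())
		/* Speculation may have pulled stale lines back in: drop them. */
		cache_invalidate_range(buf, (char *)buf + len);
	/* Non-speculating CPUs: invalidating again here only wastes time. */
}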
With ARM architecture version 7, we now have ARM CPUs which fall into
both categories.
--
Russell King
Linux kernel 2.6 ARM Linux - http://www.arm.linux.org.uk/
maintainer of: