Message-ID: <1294270104.16957.73.camel@mulgrave.site>
Date: Wed, 05 Jan 2011 23:28:24 +0000
From: James Bottomley <James.Bottomley@...senPartnership.com>
To: Trond Myklebust <Trond.Myklebust@...app.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Russell King - ARM Linux <linux@....linux.org.uk>,
linux-nfs@...r.kernel.org, linux-kernel@...r.kernel.org,
Marc Kleine-Budde <mkl@...gutronix.de>,
Uwe Kleine-König
<u.kleine-koenig@...gutronix.de>,
Marc Kleine-Budde <m.kleine-budde@...gutronix.de>,
linux-arm-kernel@...ts.infradead.org,
Parisc List <linux-parisc@...r.kernel.org>,
linux-arch@...r.kernel.org
Subject: Re: still nfs problems [Was: Linux 2.6.37-rc8]
On Wed, 2011-01-05 at 18:06 -0500, Trond Myklebust wrote:
> On Wed, 2011-01-05 at 13:30 -0800, Linus Torvalds wrote:
> > On Wed, Jan 5, 2011 at 1:16 PM, Trond Myklebust
> > <Trond.Myklebust@...app.com> wrote:
> > >
> > > So what should be the preferred way to ensure data gets flushed when
> > > you've written directly to a page, and then want to read through the
> > > vm_map_ram() virtual range? Should we be adding new semantics to
> > > flush_kernel_dcache_page()?
> >
> > The "preferred way" is actually simple: "don't do that". IOW, if some
> > page is accessed through a virtual mapping you've set up, then
> > _always_ access it through that virtual mapping.
> >
> > Now, when that is impossible (and yes, it sometimes is), then you
> > should flush after doing all writes. And if you do the write through
> > the regular kernel mapping, you should use flush_dcache_page(). And if
> > you did it through the virtual mapping, you should use
> > "flush_kernel_vmap_range()" or whatever.
> >
> > NOTE! I really didn't look those up very closely, and if the accesses
> > can happen concurrently you are basically screwed, so you do need to
> > do locking or something else to guarantee that there is some nice
> > sequential order. And maybe I forgot something. Which is why I do
> > suggest "don't do that" as a primary approach to the problem if at all
> > possible.
> >
> > Oh, and you may need to flush before reading too (and many writes do
> > end up being "read-modify-write" cycles) in case it's possible that
> > you have stale data from a previous read that was then invalidated by
> > a write to the aliasing address. Even if that write was flushed out,
> > the stale read data may exist at the virtual address. I forget what
> > all we required - in the end the only sane model is "virtual caches
> > suck so bad that anybody who does them should be laughed at for being
> > a retard".
>
> Yes. The fix I sent out was a call to invalidate_kernel_vmap_range(),
> which takes care of invalidating the cache prior to a virtual address
> read.
>
> My question was specifically about the write through the regular kernel
> mapping: according to Russell and my reading of the cachetlb.txt
> documentation, flush_dcache_page() is only guaranteed to have an effect
> on page cache pages.
> flush_kernel_dcache_page() (not to be confused with flush_dcache_page)
> would appear to be the closest fit according to my reading of the
> documentation, however the ARM implementation appears to be a no-op...
It depends on exactly what you're doing. In the worst case (ping-pong
reads and writes through both aliases) you have to flush and invalidate
both alias 1 and alias 2 each time you access through one and then the
other.
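To illustrate one direction of that ping-pong (rough sketch only, with
made-up names, not the actual NFS code): a write through the kmap alias
followed by a read through the vm_map_ram() alias needs a flush on the
writer side and an invalidate on the reader side; the opposite direction
would want flush_kernel_vmap_range() instead. This also assumes the page
is a page cache page, so flush_dcache_page() applies:

#include <linux/highmem.h>	/* kmap/kunmap, flush_dcache_page */
#include <linux/string.h>	/* memset */
#include <asm/cacheflush.h>	/* invalidate_kernel_vmap_range */

/* "vbase" is assumed to be the vm_map_ram() address covering "page" */
static void write_via_kmap_then_read_via_vmap(struct page *page,
					      void *vbase, int len)
{
	char *kaddr = kmap(page);

	memset(kaddr, 0, len);		/* write through the kmap alias */
	flush_dcache_page(page);	/* push those dirty lines out */
	kunmap(page);

	/* drop any stale lines the vmap alias may still be holding */
	invalidate_kernel_vmap_range(vbase, len);

	/* reads through vbase now see the data written above */
}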
Can you explain how the code works? It looks to me like you read the xdr
stuff through the vmap region and then write it out directly to the pages?
*If* this is just a conversion, *and* you never need to read the new
data through the vmap alias, you might be able to get away with a
flush_dcache_page() in nfs_readdir_release_array(). If the access pattern
is more complex, you'll need more stuff splashed through the loop
(including vmap invalidation/flushing).
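To be concrete, and assuming nfs_readdir_release_array() currently does
nothing more than kunmap() the page it was handed, that suggestion would
amount to something like this (sketch only, not a tested patch; it would
live in fs/nfs/dir.c where the headers are already pulled in):

static
void nfs_readdir_release_array(struct page *page)
{
	/* push the writes done through the kmap alias out of the cache */
	flush_dcache_page(page);
	kunmap(page);
}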
Is there any way you could just rewrite nfs_readdir_add_to_array() to
use the vmap address instead of doing a kmap? That way everything will
go through a single alias and not end up with this incoherency.
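Something along these lines, say (names and parameters invented purely
for illustration, I haven't tried it): rather than kmap()ing the page,
index into the mapping vm_map_ram() already gave you, so there is only
ever one alias to keep coherent:

/* "vmap_base" is assumed to be the address vm_map_ram() returned for
 * the pages[] array, and "index" the page's position in that array. */
static
struct nfs_cache_array *nfs_readdir_get_array_vmapped(void *vmap_base,
						       unsigned int index)
{
	return vmap_base + index * PAGE_SIZE;
}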
James
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/