Date: Mon, 07 Jul 2008 17:08:26 +0200
From: Miklos Szeredi <miklos@...redi.hu>
To: nickpiggin@...oo.com.au
CC: miklos@...redi.hu, jamie@...reable.org,
torvalds@...ux-foundation.org, jens.axboe@...cle.com,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, akpm@...ux-foundation.org, hugh@...itas.com
Subject: Re: [patch 1/2] mm: dont clear PG_uptodate in invalidate_complete_page2()
On Tue, 8 Jul 2008, Nick Piggin wrote:
> If dirty can't happen, the caller should just use the truncate.
> The creation of this "invalidate 2" thing was just papering over
> problems in the callers.
Dirty *can* happen. The difference between truncate_inode_pages() and
invalidate_inode_pages2() is that the former just throws away dirty
pages, while the latter can do something about them through
->launder_page().
> But anyway your point is taken -- caller doesn't really handle failure.
Yes.
> > Right. I think leaving PG_uptodate on invalidation is actually a
> > rather clean solution compared to the alternatives.
>
> Note that files can be truncated in the middle too, so you can't
> just fix one case that happens to hit you, you'd have to fix things
> consistently.
Hmm, OK.
> But...
>
>
> > Well, other than my original proposal, which would just have reused
> > the do_generic_file_read() infrastructure for splice. I still don't
> > see why we shouldn't use that, until the whole async splice-in thing
> > is properly figured out.
>
> Given the alternatives, perhaps this is for the best, at least for
> now.
Yeah. I'm not at all opposed to improving splice to be able to do all
sorts of fancy things like async splice-in, and stealing of pages.
But it's unlikely that I will have the motivation to implement any of
them just to fix this bug.
Miklos