Message-ID: <CAO6Zf6C618gt4uLaw=MYgAq519d3UrW7zLf78Q1HOryxzRpkKA@mail.gmail.com>
Date: Sat, 7 Apr 2012 10:14:13 +0200
From: Dmitry Adamushko <dmitry.adamushko@...il.com>
To: John Stultz <john.stultz@...aro.org>
Cc: linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Android Kernel Team <kernel-team@...roid.com>,
Robert Love <rlove@...gle.com>, Mel Gorman <mel@....ul.ie>,
Hugh Dickins <hughd@...gle.com>,
Dave Hansen <dave@...ux.vnet.ibm.com>,
Rik van Riel <riel@...hat.com>,
Dave Chinner <david@...morbit.com>, Neil Brown <neilb@...e.de>,
Andrea Righi <andrea@...terlinux.com>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>
Subject: Re: [PATCH 0/2] [RFC] Volatile Ranges (v6)
On 7 April 2012 02:08, John Stultz <john.stultz@...aro.org> wrote:
>
> Another detail is that by hanging the volatile ranges off of the
> address_space, the volatility for tmpfs files persists even when no one
> has an open fd on the file. This could cause some surprises if application
> A marked some pages volatile and died, then application B opened the file
> and had pages dropped out underneath it while it was being used. I suspect
> I need to clean up the volatility when all fds are dropped.
And how do you handle regions that have already been purged by that
point? Unless B has some other mechanism to verify the consistency of
the content, a sensible approach would be to always mark the regions
non-volatile before accessing them and check the return code to see
whether any pages have been purged.
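
Roughly something like the sketch below, for the reader side (B). It
assumes the POSIX_FADV_NONVOLATILE advice value added by patch 2/2 and
that marking a range non-volatile returns a positive value when some
of its pages have already been purged; the constant's value and the
return convention are my assumptions, not something I have checked
against the patch:

#include <fcntl.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

#ifndef POSIX_FADV_NONVOLATILE
#define POSIX_FADV_NONVOLATILE	9	/* placeholder; take the real value from patch 2/2 */
#endif

/*
 * Mark [off, off + len) non-volatile before touching it.
 * Returns 0 if the contents are intact, 1 if some pages were purged
 * and the data has to be regenerated, or -1 on error (errno is set).
 * The raw syscall (SYS_fadvise64 here; SYS_fadvise64_64 on some 32-bit
 * architectures) is used so the kernel's return value is not masked by
 * the glibc posix_fadvise() wrapper.
 */
static int pin_range(int fd, off_t off, off_t len)
{
	long ret = syscall(SYS_fadvise64, fd, off, len,
			   POSIX_FADV_NONVOLATILE);

	if (ret < 0)
		return -1;
	return ret > 0 ? 1 : 0;
}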
More generally, what if B opens the file while A is still working with
it? Besides using the normal synchronization mechanisms, B should not
make any assumptions about the current state of the regions (unless
there is a higher-level protocol between A and B to share this
information). So an explicit mark-non-volatile step could be a simple,
generic mechanism.
--Dmitry