Message-ID: <50A36E2A.8040302@parallels.com>
Date: Wed, 14 Nov 2012 14:10:50 +0400
From: Pavel Emelyanov <xemul@...allels.com>
To: Cyrill Gorcunov <gorcunov@...nvz.org>,
Andrew Morton <akpm@...ux-foundation.org>
CC: linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
Al Viro <viro@...iv.linux.org.uk>,
Alexey Dobriyan <adobriyan@...il.com>,
James Bottomley <jbottomley@...allels.com>,
Matthew Helsley <matt.helsley@...il.com>,
aneesh.kumar@...ux.vnet.ibm.com, bfields@...ldses.org
Subject: Re: [patch 3/7] fs, notify: Add file handle entry into inotify_inode_mark
On 11/14/2012 10:46 AM, Cyrill Gorcunov wrote:
> On Tue, Nov 13, 2012 at 02:38:08PM -0800, Andrew Morton wrote:
>> On Tue, 13 Nov 2012 12:00:32 +0400
>> Cyrill Gorcunov <gorcunov@...nvz.org> wrote:
>>
>>>> Dumb question: do we really need inotify_inode_mark.fhandle at all?
>>>> What prevents us from assembling this info on demand when ->show_fdinfo() is
>>>> called?
>>>
>>> exportfs requires a dentry to be passed as an argument, while inotify works
>>> with inodes instead, and at the moment ->show_fdinfo() is called the target
>>> dentry might already be deleted while the inode is still present, as far as
>>> I remember.
>>
>> How can the c/r restore code reestablish the inode data if the dentry
>> isn't there any more?
>
> By "deleted" I meant deleted from dcache, thus when we call for
> open_by_handle_at with fhandle, the kernel reconstruct the path
> and we simply read the /proc/self/fd/ link, and then pass this
> path to inotify_add_watch.
No, we don't do the readlink, as the path we'd see would be empty. Instead,
after we call open_by_handle_at, we pass the "/proc/self/fd/<fd>" _path_ itself
to inotify_add_watch. The path resolution code follows the link properly and
adds the target inode to the watch list.
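
For clarity, a minimal user-space sketch of the restore sequence described
above (assuming the handle bytes, the handle type, and an fd on the containing
mount were saved at dump time; restore_watch() and its parameter names are
illustrative only, not part of this series):

/* A hedged sketch, not the actual c/r code: re-establish an inotify
 * watch from a file handle saved at dump time. */
#define _GNU_SOURCE
#include <fcntl.h>          /* open_by_handle_at(), struct file_handle */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/inotify.h>    /* inotify_add_watch() */
#include <unistd.h>

static int restore_watch(int inotify_fd, int mount_fd,
			 const void *handle_bytes, unsigned int handle_len,
			 int handle_type, uint32_t mask)
{
	struct file_handle *fh;
	char path[64];
	int target_fd, wd;

	fh = malloc(sizeof(*fh) + handle_len);
	if (!fh)
		return -1;
	fh->handle_bytes = handle_len;
	fh->handle_type  = handle_type;
	memcpy(fh->f_handle, handle_bytes, handle_len);

	/* Reopen the watched object purely by its handle. */
	target_fd = open_by_handle_at(mount_fd, fh, O_RDONLY);
	free(fh);
	if (target_fd < 0)
		return -1;

	/*
	 * No readlink() here -- as said above, the link body is not usable.
	 * The proc path itself is passed to inotify_add_watch(), and path
	 * resolution follows the link down to the target inode.
	 */
	snprintf(path, sizeof(path), "/proc/self/fd/%d", target_fd);
	wd = inotify_add_watch(inotify_fd, path, mask);

	close(target_fd);
	return wd;
}

(An O_PATH open should work just as well here, since only path resolution
of the proc link is needed.)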