Message-ID: <CAH2r5mut083KtCZWMsvC08WtJPJpBxXm6M0HCY4hk8T3hN7xdg@mail.gmail.com>
Date:	Thu, 26 Apr 2012 12:06:22 -0500
From:	Steve French <smfrench@...il.com>
To:	"J. Bruce Fields" <bfields@...ldses.org>
Cc:	David Howells <dhowells@...hat.com>, linux-fsdevel@...r.kernel.org,
	linux-nfs@...r.kernel.org, linux-cifs@...r.kernel.org,
	samba-technical@...ts.samba.org, linux-ext4@...r.kernel.org,
	wine-devel@...ehq.org, linux-api@...r.kernel.org,
	libc-alpha@...rceware.org
Subject: Re: [PATCH 1/6] xstat: Add a pair of system calls to make extended
 file stats available

On Thu, Apr 26, 2012 at 9:28 AM, J. Bruce Fields <bfields@...ldses.org> wrote:
> On Thu, Apr 26, 2012 at 02:45:54PM +0100, David Howells wrote:
>> Steve French <smfrench@...il.com> wrote:
>>
>> > I would also prefer that we simply treat the time granularity as part
>> > of the superblock (mounted volume), i.e. returned by statfs rather than
>> > on every stat of the filesystem.  For cifs mounts we could conceivably
>> > have a coarser time granularity (1 or 2 seconds) on mounts to old
>> > servers, rather than 100 nanoseconds.
>>
>> The question is whether you want to have to do a statfs in addition to a stat?
>> I suppose you can potentially cache the statfs based on device number.
>>
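
For what it's worth, the userspace side of that caching idea could look
roughly like the sketch below -- the fixed-size table and the
stat_with_fsinfo() helper are just illustrative, not a proposal:

#include <sys/stat.h>
#include <sys/vfs.h>

/* Tiny statfs cache keyed on st_dev: the extra syscall is only paid
 * the first time each filesystem is seen.  No eviction, no locking --
 * purely illustrative. */
struct fs_info {
	dev_t		dev;
	struct statfs	sfs;
	int		valid;
};
static struct fs_info fs_cache[16];

static int stat_with_fsinfo(const char *path, struct stat *st,
			    struct statfs *sfs)
{
	int i;

	if (stat(path, st) < 0)
		return -1;
	for (i = 0; i < 16; i++) {
		if (fs_cache[i].valid && fs_cache[i].dev == st->st_dev) {
			*sfs = fs_cache[i].sfs;		/* cache hit */
			return 0;
		}
	}
	if (statfs(path, sfs) < 0)			/* miss: extra syscall */
		return -1;
	for (i = 0; i < 16; i++) {
		if (!fs_cache[i].valid) {
			fs_cache[i].dev = st->st_dev;
			fs_cache[i].sfs = *sfs;
			fs_cache[i].valid = 1;
			break;
		}
	}
	return 0;
}
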
>> That said, there are cases where caching filesystem-level info based on i_dev
>> doesn't work.  OpenAFS springs to mind, as it has only one superblock and
>> thus one set of device numbers, yet keeps the inodes for all the different
>> volumes it may have mounted there.
>>
>> I don't know whether this would be a problem for CIFS too - say, on a Windows
>> server, you fabricate P: by joining together several filesystems (with
>> junctions?).  How does this appear on a Linux client when it steps from
>> one filesystem to another within a mounted share?
>
> In the NFS case we do try to preserve filesystem boundaries as well as
> we can--the protocol has an fsid field and the client creates a new
> mount each time it sees the fsid change.  And the protocol defines time_delta
> as a per-filesystem attribute (though, somewhat hilariously, there's
> also a per-filesystem "homogeneous" attribute that a server can clear to
> indicate the per-filesystem attributes might actually vary within the
> filesystem.)
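
In other words, on such a client a boundary crossing is visible to
userspace as a device number change.  A minimal sketch of checking
for that while stepping from a parent into a child (the helper name
is hypothetical, not anything in the protocol):

#include <sys/stat.h>

/* Return 1 if child sits on a different filesystem than parent,
 * 0 if not, -1 on error.  On an NFS client that makes a submount
 * for each server fsid change, the child gets a new st_dev. */
static int crosses_fs_boundary(const char *parent, const char *child)
{
	struct stat p, c;

	if (stat(parent, &p) < 0 || stat(child, &c) < 0)
		return -1;
	return p.st_dev != c.st_dev;
}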

Thank you for reminding me; I need to look at this case more.
Although cifs creates implicit submounts as we traverse DFS referrals,
there are probably cases where we need to do the same thing as NFS
and look at the fsid, so that we don't run into a Windows server
exporting something through a "junction" (e.g. a directory redirected
to a DVD drive) and silently cross file system volume boundaries.
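
Coming back to the granularity point at the top of the thread: with a
per-superblock time delta, the client-side rounding is trivial.  A
sketch, assuming gran_ns comes from wherever the per-filesystem
attribute ends up stored (the helper name is made up):

#include <stdint.h>
#include <time.h>

/* Truncate a timestamp to the filesystem's time granularity, e.g.
 * 2000000000ns for a very old CIFS server, 100ns for a modern one.
 * Assumes gran_ns > 0 and a post-1970 timestamp. */
static struct timespec trunc_to_granularity(struct timespec ts,
					    uint64_t gran_ns)
{
	uint64_t ns = (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;

	ns -= ns % gran_ns;
	ts.tv_sec  = ns / 1000000000ULL;
	ts.tv_nsec = ns % 1000000000ULL;
	return ts;
}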


-- 
Thanks,

Steve
