Message-Id: <8DB49AEC-5DA8-4AC3-A660-689E6BC80D2E@oracle.com>
Date:	Wed, 30 Jan 2008 17:36:16 -0500
From:	Chuck Lever <chuck.lever@...cle.com>
To:	David Howells <dhowells@...hat.com>
Cc:	Trond.Myklebust@...app.com, nfsv4@...ux-nfs.org,
	linux-kernel@...r.kernel.org,
	linux-security-module@...r.kernel.org, selinux@...ho.nsa.gov,
	linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH 24/27] NFS: Use local caching [try #2]

Hi David-

On Jan 29, 2008, at 10:25 PM, David Howells wrote:
> Chuck Lever <chuck.lever@...cle.com> wrote:
>> This patch really ought to be broken into more manageable atomic
>> changes to make it easier to review, and to provide more fine-grained
>> explanation and rationalization for each specific change via
>> individual patch descriptions.
>
> Hmmm....  I broke the patch up as Trond stipulated - at least, I thought
> I had.
>
> In many ways this request doesn't make sense.  You can't do NFS caching
> without all the appropriate bits, so logically they should be one patch.
> Breaking it up won't help git-bisect since the option to enable all this
> is the last (or nearly last) patch.

In addition to adding a new feature, you are changing existing code.
If any one of the changes you made breaks existing behavior, having
them all in small atomic patches makes it practical to bisect and
find the problem.

It also makes the series worlds easier to review for people who are
not so familiar with your fscache implementation.  And smaller patches
mean the ratio of patch descriptions to code changes can be much
higher.

It does make sense to introduce the files under fs/fsc in a single  
patch.  But when you are changing code that is already being used,  
more care needs to be taken.

>> This should no longer be necessary.  The latest mount.nfs subcommand
>> from nfs-utils supports text-based mounts when running on kernels
>> 2.6.23 and later.
>
> Okay.  I'll update my patches to reflect this.  Note, however, I've got
> someone reporting a bug that seems to show otherwise.  I'll have to
> investigate this more next week.

The very latest version (post 1.1.1) is required today for text-based  
NFS mounts.  (That is, the bleeding edge version you get by cloning  
the nfs-utils git repo).

And it only works on kernels later than 2.6.22 -- if that particular  
user is testing fscache on 2.6.22 or older, then only the legacy  
binary NFS mount system call API is supported.

>> Add comments like this in a separate clean up patch.
>
> ????

+/*
+ * Notification that a PTE pointing to an NFS page is about to be made
+ * writable, implying that someone is about to modify the page through a
+ * shared-writable mapping
+ */

What does that have to do with local disk caching?
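
For reference, that comment documents the client's page_mkwrite hook,
which is wired up along these lines (a sketch from memory rather than
the patch's actual code; exact names vary by kernel version):

static struct vm_operations_struct nfs_file_vm_ops = {
	.fault		= filemap_fault,	/* generic page fault path */
	.page_mkwrite	= nfs_vm_page_mkwrite,	/* the hook the comment describes */
};

Useful documentation, but it belongs in a separate clean up patch.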

>>> +struct nfs_fh_auxdata {
>>> +	struct timespec	i_mtime;
>>> +	struct timespec	i_ctime;
>>> +	loff_t		i_size;
>>> +};
>>
>> It might be useful to explain here why you need to supplement the
>> mtime, ctime, and size fields that already exist in an NFS inode.
>
> Supplement?  I don't understand.

Why is it necessary to add extra mtime, ctime and size fields for NFS
inodes?  Similar metadata is already stored in nfsi.

All I'm asking for is some documentation of what these fields do that
the existing timestamp and size fields in nfsi don't.  Explain why
the NFS fsc implementation needs this data structure.
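
If the intent is that this auxdata is a snapshot used to detect that
a file changed behind the cache's back, say so in a comment.  A
purely illustrative sketch of what I mean, with hypothetical helper
names:

/* Snapshot the attributes at the time the cache object is bound */
static void nfs_fh_get_auxdata(struct nfs_fh_auxdata *aux,
			       struct inode *inode)
{
	aux->i_mtime = inode->i_mtime;
	aux->i_ctime = inode->i_ctime;
	aux->i_size  = i_size_read(inode);
}

/* On a later lookup: any mismatch means the cached data is stale */
static int nfs_fh_auxdata_valid(struct nfs_fh_auxdata *aux,
				struct inode *inode)
{
	return timespec_equal(&aux->i_mtime, &inode->i_mtime) &&
	       timespec_equal(&aux->i_ctime, &inode->i_ctime) &&
	       aux->i_size == i_size_read(inode);
}

Spelling out why such a snapshot is needed on top of what nfsi
already tracks would answer the question.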

>>> +		key->port = clp->cl_addr.sin_port;
>>
>> Not sure why you are using the server's port here.  In almost every
>> case the server side port number will be 2049, so it really doesn't
>> add any uniquification.
>
> The reason lies in "in almost every case".  It's possible to configure
> it such that a server is running two separate NFS servers on different
> ports.

We should explore whether it is typical or even possible that such a  
configuration exports the same file handles on different ports, and  
whether that really matters to the client.
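
For concreteness, the kind of key under discussion looks roughly
like this (an illustrative sketch, not the patch's actual layout):

/* Illustrative per-server cache key.  The address alone is almost
 * always unique; the port only disambiguates the unusual setup of
 * two NFS servers running on different ports of one host. */
struct nfs_server_key {
	__be32		addr;		/* server IPv4 address */
	__be16		port;		/* server port, almost always 2049 */
	unsigned char	nfsversion;	/* NFS protocol version */
};

If two server instances on one host never hand out identical file
handles for different files, the port contributes nothing to the key.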

>> I strongly recommend you use the existing IPv6 address conversion
>> macros for this instead of open-coding yet another way of mapping an
>> IPv4 address to an IPv6 address.
>>
>> However, since AF_INET6 support is being introduced in the NFS client
>> in 2.6.24, I recommend you take a look at these source files after
>> Trond has pushed his NFS_ALL for 2.6.24.
>
> I'll look at them.

I always do this:  I meant 2.6.25, not 2.6.24.

By the time you return, basic IPv6 support for NFSv4 should be in  
2.6.25-rc1's NFS client (not server).  Not that it is bug-free, but  
an implementation is now there.
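
In the meantime, the conversion itself is a one-liner with the
existing helpers (a sketch; the wrapper name here is mine):

#include <net/ipv6.h>

/* Build the standard ::ffff:a.b.c.d v4-mapped form with the existing
 * ipv6_addr_set() helper instead of open-coding the layout. */
static void nfs_set_v4mapped(__be32 v4addr, struct in6_addr *v6)
{
	ipv6_addr_set(v6, 0, 0, htonl(0x0000FFFF), v4addr);
}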

--
Chuck Lever
chuck[dot]lever[at]oracle[dot]com
