Message-ID: <20120613155651.GB1178@fieldses.org>
Date:	Wed, 13 Jun 2012 11:56:51 -0400
From:	"J. Bruce Fields" <bfields@...ldses.org>
To:	"J. Bruce Fields" <bfields@...hat.com>, linux-nfs@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: RPC: fragment too large with transition to 3.4

On Wed, Jun 13, 2012 at 09:27:03AM +0000, Jamie Heilman wrote:
> Jamie Heilman wrote:
> > It's looking like my issues with "RPC: fragment too large" may be
> > something else entirely at this point; I've noticed other weird
> > network behavior that I'm gonna have to try to isolate before I keep
> > blaming nfs changes.  Though for some reason my
> > /proc/fs/nfsd/max_block_size ends up only 128KiB w/3.4 where it was
> > 512KiB w/3.3.
> 
> OK, I get it now.  32-bit PAE system w/4G of RAM (minus a chunk for
> the IGP video etc.) for my NFS server, and the max_block_size
> calculation changed significantly in commit
> 508f92275624fc755104b17945bdc822936f1918 to account for rpc buffers
> > only being in low memory.  That means whereas in 3.3 the math came
> > out to a target size of roughly 843241, my new target size in 3.4 is
> > only 219959-ish, so choosing 128KiB is understandable.  The problem
> was that all my clients had negotiated their nfs mounts against the
> v3.3 value of 512KiB, and when I rebooted into 3.4... they hit the
> > wall attempting larger transfers and became uselessly stuck at that
> point.  If I remount everything before doing any large transfers, then
> it negotiates a lower wsize and things work fine.  So everything is
> > working as planned, I suppose... the transition between 3.3 and 3.4 is
> just a bit rough.

Oh, got it, thanks.  Yes, now I remember I've seen that problem before.

Perhaps we should be more careful about tweaks to that calculation that
may result in a decreased r/wsize.  You could also get into the same
situation if you took the server down to change the amount of RAM, but
only if you were *removing* memory, which is probably unusual.

It might be best if distributions set max_block_size themselves--it
should be easier for userspace to remember the value across reboots.
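
The userspace side could be as dumb as writing a remembered value back
before nfsd starts--a minimal sketch, with the 512K value just as an
example (the proc path is the one Jamie quoted; whether the kernel
refuses the write once nfsd is already running is from memory):

#include <stdio.h>

int main(void)
{
	/* value remembered from the previous boot; 512K only as an example */
	unsigned long saved = 524288;
	FILE *f = fopen("/proc/fs/nfsd/max_block_size", "w");

	if (!f) {
		perror("max_block_size");
		return 1;
	}
	/* has to happen before nfsd starts; I believe a later write is refused */
	fprintf(f, "%lu\n", saved);
	return fclose(f) ? 1 : 0;
}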

While we're at it, we also want to create an /etc/nfsd.conf that rpc.nfsd
could read, for setting this, the number of threads, and a few other
things.  The systemd people would prefer that to the current practice of
sourcing a shell script in /etc/sysconfig or /etc/default.

We could warn about this problem ("don't decrease max_block_size on a
server without unmounting clients first") next to that variable in the
config file.
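
E.g. something like the following (the file name and option names are
completely made up, since the file doesn't exist yet):

# /etc/nfsd.conf (hypothetical)
#
# Don't decrease max_block_size on a server without unmounting clients
# first: clients negotiate rsize/wsize at mount time and will hit
# "RPC: fragment too large" on larger transfers if this shrinks under
# them.
max_block_size = 524288

# number of nfsd threads to start
threads = 8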

I think we can do the same calculation as nfsd_create_serv() does from
userspace to set an initial default.  I don't know if that should happen
on package install or on first run of rpc.nfsd.
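
Something along these lines, untested, and with the constants recalled
from the 3.4 code rather than copied out of it:

#include <stdio.h>
#include <sys/sysinfo.h>

int main(void)
{
	struct sysinfo si;
	unsigned long long target;
	unsigned long blksize = 1024 * 1024;	/* NFSSVC_MAXBLKSIZE */

	if (sysinfo(&si))
		return 1;

	/* rpc buffers have to live in low memory, so ignore highmem */
	target = (unsigned long long)(si.totalram - si.totalhigh) * si.mem_unit;
	target >>= 12;	/* aim for roughly 1/4096 of low memory */

	/* halve down from the maximum until we fit, but stay >= 8K */
	while (blksize > target && blksize > 8 * 1024)
		blksize /= 2;

	printf("%lu\n", blksize);
	return 0;
}

With Jamie's new ~220K target that halves 1M down to 128K, and with the
old ~843K target it stops at 512K, which matches the numbers above.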

For now that's a project looking for a volunteer, though.

--b.
