Message-ID: <001001c7eb57$afd2d320$0f787960$@com>
Date:	Thu, 30 Aug 2007 15:47:02 -0700
From:	"Hua Zhong" <hzhong@...il.com>
To:	"'Trond Myklebust'" <trond.myklebust@....uio.no>
Cc:	"'Linux Kernel Mailing List'" <linux-kernel@...r.kernel.org>,
	"'Linus Torvalds'" <torvalds@...ux-foundation.org>,
	<akpm@...ux-foundation.org>
Subject: RE: recent nfs change causes autofs regression

Hi Trond,

> On Thu, 2007-08-30 at 14:07 -0700, Hua Zhong wrote:
> > I am re-sending this after help from Ian and git-bisect. To me it's a
> > show-stopper: I cannot find an acceptable workaround that I can
> > implement. The problem: upgrading from 2.6.22 to 2.6.23-rc4 causes
> > several autofs mounts to fail silently - they just do not appear when
> > they should.
> > I believe it's caused by the NFS change that forces multiple mounts
> > from different directories under the same server-side filesystem to use
> > the same mount options by default; otherwise the mount fails with EBUSY.
> >
> > For example, if the server has a filesystem /a and exports /a/x and
> > /a/y (perhaps one rw and one ro), a client must now mount /a/x and /a/y
> > with the same mount options.
> 
> Which is better than having it fail silently, or giving you a mount
> with the wrong mount options.

Well, it depends on how you define "better".

In this particular scenario, the maps read as follows:

tools  -fstype=nfs,udp,rw,intr,nosuid,nodev,rsize=8192,wsize=8192,mountvers=3,nfsvers=3,actimeo=600  fs1.domain.com:/a/tools
share  -fstype=nfs,udp,rw,intr,nosuid,nodev,rsize=8192,wsize=8192,mountvers=3,nfsvers=3  fs1.domain.com:/a/share

The only difference is the actimeo option (I don't even know what it means).
Is that enough to fail a mount?
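
In other words, when automounting both directories the client ends up issuing
roughly the following (the mount points are made up for illustration; the
options are taken straight from the map above):

    mount -t nfs -o udp,rw,intr,nosuid,nodev,rsize=8192,wsize=8192,mountvers=3,nfsvers=3,actimeo=600 fs1.domain.com:/a/tools /mnt/tools
    mount -t nfs -o udp,rw,intr,nosuid,nodev,rsize=8192,wsize=8192,mountvers=3,nfsvers=3 fs1.domain.com:/a/share /mnt/share

On 2.6.22 both succeed. On 2.6.23-rc4, if I understand the new sharing rule
correctly, whichever mount comes second is refused with EBUSY, because both
exports live on the same server filesystem but the option strings differ
(only in actimeo).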

More importantly, it is a regression. My understanding is that unless
absolutely necessary we do not introduce a "feature" that breaks working
setups.

> If you need to mount the same filesystem with incompatible mount
> options on the same client, then there is a new mount option
"nosharecache",
> which enables it.

Unfortunately, as I said, I don't control the auto.auto NIS map.
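If I did control it, I assume the workaround would be to add nosharecache to
one of the entries, something along these lines (an untested guess at the map
syntax on my part):

    share  -fstype=nfs,udp,rw,intr,nosuid,nodev,rsize=8192,wsize=8192,mountvers=3,nfsvers=3,nosharecache  fs1.domain.com:/a/share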

> The new option is there in order to make it damned clear to sysadmins
> that this is a dangerous thing to do: mounts which don't share the same
> superblock also don't share the same data and attribute caches. Any
> file or directory which appears in both mounts had better only be used by
> one application at a time or be using an appropriate locking scheme.

I guess the question is what should be the default.

I'll convey this to our system admin (fortunately we are not a very big
company), but I am just not 100% sure this is a well-thought-out change,
because I believe many people will be impacted once 2.6.23 is out. Shouldn't
we give users some time to fix their configs before we enforce this, for
example by printing a kernel warning first?

Thanks.

> Trond

