Message-ID: <20140528160653.GL17233@tucsk.piliscsaba.szeredi.hu>
Date:	Wed, 28 May 2014 18:06:53 +0200
From:	Miklos Szeredi <miklos@...redi.hu>
To:	"J. R. Okajima" <hooanon05g@...il.com>
Cc:	viro@...IV.linux.org.uk, torvalds@...ux-foundation.org,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
	hch@...radead.org, akpm@...ux-foundation.org, apw@...onical.com,
	nbd@...nwrt.org, neilb@...e.de, jordipujolp@...il.com,
	ezk@....cs.sunysb.edu, dhowells@...hat.com, sedat.dilek@...il.com,
	mszeredi@...e.cz
Subject: Re: [PATCH 00/13] overlay filesystem v22

On Mon, May 26, 2014 at 10:56:42AM +0900, J. R. Okajima wrote:

> 
> Here are some comments.

Thanks for the review.

> 
> - I have no objection to the 0:0 char-dev whiteout, but you don't
>   have to have a separate inode for each whiteout; a hardlink is
>   better. In this version you have <workdir> now. How about creating
>   a "base" whiteout under workdir at mount time? It could be done in
>   user-space "mount.overlayfs" or in kernel-space. When whiteout
>   creation hits EMLINK, create a non-hardlinked whiteout for that
>   target synchronously and re-create the "base" asynchronously.

The reason I don't do this is complexity.  If ever this becomes a problem, then
we could add hardlinked whiteouts in the future.
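
For reference, a rough userspace sketch of the two schemes being
compared (this is not the patchset's code; the paths are made up and
creating device nodes needs privilege):

/* Sketch: a fresh 0:0 char-dev whiteout per removal versus hardlinking
 * a pre-made "base" whiteout kept in the workdir. */
#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>
#include <unistd.h>

int main(void)
{
	/* One inode per whiteout (what the patchset does): */
	if (mknod("/upper/dir/removed-a", S_IFCHR, makedev(0, 0)))
		perror("mknod");

	/* Hardlinking a "base" whiteout created at mount time: */
	mknod("/work/base-whiteout", S_IFCHR, makedev(0, 0));
	if (link("/work/base-whiteout", "/upper/dir/removed-b") == -1 &&
	    errno == EMLINK)
		/* fall back to a fresh node; re-create the base later */
		mknod("/upper/dir/removed-b", S_IFCHR, makedev(0, 0));

	return 0;
}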

> 
> - Is <workdir>/work always visible to users? If a user accidentally
>   removes it or its children, some operations will fail and the user
>   cannot recover it. I know this cannot easily happen since its mode
>   is 0, but I am afraid it will be a source of trouble. For example,
>   find(1) or "ls -R /overlayfs" will almost always fail.

The perfect solution would be an invisible temp directory.  This needs
filesystem support, but is perhaps not so difficult.  Again, it could be done
later without backward compatibility issues.
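
(To illustrate the failure mode: any unprivileged tree walker that
reaches a mode-0 directory gets EACCES.  A minimal sketch, with a
made-up path standing in for <workdir>/work:)

/* A mode-0 directory cannot be listed by an unprivileged process,
 * which is why find(1) and "ls -R" complain when they reach it. */
#include <dirent.h>
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
	mkdir("/tmp/demo-work", 0);            /* like <workdir>/work */

	DIR *d = opendir("/tmp/demo-work");
	if (!d) {
		perror("opendir");             /* EACCES unless root */
		return 1;
	}
	closedir(d);
	return 0;
}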

> 
> - If I remember correctly, the length of time the dir mutex is held
>   has been pointed out before. This version still holds the mutex for
>   a long time: a whole copy-up operation including lookup, create,
>   copying the file data, copying xattrs/attrs, and then rename. How
>   about unlocking the dir before copying the file data, then
>   re-locking and re-checking after copying the attrs?

Possibly doable, but again this would add complexity and I'd rather leave it
until somebody complains.
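
For readers following the thread, the sequence under discussion is
roughly the following.  This is only a userspace model of copy-up with
illustrative paths; the real thing runs in the kernel with the relevant
dir mutex held across all of it, which is the concern above:

/* Userspace model of copy-up: create a temp file in the workdir, copy
 * data and xattrs, then rename it into the upper dir. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/xattr.h>
#include <unistd.h>

static int copy_data(int in, int out)
{
	char buf[65536];
	ssize_t n;

	while ((n = read(in, buf, sizeof(buf))) > 0)
		if (write(out, buf, n) != n)
			return -1;
	return n < 0 ? -1 : 0;
}

static int copy_xattrs(const char *src, const char *dst)
{
	char names[4096], value[4096];
	ssize_t len = llistxattr(src, names, sizeof(names));

	for (char *p = names; len > 0 && p < names + len; p += strlen(p) + 1) {
		ssize_t vlen = lgetxattr(src, p, value, sizeof(value));

		if (vlen < 0 || lsetxattr(dst, p, value, vlen, 0) == -1)
			return -1;
	}
	return len < 0 ? -1 : 0;
}

int main(void)
{
	const char *lower = "/lower/dirA/fileC";   /* example paths */
	const char *tmp   = "/work/tmpfile";
	const char *upper = "/upper/dirA/fileC";
	struct stat st;
	int in, out;

	in = open(lower, O_RDONLY);                /* lookup + open lower */
	if (in == -1 || fstat(in, &st) == -1)
		return 1;

	out = open(tmp, O_WRONLY | O_CREAT | O_EXCL, st.st_mode & 07777);
	if (out == -1)                             /* create in workdir */
		return 1;

	if (copy_data(in, out) || copy_xattrs(lower, tmp))
		return 1;

	close(in);
	close(out);
	return rename(tmp, upper) ? 1 : 0;         /* goes live atomically */
}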

> 
> - When two processes copy-up a similar dir hierarchy, for example
>   /dirA/dirB/fileC and /dirA/dirB/dirC/fileD, can a race condition
>   happen? If both processes begin copying-up dirA and processA
>   succeeds first, does processB fail and return EIO?

No, we check the state with the parent lock held and skip the copy up if
somebody else won the race.
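
In pattern form it looks roughly like this (a hedged pthread sketch,
not the kernel code; the kernel uses the parent dir's mutex rather than
a pthread mutex, and the names are made up):

/* Re-check under the parent lock and skip if the copy-up has already
 * been done by the other task. */
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t parent_lock = PTHREAD_MUTEX_INITIALIZER;
static bool upper_exists;      /* stands in for "dirA already copied up" */

static int copy_up_dir(void)
{
	pthread_mutex_lock(&parent_lock);
	if (upper_exists) {
		/* somebody else won the race: not an error, just skip */
		pthread_mutex_unlock(&parent_lock);
		return 0;
	}
	/* ... create the upper copy of the dir here ... */
	upper_exists = true;
	pthread_mutex_unlock(&parent_lock);
	return 0;
}

int main(void)
{
	return copy_up_dir();
}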

> 
> - All copy-up operations will be serialized due to the <workdir> lock.

Yes.  Trivially fixable by creating a separate dir for each temp file.

> 
> - In fstat(2) for a dir, is nlink set to 1 even after it is removed?

Probably.  I think the right fix is to check whether the dentry is hashed and
set nlink to zero otherwise.  Will look into it.
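
A quick userspace check of the behaviour in question (the temporary
path is made up): on a conventional filesystem the removed directory
typically reports st_nlink == 0, which is presumably what the overlay
ought to mimic:

/* fstat(2) a directory that was removed while we still hold an fd. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	struct stat st;
	int fd;

	mkdir("/tmp/nlink-demo", 0700);
	fd = open("/tmp/nlink-demo", O_RDONLY);
	if (fd == -1)
		return 1;

	rmdir("/tmp/nlink-demo");
	if (fstat(fd, &st) == -1)
		return 1;

	printf("st_nlink after rmdir: %lu\n", (unsigned long)st.st_nlink);
	close(fd);
	return 0;
}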

> 
> - In readdir, it does a lookup or getattr to determine whether a found
>   char dev entry is a whiteout or not. I know char devs are not that
>   common, so this overhead won't be large. But if the name itself said
>   "I am a whiteout", then the extra lookup or getattr would be
>   unnecessary.

At the cost of namespace issues.  I wouldn't consider that a good trade.
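
For context, a sketch of the per-entry cost being discussed: during
readdir every DT_CHR entry still needs a stat to confirm it really is
the 0:0 whiteout device (the directory path is illustrative):

/* _DEFAULT_SOURCE exposes d_type/DT_CHR in <dirent.h> on glibc. */
#define _DEFAULT_SOURCE
#include <dirent.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>

int main(void)
{
	const char *dir = "/upper/dirA";       /* example upper dir */
	char path[4096];
	DIR *d = opendir(dir);
	struct dirent *de;
	struct stat st;

	if (!d)
		return 1;
	while ((de = readdir(d))) {
		if (de->d_type != DT_CHR)
			continue;              /* cheap case: no extra call */
		snprintf(path, sizeof(path), "%s/%s", dir, de->d_name);
		if (lstat(path, &st) == 0 && S_ISCHR(st.st_mode) &&
		    major(st.st_rdev) == 0 && minor(st.st_rdev) == 0)
			printf("%s is a whiteout\n", de->d_name);
	}
	closedir(d);
	return 0;
}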


Thanks,
Miklos
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
