Message-ID: <Yaji1C/wK73jAkho@redhat.com>
Date:   Thu, 2 Dec 2021 10:14:28 -0500
From:   Vivek Goyal <vgoyal@...hat.com>
To:     Amir Goldstein <amir73il@...il.com>
Cc:     Chengguang Xu <cgxu519@...ernel.net>, Jan Kara <jack@...e.cz>,
        Miklos Szeredi <miklos@...redi.hu>,
        linux-fsdevel <linux-fsdevel@...r.kernel.org>,
        overlayfs <linux-unionfs@...r.kernel.org>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        ronyjin <ronyjin@...cent.com>,
        charliecgxu <charliecgxu@...cent.com>
Subject: Re: ovl_flush() behavior

On Thu, Dec 02, 2021 at 01:23:17AM +0200, Amir Goldstein wrote:
> > >
> > > To be honest I even don't fully understand what's the ->flush() logic in overlayfs.
> > > Why should we open new underlying file when calling ->flush()?
> > > Is it still correct in the case of opening lower layer first then copy-uped case?
> > >
> >
> > The semantics of flush() are far from being uniform across filesystems.
> > Most local filesystems do nothing on close.
> > Most network fs only flush dirty data when a writer closes a file,
> > but not when a reader closes a file.
> > It is hard to imagine that applications rely on flush-on-close of
> > rdonly fd behavior, and I agree that flushing only if the original fd
> > was upper makes more sense. So I am not sure it is really essential
> > for overlayfs to open an upper rdonly fd just to do whatever the upper
> > fs would have done on close of a rdonly fd, but maybe there is no good
> > reason to change this behavior either.
> >
> 
> On second thought, I think there may be a good reason to change
> ovl_flush(), otherwise I wouldn't have submitted commit
> a390ccb316be ("fuse: add FOPEN_NOFLUSH") - I did observe
> applications that frequently open short-lived rdonly fds and suffered
> undesired latencies on close().
> 
> As for "changing existing behavior", I think that most fs used as
> upper do not implement flush at all.
> Using fuse/virtiofs as overlayfs upper is quite new, so maybe that
> is not a problem and maybe the new behavior would be preferred
> for those users?

It would probably be nice not to send a flush to the fuse server when it
is not required.

Right now in virtiofsd, I see that we are depending on flush being sent,
because we are dealing with remote posix lock magic. I am adding support
for remote posix locks in virtiofs, and virtiofsd builds these on top of
open file description (OFD) locks on the host. (We can't use posix locks
on the host because those locks are per process, and virtiofsd is a
single process working on behalf of all the guest processes, so
unexpected things happen.)
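
To illustrate the distinction, here is a minimal standalone sketch; it is
not virtiofsd code, and the file path and lock range are just for
illustration:

/* posix record locks are owned by the process; OFD locks are owned by
 * the open file description, which is what a single-process server
 * needs to represent many independent guest lock owners. */
#define _GNU_SOURCE             /* for F_OFD_SETLK */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        struct flock fl = {
                .l_type   = F_WRLCK,
                .l_whence = SEEK_SET,
                .l_start  = 0,
                .l_len    = 0,          /* whole file */
        };
        int fd1 = open("/tmp/lock-demo", O_RDWR | O_CREAT, 0600);
        int fd2 = open("/tmp/lock-demo", O_RDWR);

        /* With OFD locks, fd1 and fd2 are independent lock owners even
         * though both belong to the same process, so one fd can stand
         * in for each guest lock owner. */
        if (fcntl(fd1, F_OFD_SETLK, &fl) == -1)
                perror("fd1 F_OFD_SETLK");
        if (fcntl(fd2, F_OFD_SETLK, &fl) == -1)
                perror("fd2 F_OFD_SETLK");      /* EAGAIN: conflict */

        /* With classic posix locks (F_SETLK), both fds would share the
         * process as owner, the second request would simply succeed,
         * and closing either fd would drop the lock. */

        close(fd1);
        close(fd2);
        return 0;
}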

When an fd is being closed, a flush request is sent, and along with it
we also send "lock_owner".

inarg.lock_owner = fuse_lock_owner_id(fm->fc, id);

We basically use this to keep track of which process is closing the fd
and to release the associated OFD locks on the host. /me needs to dive
into the details to explain it better. Will do that if need be.
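
Roughly, the idea is something like the sketch below (the structure and
function names are made up for illustration, not the actual virtiofsd
code): the server remembers which host fd holds the OFD locks for a given
lock_owner, and when a FLUSH carrying that lock_owner arrives it closes
that fd, which releases the locks.

#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

/* Hypothetical bookkeeping, one entry per guest lock owner. */
struct lock_entry {
        uint64_t lock_owner;    /* owner id the guest kernel sent in SETLK */
        int ofd_fd;             /* host fd through which OFD locks were taken */
        struct lock_entry *next;
};

/* Called when a FUSE_FLUSH request carrying this lock_owner arrives. */
static void release_owner_locks(struct lock_entry **head, uint64_t lock_owner)
{
        struct lock_entry **pp = head;

        while (*pp) {
                struct lock_entry *e = *pp;

                if (e->lock_owner == lock_owner) {
                        close(e->ofd_fd);       /* drops the OFD locks */
                        *pp = e->next;
                        free(e);
                } else {
                        pp = &e->next;
                }
        }
}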

The bottom line is that, as of now, virtiofsd relies on receiving FLUSH
requests when remote posix locks are enabled. Maybe we can set
FOPEN_NOFLUSH when remote posix locks are not enabled.
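
Something like the sketch below, assuming a uapi fuse.h new enough to
define FOPEN_NOFLUSH (the remote_posix_locks flag is a hypothetical
stand-in for however the server tracks its lock configuration):

#include <stdbool.h>
#include <stdint.h>
#include <linux/fuse.h>         /* struct fuse_open_out, FOPEN_NOFLUSH */

/* Fill the reply to FUSE_OPEN; ask the guest kernel to skip FLUSH on
 * close only when we don't need it to release remote posix locks. */
static void fill_open_reply(struct fuse_open_out *outarg, uint64_t fh,
                            bool remote_posix_locks)
{
        outarg->fh = fh;
        outarg->open_flags = 0;

        if (!remote_posix_locks)
                outarg->open_flags |= FOPEN_NOFLUSH;
}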

Thanks
Vivek
