Message-ID: <CAOQ4uxiezabg0U3tGmpH8J4u7OUbB2KMXexZ7yQzujj0vAWirw@mail.gmail.com>
Date:   Mon, 16 Aug 2021 22:46:10 +0300
From:   Amir Goldstein <amir73il@...il.com>
To:     Vivek Goyal <vgoyal@...hat.com>
Cc:     Greg Kurz <groug@...d.org>, Miklos Szeredi <miklos@...redi.hu>,
        virtualization@...ts.linux-foundation.org,
        linux-fsdevel <linux-fsdevel@...r.kernel.org>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        virtio-fs-list <virtio-fs@...hat.com>,
        Stefan Hajnoczi <stefanha@...hat.com>,
        Max Reitz <mreitz@...hat.com>, Robert Krawitz <rlk@...hat.com>
Subject: Re: [PATCH v4 5/5] virtiofs: propagate sync() to file server

On Mon, Aug 16, 2021 at 10:11 PM Vivek Goyal <vgoyal@...hat.com> wrote:
>
> On Mon, Aug 16, 2021 at 09:57:08PM +0300, Amir Goldstein wrote:
> > On Mon, Aug 16, 2021 at 6:29 PM Vivek Goyal <vgoyal@...hat.com> wrote:
> > >
> > > On Sun, Aug 15, 2021 at 05:14:06PM +0300, Amir Goldstein wrote:
> > > > Hi Greg,
> > > >
> > > > Sorry for the late reply, I have some questions about this change...
> > > >
> > > > On Fri, May 21, 2021 at 9:12 AM Greg Kurz <groug@...d.org> wrote:
> > > > >
> > > > > Even if POSIX doesn't mandate it, Linux users legitimately expect
> > > > > sync() to flush all data and metadata to physical storage when it
> > > > > is located on the same system. This isn't happening with virtiofs
> > > > > though: sync() inside the guest returns right away even though
> > > > > data still needs to be flushed from the host page cache.
> > > > >
> > > > > This is easily demonstrated by doing the following in the guest:
> > > > >
> > > > > $ dd if=/dev/zero of=/mnt/foo bs=1M count=5K ; strace -T -e sync sync
> > > > > 5120+0 records in
> > > > > 5120+0 records out
> > > > > 5368709120 bytes (5.4 GB, 5.0 GiB) copied, 5.22224 s, 1.0 GB/s
> > > > > sync()                                  = 0 <0.024068>
> > > > > +++ exited with 0 +++
> > > > >
> > > > > and start the following in the host when the 'dd' command completes
> > > > > in the guest:
> > > > >
> > > > > $ strace -T -e fsync /usr/bin/sync virtiofs/foo
> > > > > fsync(3)                                = 0 <10.371640>
> > > > > +++ exited with 0 +++
> > > > >
> > > > > There is actually no good reason not to honor the expected behavior
> > > > > of sync(): the current behavior gives an unrealistic impression that
> > > > > virtiofs is super fast and that data has safely landed on HW, which
> > > > > obviously isn't the case.
> > > > >
> > > > > Implement a ->sync_fs() superblock operation that sends a new
> > > > > FUSE_SYNCFS request type for this purpose. Provision a 64-bit
> > > > > placeholder for possible future extensions. Since the file
> > > > > server cannot handle the wait == 0 case, we skip it to avoid a
> > > > > gratuitous roundtrip. Note that this is per-superblock: a
> > > > > FUSE_SYNCFS is sent for the root mount and for each submount.
> > > > >
> > > > > Like with FUSE_FSYNC and FUSE_FSYNCDIR, lack of support for
> > > > > FUSE_SYNCFS in the file server is treated as permanent success.
> > > > > This ensures compatibility with older file servers: the client
> > > > > will get the current behavior of sync() not being propagated to
> > > > > the file server.
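
For reference, a minimal sketch of what the described ->sync_fs()
handler could look like on the client side, following the commit text
above (illustrative only; field and helper names such as
fuse_syncfs_in may not match the actual patch exactly):

    static int fuse_sync_fs(struct super_block *sb, int wait)
    {
            struct fuse_mount *fm = get_fuse_mount_super(sb);
            struct fuse_syncfs_in inarg;
            FUSE_ARGS(args);
            int err;

            /* The file server cannot handle the wait == 0 case,
             * so skip it to avoid a gratuitous roundtrip. */
            if (!wait)
                    return 0;

            /* 64-bit placeholder for possible future extensions. */
            memset(&inarg, 0, sizeof(inarg));
            args.in_numargs = 1;
            args.in_args[0].size = sizeof(inarg);
            args.in_args[0].value = &inarg;
            args.opcode = FUSE_SYNCFS;
            args.nodeid = get_node_id(sb->s_root->d_inode);

            err = fuse_simple_request(fm, &args);
            /* Lack of support is treated as permanent success,
             * for compatibility with older file servers. */
            if (err == -ENOSYS)
                    err = 0;
            return err;
    }
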
> > > >
> > > > I wonder - even if the server does not support SYNCFS or if the kernel
> > > > does not trust the server with SYNCFS, fuse_sync_fs() can wait
> > > > until all pending requests up to this call have been completed, either
> > > > before or after submitting the SYNCFS request. No?
> > >
> > > >
> > > > Does virtiofsd track all requests prior to the SYNCFS request to
> > > > make sure that they were executed on the host filesystem before
> > > > calling syncfs() on the host filesystem?
> > >
> > > Hi Amir,
> > >
> > > I don't think virtiofsd has any such notion. I would think that the
> > > client should make sure all pending writes have completed and
> > > then send the SYNCFS request.
> > >
> > > Looking at sync_filesystem(), I am assuming the VFS will take care
> > > of flushing out all dirty pages and then call ->sync_fs.
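
For context, the VFS-side ordering referred to here is roughly the
following (a simplified rendering of sync_filesystem() in fs/sync.c;
__sync_filesystem() writes back the superblock's inodes and then calls
sb->s_op->sync_fs()):

    int sync_filesystem(struct super_block *sb)
    {
            int ret;

            /* First pass: start writeback without waiting. */
            ret = __sync_filesystem(sb, 0);
            if (ret < 0)
                    return ret;
            /* Second pass: write back again and wait, then let the
             * filesystem flush its own state via ->sync_fs(sb, 1). */
            return __sync_filesystem(sb, 1);
    }
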
> > >
> > > Having said that, I think fuse queues the writeback request internally
> > > and signals completion of writeback to mm (end_page_writeback()). And
> > > that's why fuse_fsync() has a notion of waiting for all pending
> > > writes to finish on an inode (fuse_sync_writes()).
> > >
> > > So I think you have raised a good point. That is, if there are pending
> > > writes at the time of syncfs(), we don't seem to have a notion of
> > > first waiting for all these writes to finish before we send the
> > > FUSE_SYNCFS request to the server.
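
One hypothetical way for the client to close that gap would be to wait
for FUSE's internally queued writeback on every inode of the superblock
before sending FUSE_SYNCFS, the way fuse_fsync() waits per inode.
Purely a sketch, not an actual patch; fuse_wait_all_writes() is an
invented name, and the iteration follows the usual s_inode_list_lock
pattern (cf. drop_pagecache_sb()):

    static void fuse_wait_all_writes(struct super_block *sb)
    {
            struct inode *inode, *prev = NULL;

            spin_lock(&sb->s_inode_list_lock);
            list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
                    spin_lock(&inode->i_lock);
                    if (inode->i_state & (I_NEW | I_FREEING | I_WILL_FREE)) {
                            spin_unlock(&inode->i_lock);
                            continue;
                    }
                    __iget(inode);
                    spin_unlock(&inode->i_lock);
                    spin_unlock(&sb->s_inode_list_lock);

                    /* Wait for fuse's internally queued writeback on
                     * this inode, like fuse_fsync() does. */
                    inode_lock(inode);
                    fuse_sync_writes(inode);
                    inode_unlock(inode);

                    /* Our reference keeps 'inode' on the sb list, so
                     * iteration can safely continue from it. */
                    iput(prev);
                    prev = inode;
                    spin_lock(&sb->s_inode_list_lock);
            }
            spin_unlock(&sb->s_inode_list_lock);
            iput(prev);
    }
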
> >
> > Maybe, but I was not referring to inode writeback requests.
> > I had assumed that those were handled correctly.
> > I was referring to pending metadata requests.
> >
> > ->sync_fs() in a local fs also takes care of flushing metadata
> > (e.g. the journal). I assumed that virtiofsd implements the FUSE_SYNCFS
> > request by calling syncfs() on the host fs,
>
> Yes, virtiofsd calls syncfs() on the host fs.
>
> > but if it does that, then
> > there is no guarantee that all metadata requests have reached the
> > host fs from virtiofs unless the client or server takes care of waiting
> > for all pending metadata requests before issuing FUSE_SYNCFS.
>
> We don't have any journal in virtiofs. In fact we don't seem to
> cache any metadata, except probably in the "-o writeback" case,
> where we can trust local timestamps.
>
> If "-o writeback" is not enabled, I am not sure what metadata
> we would be caching that we would need to worry about. Do you have
> something specific in mind? (At least from the virtiofs point of
> view, I can't think of any metadata we are caching that we need
> to worry about.)
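
On the server side, the FUSE_SYNCFS handling described above boils down
to something like this (the lo_* names follow libfuse's passthrough_ll
example and are only illustrative of virtiofsd, not copied from it):

    static void lo_syncfs(fuse_req_t req, fuse_ino_t ino)
    {
            struct lo_data *lo = lo_data(req);
            int res;

            /* syncfs(2) flushes data and metadata for the whole host
             * filesystem backing the shared directory. */
            res = syncfs(lo->root.fd);

            fuse_reply_err(req, res == -1 ? errno : 0);
    }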

No, I don't see a problem.
I guess I was confused by the semantics.
Thanks for clarifying.

Amir.
