Date:   Sun, 19 Feb 2023 19:13:44 -0600
From:   Eric Van Hensbergen <ericvh@...il.com>
To:     Christian Schoenebeck <linux_oss@...debyte.com>
Cc:     v9fs-developer@...ts.sourceforge.net, asmadeus@...ewreck.org,
        rminnich@...il.com, lucho@...kov.net,
        Eric Van Hensbergen <ericvh@...nel.org>,
        linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH v4 00/11] Performance fixes for 9p filesystem

Glad to hear the bugs disappeared.  Writeback having different
performance than mmap is confusing, as they should be equivalent.

The huge blocksize on your dd is an interesting choice -- it will
completely mask any impact of readahead.  To actually see the perf of
readahead, choose a blocksize smaller than msize (like 4k).  The mmap
degradation is likely due to stricter coherence (open-to-close
consistency means we wait on writeout), but I'd probably need to go
in and trace to verify (which probably isn't a bad idea overall).
It's probably a similar situation for loose and writeback.
Essentially, before close consistency we didn't have to wait for the
final write to complete before returning, so you saw a faster time
(even though data wasn't actually written all the way through, so you
weren't measuring the last little bit of the write, which can be
quite large with a big msize).
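
For example, with your msize=512k, something like this (file name
just illustrative) would show the difference:

  # 4k blocks: many small sequential reads that readahead can coalesce
  dd if=test.data of=/dev/null bs=4k

  # 1G blocks: each read already spans far more than msize, so
  # readahead never gets a chance to run ahead of the application
  dd if=test.data of=/dev/null bs=1G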

I'm going to take a pass through tomorrow, making some fixups that
Dominique suggested and trying to reproduce/fix the fscache problems.

      -eric

On Sun, Feb 19, 2023 at 3:36 PM Christian Schoenebeck
<linux_oss@...debyte.com> wrote:
>
> On Saturday, February 18, 2023 1:33:12 AM CET Eric Van Hensbergen wrote:
> > This is the fourth version of a patch series which adds a number
> > of features to improve read/write performance in the 9p filesystem.
> > Mostly it focuses on fixing caching to help utilize the recently
> > increased MSIZE limits and also fixes some problematic behavior
> > within the writeback code.
> >
> > Altogether, these show roughly a 10x speed increase over no
> > caching on simple file transfers in readahead mode.  Future patch
> > sets will improve cache consistency and directory caching, which
> > should benefit loose mode.
> >
> > This iteration of the series incorporates an important fix for
> > writeback, which uses a stronger mechanism to flush writeback on
> > close of files, and addresses bugs observed in previous versions
> > of the patches for the writeback, mmap, and loose cache modes.
> >
> > These patches are also available on github:
> > https://github.com/v9fs/linux/tree/ericvh/for-next
> > and on kernel.org:
> > https://git.kernel.org/pub/scm/linux/kernel/git/ericvh/v9fs.git
> >
> > Tested against qemu, cpu, and diod with fsx, dbench, and postmark
> > in every caching mode.
> >
> > I'm definitely going to submit the first couple of patches as
> > they are fairly harmless -- but I would like to submit the whole
> > series for the upcoming merge window.  Reviews would be
> > appreciated.
>
> I tested this version thoroughly today (msize=512k in all tests). Good news
> first: the previous problems of v3 are gone. Great! But I'm still trying to
> make sense of the performance numbers I get with these patches.
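>
> (If you want to reproduce this: a typical virtio 9p mount with these
> settings would look like the line below; the mount tag and mount
> point are illustrative, and cache= is the mode varied per run.)
>
>   mount -t 9p -o trans=virtio,version=9p2000.L,msize=512k,cache=loose \
>         hostshare /mnt/9p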
>
> So when doing some compilations with 9p, the performance of mmap,
> writeback and readahead is basically the same, with only loose being
> 6x faster than the other cache modes. Are these the expected
> results? No errors at least. Good!
>
> Then I tested simple linear file I/O. First linear writing a 12GB file
> (time dd if=/dev/zero of=test.data bs=1G count=12):
>
> writeback    3m10s [this series - v4]
> readahead    0m11s [this series - v4]
> mmap         0m11s [this series - v4]
> mmap         0m11s [master]
> loose        2m50s [this series - v4]
> loose        2m19s [master]
>
> That's a bit surprising. Why are loose and writeback slower?
>
> Next linear reading a 12GB file
> (time cat test.data > /dev/null):
>
> writeback    0m24s [this series - v4]
> readahead    0m25s [this series - v4]
> mmap         0m25s [this series - v4]
> mmap         0m9s  [master]
> loose        0m24s [this series - v4]
> loose        0m24s [master]
>
> The mmap degradation sticks out here, and there's no improvement
> with the other modes?
>
> I always performed a guest reboot between each run BTW.
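>
> (For faster iterations, dropping guest caches between runs should
> work too, though a reboot also rules out any stale client state:)
>
>   sync; echo 3 > /proc/sys/vm/drop_caches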
>
