Message-ID: <20150913231258.GS26895@dastard>
Date: Mon, 14 Sep 2015 09:12:58 +1000
From: Dave Chinner <david@...morbit.com>
To: Chris Mason <clm@...com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Josef Bacik <jbacik@...com>,
LKML <linux-kernel@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
Neil Brown <neilb@...e.de>, Jan Kara <jack@...e.cz>,
Christoph Hellwig <hch@....de>
Subject: Re: [PATCH] fs-writeback: drop wb->list_lock during blk_finish_plug()

On Sat, Sep 12, 2015 at 07:00:27PM -0400, Chris Mason wrote:
> On Fri, Sep 11, 2015 at 04:36:39PM -0700, Linus Torvalds wrote:
> > On Fri, Sep 11, 2015 at 4:16 PM, Chris Mason <clm@...com> wrote:
> > >
> > > For 4.3 timeframes, what runs do you want to see numbers for:
> > >
> > > 1) revert
> > > 2) my hack
> > > 3) plug over multiple sbs (on different devices)
> > > 4) ?
> >
> > Just 2 or 3.
> >
> > I don't think the plain revert is all that interesting, and I think
> > the "anything else" is far too late for this merge window.
>
> I did the plain revert as well, just to have a baseline. This box is a
> little different from Dave's. Bare metal two-socket box (E5-2660 v2 @
> 2.20GHz) with 144GB of RAM. I have two PCIe flash devices, one NVMe and
> one FusionIO, and I put one FS on each device (two mounts total).
>
> The test created 1.6M files, 4K each. I used Dave's fs_mark command
> line, spread out over 16 directories from each mounted filesystem. In
> btrfs they are spread over subvolumes to cut down lock contention.
>
> I need to change around the dirty ratios more to smooth out the IO, and
> I had trouble with both XFS and btrfs getting runs that were not CPU
> bound. I included the time to run sync at the end of the run because
> the results were not very consistent without it.
>
> The XFS runs generally had one CPU pegged at 100%, and I think this is
> throwing off the results. On Monday I'll redo them with two (four?)
> filesystems per flash device, hopefully that'll break things up.
>
> The btrfs runs generally had all the CPUs pegged at 100%. I switched to
> mount -o nodatasum and squeezed out an extra 50K files/sec at much lower
> CPU utilization.
>
>                     wall time   fs_mark files/sec   bytes written/sec
>
> XFS:
> baseline v4.2:        5m6s          118,578             272MB/s
> Dave's patch:         4m46s         151,421             294MB/s
> my hack:              5m5s          150,714             275MB/s
> Linus plug:           5m15s         147,735             266MB/s
>
>
> Btrfs (nodatasum):
> baseline v4.2:        4m39s         242,643             313MB/s
> Dave's patch:         3m46s         252,452             389MB/s
> my hack:              3m48s         257,924             379MB/s
> Linus plug:           3m58s         247,528             369MB/s

Really need to run these numbers on slower disks where block layer
merging makes a difference to performance. The high level plugging
improves performance on spinning disks by a huge amount on XFS
because the merging reduces the number of IOs issued to disk by 2
orders of magnitude. Plugging makes comparatively little difference
on devices that can sustain extremely high IOPS and hence sink the
tens to hundreds of thousands of individual 4k IOs that this workload
generates through writeback.
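
For reference, the high level plugging being talked about is roughly
the pattern below - a hand-waving sketch of what the writeback path
does with the plug, not the actual fs/fs-writeback.c code, so don't
take the function name or the locking as gospel:

/*
 * Sketch only: high level writeback plugging. The real code lives in
 * fs/fs-writeback.c; the name and locking here are illustrative.
 */
#include <linux/blkdev.h>
#include <linux/backing-dev.h>

static void writeback_sketch(struct bdi_writeback *wb)
{
	struct blk_plug plug;

	blk_start_plug(&plug);	/* queue bios locally, defer dispatch */

	spin_lock(&wb->list_lock);
	/*
	 * ... walk the dirty inode lists, calling into ->writepages();
	 * each call adds lots of small 4k IOs to the plug list ...
	 */
	spin_unlock(&wb->list_lock);

	/*
	 * Dispatch everything in one go. This is where the block layer
	 * gets the chance to merge all those adjacent 4k IOs into large
	 * requests. $SUBJECT is about whether wb->list_lock is still
	 * held when this flush happens.
	 */
	blk_finish_plug(&plug);
}
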
i.e. while throughput increases, those aren't the numbers that matter
here - it's the change in write IO behaviour that needs to be
categorised and measured for the different patches...
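
The quickest way to see that change is the merge accounting the block
layer already keeps. A quick and dirty userspace hack like the one
below, run against /proc/diskstats before and after each fs_mark run,
shows writes issued vs writes merged and the average write size
(field layout as per Documentation/iostats.txt):

/* Dump write IO counters for one block device from /proc/diskstats.
 * Diff the numbers from before and after a run to see how much
 * merging the block layer actually did.
 */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
	char line[256], name[64];
	unsigned int major, minor;
	unsigned long long rd_ios, rd_merges, rd_sec, rd_ticks;
	unsigned long long wr_ios, wr_merges, wr_sec, wr_ticks;
	FILE *f;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <device>\n", argv[0]);
		return 1;
	}
	f = fopen("/proc/diskstats", "r");
	if (!f) {
		perror("/proc/diskstats");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, " %u %u %63s %llu %llu %llu %llu %llu %llu %llu %llu",
			   &major, &minor, name,
			   &rd_ios, &rd_merges, &rd_sec, &rd_ticks,
			   &wr_ios, &wr_merges, &wr_sec, &wr_ticks) != 11)
			continue;
		if (strcmp(name, argv[1]))
			continue;
		printf("%s: writes %llu  merges %llu  avg size %.1fk\n",
		       name, wr_ios, wr_merges,
		       wr_ios ? wr_sec * 512.0 / wr_ios / 1024 : 0);
		fclose(f);
		return 0;
	}
	fclose(f);
	fprintf(stderr, "%s: no such device in /proc/diskstats\n", argv[1]);
	return 1;
}

blktrace gives the full picture of the IO stream, but the merge
counts alone are enough to show the orders-of-magnitude difference
in IOs hitting the disk.
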
(I'm on holidays, so I'm not going to get to this any time soon)
Cheers,
Dave.
--
Dave Chinner
david@...morbit.com