Message-ID: <1390512936.1198.76.camel@ret.masoncoding.com>
Date: Thu, 23 Jan 2014 21:34:08 +0000
From: Chris Mason <clm@...com>
To: "jlbec@...lplan.org" <jlbec@...lplan.org>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-ide@...r.kernel.org" <linux-ide@...r.kernel.org>,
"lsf-pc@...ts.linux-foundation.org"
<lsf-pc@...ts.linux-foundation.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
"rwheeler@...hat.com" <rwheeler@...hat.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"James.Bottomley@...senPartnership.com"
<James.Bottomley@...senPartnership.com>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"mgorman@...e.de" <mgorman@...e.de>
Subject: Re: [Lsf-pc] [LSF/MM TOPIC] really large storage sectors - going
beyond 4096 bytes

On Thu, 2014-01-23 at 13:27 -0800, Joel Becker wrote:
> On Wed, Jan 22, 2014 at 10:47:01AM -0800, James Bottomley wrote:
> > On Wed, 2014-01-22 at 18:37 +0000, Chris Mason wrote:
> > > On Wed, 2014-01-22 at 10:13 -0800, James Bottomley wrote:
> > > > On Wed, 2014-01-22 at 18:02 +0000, Chris Mason wrote:
> > [agreement cut because it's boring for the reader]
> > > > Realistically, if you look at what the I/O schedulers output on a
> > > > standard (spinning rust) workload, it's mostly large transfers.
> > > > Obviously these are misaligned at the ends, but we can fix some of that
> > > > in the scheduler. Particularly if the FS helps us with layout. My
> > > > instinct tells me that we can fix 99% of this with layout on the FS + io
> > > > schedulers ... the remaining 1% goes to the drive as needing to do RMW
> > > > in the device, but the net impact to our throughput shouldn't be that
> > > > great.
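
Just to make the "misaligned at the ends" part concrete, here's a rough
userspace sketch of how a single request splits against a larger
physical sector.  The 16K size and the helper are made up for
illustration, not taken from any scheduler code:

/*
 * Sketch only: split one request into a misaligned head, an aligned
 * middle and a misaligned tail for an assumed 16K physical sector.
 * The head and tail are the parts that need RMW somewhere.
 */
#include <stdio.h>

#define PHYS 16384UL            /* assumed physical sector size */

static void split_request(unsigned long start, unsigned long len)
{
        unsigned long end = start + len;
        unsigned long head = (PHYS - (start & (PHYS - 1))) & (PHYS - 1);
        unsigned long tail = end & (PHYS - 1);

        if (head > len)
                head = len;
        if (head + tail > len)
                tail = len - head;

        printf("head %lu (RMW), middle %lu (aligned), tail %lu (RMW)\n",
               head, len - head - tail, tail);
}

int main(void)
{
        /* a 1MB transfer starting 4K into a 16K physical sector */
        split_request(4096, 1024 * 1024);
        return 0;
}

For big streaming transfers the aligned middle dominates, which is the
point above: the RMW cost stays confined to the two ends.
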
> > >
> > > There are a few workloads where the VM and the FS would team up to make
> > > this fairly miserable:
> > >
> > > Small files. Delayed allocation fixes a lot of this, but the VM doesn't
> > > realize that fileA, fileB, fileC, and fileD all need to be written at
> > > the same time to avoid RMW. Btrfs and MD have set up plugging callbacks
> > > to accumulate full stripes as much as possible, but it still hurts.
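
(For reference, the plugging side is just the usual blk_start_plug()/
blk_finish_plug() pair around the submission path.  The per-inode
helper below is made up, so treat this as a sketch of the shape rather
than the actual btrfs or MD code:

/*
 * Sketch only: push writeback for a batch of small files under one
 * plug so the block layer gets a chance to merge them toward a full
 * stripe.  queue_small_file_writeback() is a made-up helper.
 */
#include <linux/blkdev.h>
#include <linux/fs.h>

void queue_small_file_writeback(struct inode *inode);  /* hypothetical */

static void flush_small_file_batch(struct inode **inodes, int nr)
{
        struct blk_plug plug;
        int i;

        blk_start_plug(&plug);
        for (i = 0; i < nr; i++)
                queue_small_file_writeback(inodes[i]);
        blk_finish_plug(&plug);
}

The hard part isn't the plug, it's knowing that fileA through fileD
belong in the same batch in the first place.)
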
> > >
> > > Metadata. These writes are very latency sensitive and we'll gain a lot
> > > if the FS is explicitly trying to build full sector IOs.
> >
> > OK, so these two cases I buy ... the question is can we do something
> > about them today without increasing the block size?
> >
> > The metadata problem, in particular, might be block independent: we
> > still have a lot of small chunks to write out at fractured locations.
> > With a large block size, the FS knows it's been bad and can expect the
> > rolled up newspaper, but it's not clear what it could do about it.
> >
> > The small files issue looks like something we should be tackling today
> > since writing out adjacent files would actually help us get bigger
> > transfers.
>
> ocfs2 can actually take significant advantage here, because we store
> small file data in-inode. This would grow our in-inode size from ~3K to
> ~15K or ~63K. We'd actually have to do more work to start putting more
> than one inode in a block (though that would be a promising avenue too
> once the coordination is solved generically).
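
The arithmetic there is basically block size minus the fixed inode
fields.  A back-of-the-envelope sketch (the ~1K overhead is a round
number for illustration, not the real ocfs2 layout):

#include <stdio.h>

#define INODE_FIELDS 1024UL     /* made-up round figure for the fixed fields */

int main(void)
{
        unsigned long bs[] = { 4096, 16384, 65536 };
        int i;

        for (i = 0; i < 3; i++)
                printf("%luK block -> ~%luK of in-inode data\n",
                       bs[i] / 1024, (bs[i] - INODE_FIELDS) / 1024);
        return 0;
}

That lines up with the ~3K/~15K/~63K numbers above.
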
Btrfs already defaults to 16K metadata and can go as high as 64K. The
part we don't do is multi-page sectors for data blocks.

I'd tend to leverage the read/modify/write engine from the raid code for
that.
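
Roughly this shape, as a userspace-style sketch (the names and the 16K
size are made up, and it assumes the update doesn't cross a big-sector
boundary): read the whole big sector, patch the sub-range in memory,
write the whole thing back.

/*
 * Sketch only: read/modify/write for a sub-sector update against an
 * assumed 16K sector.  Minimal error handling, single sector only.
 */
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

#define BIG_SECTOR 16384UL

static int rmw_update(int fd, off_t pos, const void *buf, size_t len)
{
        off_t start = pos & ~(off_t)(BIG_SECTOR - 1);
        char *sector = malloc(BIG_SECTOR);
        int ret = -1;

        if (!sector)
                return -1;
        /* read the full sector */
        if (pread(fd, sector, BIG_SECTOR, start) != (ssize_t)BIG_SECTOR)
                goto out;
        /* modify the sub-range in memory */
        memcpy(sector + (pos - start), buf, len);
        /* write the whole sector back */
        if (pwrite(fd, sector, BIG_SECTOR, start) != (ssize_t)BIG_SECTOR)
                goto out;
        ret = 0;
out:
        free(sector);
        return ret;
}

The raid5/6 path already has to deal with exactly this kind of partial
update (plus the caching and batching around it), which is the
machinery I'd want to reuse instead of growing a second copy.
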
-chris