Message-ID: <20140120221843.GJ18112@dastard>
Date: Tue, 21 Jan 2014 09:18:43 +1100
From: Dave Chinner <david@...morbit.com>
To: Christoph Hellwig <hch@...radead.org>
Cc: Sergey Meirovich <rathamahata@...il.com>, xfs@....sgi.com,
Jan Kara <jack@...e.cz>,
linux-scsi <linux-scsi@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Gluk <git.user@...il.com>
Subject: Re: Terrible performance of sequential O_DIRECT 4k writes in SAN
 environment. ~3 times slower than Solaris 10 with the same HBA/Storage.

On Mon, Jan 20, 2014 at 05:58:55AM -0800, Christoph Hellwig wrote:
> On Thu, Jan 16, 2014 at 09:07:21AM +1100, Dave Chinner wrote:
> > Yes, I think it can be done relatively simply. We'd have to change
> > the code in xfs_file_aio_write_checks() to check whether EOF zeroing
> > was required rather than always taking an exclusive lock (for block
> > aligned IO at EOF sub-block zeroing isn't required),
>
> That's not even required for supporting aio appends, just a further
> optimization for it.
Oh, right, I got an off-by-one when reading the code - the EOF
zeroing only occurs when the offset is beyond EOF, not at or beyond
EOF...
> > and then we'd
> > have to modify the direct IO code to set the is_async flag
> > appropriately. We'd probably need a new flag to say tell the DIO
> > code that AIO beyond EOF is OK, but that isn't hard to do....
>
> Yep, need a flag to allow appending writes and then defer them.
>
> > Christoph, are you going to get any time to look at doing this in
> > the next few days?
>
> I'll probably need at least another week before I can get to it. If you
> wanna pick it up before then, feel free.
I'm probably not going to get to it before then, either, so check
back in a week?
Cheers,
Dave.
--
Dave Chinner
david@...morbit.com