Message-ID: <20140107155830.GA28395@infradead.org>
Date: Tue, 7 Jan 2014 07:58:30 -0800
From: Christoph Hellwig <hch@...radead.org>
To: Jan Kara <jack@...e.cz>
Cc: Sergey Meirovich <rathamahata@...il.com>,
linux-scsi <linux-scsi@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Gluk <git.user@...il.com>
Subject: Re: Terrible performance of sequential O_DIRECT 4k writes in SAN
environment. ~3 times slower than Solaris 10 with the same HBA/Storage.

On Mon, Jan 06, 2014 at 09:10:32PM +0100, Jan Kara wrote:
> This is likely a problem of the Linux direct IO implementation. The thing is
> that in Linux, when you are doing appending direct IO (i.e., direct IO which
> changes the file size), the IO is performed synchronously so that inode size
> updates etc. stay simple (and frankly our current locking rules make an inode
> size update on IO completion almost impossible). Since appending direct IO
> isn't very common, we seem to get away with this simplification just fine...
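To make the effect concrete, here is a minimal userspace sketch (not from the
thread; the file name, sizes and the "extend" switch are made up for
illustration). It does sequential 4k O_DIRECT writes either into a file
pre-sized with ftruncate(), so that no write moves i_size, or as appending
writes that extend the file on every IO:

/*
 * Minimal sketch: sequential 4k O_DIRECT writes.
 * Without the "extend" argument the file is pre-sized first, so no
 * write changes i_size and the append serialization described above
 * is avoided; with "extend" every write extends the file.
 * Error handling trimmed for brevity.
 */
#define _GNU_SOURCE		/* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	int extend = argc > 1 && !strcmp(argv[1], "extend");
	int fd = open("testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
	void *buf;
	long i, nr = 65536;	/* 65536 * 4k = 256MB total */

	if (fd < 0 || posix_memalign(&buf, 4096, 4096))
		return 1;
	memset(buf, 0, 4096);

	if (!extend && ftruncate(fd, nr * 4096L))	/* pre-size the file */
		return 1;

	for (i = 0; i < nr; i++)
		if (pwrite(fd, buf, 4096, i * 4096L) != 4096)
			return 1;

	free(buf);
	return close(fd);
}

Running it with and without the "extend" argument should show the appending
case hitting the synchronous completion path described above, while the
pre-sized case can keep IO in flight.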
Shouldn't be too much of a problem at least for XFS, and maybe even ext4,
with the workqueue-based I/O end handler. For XFS we protect size
updates with the ilock, which we have already taken in that handler; not
sure what ext4 would do there.
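
For reference, a rough conceptual sketch of handling the size update in a
workqueue-based I/O end handler might look like the following. The names,
the generic inode_lock() and the plain system workqueue are placeholders,
not the actual XFS or ext4 code (XFS takes its own ilock in its ioend
handler):

#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

/*
 * Conceptual sketch only: a direct IO that extended the file does not
 * update i_size from the (possibly irq context) completion path;
 * instead the completion queues work, and the workqueue handler takes
 * the inode lock in process context and moves i_size.
 */
struct dio_size_update {
	struct work_struct	work;
	struct inode		*inode;
	loff_t			new_size;	/* offset + bytes written */
};

static void dio_size_update_workfn(struct work_struct *work)
{
	struct dio_size_update *w =
		container_of(work, struct dio_size_update, work);
	struct inode *inode = w->inode;

	/* stand-in for the per-fs lock, e.g. the XFS ilock */
	inode_lock(inode);
	if (w->new_size > i_size_read(inode))
		i_size_write(inode, w->new_size);
	inode_unlock(inode);

	kfree(w);
}

/* called from the IO completion path for a size-extending write */
static void dio_complete_extending_write(struct dio_size_update *w)
{
	INIT_WORK(&w->work, dio_size_update_workfn);
	queue_work(system_wq, &w->work);
}

The point is just that once completion already runs from a workqueue, the
handler is in process context and can sleep on the inode lock, so the
appending case would not have to be forced synchronous up front.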