Message-ID: <4C6C34E0.3050601@vlnb.net>
Date: Wed, 18 Aug 2010 23:30:40 +0400
From: Vladislav Bolkhovitin <vst@...b.net>
To: Tejun Heo <tj@...nel.org>
CC: jaxboe@...ionio.com, linux-fsdevel@...r.kernel.org,
linux-scsi@...r.kernel.org, linux-ide@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-raid@...r.kernel.org,
hch@....de, James.Bottomley@...e.de, tytso@....edu,
chris.mason@...cle.com, swhiteho@...hat.com,
konishi.ryusuke@....ntt.co.jp, dm-devel@...hat.com, jack@...e.cz,
rwheeler@...hat.com, hare@...e.de
Subject: Re: [PATCHSET block#for-2.6.36-post] block: replace barrier with
sequenced flush
Hello,
On 08/13/2010 05:21 PM, Tejun Heo wrote:
>> If requested, I can develop the interface further.
>
> I still think the benefit of ordering by tag would be marginal at
> best, and what have you guys measured there? Under the current
> framework, there's no easy way to measure full ordered-by-tag
> implementation. The mechanism for filesystems to communicate the
> ordering information (which would be a partially ordered graph) just
> isn't there and there is no way the current usage of ordering-by-tag
> only for barrier sequence can achieve anything close to that level of
> difference.
Basically, I measured how iSCSI link utilization depends on the number
of queued commands and on the queued data size. This is why I presented
the results as a table. From it you can see how much improvement you
would get by removing queue draining after 1, 2, 4, etc. commands,
depending on command size.
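
To illustrate the kind of measurement I mean, below is a minimal sketch
using Linux AIO (libaio): it writes a fixed number of 32K commands to a
device either one at a time (i.e. draining after every command) or in
batches of 2, 4, 8 before waiting, and prints the resulting throughput.
The device path, command count and batch sizes are illustrative
assumptions, not my actual test harness. Build with "gcc -laio" and
point it at a test device you can safely overwrite.

#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define NR_CMDS  256
#define CMD_SIZE (32 * 1024)    /* 32K commands, as in the XFS example */

/*
 * Time NR_CMDS writes issued in batches of 'depth' commands: depth 1
 * models draining after every command, higher depths model letting
 * several commands queue up before waiting for completion.
 */
static double run(int fd, void *buf, int depth)
{
	io_context_t ctx = 0;
	struct timespec t0, t1;
	int i, j;

	if (io_setup(depth, &ctx) < 0)
		exit(1);

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < NR_CMDS; i += depth) {
		struct iocb cbs[depth], *ptrs[depth];
		struct io_event evs[depth];

		for (j = 0; j < depth; j++) {
			io_prep_pwrite(&cbs[j], fd, buf, CMD_SIZE,
				       (long long)(i + j) * CMD_SIZE);
			ptrs[j] = &cbs[j];
		}
		/* queue 'depth' commands back to back ... */
		if (io_submit(ctx, depth, ptrs) != depth)
			exit(1);
		/* ... then drain: wait for all of them to complete */
		if (io_getevents(ctx, depth, depth, evs, NULL) != depth)
			exit(1);
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	io_destroy(ctx);
	return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(int argc, char **argv)
{
	void *buf;
	int fd, depth;

	/* the device path is an assumption; pass your own test device */
	fd = open(argc > 1 ? argv[1] : "/dev/sdX", O_WRONLY | O_DIRECT);
	if (fd < 0)
		return 1;
	if (posix_memalign(&buf, 4096, CMD_SIZE))
		return 1;
	memset(buf, 0, CMD_SIZE);

	/* one table row per batch size: 1, 2, 4, 8 commands */
	for (depth = 1; depth <= 8; depth *= 2)
		printf("batch %d: %.1f MB/s\n", depth,
		       NR_CMDS * (double)CMD_SIZE / run(fd, buf, depth) / 1e6);
	return 0;
}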
For instance, in my previous XFS rm example, where removing 4 files took
3.5 minutes even with the nobarrier option, I could see that XFS was
sending 32K commands in runs of 1-3. From my table you can see that if
it sent them all at once, without draining in between, it would get
roughly a 150-200% speed increase, i.e. those 3.5 minutes would drop to
somewhere around 1.2-1.4 minutes.
Vlad