Message-ID: <1303778790.3981.283.camel@sli10-conroe>
Date: Tue, 26 Apr 2011 08:46:30 +0800
From: Shaohua Li <shaohua.li@...el.com>
To: Tejun Heo <htejun@...il.com>
Cc: lkml <linux-kernel@...r.kernel.org>,
linux-ide <linux-ide@...r.kernel.org>,
Jens Axboe <jaxboe@...ionio.com>,
Jeff Garzik <jgarzik@...ox.com>,
Christoph Hellwig <hch@...radead.org>,
"Darrick J. Wong" <djwong@...ibm.com>
Subject: Re: [PATCH 1/2]block: optimize non-queueable flush request drive
On Mon, 2011-04-25 at 17:13 +0800, Tejun Heo wrote:
> Hello,
>
> On Mon, Apr 25, 2011 at 10:58:27AM +0200, Tejun Heo wrote:
> > Eh, wasn't your optimization only applicable if flush is not
> > queueable? IIUC, what your optimization achieves is merging
> > back-to-back flushes and you're achieving that in a _very_ non-obvious
> > round-about way. Do it in straight-forward way even if that costs
> > more lines of code.
>
> To add a bit more, here, flush exclusivity gives you an extra ordering
> constraint that while flush is in progress no other request can proceed
> and thus if there's another flush queued, it can be completed
> together, right? If so, teach block layer the whole thing - let block
> layer hold further requests while flush is in progress and complete
> back-to-back flushes together on completion and then resume normal
> queue processing.
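The hold-and-merge behavior described above can be sketched as a small
user-space model (hypothetical, not the actual blk-flush code; names like
flush_model, flush_arrive, and flush_complete are made up for illustration):

```c
#include <stdbool.h>

/* Model of the "hold and merge" idea: while a flush is in flight on a
 * drive whose flush is not queueable, further flushes are parked on a
 * pending list; when the running flush completes, every parked flush is
 * completed along with it, because no write could have advanced while
 * the exclusive flush was executing. */

#define MAX_PENDING 32

struct flush_model {
	bool flush_in_flight;
	int  pending;	/* flushes parked while one is running */
	int  completed;	/* total flushes completed so far */
};

/* A new flush arrives: issue it if the queue is idle, otherwise park it. */
static void flush_arrive(struct flush_model *m)
{
	if (!m->flush_in_flight)
		m->flush_in_flight = true;
	else if (m->pending < MAX_PENDING)
		m->pending++;
}

/* The in-flight flush finishes: complete it and all parked flushes together. */
static void flush_complete(struct flush_model *m)
{
	m->completed += 1 + m->pending;
	m->pending = 0;
	m->flush_in_flight = false;
}
```

With three flushes arriving while the first is still running, one device
round trip completes all of them, which is the saving the optimization is
after.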
blk-flush is part of the block layer. If you mean the libata part: the
block layer can't know whether flush is queueable without a hint from
the driver.
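That driver hint could be modeled roughly as below (a simplified sketch;
the struct and helper names here are invented for illustration, loosely in
the spirit of the queueable-flush flag this patch series proposes):

```c
#include <stdbool.h>

/* The block layer only knows whether back-to-back flushes may be
 * completed together if the driver declares that the device cannot
 * queue a flush alongside other commands (as libata would for ATA
 * FLUSH CACHE, which is not an NCQ command). */

struct queue_limits {
	bool flush_not_queueable;	/* set by the driver */
};

/* Driver side: declare whether flush is queueable on this device. */
static void queue_set_flush_queueable(struct queue_limits *q, bool queueable)
{
	q->flush_not_queueable = !queueable;
}

/* Block-layer side: may two back-to-back flushes be merged?  Only when
 * flush is exclusive, i.e. not queueable. */
static bool can_merge_flushes(const struct queue_limits *q)
{
	return q->flush_not_queueable;
}
```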
> Also, another interesting thing to investigate is why the two flushes
> didn't get merged in the first place. The two flushes apparently
> didn't have any ordering requirement between them. Why didn't they
> get merged in the first place? If the first flush were slightly
> delayed, the second would have been issued together from the beginning
> and we wouldn't have to think about merging them afterwards. Maybe
> what we really need is better algorithm than C1/2/3 described in the
> comment?
The sysbench fileio test does a 16-thread write-fsync, so it's quite
common for one flush to be running while another flush is added to the
pending list.
Thanks,
Shaohua