Message-ID: <20100621232208.GA10175@redhat.com>
Date: Mon, 21 Jun 2010 19:22:08 -0400
From: Vivek Goyal <vgoyal@...hat.com>
To: Jens Axboe <axboe@...nel.dk>
Cc: Jeff Moyer <jmoyer@...hat.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] cfq: allow dispatching of both sync and async I/O
together
On Mon, Jun 21, 2010 at 09:59:48PM +0200, Jens Axboe wrote:
> On 21/06/10 21.49, Jeff Moyer wrote:
> > Hi,
> >
> > In testing a workload that has a single fsync-ing process and another
> > process that does a sequential buffered read, I was unable to tune CFQ
> > to reach the throughput of deadline. This patch, along with the previous
> > one, brought CFQ in line with deadline when setting slice_idle to 0.
> >
> > I'm not sure what the original reason for not allowing sync and async
> > I/O to be dispatched together was. If there is a workload I should be
> > testing that shows the inherent problems of this, please point me at it
> > and I will resume testing. Until and unless that workload is identified,
> > please consider applying this patch.
>
> The problematic case is/was a normal SATA drive with a buffered
> writer and an occasional reader. I'll have to double check my
> mail tomorrow, but iirc the issue was that the occasional reader
> would suffer great latencies since service times for that single
> IO would be delayed at the drive side. It could perhaps just be
> a bug in how we handle the slice idling on the read side when the
> IO gets delayed initially.
>
> So if my memory is correct, google for the fsync madness and
> interactiveness thread that we had some months ago and which
> caused a lot of tweaking. The commit adding this is
> 5ad531db6e0f3c3c985666e83d3c1c4d53acccf9 and was added back
> in July last year. So it was around that time that the mails went
> around.
Hi Jens,
I suspect we might have introduced this patch because Mike Galbraith
had issues with application interactiveness (reading data back from swap)
in the presence of heavy writeout on a SATA disk.
After this patch we made two enhancements (toy sketches of both below):
- You introduced the logic of building up the async (write) queue depth
  gradually.
- Corrado introduced the logic of idling on the random reader service
  tree.
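For the first point, the idea is roughly this (a toy model only, with names
of my own choosing rather than the kernel's): the longer the device has gone
without completing a sync request, the deeper the async queue is allowed to
get, so buffered writeout ramps up only while readers are quiet.

/*
 * Toy model of the gradual async (write) dispatch depth ramp.
 * Illustration only; the names here are mine, not CFQ's.
 */
struct ramp_state {
	unsigned long now;		/* current time, e.g. in jiffies */
	unsigned long last_sync_end;	/* when the last sync request completed */
	unsigned long sync_slice;	/* length of one sync time slice */
	unsigned int async_dispatched;	/* async requests already dispatched */
};

static unsigned int async_depth(const struct ramp_state *s,
				unsigned int max_dispatch)
{
	/* Each quiet sync slice that passes buys one more unit of depth. */
	unsigned int depth = (s->now - s->last_sync_end) / s->sync_slice;

	if (!depth && !s->async_dispatched)
		depth = 1;		/* always let at least one write through */
	return depth < max_dispatch ? depth : max_dispatch;
}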
In the past, random readers were not protected from WRITES as there was no
idling on random readers. But with Corrado's changes of idling on the
sync-noidle service tree, I think this problem might have been solved to
a great extent.
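To make that concrete, the idling decision now looks roughly like this
(again a toy model with illustrative names, not the exact CFQ code):
sequential readers still get per-queue idling, while seeky readers share
the sync-noidle tree and are protected as a group, by idling only when the
departing queue is the last one on that tree.

#include <stdbool.h>

/* Toy model of the per-service-tree idling decision (names are mine). */
enum tree_type { TREE_SYNC_IDLE, TREE_SYNC_NOIDLE, TREE_ASYNC };

static bool should_idle(enum tree_type tree, bool has_idle_window,
			unsigned int queues_on_tree)
{
	if (tree == TREE_ASYNC)
		return false;		/* never idle for buffered writes */
	if (has_idle_window)
		return true;		/* classic idling for sequential readers */
	/*
	 * Random readers share the sync-noidle tree: idle only when the
	 * departing queue is the last one there, so writes cannot slip in
	 * between dependent random reads.
	 */
	return tree == TREE_SYNC_NOIDLE && queues_on_tree == 1;
}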
Getting rid of this exclusivity between SYNC/ASYNC requests in the request
queue might help us with throughput on storage arrays without losing
protection for random readers on SATA.
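For reference, the exclusivity I mean is roughly the following pair of
checks in the dispatch path (a restatement from memory, not the exact CFQ
code; the names are illustrative):

#include <stdbool.h>

/* Toy restatement of the sync/async exclusivity at dispatch time. */
struct dispatch_state {
	unsigned int sync_in_driver;	/* sync requests already at the drive */
	unsigned int async_in_driver;	/* async (write) requests at the drive */
};

static bool may_dispatch(const struct dispatch_state *s, bool rq_is_sync)
{
	/* Drain async writes before letting a sync request go down... */
	if (rq_is_sync && s->async_in_driver)
		return false;
	/* ...and hold async back while sync requests are still in flight. */
	if (!rq_is_sync && s->sync_in_driver)
		return false;
	return true;
}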
I will do some testing with and without the patch and see whether the above
is true.
Thanks
Vivek