Message-ID: <alpine.LFD.2.00.0904051227120.4023@localhost.localdomain>
Date: Sun, 5 Apr 2009 12:34:32 -0700 (PDT)
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Arjan van de Ven <arjan@...radead.org>
cc: Theodore Tso <tytso@....edu>, Jens Axboe <jens.axboe@...cle.com>,
Linux Kernel Developers List <linux-kernel@...r.kernel.org>,
Ext4 Developers List <linux-ext4@...r.kernel.org>
Subject: Re: [GIT PULL] Ext3 latency fixes
On Sun, 5 Apr 2009, Arjan van de Ven wrote:
>
> > See get_request():
>
> our default number of requests is so low that we very regularly hit the
> limit. In addition to setting kjournald to higher priority, I tend to
> set the number of requests to 4096 or so to improve interactive
> performance on my own systems. That way at least the elevator has a
> chance to see the requests ;-)
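[For reference, the tuning Arjan describes is typically done through the block layer's per-queue sysfs knob; `sda` below is just a placeholder device, and this is a config fragment, not a recommendation:]

```shell
# Inspect the current request-queue depth (default is usually 128).
cat /sys/block/sda/queue/nr_requests

# Raise it to 4096, as described above (requires root).
echo 4096 > /sys/block/sda/queue/nr_requests
```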
That's insane. Long queues make the problem harder to hit, yes. But they
also tend to make the problem a million times worse when you _do_
hit it.
I would suggest looking instead at trying to have separate allocation
pools for bulk and "sync" IO. Instead of having just one rq->rq_pool, we
could easily have a rq->rq_bulk_pool and rq->rq_sync_pool.
We might even _save_ memory by having two pools simply because that may
make it much less important to have a big pool. Most subsystems don't
really need that many requests in flight anyway, and the advantage to the
elevator of huge pools is rather dubious.
So you obviously need more requests than the hardware can have in flight
(since you want to be able to feed the hardware new requests and overlap
the refill with the ones executing), but 4096 sounds excessive if you're
doing something like SATA that can only have 32 actual outstanding
requests at the hardware.
But yes, if a synchronous request gets blocked just because we've already
used all the requests, latency will suffer.
Linus