Message-ID: <45929658.3030507@cfl.rr.com>
Date: Wed, 27 Dec 2006 10:50:48 -0500
From: Phillip Susi <psusi@....rr.com>
To: Bill Davidsen <davidsen@....com>
CC: Manish Regmi <regmi.manish@...il.com>,
linux-kernel@...r.kernel.org, kernelnewbies@...linux.org
Subject: Re: Linux disk performance.

Bill Davidsen wrote:
> Quite honestly, the main place I have found O_DIRECT useful is in
> keeping programs doing large quantities of i/o from blowing out the
> buffers and making the other applications run like crap. If your
> application is running alone, unless you are very short of CPU or
> memory, avoiding the copy to an o/s buffer will be down in the
> measurement noise.
>
> I had a news (usenet) server which normally did 120 art/sec (~480 tps),
> which dropped to about 50 tps when doing large file copies even at low
> priority. By using O_DIRECT the impact essentially vanished, at the cost
> of the copy running about 10-15% slower. Changing various programs to
> use O_DIRECT only helped when really large blocks of data were involved,
> and only when i/o could be done in a way that satisfied the alignment and
> size requirements of O_DIRECT.
>
> If you upgrade to a newer kernel you can try other i/o scheduler
> options; the default cfq or even deadline might be helpful.
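
(For anyone who wants to experiment with that suggestion: on recent 2.6
kernels the scheduler can usually be switched at run time by writing its
name to /sys/block/<disk>/queue/scheduler, e.g.
echo deadline > /sys/block/<disk>/queue/scheduler, or selected at boot
with elevator=deadline. <disk> stands in for whatever your device is.)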

I would point out that if you are looking for optimal throughput and
reduced cpu overhead, and want to avoid blowing out the kernel fs cache,
you need to couple aio with O_DIRECT.  By itself O_DIRECT will lower
throughput because there will be brief pauses between each IO while the
application prepares the next buffer.  You can overcome this by posting
a few pending buffers concurrently with aio, allowing the kernel to
always have a buffer ready for the next IO as soon as the previous one
completes.
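
As a rough illustration (untested, and the file name, queue depth and
block size below are just placeholders), the combination looks something
like this with libaio: open the file with O_DIRECT, allocate aligned
buffers with posix_memalign(), and keep several writes queued with
io_submit()/io_getevents() so the device never goes idle between buffers:

#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define QUEUE_DEPTH 4              /* writes kept in flight at once */
#define BLOCK_SIZE (128 * 1024)    /* multiple of the alignment O_DIRECT wants */

int main(void)
{
        /* O_DIRECT bypasses the page cache; "testfile" is a placeholder. */
        int fd = open("testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        io_context_t ctx = 0;
        if (io_setup(QUEUE_DEPTH, &ctx) < 0) {
                fprintf(stderr, "io_setup failed\n");
                return 1;
        }

        void *buf[QUEUE_DEPTH];
        struct iocb cb[QUEUE_DEPTH];
        struct iocb *cbp[QUEUE_DEPTH];
        off_t offset = 0;

        /* O_DIRECT requires aligned buffers, lengths and offsets, hence
         * posix_memalign() and a block size that is a multiple of 4096. */
        for (int i = 0; i < QUEUE_DEPTH; i++) {
                if (posix_memalign(&buf[i], 4096, BLOCK_SIZE) != 0) {
                        fprintf(stderr, "posix_memalign failed\n");
                        return 1;
                }
                memset(buf[i], 0, BLOCK_SIZE);
                io_prep_pwrite(&cb[i], fd, buf[i], BLOCK_SIZE, offset);
                cbp[i] = &cb[i];
                offset += BLOCK_SIZE;
        }

        /* Post all the buffers at once so the kernel always has the next
         * one ready as soon as the previous write completes. */
        if (io_submit(ctx, QUEUE_DEPTH, cbp) != QUEUE_DEPTH) {
                fprintf(stderr, "io_submit failed\n");
                return 1;
        }

        /* A real copy loop would reap one completion at a time and
         * immediately resubmit that slot with fresh data; here we just
         * wait for everything to finish. */
        struct io_event events[QUEUE_DEPTH];
        int done = 0;
        while (done < QUEUE_DEPTH) {
                int n = io_getevents(ctx, 1, QUEUE_DEPTH - done, events, NULL);
                if (n < 0) {
                        fprintf(stderr, "io_getevents failed\n");
                        return 1;
                }
                done += n;
        }

        io_destroy(ctx);
        close(fd);
        return 0;
}

Build with something like: gcc -o odirect_aio odirect_aio.c -laio.  The
important part is the queue depth: with only one buffer outstanding you
are back to the stop-and-go behaviour described above.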
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/