Message-ID: <CANejiEXwt7zdWRD0gTg1JXNPZ52vU1Z4cQsb3xztqDLccyTPSQ@mail.gmail.com>
Date: Mon, 30 Jan 2012 18:31:34 +0800
From: Shaohua Li <shaohua.li@...el.com>
To: Herbert Poetzl <herbert@...hfloor.at>
Cc: Wu Fengguang <wfg@...ux.intel.com>,
Eric Dumazet <eric.dumazet@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
Jens Axboe <axboe@...nel.dk>, Tejun Heo <tj@...nel.org>
Subject: Re: Bad SSD performance with recent kernels
2012/1/30 Shaohua Li <shaohua.li@...el.com>:
> On Mon, 2012-01-30 at 08:36 +0100, Herbert Poetzl wrote:
>> On Mon, Jan 30, 2012 at 03:22:38PM +0800, Shaohua Li wrote:
>> > On Mon, 2012-01-30 at 08:13 +0100, Herbert Poetzl wrote:
>> >> On Mon, Jan 30, 2012 at 11:17:38AM +0800, Shaohua Li wrote:
>> >>> 2012/1/30 Wu Fengguang <wfg@...ux.intel.com>:
>> >>>> On Sun, Jan 29, 2012 at 02:13:51PM +0100, Eric Dumazet wrote:
>> >>>>> On Sunday, 29 January 2012 at 19:16 +0800, Wu Fengguang wrote:
>>
>> >>>>>> Note that as long as buffered read(2) is used, it makes almost no
>> >>>>>> difference (well, at least for now) to do "dd bs=128k" or "dd bs=2MB":
>> >>>>>> the 128kb readahead size will be used underneath to submit read IO.
>>
>> >>>>> Hmm...
>>
>> >>>>> # echo 3 >/proc/sys/vm/drop_caches ;dd if=/dev/sda of=/dev/null bs=128k count=32768
>> >>>>> 32768+0 records in
>> >>>>> 32768+0 records out
>> >>>>> 4294967296 bytes (4.3 GB) copied, 20.7718 s, 207 MB/s
>>
>>
>> >>>>> # echo 3 >/proc/sys/vm/drop_caches ;dd if=/dev/sda of=/dev/null bs=2M count=2048
>> >>>>> 2048+0 records in
>> >>>>> 2048+0 records out
>> >>>>> 4294967296 bytes (4.3 GB) copied, 27.7824 s, 155 MB/s
>>
>> >>>> Interesting. Here are my test results:
>>
>> >>>> root@...-nex04 /home/wfg# echo 3 >/proc/sys/vm/drop_caches ;dd if=/dev/sda of=/dev/null bs=128k count=32768
>> >>>> 32768+0 records in
>> >>>> 32768+0 records out
>> >>>> 4294967296 bytes (4.3 GB) copied, 19.0121 s, 226 MB/s
>> >>>> root@...-nex04 /home/wfg# echo 3 >/proc/sys/vm/drop_caches ;dd if=/dev/sda of=/dev/null bs=2M count=2048
>> >>>> 2048+0 records in
>> >>>> 2048+0 records out
>> >>>> 4294967296 bytes (4.3 GB) copied, 19.0214 s, 226 MB/s
>>
>> >>>> Maybe the /dev/sda performance bug on your machine is sensitive to timing?
>> >>> I got similar result:
>> >>> 128k: 224M/s
>> >>> 1M: 182M/s
>>
>> >>> 1M block size is slow, I guess it's CPU related.
>>
>> >>> And as for the big regression with kernels newer than 2.6.38,
>> >>> please check whether idle=poll helps. CPU idle states dramatically
>> >>> impact disk performance, and even the latest cpuidle governor
>> >>> doesn't help for some CPUs.
>>
>> >> here are the tests with idle=poll and after switching to 128k
>> >> (instead of 1M) blocksize (same amount of data transferred)
>>
>> >> kernel     ------------ read /dev/sda -------------
>> >>            --- noop ---   - deadline -   ---- cfq ---
>> >>            [MB/s]  %CPU   [MB/s]  %CPU   [MB/s]  %CPU
>> >> ----------------------------------------------------
>> >> 3.2.2       45.82   3.7    44.85   3.6    45.04   3.4
>> >> 3.2.2i      45.59   2.3    51.78   2.6    46.03   2.2
>> >> 3.2.2i128  250.24  20.9   252.68  21.3   250.00  21.6
>>
>> >> kernel      -- write ---   ------------------ read -----------------
>> >>             --- noop ---   --- noop ---   - deadline -   ---- cfq ---
>> >>             [MB/s]  %CPU   [MB/s]  %CPU   [MB/s]  %CPU   [MB/s]  %CPU
>> >> ------------------------------------------------------------------
>> >> 3.2.2       270.95  42.6   162.36   9.9   162.63   9.9   162.65  10.1
>> >> 3.2.2i      269.10  41.4   170.82   6.6   171.20   6.6   170.91   6.7
>> >> 3.2.2i128   270.38  67.7   162.35  10.2   163.01  10.3   162.34  10.7
>>
>> > What are 3.2.2i and 3.2.2i128?
>>
>> 3.2.2 ...... kernel with default options (bs=1M)
>> 3.2.2i ..... kernel with idle=poll (bs=1M)
>> 3.2.2i128 .. kernel with idle=poll (bs=128k)
>>
>> > does idle=poll help?
>>
>> doesn't look like it, at least to me ...
> What's your /sys/block/sdx/queue/max_sectors_kb? If you make it smaller,
> does the performance increase? In my system, a smaller max_sectors_kb
> makes bs=2M and bs=128k have similar performance, which makes me think
> the CPU doesn't catch up quickly after a request finishes.
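(For reference, checking and lowering that limit is just the following,
with /dev/sda standing in for whatever device is being tested and 128
only an example value:

# cat /sys/block/sda/queue/max_sectors_kb
# echo 128 > /sys/block/sda/queue/max_sectors_kb

max_sectors_kb caps the size, in KB, of each request the block layer
builds for the device, so a smaller value means each request completes
sooner.)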
It looks like the block plug introduced in 2.6.39 adds some latency here.
Deleting blk_start_plug/blk_finish_plug in generic_file_aio_read seems to
work around the issue. The plug doesn't appear to be a good fit for
sequential IO, because the readahead code already has its own plug and
gives fine-grained control. On the other hand, ondemand_readahead doesn't
seem to handle the case where req_size is big well.
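For completeness, the readahead window that ondemand_readahead works with
is a per-device tunable, so it is easy to check what size is actually in
effect during these tests (again /dev/sda is just the device used above,
and 512 only an example value):

# cat /sys/block/sda/queue/read_ahead_kb
# blockdev --getra /dev/sda
# echo 512 > /sys/block/sda/queue/read_ahead_kb

read_ahead_kb is in KB and defaults to 128, which is the 128kb readahead
size mentioned earlier in the thread; blockdev reports and sets the same
limit in 512-byte sectors.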