Message-ID: <20120131222217.GE4378@redhat.com>
Date: Tue, 31 Jan 2012 17:22:17 -0500
From: Vivek Goyal <vgoyal@...hat.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Shaohua Li <shaohua.li@...el.com>,
lkml <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>, Jens Axboe <axboe@...nel.dk>,
Herbert Poetzl <herbert@...hfloor.at>,
Eric Dumazet <eric.dumazet@...il.com>,
Wu Fengguang <wfg@...ux.intel.com>
Subject: Re: [PATCH] fix readahead pipeline break caused by block plug
On Tue, Jan 31, 2012 at 02:13:01PM -0800, Andrew Morton wrote:
[..]
> > For me, this patch helps only so much and does not get back all the
> > performance lost in the case of raw disk reads. It does improve the
> > throughput from around 85-90 MB/s to 110-120 MB/s, but running the same
> > dd with iflag=direct gets me more than 250MB/s.
> >
> > # echo 3 > /proc/sys/vm/drop_caches
> > # dd if=/dev/sdb of=/dev/null bs=1M count=1K
> > 1024+0 records in
> > 1024+0 records out
> > 1073741824 bytes (1.1 GB) copied, 9.03305 s, 119 MB/s
> >
> > echo 3 > /proc/sys/vm/drop_caches
> > # dd if=/dev/sdb of=/dev/null bs=1M count=1K iflag=direct
> > 1024+0 records in
> > 1024+0 records out
> > 1073741824 bytes (1.1 GB) copied, 4.07426 s, 264 MB/s
>
> Buffered I/O against the block device has a tradition of doing Weird
> Things. Do you see the same behavior when reading from a regular file?
No. Reading a file on an ext4 filesystem works just fine.
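For reference, iflag=direct just makes dd open the device with O_DIRECT and
read into a suitably aligned buffer, so it bypasses the page cache and the
readahead path entirely. Here is a minimal, untested userspace sketch of
that path; the read size and alignment are picked only for illustration:

    #define _GNU_SOURCE             /* for O_DIRECT */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
            int fd;
            char *buf;
            ssize_t n;

            /* O_DIRECT needs an aligned buffer; 4096 is typical */
            if (posix_memalign((void **)&buf, 4096, 1 << 20))
                    return 1;

            fd = open("/dev/sdb", O_RDONLY | O_DIRECT);
            if (fd < 0)
                    return 1;

            while ((n = read(fd, buf, 1 << 20)) > 0)
                    ;       /* discard the data, like of=/dev/null */

            close(fd);
            free(buf);
            return 0;
    }

Each read() there goes down as a 1M request, which is why it does not see
the per-page submission overhead discussed below.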
>
> > I think it is happening because in the case of a raw read we are submitting
> > one page at a time to the request queue
>
> (That's not a raw read - it's using pagecache. Please get the terms right!)
Ok.
>
> We've never really bothered making the /dev/sda[X] I/O very efficient
> for large I/Os under the (probably wrong) assumption that it isn't a
> very interesting case. Regular files will (or should) use the mpage
> functions, via address_space_operations.readpages(). fs/block_dev.c
> doesn't even implement it.
>
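For what it's worth, hooking the block device mapping into the mpage path
looks like it would only be a thin wrapper around mpage_readpages(),
reusing the blkdev_get_block() helper that is already in that file.
Untested sketch, just to show the shape of it:

    static int blkdev_readpages(struct file *file,
                                struct address_space *mapping,
                                struct list_head *pages, unsigned nr_pages)
    {
            return mpage_readpages(mapping, pages, nr_pages,
                                   blkdev_get_block);
    }

    /* plus ".readpages = blkdev_readpages," next to .readpage in
     * def_blk_aops */

That would let a big buffered read of /dev/sdb build multi-page BIOs the
same way reads of regular files do.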
> > and by the time all the pages
> > are submitted and one big merged request is formed it wastes a lot of time.
>
> But that was the case in earlier kernels too. Why did it change?
Actually, I assumed that the case of reading /dev/sda[X] worked well in
earlier kernels. Sorry about that. I will build a 2.6.38 kernel tonight
and run the test case again to confirm that it has the same overhead and
relatively poor performance while reading /dev/sda[X].
I think I got confused by Eric's results in another mail, where he was
reading /dev/sda and getting around 265MB/s with the plug removed, and I
was wondering why I was not getting the same results:
# echo 3 >/proc/sys/vm/drop_caches; dd if=/dev/sdb of=/dev/null bs=2M count=2048
2048+0 records in
2048+0 records out
4294967296 bytes (4.3 GB) copied, 16.2309 s, 265 MB/s
Maybe it has something to do with the SSD. I will test it anyway with an
older kernel.
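For context, by "submitting one page at a time" I mean the readahead
submission loop. The point of plugging there is to hold back the queue
until the whole batch of pages has been submitted, so the elevator can
merge them into one big request before the plug is flushed. Roughly, as
an illustration only (not the actual patch, and the function name is made
up):

    static void submit_readahead_batch(struct address_space *mapping,
                                       struct file *filp,
                                       struct list_head *pages,
                                       unsigned nr_pages)
    {
            struct blk_plug plug;
            unsigned i;

            blk_start_plug(&plug);

            for (i = 0; i < nr_pages; i++) {
                    struct page *page = list_entry(pages->prev,
                                                   struct page, lru);

                    list_del(&page->lru);
                    if (!add_to_page_cache_lru(page, mapping, page->index,
                                               GFP_KERNEL))
                            mapping->a_ops->readpage(filp, page);
                    page_cache_release(page);
            }

            /* flush only after the whole batch, so requests get merged */
            blk_finish_plug(&plug);
    }

Holding the plug across a much larger span instead delays the submission
of the readahead I/O, which is the kind of pipeline break the subject line
refers to.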
Thanks
Vivek