Date:	Wed, 20 May 2009 10:51:23 +0800
From:	Wu Fengguang <fengguang.wu@...el.com>
To:	Jens Axboe <jens.axboe@...cle.com>
Cc:	Hisashi Hifumi <hifumi.hisashi@....ntt.co.jp>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	linux-mm@...ck.org
Subject: Re: [PATCH] readahead:add blk_run_backing_dev

On Mon, May 18, 2009 at 07:53:00PM +0200, Jens Axboe wrote:
> On Mon, May 18 2009, Hisashi Hifumi wrote:
> > Hi.
> > 
> > I wrote a patch that adds a blk_run_backing_dev() call to page_cache_async_readahead()
> > so that readahead I/O is unplugged, improving throughput.
> > 
> > Following is the test result with dd.
> > 
> > #dd if=testdir/testfile of=/dev/null bs=16384
> > 
> > -2.6.30-rc6
> > 1048576+0 records in
> > 1048576+0 records out
> > 17179869184 bytes (17 GB) copied, 224.182 seconds, 76.6 MB/s
> > 
> > -2.6.30-rc6-patched
> > 1048576+0 records in
> > 1048576+0 records out
> > 17179869184 bytes (17 GB) copied, 206.465 seconds, 83.2 MB/s
> > 
> > Sequential read performance on a big file was improved.
> > Please merge my patch.
> > 
> > Thanks.
> > 
> > Signed-off-by: Hisashi Hifumi <hifumi.hisashi@....ntt.co.jp>
> > 
> > diff -Nrup linux-2.6.30-rc6.org/mm/readahead.c linux-2.6.30-rc6.unplug/mm/readahead.c
> > --- linux-2.6.30-rc6.org/mm/readahead.c	2009-05-18 10:46:15.000000000 +0900
> > +++ linux-2.6.30-rc6.unplug/mm/readahead.c	2009-05-18 13:00:42.000000000 +0900
> > @@ -490,5 +490,7 @@ page_cache_async_readahead(struct addres
> >  
> >  	/* do read-ahead */
> >  	ondemand_readahead(mapping, ra, filp, true, offset, req_size);
> > +
> > +	blk_run_backing_dev(mapping->backing_dev_info, NULL);
> >  }
> >  EXPORT_SYMBOL_GPL(page_cache_async_readahead);
> 
> I'm surprised this makes much of a difference. It seems correct to me to
> NOT unplug the device, since it will get unplugged when someone ends up
> actually waiting for a page. And that will then kick off the remaining
> IO as well. For this dd case, you'll be hitting lock_page() for the
> readahead page very soon, so the delay is definitely not long enough to
> warrant such a big difference in speed.

The possible timing change from this patch is as follows (assuming a readahead size of 100):

T0   read(100), which triggers readahead(200, 100)
T1   read(101)
T2   read(102)
...
T100 read(200), find_get_page(200) => readahead(300, 100)
                lock_page(200) => implicit unplug

The readahead(200, 100) submitted at time T0 *might* be delayed until the
unplug at T100.

But that is only a possibility. In the normal case, read(200) will block,
and its lock_page(200) will immediately unplug the device, issuing the
readahead(300, 100) I/O right away.
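
For reference, a rough sketch of that implicit unplug path as I read it
in 2.6.30 (abbreviated from memory, not verbatim source): lock_page() on
a page whose I/O is still in flight ends up waiting via sync_page(), and
the block implementation of ->sync_page kicks the backing device:

	/* fs/buffer.c (sketch): ->sync_page for block-backed mappings */
	void block_sync_page(struct page *page)
	{
		struct address_space *mapping;

		smp_mb();
		mapping = page_mapping(page);
		if (mapping)
			/*
			 * Unplug the queue, so any queued readahead I/O
			 * (e.g. the readahead(300, 100) above) is issued.
			 */
			blk_run_backing_dev(mapping->backing_dev_info, page);
	}

So the unplug is not lost without this patch, it is merely deferred to
the first waiter on the page.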

Thanks,
Fengguang
