Date:	Wed, 10 Mar 2010 21:09:32 +0800
From:	Wu Fengguang <fengguang.wu@...el.com>
To:	Christian Ehrhardt <ehrhardt@...ux.vnet.ibm.com>
Cc:	Jens Axboe <jens.axboe@...cle.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	Martin Schwidefsky <schwidefsky@...ibm.com>,
	Heiko Carstens <heiko.carstens@...ibm.com>,
	Hisashi Hifumi <hifumi.hisashi@....ntt.co.jp>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Ronald <intercommit@...il.com>,
	Bart Van Assche <bart.vanassche@...il.com>,
	Vladislav Bolkhovitin <vst@...b.net>,
	Randy Dunlap <randy.dunlap@...cle.com>
Subject: Re: [RFC PATCH] Fix Readahead stalling by plugged device queues

> --- linux.orig/mm/readahead.c
> +++ linux/mm/readahead.c
> @@ -188,8 +188,11 @@ __do_page_cache_readahead(struct address
>  	 * uptodate then the caller will launch readpage again, and
>  	 * will then handle the error.
>  	 */
> -	if (ret)
> +	if (ret) {
>  		read_pages(mapping, filp, &page_pool, ret);
> +		/* unplug backing dev to avoid latencies */
> +		blk_run_address_space(mapping);
> +	}

Christian, did you notice this commit for 2.6.33?

commit 65a80b4c61f5b5f6eb0f5669c8fb120893bfb388
Author: Hisashi Hifumi <hifumi.hisashi@....ntt.co.jp>
Date:   Thu Dec 17 15:27:26 2009 -0800

    readahead: add blk_run_backing_dev

--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -547,5 +547,17 @@ page_cache_async_readahead(struct address_space *mapping,
 
        /* do read-ahead */
        ondemand_readahead(mapping, ra, filp, true, offset, req_size);
+
+#ifdef CONFIG_BLOCK
+       /*
+        * Normally the current page is !uptodate and lock_page() will be
+        * immediately called to implicitly unplug the device. However this
+        * is not always true for RAID configurations, where data arrives
+        * not strictly in their submission order. In this case we need to
+        * explicitly kick off the IO.
+        */
+       if (PageUptodate(page))
+               blk_run_backing_dev(mapping->backing_dev_info, NULL);
+#endif
 }

It should at least improve performance between .32 and .33: once two
readahead requests are merged into a single IO request, PageUptodate()
will be true at the next readahead, and hence blk_run_backing_dev()
gets called to break out of the suboptimal situation.

Your patch does reduce the possible readahead submit latency to 0.

Is your workload a simple dd on a single disk? If so, it sounds like
something illogical is hidden in the block layer.

Thanks,
Fengguang
