Message-ID: <20090408004410.GA18679@localhost>
Date:	Wed, 8 Apr 2009 08:44:10 +0800
From:	Wu Fengguang <fengguang.wu@...el.com>
To:	Jos Houtman <jos@...es.nl>
Cc:	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"jens.axboe@...cle.com" <jens.axboe@...cle.com>
Subject: Re: [PATCH 0/7] Per-bdi writeback flusher threads

[CC Jens]

On Tue, Apr 07, 2009 at 10:03:38PM +0800, Jos Houtman wrote:
> 
> I tried the write-back branch from the 2.6-block tree.
> 
> And I can at least confirm that it works, at least with regard to
> writeback not keeping up when the device got congested before it had
> written 1024 pages.
> 
> See: http://lkml.org/lkml/2009/3/22/83  for a bit more information.

Hi Jos, you said that this simple patch solved the problem, but you also
mentioned somewhat suboptimal performance. Can you elaborate on that, so
that I can push the patch or improve it?

Thanks,
Fengguang
---
 fs/fs-writeback.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- mm.orig/fs/fs-writeback.c
+++ mm/fs/fs-writeback.c
@@ -325,7 +325,8 @@ __sync_single_inode(struct inode *inode,
 				 * soon as the queue becomes uncongested.
 				 */
 				inode->i_state |= I_DIRTY_PAGES;
-				if (wbc->nr_to_write <= 0) {
+				if (wbc->nr_to_write <= 0 ||
+				    wbc->encountered_congestion) {
 					/*
 					 * slice used up: queue for next turn
 					 */

> But the second problem seen in that thread, a write-starves-read problem,
> does not seem to be solved. In this problem the writes issued by the
> writeback algorithm starve the ongoing reads, no matter which I/O scheduler
> is picked.
> 
> For good measure I also applied the blk-latency patches on top of the
> writeback branch, but this did not improve anything. Nor did lowering
> max_sectors_kb, as Linus suggested in the I/O latency thread.
> 
> 
> As for a reproducible test case: the simplest I could come up with was
> modifying fsync-tester not to fsync, but to let normal writeback handle
> the dirty pages, and starting a separate process that sequentially reads a
> file from the same device. The read performance drops to a bare minimum as
> soon as the writeback algorithm kicks in.
> 
> 
> Jos
