lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <17646.32332.572865.919526@cse.unsw.edu.au>
Date:	Fri, 25 Aug 2006 14:36:28 +1000
From:	Neil Brown <neilb@...e.de>
To:	Jens Axboe <axboe@...e.de>
Cc:	David Chinner <dgc@....com>, Andi Kleen <ak@...e.de>,
	linux-kernel@...r.kernel.org, akpm@...l.org
Subject: Re: RFC - how to balance Dirty+Writeback in the face of slow  writeback.

On Monday August 21, axboe@...e.de wrote:
> 
> But these numbers are in no way tied to the hardware. It may be totally
> reasonable to have 3GiB of dirty data on one system, and it may be
> totally unreasonable to have 96MiB of dirty data on another. I've always
> thought that assuming any kind of reliable throttling at the queue level
> is broken and that the vm should handle this completely.

I keep changing my mind about this.  Sometimes I see it that way,
sometimes it seems very sensible for throttling to happen at the
device queue.

Can I ask a question:  Why do we have a 'nr_requests' maximum?  Why
not just allocate request structures whenever a request is made?
If there some reason relating to making the block layer work more
efficiently? or is it just because the VM requires it.


I'm beginning to think that the current scheme really works very well
- except for a few 'bugs'(*).
The one change that might make sense would be for the VM to be able to
tune the queue size of each backing dev.  Exactly how that would work
I'm not sure, but the goal would be to get the sum of the active queue
sizes to about 1 half of dirty_threshold.

The 'bugs' I am currently aware of are:
 - nfs doesn't put a limit on the request queue
 - the ext3 journal often writes out dirty data without clearing
   the Dirty flag on the page - so the nr_dirty count ends up wrong.
   ext3 writes the buffers out and marks them clean.  So when
   the VM tried to flush a page, it finds all the buffers are clean
   and so marks the page clean, so the nr_dirty count eventually
   gets correct again, but I think this can cause write throttling to
   be very unfair at times.

I think we need a queue limit on NFS requests.....

NeilBrown
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ