Message-ID: <20060821135132.GG4290@suse.de>
Date:	Mon, 21 Aug 2006 15:51:32 +0200
From:	Jens Axboe <axboe@...e.de>
To:	Neil Brown <neilb@...e.de>
Cc:	David Chinner <dgc@....com>, Andi Kleen <ak@...e.de>,
	linux-kernel@...r.kernel.org, akpm@...l.org
Subject: Re: RFC - how to balance Dirty+Writeback in the face of slow writeback.

On Mon, Aug 21 2006, Neil Brown wrote:
> Jens:  Was it a SuSE kernel or a mainline kernel on which you
>    experienced this slowdown with an external USB drive?

Note that this was in the old days, on 2.4 kernels. It was (and still
is) a generic 2.4 problem, which is quite apparent when you have larger,
slow devices. Larger, because then you can have a lot of dirty memory in
flight for that device. The case I most often saw reported was on
DVD-RAM atapi or scsi devices, which write at 300-400KB/sec. An external
usb hard drive over usb 1.x would be almost as bad, I suppose.
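
To put rough numbers on it: at ~400KB/sec, draining, say, 100MiB of
dirty pages queued up for such a device takes 100*1024/400 ~= 256
seconds, so well over four minutes during which that memory stays dirty
or under writeback.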

I haven't heard any complaints for 2.6 in this area for a long time.

> Then we just need to deal with the case where the sum of the queue
> limits of all devices exceeds the dirty threshold....
> Maybe writeout queues need to auto-adjust their queue length when some
> system-wide situation is detected.... sounds messy.

Queue length is a little tricky; it's basically controlled by two
parameters - nr_requests and max_sectors_kb. Most SATA drives can do
32MiB requests, so in theory a system that sets max_sectors_kb to
max_hw_sectors_kb and retains the default nr_requests of 128 can see up
to 32MiB * (128 * 3/2) == 6144MiB per disk in flight (the extra 3/2
being the over-allocation we allow for request batching). Ouch. By
default we only allow 512KiB per request, which brings us to a more
reasonable 96MiB per disk.
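
As a rough userspace sketch of that worst-case calculation, assuming
the current 2.6 sysfs layout for the queue knobs
(/sys/block/<dev>/queue/nr_requests and max_sectors_kb) and the 3/2
over-allocation for batching:

/*
 * Worst-case in-flight writeback per disk, from the queue knobs.
 * Sketch only: the sysfs paths and the ~3/2 batching factor are
 * assumed, not read from anywhere authoritative.
 */
#include <stdio.h>
#include <stdlib.h>

static unsigned long read_knob(const char *dev, const char *knob)
{
	char path[256];
	unsigned long val;
	FILE *f;

	snprintf(path, sizeof(path), "/sys/block/%s/queue/%s", dev, knob);
	f = fopen(path, "r");
	if (!f || fscanf(f, "%lu", &val) != 1) {
		fprintf(stderr, "can't read %s\n", path);
		exit(1);
	}
	fclose(f);
	return val;
}

int main(int argc, char **argv)
{
	const char *dev = argc > 1 ? argv[1] : "sda";
	unsigned long nr_requests = read_knob(dev, "nr_requests");
	unsigned long max_sectors_kb = read_knob(dev, "max_sectors_kb");
	/* requests can be over-allocated up to ~3/2 * nr_requests */
	unsigned long worst_kb = max_sectors_kb * (nr_requests * 3 / 2);

	printf("%s: nr_requests=%lu, max_sectors_kb=%lu -> up to ~%lu MiB in flight\n",
	       dev, nr_requests, max_sectors_kb, worst_kb >> 10);
	return 0;
}

With the defaults above (nr_requests=128, max_sectors_kb=512) that
works out to the 96MiB figure; bump max_sectors_kb to 32768 and you get
the 6144MiB case.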

But these numbers are in no way tied to the hardware. It may be totally
reasonable to have 3GiB of dirty data on one system, and it may be
totally unreasonable to have 96MiB of dirty data on another. I've always
thought that assuming any kind of reliable throttling at the queue level
is broken and that the vm should handle this completely.
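
For reference, the vm-side limit in question is the global dirty
threshold, roughly vm.dirty_ratio percent of dirtyable memory. A crude
sketch of that calculation, assuming the /proc paths as in current 2.6
and ignoring the free/cache and highmem corrections the kernel actually
applies:

/*
 * Crude estimate of the global dirty threshold: dirty_ratio percent of
 * MemTotal. The kernel really uses "dirtyable" memory, so this
 * overestimates somewhat.
 */
#include <stdio.h>

int main(void)
{
	unsigned long mem_total_kb = 0, dirty_ratio = 0;
	char line[128];
	FILE *f;

	f = fopen("/proc/meminfo", "r");
	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "MemTotal: %lu kB", &mem_total_kb) == 1)
			break;
	fclose(f);

	f = fopen("/proc/sys/vm/dirty_ratio", "r");
	if (!f || fscanf(f, "%lu", &dirty_ratio) != 1)
		return 1;
	fclose(f);

	printf("dirty_ratio=%lu%% of %lu MiB -> throttle writers around %lu MiB dirty\n",
	       dirty_ratio, mem_total_kb >> 10,
	       (mem_total_kb * dirty_ratio / 100) >> 10);
	return 0;
}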

-- 
Jens Axboe

