Message-ID: <20131104122613.GB24407@amd.pavel.ucw.cz>
Date: Mon, 4 Nov 2013 13:26:13 +0100
From: Pavel Machek <pavel@....cz>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Jan Kara <jack@...e.cz>, Andrew Morton <akpm@...ux-foundation.org>,
Theodore Ts'o <tytso@....edu>,
"Artem S. Tashkinov" <t.artem@...os.com>,
Wu Fengguang <fengguang.wu@...el.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Mel Gorman <mgorman@...e.de>,
Maxim Patlasov <mpatlasov@...allels.com>
Subject: Re: Disabling in-memory write cache for x86-64 in Linux II
Hi!
> >> - temp-files may not be written out at all.
> >>
> >> Quite frankly, if you have multi-hundred-megabyte temp-files, you've
> >> got issues
> > Actually people do stuff like this e.g. when generating ISO images before
> > burning them.
>
> Yes, but then the temp-file is long-lived enough that it *will* hit
> the disk anyway. So it's only the "create temporary file and pretty
> much immediately delete it" case that changes behavior (ie compiler
> assembly files etc).
>
> If the temp-file is for something like burning an ISO image, the
> burning part is slow enough that the temp-file will hit the disk
> regardless of when we start writing it.
It will hit the disk, but with the proposed change burning will still
be slower.
Before:
create the 700MB ISO
burn the CD, writing the ISO to disk at the same time
After:
create the 700MB ISO and write most of it to disk
burn the CD, writing out the rest
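
FWIW the difference is easy to watch while the image is being created:
just keep an eye on the dirty/writeback counters. (Stock /proc layout
assumed; genisoimage and the paths below are only an example, any big
sequential write shows the same thing.)

# terminal 1: create the image (example command and paths)
genisoimage -o /tmp/test.iso /some/dir

# terminal 2: watch how much of it is still cached vs. already on disk
while true; do
        grep -E '^(Dirty|Writeback):' /proc/meminfo
        sleep 1
done

With today's defaults on a box with lots of RAM most of the image can
sit in Dirty until the burn starts; with a lower limit writeback kicks
in much earlier, which is exactly the slowdown above.
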
But yes, limiting dirty amounts is a good idea.
> That said, I'd certainly like it even *more* if the limits really were
> per-BDI, and the global limit was in addition to the per-bdi ones.
> Because when you have a USB device that gets maybe 10MB/s on
> contiguous writes, and 100kB/s on random 4k writes, I think it would
> make more sense to make the "start writeout" limits be 1MB/2MB, not
Actually I believe I've seen 10kB/sec on an SD card... and I'd expect
the same from USB sticks, too.
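
Btw, there is already a per-device knob that helps a bit here: every
BDI exports max_ratio in sysfs, which caps what percentage of the
global dirty limit a single device may consume. It is not the per-BDI
limit you are asking for, but it does keep one slow card from eating
the whole dirty budget. Something like this (mmcblk0 is just an
example device):

# let the slow card use at most 5% of the global dirty limit
echo 5 > /sys/class/bdi/$(cat /sys/block/mmcblk0/dev)/max_ratio
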
And yes, there are real problems with this, at least on the N900. You
do apt-get install <big package>; apt internally does fsyncs, and the
resulting latencies are big enough that the watchdogs kick in and kill
the machine.
http://pavelmachek.livejournal.com/117089.html
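
The latency itself is easy to demonstrate on slow flash -- a single
synchronous write of a few MB can already take tens of seconds there.
Something along these lines shows what apt runs into (the path is only
an example; if your dd lacks conv=fsync, time sh -c 'dd ...; sync'
gives roughly the same picture):

# write 16MB and force it out to the medium before dd returns
time dd if=/dev/zero of=/home/user/fsync-test bs=1M count=16 conv=fsync
rm /home/user/fsync-test
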
People are doing
echo 3 > /proc/sys/vm/dirty_ratio
echo 3 > /proc/sys/vm/dirty_background_ratio
echo 100 > /proc/sys/vm/dirty_writeback_centisecs
echo 100 > /proc/sys/vm/dirty_expire_centisecs
echo 4096 > /proc/sys/vm/min_free_kbytes
echo 50 > /proc/sys/vm/swappiness
echo 200 > /proc/sys/vm/vfs_cache_pressure
echo 8 > /proc/sys/vm/page-cluster
echo 4 > /sys/block/mmcblk0/queue/nr_requests
echo 4 > /sys/block/mmcblk1/queue/nr_requests
... to avoid it, but IIRC that only makes the watchdog reset less
likely :-(.
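
(If anyone wants to keep the vm.* settings across reboots, they map
directly to sysctl names, so something like the fragment below works --
assuming a sysctl-style init; the nr_requests tweaks still need an init
script or udev rule.)

# append the equivalent entries to /etc/sysctl.conf (example values from above)
cat >> /etc/sysctl.conf <<'EOF'
vm.dirty_ratio = 3
vm.dirty_background_ratio = 3
vm.dirty_writeback_centisecs = 100
vm.dirty_expire_centisecs = 100
vm.min_free_kbytes = 4096
vm.swappiness = 50
vm.vfs_cache_pressure = 200
vm.page-cluster = 8
EOF
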
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html