Message-Id: <20070325154143.54e40200.akpm@linux-foundation.org>
Date: Sun, 25 Mar 2007 15:41:43 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Miklos Szeredi <miklos@...redi.hu>
Cc: dgc@....com, linux-kernel@...r.kernel.org
Subject: Re: [patch 2/3] remove throttle_vm_writeout()
On Sat, 24 Mar 2007 22:57:34 +0100 Miklos Szeredi <miklos@...redi.hu> wrote:
> From: Miklos Szeredi <mszeredi@...e.cz>
>
> Remove this function.  Its purpose was to limit the global number of
> pages under writeback submitted by direct reclaim.  But this is equally
> well accomplished by limiting queue lengths.  When this function was
> added, the device queues had much larger default lengths (8192
> requests; now it's 128), causing problems.
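
For reference, the function being removed is essentially the loop below.
This is a sketch of the pattern from memory, not the exact mainline code:
spin (waiting on queue congestion) until the global writeback count drops
back under the dirty threshold.

void throttle_vm_writeout(void)
{
        long background_thresh;
        long dirty_thresh;

        for ( ; ; ) {
                get_dirty_limits(&background_thresh, &dirty_thresh, NULL);

                /* give the page allocator a bit of extra headroom */
                dirty_thresh += dirty_thresh / 10;

                if (global_page_state(NR_UNSTABLE_NFS) +
                    global_page_state(NR_WRITEBACK) <= dirty_thresh)
                        break;

                /* otherwise wait for some writeback to complete */
                congestion_wait(WRITE, HZ/10);
        }
}
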
This changelog is wrong.
Yes, the _default_ depth of CFQ was decreased.  But it is trivial for the
user to increase the queue depth again, and we'd prefer that the VM not
shit itself in response.
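(It's a one-liner per disk, something like
echo 8192 > /sys/block/sda/queue/nr_requests
to get the old depth back -- sda is just an example device.)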
Plus, more significantly, the queue is per-disk. A system with many disks
will easily be able to cover all memory with under-writeback pages.
Which is why the level of under-writeback memory is controlled at a higher
level.
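To put illustrative numbers on it: say each request covers up to 512kB of
data.  At the current 128-requests-per-queue default that is ~64MB of
potentially under-writeback memory per disk, and at the old 8192-request
default it is ~4GB per disk.  Either way, a box with a few dozen spindles
can cover all of memory, so the queues alone give no useful global bound.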
These factors are why we must not rely upon per-queue throttling in the
VFS/MM/VM.
All that being said, the patch (with a new, correct changelog) is OK.  This
is because we now control the amount of dirty memory in the machine by
running balance_dirty_pages() at first-write-fault time. So we can no
longer get into the situation where all memory is dirty+writeback.
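
That control loop is roughly the following (again a sketch of the idea,
not the exact mainline code): the task doing the dirtying is made to
perform some writeback itself, and to wait, until the global
dirty+writeback total is back under the threshold.

static void balance_dirty_pages(struct address_space *mapping)
{
        long background_thresh, dirty_thresh;
        unsigned long write_chunk = sync_writeback_pages();

        for ( ; ; ) {
                struct writeback_control wbc = {
                        .bdi            = mapping->backing_dev_info,
                        .sync_mode      = WB_SYNC_NONE,
                        .nr_to_write    = write_chunk,
                        .range_cyclic   = 1,
                };

                get_dirty_limits(&background_thresh, &dirty_thresh, mapping);

                if (global_page_state(NR_FILE_DIRTY) +
                    global_page_state(NR_UNSTABLE_NFS) +
                    global_page_state(NR_WRITEBACK) <= dirty_thresh)
                        break;

                /* the dirtier does some of the writeback itself ... */
                writeback_inodes(&wbc);

                /* ... and then waits for the queues to drain a bit */
                congestion_wait(WRITE, HZ/10);
        }
}
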
That being said, this patch is only correct if we don't apply "[patch 1/3]
fix illogical behavior in balance_dirty_pages()". If we _do_ apply that
patch then we can again get all memory under-writeback and we again need
throttle_vm_writeout().