Message-ID: <Pine.LNX.4.64.0709121540370.4067@schroedinger.engr.sgi.com>
Date: Wed, 12 Sep 2007 15:47:14 -0700 (PDT)
From: Christoph Lameter <clameter@....com>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
cc: Nick Piggin <npiggin@...e.de>,
Daniel Phillips <phillips@...nq.net>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
dkegel@...gle.com, David Miller <davem@...emloft.net>
Subject: Re: [RFC 0/3] Recursive reclaim (on __PF_MEMALLOC)
On Wed, 12 Sep 2007, Peter Zijlstra wrote:
> > assumes a single critical user of memory. There are other consumers of
> > memory and if you have a load that depends on other things than networking
> > then you should not kill the other things that want memory.
>
> The VM is a _critical_ user of memory. And I dare say it is the _most_
> important user.
The users of memory are the various subsystems. The VM itself of course also
uses memory to manage memory, but the important thing is that the VM
provides services to other subsystems.
> Every user of memory relies on the VM, and we only get into trouble if
> the VM in turn relies on one of these users. Traditionally that has only
> been the block layer, and we special cased that using mempools and
> PF_MEMALLOC.
>
> Why do you object to me doing a similar thing for networking?
I have not seen you using mempools for the networking layer. I would not
object to such a solution. It already exists for other subsystems.
> The problem of circular dependencies on and with the VM is rather
> limited to kernel IO subsystems, and we only have a limited amount of
> them.
The kernel has to use the filesystems and other subsystems for I/O. These
subsystems compete for memory in order to make progress. I would not
strictly consider them part of the VM. Kernel reclaim may trigger I/O
in multiple I/O subsystems simultaneously.
> You talk about something generic, do you mean an approach that is
> generic across all these subsystems?
Yes, an approach that is fair and does not allow any single subsystem to
hog all of memory.
> If so, my approach would be it; I can replace mempools as we have them
> with the reserve system I introduce.
Replacing the mempools for the block layer sounds pretty good. But how do
the various subsystems, which may live on different portions of the system
serving different devices, avoid global serialization and livelock through
your system? And how is fairness addressed? I may want to run a fileserver on
some nodes and an HPC application that relies on a Fibre Channel connection
on other nodes. How do we guarantee that the HPC application is not
impacted if the network services of the fileserver flood the system with
messages and exhaust memory?