Message-ID: <Pine.LNX.4.64.0708101041040.12758@schroedinger.engr.sgi.com>
Date: Fri, 10 Aug 2007 10:46:30 -0700 (PDT)
From: Christoph Lameter <clameter@....com>
To: Daniel Phillips <daniel.raymond.phillips@...il.com>
cc: Daniel Phillips <phillips@...nq.net>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Matt Mackall <mpm@...enic.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, David Miller <davem@...emloft.net>,
Andrew Morton <akpm@...ux-foundation.org>,
Daniel Phillips <phillips@...gle.com>
Subject: Re: [PATCH 02/10] mm: system wide ALLOC_NO_WATERMARK
On Fri, 10 Aug 2007, Daniel Phillips wrote:
> It is quite clear what is in your patch. Instead of just grabbing a
> page off the buddy free lists in a critical allocation situation you
> go invoke shrink_caches. Why oh why? All the memory needed to get
Because we get to the code of interest when we have no memory on the
buddy free lists and need to reclaim memory to fill them up again.
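
To make the two strategies being argued over concrete, here is a
minimal userspace model. Every name and number in it (nr_free,
shrink_caches(), the watermark value) is an illustrative stand-in,
not the actual mm/page_alloc.c code:

        /*
         * Toy model: "dip below the watermark" vs. "reclaim to refill".
         * All names and numbers are illustrative, not real kernel code.
         */
        #include <stdio.h>
        #include <stdbool.h>

        static int nr_free = 2;          /* pages on the buddy free lists */
        static const int watermark = 4;  /* low watermark ordinary allocs obey */

        /* Reserve approach (ALLOC_NO_WATERMARK-style): a critical
         * allocation ignores the watermark and takes any page that is
         * still free. */
        static bool alloc_no_watermark(void)
        {
                if (nr_free > 0) {
                        nr_free--;
                        return true;
                }
                return false;            /* reserve exhausted: hard failure */
        }

        /* Reclaim approach: when the free lists are at or below the
         * watermark, shrink caches to refill them, then allocate. */
        static int shrink_caches(void)
        {
                return 8;                /* pretend 8 clean pages were freed */
        }

        static bool alloc_with_reclaim(void)
        {
                if (nr_free <= watermark)
                        nr_free += shrink_caches();
                if (nr_free > 0) {
                        nr_free--;
                        return true;
                }
                return false;
        }

        int main(void)
        {
                printf("no-watermark alloc: %s\n",
                       alloc_no_watermark() ? "ok" : "fail");
                printf("reclaim alloc:      %s\n",
                       alloc_with_reclaim() ? "ok" : "fail");
                return 0;
        }
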
> You do not do anything to prevent mixing of ordinary slab allocations
> of unknown duration with critical allocations of controlled duration.
> This is _very important_ for sk_alloc. How are you going to take
> care of that?
It is not necessary because you can reclaim memory as needed.
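
What Daniel means by "mixing" can be shown with a toy model (all names
hypothetical): a slab page holding both a long-lived ordinary object
and short-lived critical objects can only be returned to the allocator
once it is completely empty, so the ordinary object pins the page even
after the critical path has finished with it:

        /*
         * Toy model of the "mixing" problem: one slab page shared by
         * objects of unknown and of controlled lifetime.  Names are
         * illustrative only.
         */
        #include <stdio.h>

        struct slab_page {
                int ordinary_objs;   /* e.g. long-lived sockets */
                int critical_objs;   /* e.g. allocations for the critical path */
        };

        /* The page goes back to the page allocator only when empty, so
         * a single long-lived ordinary object pins the whole page. */
        static int page_reclaimable(const struct slab_page *p)
        {
                return p->ordinary_objs == 0 && p->critical_objs == 0;
        }

        int main(void)
        {
                struct slab_page p = { .ordinary_objs = 1, .critical_objs = 3 };

                p.critical_objs = 0;     /* critical I/O completed */
                printf("page reclaimable after critical path: %s\n",
                       page_reclaimable(&p) ? "yes" : "no");  /* "no": pinned */
                return 0;
        }
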
> There are certainly improvements that can be made to the posted patch
> set. Running off and learning from scratch how to do this is not
> really helpful.
The idea of adding code to deal with "I have no memory" situations
is plainly the wrong approach in a kernel that is designed to keep as
much memory as possible in use at all times. If you need memory, then
memory needs to be reclaimed. That is the basic way things work, and
following it through brings about a much less invasive solution without
all the issues that the proposed solution creates.
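
A sketch of the reclaim-driven slow path being argued for here, shaped
loosely like the allocator's retry loop; the helpers below are
self-contained stand-ins, not the real mm code:

        #include <stdio.h>

        /* Hypothetical stand-ins for the free lists and for reclaim. */
        static int nr_free_pages = 0;
        static int reclaimable_pages = 16;

        static int get_page_from_freelist(void)
        {
                if (nr_free_pages > 0) {
                        nr_free_pages--;
                        return 1;
                }
                return 0;
        }

        static int try_to_free_pages(void)
        {
                int freed = reclaimable_pages > 4 ? 4 : reclaimable_pages;

                reclaimable_pages -= freed;
                nr_free_pages += freed;
                return freed;
        }

        /* Reclaim-driven slow path: retry the free lists after each
         * round of reclaim rather than keeping an emergency reserve. */
        static int alloc_page_slowpath(void)
        {
                for (;;) {
                        if (get_page_from_freelist())
                                return 1;
                        if (try_to_free_pages() == 0)
                                return 0;   /* no progress: OOM path */
                }
        }

        int main(void)
        {
                printf("slowpath allocation: %s\n",
                       alloc_page_slowpath() ? "ok" : "fail");
                return 0;
        }

The disputed case in this thread is the sk_alloc one quoted above:
whether reclaim can always make progress when the allocation being
serviced is itself needed to complete the reclaim (writeout) path.
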