Message-ID: <Pine.LNX.4.64.0709101315020.25407@schroedinger.engr.sgi.com>
Date: Mon, 10 Sep 2007 13:17:58 -0700 (PDT)
From: Christoph Lameter <clameter@....com>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
cc: Nick Piggin <npiggin@...e.de>,
Daniel Phillips <phillips@...nq.net>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
dkegel@...gle.com, David Miller <davem@...emloft.net>
Subject: Re: [RFC 0/3] Recursive reclaim (on __PF_MEMALLOC)
On Mon, 10 Sep 2007, Peter Zijlstra wrote:
> > Alright, maybe you can get the kernel to be stable in the face of
> > having no memory, and debug all the fallback paths in the kernel that
> > run when an OOM condition occurs.
> >
> > But system calls will fail? Like fork/exec, etc.? There may be daemons
> > running that are essential for the system's survival and that cannot
> > easily handle an OOM condition. Various reclaim paths also need memory,
> > and if those allocations fail then reclaim cannot continue.
>
> I'm not making any of these paths significantly more likely to occur
> than they already are. Lots and lots of users run swap-heavy loads day
> in, day out - they don't get funny systems (well, sometimes they do, and
> theoretically we can easily run out of the PF_MEMALLOC reserves -
> HOWEVER, in practice it seems to work quite reliably).
>
The patchset makes these failures significantly more likely, since there
will be a longer time period during which these allocations can fail.
The swap loads are fine as long as we do not exhaust the reserve pools.
IMHO the right solution is to throttle the networking layer so that it
does not do unbounded allocations. You can likely do this by checking
certain VM counters, such as SLAB_UNRECLAIMABLE. If need be we can add a
new category, SLAB_TEMPORARY, for temporary allocations and track those.
If they grow too large, then throttle.
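
Roughly, the throttle could look like the sketch below. This is
illustration only, not code from the patchset: NR_SLAB_UNRECLAIMABLE is
the existing vmstat counter referred to above, while
net_allocs_over_limit(), maybe_throttle_net_alloc() and the 10%-of-RAM
threshold are invented for the sketch; a SLAB_TEMPORARY counter does not
exist yet.

#include <linux/mm.h>		/* totalram_pages */
#include <linux/vmstat.h>	/* global_page_state(), NR_SLAB_UNRECLAIMABLE */
#include <linux/writeback.h>	/* congestion_wait() */
#include <linux/jiffies.h>	/* HZ */

/*
 * Sketch: decide whether temporary network allocations have grown too
 * large. A real implementation would track a separate NR_SLAB_TEMPORARY
 * counter; here NR_SLAB_UNRECLAIMABLE stands in for it, and the
 * 10%-of-RAM threshold is arbitrary.
 */
static int net_allocs_over_limit(void)
{
	unsigned long slab_pages = global_page_state(NR_SLAB_UNRECLAIMABLE);

	return slab_pages > totalram_pages / 10;
}

/* Called from a networking allocation path before allocating. */
static void maybe_throttle_net_alloc(void)
{
	while (net_allocs_over_limit()) {
		/*
		 * Back off until reclaim has made some progress. A real
		 * version would bound the wait rather than loop forever.
		 */
		congestion_wait(WRITE, HZ / 50);
	}
}

The networking allocation paths would call maybe_throttle_net_alloc()
before allocating, so that temporary allocations back off instead of
eating into the PF_MEMALLOC reserves.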