Message-ID: <Pine.LNX.4.64.0708201210440.29092@schroedinger.engr.sgi.com>
Date: Mon, 20 Aug 2007 12:15:01 -0700 (PDT)
From: Christoph Lameter <clameter@....com>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
cc: Nick Piggin <npiggin@...e.de>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
dkegel@...gle.com, David Miller <davem@...emloft.net>,
Daniel Phillips <phillips@...gle.com>
Subject: Re: [RFC 0/3] Recursive reclaim (on __PF_MEMALLOC)
On Mon, 20 Aug 2007, Peter Zijlstra wrote:
> > > What Christoph is proposing is doing recursive reclaim and not
> > > initiating writeout. This will only work _IFF_ there are clean pages
> > > about. Which in the general case need not be true (memory might be
> > > packed with anonymous pages - consider an MPI cluster doing computation
> > > stuff). So this gets us a workload-dependent solution - which IMHO is
> > > bad!
> >
> > Although you will quite likely have at least a couple of MB worth of
> > clean program text. The important part of recursive reclaim is that it
> > doesn't so easily allow reclaim to blow all memory reserves (including
> > interrupt context). Sure, you still have theoretical deadlocks, but if
> > I understand correctly, they are going to be lessened. I would be
> > really interested to see if even just these recursive reclaim patches
> > eliminate the problem in practice.
>
> Were we ever much bothered by the buffered write deadlock? Why accept
> a known deadlock if a solid solution is quite attainable?
Buffered write deadlock? How exactly does that occur? Memory allocation in
the writeout path while we hold locks?
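
To make the question concrete, here is a minimal sketch of the kind of
pattern I have in mind. This is illustration only, not actual kernel
code; my_writepage() is a hypothetical ->writepage-style path:

/* Needs <linux/mm.h> and <linux/slab.h>. Illustration only. */
static int my_writepage(struct page *page)
{
        char *buf;

        lock_page(page);        /* hold the page lock across writeout */

        /*
         * GFP_KERNEL may enter direct reclaim, and direct reclaim may
         * try to write out dirty pages -- possibly waiting on the very
         * page lock we hold. If the allocation cannot make progress
         * without that writeout, we deadlock.
         */
        buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
        if (!buf) {
                unlock_page(page);
                return -ENOMEM;
        }

        /* ... perform the writeout using buf ... */

        kfree(buf);
        unlock_page(page);
        return 0;
}
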
There are many worst-case scenarios in the current reclaim implementation
that remain unaddressed. We have so far left them alone because the code is
very sensitive and it is not clear that the complexity such changes would
introduce is offset by the benefits gained.
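
For reference, the recursive reclaim under discussion amounts to roughly
the following. This is a minimal sketch of the idea, not the actual
patches; try_alloc_page() and reclaim_clean_pages_only() are hypothetical
helpers:

/*
 * Sketch: when an allocation fails while we are already inside
 * reclaim (PF_MEMALLOC is set), recurse into a restricted reclaim
 * pass that only takes clean pages and never initiates writeout,
 * rather than dipping further into the memory reserves.
 */
static struct page *alloc_page_recursive(gfp_t gfp)
{
        struct page *page = try_alloc_page(gfp);  /* hypothetical fast path */

        if (!page && (current->flags & PF_MEMALLOC)) {
                /*
                 * Only clean pages (e.g. program text) can be freed
                 * here, so this helps iff such pages exist -- which
                 * is exactly the workload dependence noted above.
                 */
                reclaim_clean_pages_only();       /* hypothetical */
                page = try_alloc_page(gfp);
        }
        return page;
}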