Message-ID: <20070821003212.GC8414@wotan.suse.de>
Date: Tue, 21 Aug 2007 02:32:12 +0200
From: Nick Piggin <npiggin@...e.de>
To: Christoph Lameter <clameter@....com>
Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
dkegel@...gle.com, David Miller <davem@...emloft.net>,
Daniel Phillips <phillips@...gle.com>
Subject: Re: [RFC 0/3] Recursive reclaim (on __PF_MEMALLOC)
On Mon, Aug 20, 2007 at 12:15:01PM -0700, Christoph Lameter wrote:
> On Mon, 20 Aug 2007, Peter Zijlstra wrote:
>
> > > > What Christoph is proposing is doing recursive reclaim and not
> > > > initiating writeout. This will only work _IFF_ there are clean pages
> > > > about. Which in the general case need not be true (memory might be
> > > > packed with anonymous pages - consider an MPI cluster doing computation
> > > > stuff). So this gets us a workload dependent solution - which IMHO is
> > > > bad!
> > >
> > > Although you will quite likely have at least a couple of MB worth of
> > > clean program text. The important part of recursive reclaim is that it
> > > doesn't so easily allow reclaim to blow all memory reserves (including
> > > interrupt context). Sure you still have theoretical deadlocks, but if
> > > I understand correctly, they are going to be lessened. I would be
> > > really interested to see if even just these recursive reclaim patches
> > > eliminate the problem in practice.
> >
> > were we much bothered by the buffered write deadlock? - why accept a
> > known deadlock if a solid solution is quite attainable?
>
> Buffered write deadlock? How does that exactly occur? Memory allocation in
> the writeout path while we hold locks?
Different topic. Peter was talking about the buffered write(2) deadlock,
where we take a page fault while holding a page lock (which leads to
lock inversion, or to taking the same lock twice, etc.).
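To make that concrete, below is a minimal userspace sketch (illustration
only, not from this thread; the file name and the one-page size are my own
arbitrary choices) of the kind of write(2) call in question: the source
buffer passed to write() is an mmap of the very page being written, so the
copy from userspace can fault on a page the buffered write path has already
locked.

/*
 * Illustrative sketch (not from the original mail): write(2) one page of a
 * file using, as the source buffer, an mmap of that same page.  On kernels
 * of this era the buffered write path locks the destination pagecache page
 * and then copies from the user buffer; the copy faults on the
 * not-yet-mapped source, and the fault handler wants the same page lock.
 * The file name "deadlock-test" is arbitrary.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	int fd = open("deadlock-test", O_RDWR | O_CREAT | O_TRUNC, 0600);

	if (fd < 0 || ftruncate(fd, page) < 0) {
		perror("setup");
		return 1;
	}

	/* Map the file's first page, but do not touch it yet. */
	char *src = mmap(NULL, page, PROT_READ, MAP_SHARED, fd, 0);
	if (src == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/*
	 * Destination: file offset 0, i.e. the same page as the mapping.
	 * The kernel takes the page lock for the write, then faults on
	 * 'src' while copying the source bytes.
	 */
	if (write(fd, src, page) < 0)
		perror("write");

	munmap(src, page);
	close(fd);
	return 0;
}

The single-task case above ends up wanting the same page lock twice; the
lock-inversion variant arises analogously when two writers use pages of
each other's files as source buffers.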